\section{Introduction}
The most widely adopted convention places the mass range of brown dwarfs
(BDs) at $ 13 - 80 \, M_{J}$ (where $M_J$ is the mass of Jupiter): massive
enough to burn deuterium but not to sustain hydrogen fusion~\citep{bur97},
i.e. in between the heaviest giant planets and
the lightest stars.
BDs above $\sim 65 \, M_J$ are thought to fuse lithium, and
therefore the detection of the \ion{Li}{I}~$\lambda$~6708~{\AA} line can be
used to identify BDs; this is the so-called ``Lithium Test''~\citep{reb92}.
BDs were predicted by \citet{kum62} and \citet{hay63}, but they were not
empirically confirmed until 1995, when the first {\rm field} brown dwarf was
detected \citep[\textbf{Teide 1},][]{reb95}. This occurred the same year as
the discovery of the first extra-solar planet~\citep{may95}.
The first BD companion to an M-dwarf
star was also discovered that year~\citep[\textbf{GJ 229B},][]{nak95}.
During the following two decades, high-precision radial velocity (RV) surveys
have shown that close BD companions to solar-type stars are
rare~\citep[][and references therein]{gre06}.
Thus, at orbital separations of less than 10 AU, the frequency of
BD companions remains below 1~\%~\citep{mar00}, whereas it is $\sim 7$~\% for
giant planets~\citep{udr07,may11} and $\sim 13$~\% for stellar
binaries~\citep{hal03}.
The so-called ``Brown dwarf desert''
may be interpreted as the gap between the largest-mass objects that
can be formed in protoplanetary discs and the smallest-mass clumps that
can collapse and/or fragment in the vicinity of a
protostar~\citep{mag14}.
The mass function, $dN/dm_C \propto m_C^\alpha$, of close planetary and stellar
companions drops off ($\alpha \sim -1$) towards the BD mass
range~\citep{gre06}.
On the other hand, the mass function of isolated substellar objects is
roughly flat or even slightly rising ($\alpha\sim 0$) down to
$\sim 20$~$M_J$~\citep{cha02,kir12}. This may point to a different formation
scenario for close BD companions than for BDs in the field and in clusters.
\citet{sah11} presented the discovery of nine BD companions from a sample
of 33 solar-type stars that exhibit RV variations caused by a companion
in the mass range $m_C \sin i \sim 13-80$~$M_J$. They used Hipparcos
astrometric data~\citep{per97} to confidently discard some of the BD
candidates. Including literature data, these authors quoted 23
remaining potential BD candidates.
From the CORALIE planet-search sample, they obtained an upper limit of 0.6\%
for the frequency of BD companions around Sun-like stars.
Recently, \citet{mag14} collected all the BD candidates
available in the literature, including those in \citet{sah11},
some from the SDSS-III MARVELS survey~\citep{gej08}, and some from
other RV surveys~\citep[e.g.][]{mar00}.
The metallicity of stars with BD companions was briefly discussed by
\citet{sah11}. They noted that the sample is still too small to establish
the metallicity distribution of stars hosting BDs.
\citet{mag14} extended the sample to roughly 65 stars with BD
candidates,
including dwarfs and giants, and stated that the mean metallicity of
their sample is $\langle$[Fe/H]$\rangle$~$=-0.04$ ($\sigma=0.28$), i.e.
remarkably lower than that of stars with giant planets
($\langle$[Fe/H]$\rangle$~$=+0.08$, \citealp{sou08,sou11}).
On the other hand,
stars with only detected ``small'' planets (hereafter ``small''
planet refers to a low-mass planet, including Super-Earths and
Neptune-like planets, with $m_C \sin i < 30 M_\oplus$,
whereas ``giant'' planet refers to high-mass planets, including
Saturn-like and Jupiter-like planets, with
$30 M_\oplus < m_C \sin i < 13 M_J$, see Section~\ref{sec3})
do not seem to require a high metal content to form within protoplanetary
discs~\citep{sou08,sou11,adi12b}.
\citet{sou11} studied a sample of 107 stars with planets (97 with giant and
10 with small planets)
and found an average metallicity for stars with small planets of about
$\langle$[Fe/H]$\rangle$~$=-0.11$, very similar to that of stars without
detected planets~\citep{sou08}.
Currently, there are two well-established
theories of giant planet formation: the core-accretion
scenario~\citep{pol96} and disc gravitational instability~\citep{bos97}.
The core-accretion model is more sensitive to the
fraction of solids in a disc than is the disc-instability model.
The formation of BDs has also been extensively studied. Two main
mechanisms have been proposed: molecular cloud
fragmentation~\citep{pad04} and disc fragmentation~\citep{sta09}.
The latter mechanism, which requires that a small fraction of Sun-like stars
host a massive extended disc, is able to explain most
of the known BDs, which may either remain bound to the primary star
or be ejected into the field~\citep{sta09}.
In this paper, we present a uniform spectroscopic analysis of a sample of
stars with BD companions from \citet{sah11}, and we compare the results with
those of a sample of stars with known giant and small planets from previous
work~\citep{adi12b}. The aim of this work is to provide
information that could be useful to distinguish among the different
possible formation mechanisms of BD companions.
\begin{table*}
\caption[]{Stellar parameters of the CORALIE sample}
\label{tpar}
\centering
\begin{tabular}{lcccccc}
\noalign{\smallskip}
\noalign{\smallskip}
\noalign{\smallskip}
\hline\hline
\noalign{\smallskip}
Star & $ T_{\rm eff}$ & $ \log g$ & $ \xi_t$ & $\rm [Fe/H]$ & $ M_2 \sin i$ & References \\
 & $\rm [K]$ & $\rm [dex]$ & $\rm [km/s]$ & $\rm [dex]$ & $ [M_J]$ & \\
\hline
\noalign{\smallskip}
HD4747 & $5316\pm 50$ & $4.48\pm 0.10$ & $0.79\pm 0.10$ & $-0.21\pm 0.05$ & $46.1$ & 2\\
HD52756 & $5216\pm 65$ & $4.47\pm 0.11$ & $1.11\pm 0.13$ & $0.13\pm 0.04$ & 59.3 & 1\\
HD74014 & $5662\pm 55$ & $4.39\pm 0.08$ & $1.10\pm 0.07$ & $0.26\pm 0.04$ & 49.0 & 1\\
HD89707 & $6047\pm 42$ & $4.52\pm 0.05$ & $0.99\pm 0.06$ & $-0.33\pm 0.03$ & 53.6 & 1\\
HD167665 & $6224\pm 39$ & $4.44\pm 0.04$ & $1.18\pm 0.05$ & $-0.05\pm 0.03$ & 50.6 & 1\\
HD189310 & $5188\pm 50$ & $4.49\pm 0.09$ & $0.94\pm 0.10$ & $-0.01 \pm 0.03$ & 25.6 & 1 \\
HD211847 & $5715\pm 24$ & $4.49\pm 0.05$ & $1.05\pm 0.03$ & $-0.08\pm 0.02$ & 19.2 & 1 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD3277$^a$ & $5539\pm 49$ & $4.36\pm 0.06$ & $0.91\pm 0.07$ & $-0.06\pm 0.04$ & 64.7 & 1\\
HD17289$^a$ & $5924\pm 32$ & $4.37\pm 0.04$ & $1.15\pm 0.04$ & $-0.11\pm 0.03$ & 48.9 & 1\\
HD30501$^a$ & $5223\pm 27$ & $4.56\pm 0.08$ & $1.18\pm 0.04$ & $-0.06\pm 0.02$ & 62.3 & 1\\
HD43848$^a$ & $5334\pm 92$ & $4.56\pm 0.15$ & $1.35\pm 0.17$ & $0.22\pm 0.06$ & 24.5 & 1\\
HD53680$^a$ & $5167\pm 94$ & $5.37^b\pm 0.29$ & $2.08\pm 0.31$ & $-0.29\pm 0.04$ & 54.7 & 1\\
HD154697$^a$ & $5648\pm 45$ & $4.42\pm 0.05$ & $1.04\pm 0.06$ & $0.13\pm 0.04$ & 71.1 & 1\\
HD164427A$^a$& $6003\pm 27$ & $4.35\pm 0.03$ & $1.19\pm 0.03$ & $0.19\pm 0.02$ & 48.0 & 1\\
HIP103019$^a$& $4913\pm 115$& $4.45\pm 0.28$ & $0.54^c\pm 0.10$ & $-0.30\pm 0.06$ & 52.5 & 1\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD74842$^d$ & $5517\pm 38$ & $4.50\pm 0.06$ & $1.01\pm 0.06$ & $-0.08\pm 0.03$ & -- & 3 \\
HD94340$^d$ & $5902\pm 26$ & $4.19\pm 0.03$ & $1.30\pm 0.03$ & $0.11\pm 0.02$ & -- & 3 \\
HD112863$^d$ & $5342\pm 36$ & $4.57\pm 0.07$ & $1.08\pm 0.07$ & $-0.11\pm 0.03$ & -- & 3\\
HD206505$^d$ & $5392\pm 44$ & $4.46\pm 0.07$ & $1.02\pm 0.07$ & $0.11\pm 0.03$ & -- & 3\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\noalign{\smallskip}
\end{tabular}
\tablebib{(1) \citet{sah11}; (2) \citet{san05}; (3) This work.}
\tablefoot{\tablefoottext{a}{These eight stars have companion minimum masses,
$m_C \sin i$, in the BD range as determined from spectroscopic RV
measurements, but were discarded as BD hosts by \citet{sah11} based on
their Hipparcos astrometry.}\\
\tablefoottext{b}{The surface gravity of the star HD~53680 is
unusually high for its derived effective temperature. A significantly
lower $T_{\rm eff}$ value (probably $ < 4500$~K) is expected from its
weak and narrow $H\alpha$ profile (see Fig.~\ref{fspec} and
Section~\ref{secabu}).}\\
\tablefoottext{c}{The microturbulence of HIP~103019 was calculated
following the expression presented in \citet{adi12c}.}\\
\tablefoottext{d}{These four stars, used as a comparison sample, are also
from the CORALIE sample but do not have detected BD companions.}
}
\end{table*}
\section{Observations\label{secobs}}
We analyse data for two different samples obtained with two different
telescopes and instruments: stars with BDs, with spectroscopic data
at resolving power $R\sim 50,000$ taken at the 1.2m-Euler Swiss Telescope
equipped with the CORALIE spectrograph~\citep{udr00a}, and stars with
planetary companions, observed at $R\sim115,000$ with the HARPS
spectrograph~\citep{may03} installed at the
3.6m-ESO telescope, both of them at
La Silla Observatory (ESO) in Chile.
The individual spectra of each star were reduced in a standard manner and
later normalized within the package IRAF\footnote{IRAF is distributed by the
National Optical Astronomy Observatories, operated by the Association of
Universities for Research in Astronomy, Inc., under contract with the National
Science Foundation.}, using low-order polynomial fits to the observed
continuum.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{aaBDsf1.ps}}
\caption{Histograms of the stellar parameters $T_{\rm eff}$, $\log{g}$
and $\rm [Fe/H]$ of our CORALIE sample.}
\label{fma1}
\end{figure}
\section{Sample description and stellar parameters\label{sec3}}
\subsection{Stars with BD-companion candidates}
Our stellar sample has been extracted mostly from the F-, G- and K-type
main-sequence stars of the CORALIE RV survey~\citep{udr00a}.
This sample consists of 15 stars with BD companion candidates
reported in~\citet{sah11},
for which the minimum mass, $m_C \sin i$, of the most-massive companion is
in the brown-dwarf mass range ($13 - 80\, M_J$).
One of these 15 stars, HIP~103019, has been extracted from the HARPS RV
survey~\citep{may03}.
In Table~\ref{tpar} we provide the minimum masses of these 15 BD
candidates.
\citet{sah11} were also able to derive the orbital inclination, $i$,
by using astrometric measurements from Hipparcos~\citep{per97,van07}.
This allowed them to confidently exclude as BD candidates eight stars
from the initial sample of 15 stars because the current mass
determinations, $m_C$, place them in the M-dwarf stellar regime.
Stellar parameters for 14 stars of the sample were
collected from \citet{sah11}, and those of the remaining star from \citet{san05}.
Four additional stars from the CORALIE sample without detected BD
companions were analyzed as a comparison/control sample
(see Table~\ref{tpar}).
In Fig.~\ref{fma1} we show the histograms of $T_{\rm eff}$, $\log g$, and
[Fe/H] for our stellar sample.
We only display those stars with available $m_C \sin i$ values in
Table~\ref{tpar}.
We note that \citet{mag14} also collected a sample of 65 stars with BD
candidates: 43 stars, 27 dwarfs and 15 giants, have $m_C \sin i$ values
in the range 13--90~$M_J$.
Two stars included in~\citet{mag14} as BD candidates, HD~30501 and
HD~43848, were discarded by \citet{sah11}, probably because they have
$m_C$ values above but close to the 80~$M_J$ boundary.
Fig.~\ref{fma2} depicts the minimum mass of the BD companion, $m_C \sin i$,
and the orbital period against the metallicity [Fe/H], including stars of our
sample as well as those in the sample of \citet{mag14} for comparison.
We separate giant ($\log g < 4$) and dwarf ($\log g > 4$) stars.
The stars with BD candidates span a wide range of orbital
periods, minimum BD masses, and stellar metallicities.
\subsection{Stars with planetary-mass companions}
The HARPS sub-sample \citep[HARPS-1 sample in][]{adi12b} used in this work
contains 451 stars~\citep{sou08,nev09}, both with and without
planetary companions.
We collect the minimum mass of the most-massive planet in each
planetary system from the encyclopaedia of extra-solar
planets~\footnote{\url{http://exoplanet.eu}}.
The planetary-mass sample is separated into two groups:
(i) {\it small planets} (SP; super-Earth-like and Neptune-like planets) with
masses $m_C \sin i \lesssim 0.094$~$M_J$ ($\sim 30 M_\oplus$), and
(ii) {\it giant planets} (GP; Saturn-like and Jupiter-like planets) with
masses in the range $0.094 < m_C \sin i \;[M_J] < 13$.
Two of the stars with giant planets, HD~162020 and HD~202206, within the HARPS
sample have companion masses above 13~$M_J$ and will be considered as BDs
hereafter.
Therefore, our final sample of confirmed BDs contains 9 dwarf stars with
companions in the mass range $m_C \sin i \sim 13-80$~$M_J$.
In the following, ``BD-host stars'' refers to stars with
confirmed BDs, i.e. with $m_C \sim 13-80$~$M_J$, and ``stars with discarded
BDs'' to those with $m_C \sin i \sim 13-80$~$M_J$ but $m_C > 80$~$M_J$,
according to \citet{sah11}.
The sample of planet-host stars contains 25 stars
with small planets and 78 stars with giant planets. In Table~\ref{tplm}
we provide the minimum mass of the most-massive planet in each planetary
system of the stars in the HARPS sample.
\section{Automatic codes for EW measurements: ARES versus TAME\label{secew}}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{aaBDsf2a.ps}}
\resizebox{\hsize}{!}{\includegraphics[angle=90]{aaBDsf2b.ps}}
\caption{ARES versus TAME comparison for the stars HD~89707
($\mathrm{S/N}\sim110$) and HD~206505 ($\mathrm{S/N}\sim70$).
{\it Left panels}: EW measured with TAME versus ARES.
The 1:1 correspondence is shown as a dashed line.
EW differences, $\rm EW_{TAME}-EW_{ARES}$, versus $\rm EW_{ARES}$
({\it middle panels}) and spectral line wavelength ({\it right panels}).
Dash-dotted lines define the mean value of the data points, and dotted lines
define the mean plus the standard deviation.
}
\label{few}
\end{figure*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf3a.ps}}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf3b.ps}}
\caption{Minimum mass of the most-massive companion, $m_C \sin i$, and
orbital period, $P_C$, versus the metallicity of stars with BDs.
The CORALIE sample is depicted as filled symbols, and the sample
in \citet{mag14} is displayed as empty symbols, separated into giant
and dwarf stars.}
\label{fma2}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf4.ps}}
\caption{High-resolution normalized spectra of late G-, K-type stars:
CORALIE spectra of the coolest stars in the sample (black), HARPS spectra
(red) of cool stars from the HARPS database~\citep[e.g.][]{sou08} and
the CORALIE spectrum (blue) of HD~53680 and the HARPS spectrum (blue)
of HIP~103019.
The spectra are depicted following a sequence with decreasing
$T_{\rm eff}$ from top to bottom.}
\label{fspec}
\end{figure}
We measure the equivalent widths (EWs) of spectral lines from the
linelists in \citet{sou08,nev09,adi12b} using automatic tools.
We explore two different automatic codes for EW measurement: the
automatic C++ based code ARES\footnote{The ARES code can be downloaded at:\\
\url{http://www.astro.up.pt/}} \citep{sou07} and a new IDL based code named
TAME\footnote{The TAME code can be downloaded at:\\
\url{http://astro.snu.ac.kr/~wskang/tame/}} \citep{kan12}.
In order to compare these two automatic codes, we measure the EWs of the
CORALIE sample, providing the same input parameters to both codes.
In Fig.~\ref{few}, we compare the EWs measured using TAME, $EW_{\rm TAME}$,
against those estimated using ARES, $EW_{\rm ARES}$.
The mean value of the EW differences, $EW_{\rm TAME}-EW_{\rm ARES}$, is found
at $\sim -1.2\, \rm m\AA$ and $\sim -1.5\, \rm m\AA$ for the stars HD~89707
($\mathrm{S/N}\sim110$) and HD~206505 ($\mathrm{S/N}\sim70$), respectively.
The TAME code thus yields very slightly smaller EWs than the ARES
measurements, and the scatter of these comparisons is lower than
$\sim 1.5 \, \rm m\AA$.
These EW differences do not exhibit any remarkable dependence on wavelength.
We also tested whether the signal-to-noise ratio (S/N) of our stellar spectra
is the source of this observed tendency, and no trend was found.
The mean value of the EW differences fluctuates in the
$-2\, \rm m\AA$ to $-1\, \rm m\AA$ range.
The standard deviation of the EW differences improves slightly as the
S/N increases, but it oscillates between $0.5\,\rm m\AA$ and
$1.5\,\rm m\AA$.
This analysis leads us to conclude that both programs show good agreement:
their differences are not significant and have no relevant impact on the
chemical abundance analysis, within its typical error bars.
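The above comparison can be reproduced with a short script. The
following is a minimal sketch (the input file names and column layout
are hypothetical placeholders, assuming one matched line list per code
with wavelengths in \AA{} and EWs in m\AA):
\begin{verbatim}
# Compare EWs measured by two automatic codes (e.g. ARES and
# TAME) on the same line list. Hypothetical input: two-column
# ASCII files with wavelength [A] and EW [mA] for each line.
import numpy as np

wav, ew_ares = np.loadtxt("ares_ews.dat", unpack=True)
_, ew_tame = np.loadtxt("tame_ews.dat", unpack=True)

diff = ew_tame - ew_ares            # EW_TAME - EW_ARES [mA]
print("mean difference: %.2f mA" % diff.mean())
print("std deviation  : %.2f mA" % diff.std())

# Linear fit of the differences against wavelength; a slope
# consistent with zero indicates no wavelength dependence.
slope, zero = np.polyfit(wav, diff, 1)
print("slope with wavelength: %.3e mA/A" % slope)
\end{verbatim}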
\begin{table}
\caption{\ion{Fe}{i} and \ion{Fe}{ii} abundances and standard deviations}
\label{tafe}
\centering
\begin{tabular}{lrr}
\noalign{\smallskip}
\noalign{\smallskip}
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
Star & $\rm[Fe I/H]$ & $\rm[Fe II/H]$ \\
\hline
\noalign{\smallskip}
HD4747 & $-0.28 \pm 0.06 $ & $-0.28 \pm 0.10 $ \\
HD52756 & $ 0.09 \pm 0.10 $ & $ 0.03 \pm 0.15 $ \\
HD74014 & $ 0.23 \pm 0.07 $ & $ 0.16 \pm 0.11 $ \\
HD89707 & $-0.35 \pm 0.09 $ & $-0.40 \pm 0.12 $ \\
HD167665 & $-0.11 \pm 0.10 $ & $-0.09 \pm 0.10 $ \\
HD189310 & $-0.03 \pm 0.10 $ & $-0.10 \pm 0.22 $ \\
HD211847 & $-0.10 \pm 0.06 $ & $-0.11 \pm 0.09 $ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD3277 & $-0.10 \pm 0.05 $ & $-0.12 \pm 0.08 $ \\
HD17289 & $-0.13 \pm 0.08 $ & $-0.09 \pm 0.13 $ \\
HD30501 & $-0.09 \pm 0.10 $ & $-0.14 \pm 0.18 $ \\
HD43848 & $ 0.18 \pm 0.13 $ & $ 0.14 \pm 0.25 $ \\
HD53680$^\star$ & $-0.37 \pm 0.24 $ & $-0.61 \pm 0.41 $ \\
HD154697 & $ 0.10 \pm 0.06 $ & $ 0.06 \pm 0.10 $ \\
HD164427A & $ 0.15 \pm 0.06 $ & $ 0.14 \pm 0.09 $ \\
HIP103019$^\star$ & $-0.34 \pm 0.29 $ & $-0.68 \pm 0.45 $ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD74842 & $-0.12 \pm 0.09 $ & $-0.10 \pm 0.11 $ \\
HD94340 & $ 0.10 \pm 0.06 $ & $ 0.08 \pm 0.11 $ \\
HD112863 & $-0.12 \pm 0.09 $ & $-0.14 \pm 0.10 $ \\
HD206505 & $ 0.10 \pm 0.09 $ & $ 0.08 \pm 0.12 $ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\noalign{\smallskip}
\end{tabular}
\tablefoot{\tablefoottext{$\star$}{These stars were discarded from the
abundance analysis due to the large scatter in the $\ion{Fe}{i}$ and
$\ion{Fe}{ii}$ abundances, which may be related to the fact that these
stars are probably not well classified and may in fact have
lower $T_{\rm eff}$ (see Section~\ref{secabu}).}
}
\end{table}
\section{Chemical abundances\label{secabu}}
We compute the EWs using ARES, for consistency with the chemical abundance
analysis in \citet{adi12b}. We also use the 2010 version of the
MOOG~\footnote{The MOOG code can be
downloaded at: \url{http://www.as.utexas.edu/~chris/moog.html}}
code~\citep{sne73} together with Kurucz ATLAS9 stellar model
atmospheres~\citep{kur93} for chemical abundance determination.
We first check the \ion{Fe}{i} and \ion{Fe}{ii} abundances, using the line
list from \citet{sou08}. The high dispersion, $\sigma$, of the Fe
abundances (see Table~\ref{tafe}) of the stars HD~53680 and
HIP~103019 suggests that these stars may have lower effective
temperatures than those given in Table~\ref{tpar}.
In Fig.~\ref{fspec} we depict the normalized spectra of the coolest
stars in the sample together with some spectra of late-G, K-type dwarfs from
the HARPS database~\citep[e.g.][]{sou08}.
The $H\alpha$ profiles of these
two stars do not follow the temperature sequence: they
appear to be the coolest objects in Fig.~\ref{fspec}.
In addition, these stars were discarded as BD candidates
by \citet{sah11}. We note the unusually high surface gravity
estimated for the star HD~53680, which is likely spurious, consistent
with the large scatter in the Fe abundances and the difference
between the \ion{Fe}{i} and \ion{Fe}{ii} abundances. This star is
catalogued in the SIMBAD database as a K6V star in a visual binary
system. The narrow $H\alpha$ line and the large dispersion in the
\ion{Fe}{i} and \ion{Fe}{ii} abundances may indicate an even later
spectral type.
For these reasons, these two stars deserve further analysis, and
from this point on we do not consider them.
We use the linelist in \citet{nev09} on the 17 remaining stars of the
CORALIE sample
(seven stars with confirmed BDs, six with discarded BD candidates,
and four without detected BD companions)
to derive element abundances of \ion{Na}{i}, \ion{Mg}{i}, \ion{Al}{i},
\ion{Si}{i}, \ion{Ca}{i}, \ion{Sc}{i}, \ion{Sc}{ii}, \ion{Ti}{i}, \ion{Ti}{ii},
\ion{V}{i}, \ion{Cr}{i}, \ion{Cr}{ii}, \ion{Mn}{i}, \ion{Co}{i} and \ion{Ni}{i}.
Some lines of the original list are not considered: lines not detected by
ARES, lines discarded in \citet{adi12b}, and lines whose abundances
fall outside the $3\sigma$ range.
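The last criterion amounts to a sigma clipping of the line-by-line
abundances of each element. A minimal sketch of such a filter is given
below (the iterative scheme is an assumption; a single-pass clipping
would be analogous):
\begin{verbatim}
import numpy as np

def clip_lines(abund, nsigma=3.0, max_iter=10):
    """Reject line abundances outside the nsigma range
    around the mean; abund holds the individual line
    abundances of one element for one star."""
    abund = np.asarray(abund, dtype=float)
    keep = np.ones(abund.size, dtype=bool)
    for _ in range(max_iter):
        mean, sigma = abund[keep].mean(), abund[keep].std()
        new_keep = np.abs(abund - mean) < nsigma * sigma
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return abund[keep].mean(), abund[keep].std(), keep
\end{verbatim}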
The abundance results are shown in Tables~\ref{tabu1},~\ref{tabu2}
and~\ref{tabu3}.
The chemical abundances of the HARPS sample were taken from
\citet{adi12b}, who used exactly the same tools and model atmospheres.
\section{Discussion\label{secdisc}}
\citet{gon97, gon00} already noticed that giant planet hosts tend to be
more metal-rich than stars without detected planets.
\citet{san01} provided supporting evidence for a metal-rich origin of
giant-planet host stars, and subsequent studies confirmed this result
(e.g.~\citealt{san04,san05,val05}).
Recent studies show that Neptune and super-Earth class planet hosts have a
different metallicity distribution, more similar to stars without planets
(e.g. \citealp{udr06}; \citealp{sou08,sou11}; \citealp{ghe10};
\citealp{may11}; \citealp{buc12}).
In this section, we inspect the abundance ratios of different elements,
$\rm [X/Fe]$, as a function of metallicity for our BD-companion stellar
sample, and compare them with those of the planet-host stars
analysed by \citet{adi12b}.
We also study the distributions of the different element
abundances in the different samples, and we compare them with other stars with
and without planets as a function of the minimum mass of the most-massive
companion of each host star, $m_C \sin{i}$.
\subsection{Galactic abundance trends}
The abundances of the refractory elements in the CORALIE sub-sample
exhibit a behaviour similar to that of other stars with and without planets
analysed in previous works~\citep{nev09,adi12b}
(see Fig.~\ref{ftrend}).
Stars with BDs follow the Galactic abundance trend except for some particular
elements (Co, Si, Sc) whose abundances are slightly lower than expected.
These exceptions may be due to the small number of lines available to derive a
reliable mean abundance for certain elements (e.g. \ion{Sc}{i} for HD~167665).
Stars with BD companions appear to be located at an intermediate range of
metallicity, between stars with and without planets.
In Fig.~\ref{falpha} we display the mean abundance ratio of the
$\alpha$-elements Mg, Si, Ca and Ti (with
[X$_\alpha$/H] computed as the sum of the individual element abundances
[X/H] divided by 4, and [X$_\alpha$/Fe]=[X$_\alpha$/H]$-$[Fe/H])
against the metallicity of stars with small and giant planets
from~\citet{adi12a}, together with stars with BDs.
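For reference, the quantities [X$_\alpha$/H] and [X$_\alpha$/Fe]
defined above reduce to the following arithmetic (a trivial sketch with
made-up abundance values):
\begin{verbatim}
# [X_alpha/H]: mean of the four alpha-element abundances;
# [X_alpha/Fe] = [X_alpha/H] - [Fe/H]. Values are made up.
mg_h, si_h, ca_h, ti_h = 0.05, 0.02, 0.04, 0.01
fe_h = 0.03

x_alpha_h = (mg_h + si_h + ca_h + ti_h) / 4.0
x_alpha_fe = x_alpha_h - fe_h
print(x_alpha_h, x_alpha_fe)   # ~0.03 and ~0.00
\end{verbatim}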
The range of metallicities of stars with confirmed BDs seems
to be narrower than that of the planet hosts, although this may not be
statistically significant due to the small number of stars in
the BD sample.
\citet{adi12a,adi12c} remarked that stars with either small or giant
planets at low metallicities, [Fe/H]~$< -0.3$~dex, tend to be
$\alpha$-enhanced and therefore to belong chemically to the thick-disc
population.
There is only one BD-host star at these relatively low metallicities, at
[Fe/H]~$\sim -0.35$~dex, showing a relatively low [X$_\alpha$/Fe] ratio;
therefore the question of whether this behaviour still holds for BD hosts
remains open.
The $\alpha$-element abundance ratios $[{\rm X_\alpha/Fe}]$ of BD hosts
seem to be consistent with the Galactic trend at higher metallicities,
although there are some stars at metallicities below solar with relatively
low [X$_\alpha$/Fe] ratios, even below the trend described by the stars
without detected planets of the HARPS sample. Stars with discarded BDs
and without detected BD candidates also follow the general abundance
trend.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf5.ps}}
\caption{$\alpha$-element abundance ratios $[{\rm X_\alpha/Fe}]$
against [Fe/H] for the samples of stars without planets (green empty squares),
with small planets (black empty triangles), with giant
planets (red empty triangles), BD-host stars (blue filled circles),
stars without known BD companions from the CORALIE sample (blue empty circles),
and stars with discarded BD candidates (violet filled circles).}
\label{falpha}
\end{figure}
\subsection{Element abundance distributions}
The histograms displayed in Fig.~\ref{fha} allow us to study the abundance
distribution [X/H] of our sample for each element.
Stars without planets have a
maximum abundance at $ \sim -0.1$ for most speciess.
Stars with giant planets exhibit a more metal-rich maximum at
$\sim 0.2-0.3$~dex, while the stars with small planets (whose maximum
is at $\sim -0.1$~dex) resemble to the ``single'' ones.
This general behaviour already noticed in previous papers~\citep{nev09,adi12a},
is in a good agreement with the so-called metallicity effect, i.e. the
strong correlation between stellar metallicity, and the likelihood of finding
giant planets~\citep{san01}.
The abundance distribution of the sample of confirmed BD-host stars appears to
be located in between the stars with small planets and stars with giant
planets.
In fact, some element distributions (Na, Si, Mg, Mn) are more similar to those
of giant-planet hosts, whereas for other elements (Ti, Cr, Co, Ni) the behaviour
is closer to that of the stars without planets (see Fig.~\ref{fha}).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf6a.ps}}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf6b.ps}}
\caption{Histograms of the $\alpha$-element abundances, $[{\rm X_\alpha/H}]$
(top panel), of the samples of stars without planets (green solid
line), stars with small planets (black dashed-double-dotted line),
stars with giant planets (red dashed-dotted line), stars with
confirmed BD companions (blue dashed line) and stars with discarded
BD candidates, i.e. binaries with low-mass M-dwarf companions
(violet dotted line).
The left y-axis of the top panel gives the number of stars
with and without small and giant planets, whereas the right y-axis
gives the number of stars with BD-companion candidates.
The lower panel shows the cumulative histogram.}
\label{fhalpha}
\end{figure}
In Figs.~\ref{fhalpha} and~\ref{fhfe} we display the distribution and
cumulative histograms of the $\alpha$-element abundances (including the
species \ion{Mg}{i}, \ion{Si}{i}, \ion{Ca}{i}, \ion{Ti}{i}),
$[{\rm X_\alpha/H}]$, and the Fe-peak element abundances (including the
species \ion{Cr}{i}, \ion{Mn}{i}, \ion{Co}{i}, \ion{Ni}{i}),
$[{\rm X_{\rm Fe}/H}]$.
Here it appears more clearly that confirmed BDs seem to
behave differently from giant planets.
Although the sample is small, it may tentatively
point to a bimodal distribution with two peaks, one at the position of the
small-planet distribution and one at the position of the giant-planet
distribution.
However, the mean values of the $[{\rm X_\alpha/H}]$
and $[{\rm X_{\rm Fe}/H}]$ abundances are roughly solar.
The $[{\rm X_\alpha/H}]$ and $[{\rm X_{\rm Fe}/H}]$ cumulative histograms
support the previous statement. Stars with small planets and
without detected planets track each other, whereas the stars with BDs
exhibit a slightly different behaviour, with a later rise of the
cumulative histogram that resembles that of stars with small planets
only at [X/H]~$>0.1$~dex.
The cumulative histogram of stars with giant planets clearly manifests
a later increase towards high metallicities, reaching saturation
at [X/H]~$\sim 0.3$~dex.
We perform a K-S test to statistically evaluate the
significance of this apparent difference in behaviour
(see Table~\ref{tks}).
This test reveals a clear difference between the BD-host sample
and the GP sample, but the SP and NP samples seem to be
statistically very similar to the BD sample.
The number of stars with confirmed BDs must be increased in
order to be able to distinguish these populations.
On the other hand, in Table~\ref{tks} we also show the
same K-S test for the stars with discarded BDs, which are in fact
binaries hosting low-mass M dwarfs. Although again there are only six
stars in this sample, it appears to be statistically different from all
of the GP, SP and NP samples, especially in the Fe-peak element
abundances. However, the significance is lower than in the comparison
with confirmed BD-host stars.
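The entries of Table~\ref{tks} can be computed with any standard
two-sample K-S routine. A minimal sketch, with hypothetical placeholder
abundance arrays, is:
\begin{verbatim}
# Two-sample Kolmogorov-Smirnov test between the BD-host
# sample and one comparison sample (SP, GP or NP). The
# [X_alpha/H] arrays below are placeholder values.
import numpy as np
from scipy.stats import ks_2samp

x_bd = np.array([-0.10, 0.05, 0.20, -0.28, 0.23, -0.35, -0.03])
x_gp = np.array([0.25, 0.10, 0.30, 0.15, 0.05, 0.35, 0.20])

stat, pvalue = ks_2samp(x_bd, x_gp)
# A small p-value indicates that the two cumulative
# distributions are significantly different.
print("D = %.2f, significance = %.2f" % (stat, pvalue))
\end{verbatim}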
\begin{table}
\caption{Significance of the K-S test of the BD sample}
\label{tks}
\centering
\begin{tabular}{lrrr}
\hline
\hline
BD$^\star$ & SP & GP & NP \\
\hline
& \multicolumn{3}{c}{$\alpha$-element abundances} \\
\hline
Discarded & 0.33 & 0.42 & 0.31 \\
Confirmed & 0.87 & 0.08 & 0.83 \\
\hline
& \multicolumn{3}{c}{Fe-peak element abundances} \\
\hline
Discarded & 0.15 & 0.53 & 0.19 \\
Confirmed & 0.72 & 0.13 & 0.77 \\
\hline
\end{tabular}
\tablefoot{\tablefoottext{$\star$}{K-S test evaluating the significance
of the behaviour of the abundances [X$_\alpha$/H] and [X$_{\rm Fe}$/H]
of BD hosts and of stars with discarded BDs (see Section~\ref{secdisc}),
in comparison with stars with small planets (SP), stars with giant
planets (GP) and stars without detected planets (NP).
The K-S statistic takes values between 0 and 1 and gives a small
significance value when the cumulative distribution of the BD sample is
significantly different from that of the SP, GP or NP sample.}
}
\end{table}
\subsection{Abundance ratios $\rm [X/H]$ against companion mass}
The $m_C \sin i$ values of the BD candidates in our sample are
higher than $13\, M_J$ (the lightest BD companion
analysed in this work orbits HD~211847, with $m_C \sin i \sim 19.2\,
M_J$), but two of the most-massive giant planets of the HARPS sample
exceed this value.
The boundary between brown dwarfs and giant planets has been
extensively investigated in the literature, and some works agree with
the definition that brown dwarfs can burn the deuterium that is
present when they form, and giant
planets cannot~\citep[e.g.][]{bur97}. In \citet{bod13}, the
borderline between giant planets and brown dwarfs is found to depend
only slightly on different parameters, such as core mass, stellar
mass, formation location, solid surface density in the protoplanetary
disc, disc viscosity, and dust opacity.
More than 50~\% of the initial deuterium is burned for masses
above $11.6 - 13.6$~$M_J$, in agreement with previous determinations
that do not take the formation process into account.
Thus, we keep the mass boundary at $\sim 13\, M_J$ to distinguish
between giant planets and brown dwarfs.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf7a.ps}}
\resizebox{\hsize}{!}{\includegraphics{aaBDsf7b.ps}}
\caption{Same as Fig.~\ref{fhalpha} but for Fe-peak element
abundances.}
\label{fhfe}
\end{figure}
The metal content of stars at birth surely affects the formation of
planetary-mass companions, but is this also true for stars with BD
companions?
To answer this question, we depict in Fig.~\ref{fam} the chemical
element abundances, [X/H], as a function of the minimum mass of the
most-massive substellar companion, which can be a small planet, a
giant planet or a brown dwarf according to its $m_C \sin i$ value.
Qualitatively, these element abundances seem to increase progressively
with the companion mass from small planets until reaching a
maximum at about 1~$M_J$, and then to decrease slightly when entering
the BD regime.
The scatter in [X/H] may be due to the different intrinsic
metallicities of the stars at every bin in $m_C \sin i$.
In Fig.~\ref{fama} we display the mean values of these element
abundances, [X/H], in bins of $m_C \sin i$.
The standard deviations from the mean values differ from
element to element, but they stay around $0.15-0.2$~dex.
These mean element abundances remain roughly constant for small planets
with masses lower than $0.04\, M_J$.
From this point on, the abundances grow with the companion mass throughout
the giant-planet range, from low-mass up to Jupiter-mass
planets, reaching a maximum at $\sim 0.8\, M_J$ (see e.g.
\ion{Si}{i}, \ion{Ti}{i}, \ion{Cr}{i}, and \ion{Ni}{i}).
For more massive giant-planet companions, these abundances start to
decrease slowly with the companion mass towards the high-mass BD
companions.
The stars with BD companions simply follow the decreasing trend of the
stars with giant planets.
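The binned averages shown in Fig.~\ref{fama} follow from grouping the
stars in equal-sized logarithmic bins of $m_C \sin i$ and averaging
[X/H] within each bin. A schematic version (the function and array
names are placeholders) is:
\begin{verbatim}
# Mean [X/H] in equal-sized logarithmic bins of companion
# mass, with the standard deviation of the mean per bin.
import numpy as np

def binned_abundances(m_c_sini, x_h, nbins=10):
    logm = np.log10(m_c_sini)
    edges = np.linspace(logm.min(), logm.max(), nbins + 1)
    centers, means, errors = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (logm >= lo) & (logm < hi)
        if mask.sum() == 0:
            continue
        centers.append(0.5 * (lo + hi))
        means.append(x_h[mask].mean())
        # standard deviation of the mean: sigma / sqrt(N)
        errors.append(x_h[mask].std() / np.sqrt(mask.sum()))
    return (np.array(centers), np.array(means),
            np.array(errors))
\end{verbatim}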
\begin{figure*}
\includegraphics[width=17.5cm]{aaBDsf8.ps}
\caption{Abundance ratios $[{\rm X_\alpha/H}]$ and $[{\rm X_{Fe}/H}]$
of $\alpha$-elements (left-top panel) and iron-peak elements
(right-top panel) against the minimum mass of the most-massive
companion, $m_C \sin i$, for stars with small planets (black empty
triangles), with giant planets (red empty triangles), and with
confirmed BD companions (blue filled circles). The average values are
shown as violet three-dotted-dashed lines.
Lower panels show the mean abundances of stars with small and giant
planets, and with confirmed BD companions, in equal-sized bins
appropriate for the logarithmic scale of companion masses, as in
Fig.~\ref{fama}. Error bars represent the standard deviation of the mean
divided by the square root of the number of stars in each bin.
The green solid line depicts the parabolic fit to the data, whereas
the violet three-dotted-dashed lines show the zero-order fits.}
\label{fmamf}
\end{figure*}
We also decided to use the mean values of the $\alpha$-elements,
$[{\rm X_\alpha/H}]$, and of the Fe-peak elements, $[{\rm X_{\rm Fe}/H}]$, of each
star in these samples. Thus, in Fig.~\ref{fmamf} we display these
abundances as a function of the minimum mass of the most-massive
companion, $m_C \sin i$.
The scatter of these abundances is high due to the
different global metallicities of the stars.
The average values of these abundances
are given in Table~\ref{tmamf} for the SP, GP and BD samples. One can
see in Fig.~\ref{fmamf} the different levels of these three samples,
although the BD sample shows an average value consistent with that of the SP
sample within the error bars (see Table~\ref{tmamf}).
\begin{table}
\caption{Mean abundance of SP, GP and BD samples}
\label{tmamf}
\centering
\begin{tabular}{lrrr}
\hline
\hline
Method$^\star$ & SP & GP & BD \\
\hline
& \multicolumn{3}{c}{$\alpha$-element abundances} \\
\hline
Average & $-0.04\pm0.03$ & $0.12\pm0.02$ & $-0.01\pm0.07$ \\
Fit & $-0.05\pm0.01$ & $0.12\pm0.01$ & $0.01\pm0.04$ \\
\hline
& \multicolumn{3}{c}{Fe-peak element abundances} \\
\hline
Average & $-0.11\pm0.04$ & $0.11\pm0.02$ & $-0.03\pm0.08$ \\
Fit & $-0.10\pm0.02$ & $0.12\pm0.01$ & $-0.02\pm0.05$ \\
\hline
\end{tabular}
\tablefoot{\tablefoottext{$\star$}{Average values of the
abundances [X$_\alpha$/H] and [X$_{\rm Fe}$/H]
of the SP, GP and BD samples, depicted as three-dotted-dashed
lines in the top panels of Fig.~\ref{fmamf}, together with the values
provided by zero-order fits to the weighted averages of these element
abundances, displayed as three-dotted-dashed lines in the bottom panels
of Fig.~\ref{fmamf}. The error bars of the average values are
$\Delta_\sigma=\sigma/\sqrt{N}$, with $\sigma$ the standard
deviation and $N$ the number of stars in each sample.
The errors on the fit values are those of the coefficients of the
zero-order functions.}
}
\end{table}
In the lower panels of Fig.~\ref{fmamf}, we depict the weighted
average of these abundances in each mass bin, where the trend
appears more clearly.
We fit a parabolic function, using the IDL routine \textsc{curvefit}, to
these mean values of the $\alpha$-element
and Fe-peak element abundances. The resulting trends reach a
maximum abundance of $\sim 0.15 \pm 0.01$~dex at companion masses of
$m_C \sin i \sim 1.42\pm 0.17$ and $1.32\pm 0.18$~$M_J$, respectively.
The parabolic fit, with $\chi_\nu^2$ values of 3.2 and 2.3,
respectively, provides a better representation of the average
abundances than a linear fit (with $\chi_\nu^2\sim 10.5$ and 8.6).
We perform an F-test to confirm that the parabolic curve fits
these data sets better than the linear fit.
We use the IDL routine \textsc{mpftest} from the Markwardt
library\footnote{The Markwardt IDL library can be downloaded at:\\
\url{http://cow.physics.wisc.edu/~craigm/idl/idl.html}}.
The result reveals
a significance level of $1\times10^{-5}$ for the $\alpha$-elements
($\rm F = 45.8$) and $3\times10^{-6}$ for the iron-peak elements
($\rm F = 37.9$), implying that the parabolic fit is significantly
better than the linear one.
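This model comparison amounts to a standard F-test on the $\chi^2$
values of two nested fits. A minimal sketch of the procedure, with
placeholder binned values rather than the actual data of
Fig.~\ref{fmamf}, is:
\begin{verbatim}
# F-test comparing a parabolic and a linear fit to binned
# mean abundances (placeholder data; weights = 1/error).
import numpy as np
from scipy.stats import f as f_dist

logm = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
xh = np.array([-0.10, -0.05, 0.05, 0.14, 0.12, 0.05, -0.02])
err = np.full(logm.size, 0.03)

def chi2(deg):
    coeff = np.polyfit(logm, xh, deg, w=1.0 / err)
    resid = (xh - np.polyval(coeff, logm)) / err
    return (resid ** 2).sum(), logm.size - (deg + 1)

chi2_lin, dof_lin = chi2(1)
chi2_par, dof_par = chi2(2)

# F: improvement in chi^2 per extra parameter, relative to
# the reduced chi^2 of the more complex (parabolic) model.
F = ((chi2_lin - chi2_par) / (dof_lin - dof_par)
     / (chi2_par / dof_par))
p = f_dist.sf(F, dof_lin - dof_par, dof_par)
print("F = %.1f, significance level = %.1e" % (F, p))
\end{verbatim}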
We also fit a zero-order function with three
levels describing the SP, GP and BD samples; the resulting values are
given in Table~\ref{tmamf}, with $\chi_\nu^2$ values of 3.0 and 2.6
for the $\alpha$-element and Fe-peak element abundances, respectively.
We compare this 3-step model with the parabolic fit using an F-test,
resulting in $F$ values of $-0.5$ and 0.6, which give significance
levels of 0 and $0.5$ for the $\alpha$-elements and
Fe-peak elements, respectively. Therefore, the 3-step model
provides a description of the data similar to that of the parabolic fit.
We also check that a 2-step model provides a worse fit than the 3-step
model (with $\chi_\nu^2\sim 3.6$ and 3.1 for the $\alpha$-elements and
Fe-peak elements, respectively).
Finally, \citet{sah11} noticed that there is a lack of BD companions
with masses in the range $m_C \sin i \sim $~35--55~$M_J$.
More recently, \citet{mag14} collected the known BD companions
from different studies and confirmed this gap for stars with
periods shorter than 100 days.
Although the statistics may still be poor, these authors
suggest that BD companions below this gap, i.e. with
$m_C \sin i < 42$~$M_J$, may have formed in protoplanetary discs
as giant planets do, probably through the disc instability-fragmentation
mechanism~\citep{bos97,sta09},
whereas BD companions with $m_C \sin i > 42$~$M_J$ may have
formed by molecular cloud fragmentation, as
stars do~\citep{pad04,hen08}.
In the bottom panels of Fig.~\ref{fmamf} one can see that, on average,
stars with BDs at masses $m_C \sin i$ below 42~$M_J$ have
higher $\alpha$-element and Fe-peak element abundances than stars with BDs
above this mass.
In fact, stars with massive BD companions have abundances more similar
to those of stars without planets.
This might tentatively support the above statement, with the low-mass
BD companions being formed in protoplanetary discs as giant planets are,
and the high-mass BDs being formed by cloud fragmentation as stars are.
\section{Conclusions\label{sec7}}
We have analyzed a subsample of stars with candidate BD companions
from the CORALIE radial velocity survey. We derive chemical abundances
of several elements including $\alpha$-elements and Fe-peak elements.
A comparison with the chemical abundances of stars with giant planets
shows that BD-host stars seem to behave differently.
In particular, we compute the abundance histograms of [X$_\alpha$/H] and
[X$_{\rm Fe}$/H], revealing a mean abundance at about solar for the
BD-host sample, whereas that of stars without planets (NP) and with small
planets (SP) remains at $-0.1$~dex, and that of stars with giant planets
(GP) at roughly $+0.10$~dex.
The cumulative histograms of the [X$_\alpha$/H] and
[X$_{\rm Fe}$/H] abundances exhibit the same situation, with the
stars without planets and with small planets tracking each other,
similarly to the stars with BDs.
However, the stars with giant planets reach saturation later,
at [X/H]~$\sim 0.3$~dex.
A Kolmogorov-Smirnov (K-S) test does not show a statistically
significant difference between the cumulative distributions of the
SP, NP and BD samples, but clearly separates the GP and BD samples.
Finally, we depict the [X$_\alpha$/H] and [X$_{\rm Fe}$/H] abundances
versus the minimum mass of the most-massive substellar companion,
$m_C \sin i$, and find a peak of these element abundances at a
companion mass of $m_C \sin i \sim 1.3-1.4\, M_J$, with the abundances
growing with the companion mass from small planets to Jupiter-like
planets and then decreasing towards massive BD companions.
A 3-step model also provides a similar description of the data,
with no statistically significant difference from the parabolic
model.
Recently, \citet{sah11} and \citet{mag14} suggested that the
formation mechanism may be different for BD companions below and
above 42~$M_J$. We find that BDs below this mass tend to have higher
abundances than those above it, which may support this
conclusion: BDs with $m_C \sin i < 42$~$M_J$ may form by disc
instability-fragmentation, whereas high-mass BDs may form as stars do,
by cloud fragmentation.
\section{Acknowledgments}
D.M.S. is grateful to the Spanish Ministry of Education, Culture and
Sport for the financial support from a collaboration grant, and to
the PhD contract funded by Fundaci\'on La Caixa.
J.I.G.H. and G.I. acknowledge financial support from the
Spanish Ministry project MINECO AYA2011-29060, and J.I.G.H.
also from the Spanish Ministry of Economy and Competitiveness
(MINECO) under the 2011 Severo Ochoa Program MINECO SEV-2011-0187.
N.C.S. acknowledges the support by the European Research
Council/European Community under the FP7 through Starting Grant
agreement number 239953, and the support in the form of an Investigador
FCT contract funded by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia
(FCT) /MCTES (Portugal) and POPH/FSE (EC).
This research has made use of the IRAF facilities, and the SIMBAD
database, operated at the CDS, Strasbourg, France.
\bibliographystyle{aa}
\section{\label{sec:intro} Introduction}
Two processes act in concert to align grains with the interstellar magnetic
field: (1) the grain's principal axis of greatest moment of inertia
$\hat{a}_1$ aligns with respect to its angular momentum vector $\mathbf{J}$
and (2) $\mathbf{J}$ aligns with respect to the magnetic field vector
$\mathbf{B}$. Purcell (1979) noted that internal mechanisms for dissipating
rotational energy drive the grain to its lowest energy state for a given
$\mathbf{J}$, namely steady rotation with $\hat{a}_1 \parallel \mathbf{J}$.
This occurs on a much shorter timescale than that on which external
processes align $\mathbf{J}$ relative to $\mathbf{B}$.
Purcell (1979) identified two internal dissipation mechanisms. Inelastic
dissipation results from the periodic mechanical
stresses experienced by a grain that does not
rotate steadily about a principal axis. The existence of this process
is fairly obvious, but calculating the dissipation rate is
a challenging problem (see, e.g., Sharma et al. 2005 and references therein).
Purcell (1979) introduced
a second, subtle effect, which he termed ``Barnett dissipation''.
When a grain does not rotate steadily about a principal axis, the angular
velocity vector $\bomega$
varies periodically in a coordinate system attached to the
grain. If the grain consists of a paramagnetic material, then the
microscopic spins (with gyromagnetic ratio $\gamma_g$)
attempt to align with the fictitious
``Barnett-equivalent''
magnetic field $\mathbf{B}_{\rm BE} = \bomega / \gamma_g$. As
the grain magnetization attempts to follow $\mathbf{B}_{\rm BE}$, rotational
kinetic energy is dissipated. This process is analogous to a magnetic
resonance experiment, where the dissipated energy is provided instead by the
applied radiation field.
Purcell (1979) provided a heuristic derivation of the Barnett dissipation
rate for oblate
grains with dynamic symmetry. ``Dynamic symmetry'' refers to the case that
$I_2 = I_3$,
where $I_i$ are the moments of inertia associated with the principal axes
$\hat{a}_i$. Thus, for oblate grains with this symmetry,
$I_1 > I_2 = I_3$.
(Henceforth, the term ``oblate'' shall always refer to dynamic,
rather than geometric, symmetry.)
In this case, $\mathbf{B}_{\rm BE}$ consists of a static
component $(\omega_{\parallel}/\gamma_g) \hat{a}_1$ plus a component
$\mathbf{B}_{\rm BE, \, rot}$ with
magnitude $\omega_{\perp}/\gamma_g$ that rotates in the
$\hat{a}_2-\hat{a}_3$ plane with angular speed $\omega_{\rm rot}$.
Solving the Euler equations yields
\be
\label{eq:omega_parallel_oblate}
\omega_{\parallel} = \frac{J}{I_1} \cos \gamma~~~,
\ee
\be
\omega_{\perp} = \frac{J}{I_2} \sin \gamma~~~,
\ee
and
\be
\omega_{\rm rot} = \frac{J(I_1-I_2)}{I_1 I_2} \cos \gamma~~~,
\ee
where $\gamma$ is the (constant) angle between $\mathbf{J}$ and $\hat{a}_1$.
Assuming $\mathbf{J}$ and $I_1$ are constant,
it is
convenient to introduce a dimensionless measure of the rotational energy
$E$:
\be
\label{eq:q}
q \equiv \frac{2 I_1 E}{J^2} = 1 + (r_2 - 1) \sin^2 \gamma~~~,
\ee
where the final equality is for oblate grains.
Note that, for oblate grains, $q$ ranges from 1 to $r_2 \equiv I_1 / I_2$.
Purcell (1979) argued that the dissipation rate is given by
\be
\label{eq:Barnett_diss}
\left( \frac{dE}{dt} \right)_{\rm Bar} = - V \chi^{\prime \prime}
B_{\rm BE, \, rot}^2 \omega_{\rm rot}~~~,
\ee
where $V$ is the grain volume and $\chi^{\prime \prime}$ is the
imaginary component of the magnetic susceptibility.
It is worth noting that, although this expression (as well as a variant in
Lazarian \& Draine 1999b) is widely used in grain alignment theory, it has
not yet been rigorously derived or experimentally verified.
Purcell adopted the low-frequency susceptibility
\be
\label{eq:chi_imag}
\chi^{\prime \prime} \approx \chi_0 \omega_{\rm rot} T_2~~~,
\ee
where $\chi_0$ is the static susceptibility and $T_2$ is the
spin-spin relaxation time.
Combining equations (\ref{eq:omega_parallel_oblate}) through
(\ref{eq:chi_imag}) yields
\be
\label{eq:dqdt_low_J}
\frac{dq}{dt} = - \tau_{\rm Bar}^{-1} (q-1) (r_2 - q)
\ee
with
\be
\label{eq:tau_Bar}
\tau_{\rm Bar} = \frac{\gamma_g^2 I_1 I_2^2}{2 \chi_0 V T_2 J^2}~~~.
\ee
More realistic approximations for $\chi^{\prime \prime}$ (e.g., Draine \&
Lazarian 1999) yield more complicated expressions for $dq/dt$, but retain
the linear dependence on $(q-1)$ and $(r_2-q)$ near $q=1$ and $q=r_2$,
respectively.
In the inverse process of Barnett dissipation, a fluctuation spontaneously
transfers some energy from the thermal reservoir provided by the grain
vibrational modes to the grain rotation. Lazarian \& Draine (1997) showed
that these thermal fluctuations can play an important role in grain alignment.
They examined the classic alignment model, developed by Purcell (1975, 1979),
in which a systematic torque $\mathbf{\Gamma}_{\rm sys}$, fixed in grain body
coordinates, spins the grain to ``suprathermal'' rotation. Thermal rotation,
arising solely from collisions with particles from a gas with temperature
$T_{\rm gas}$, is characterized by
$J \sim J_{\rm th} \equiv \sqrt{I_1 k_B T_{\rm gas}}$
($k_B$ is Boltzmann's constant).
Suprathermally-rotating grains, with $J \gg J_{\rm th}$, are impervious to
disalignment by random collisions with gas atoms.
Thus, $\mathbf{J}$ can gradually align with $\mathbf{B}$ via the
Davis-Greenstein (1951) mechanism. Purcell (1979) found that the most
important systematic torque results from the formation (and subsequent
ejection) of H$_2$ molecules at special sites on the grain surface.
The distribution of molecule-forming surface sites can change rapidly
compared with the Davis-Greenstein alignment rate (see, e.g., Lazarian
1995).
As a result of this resurfacing, $\mathbf{\Gamma}_{\rm sys} \cdot \hat{a}_1$
may reverse sign, sometimes spinning the grain down to
thermal rotation. (In inertial coordinates, $\mathbf{\Gamma}_{\rm sys}$
is always parallel or
anti-parallel to $\mathbf{J}$, when averaged over the grain rotation.)
These episodes, known as crossovers, were first studied
by Spitzer \& McGlynn (1979), who concluded that thorough disalignment of
$\mathbf{J}$ relative to $\mathbf{B}$
occurs after passage through a small number of crossovers. Lazarian \&
Draine (1997) found that the small disalignment of $\hat{a}_1$ from
$\mathbf{J}$ during periods of suprathermal rotation (due to thermal
Barnett
fluctuations) limits the minimum value of $J$ during a crossover, thereby
limiting the disalignment of $\mathbf{J}$ with $\mathbf{B}$.
Although Lazarian \& Draine (1997) found that thermal fluctuations during
periods of suprathermal rotation may aid grain alignment, these same authors
soon concluded that thermal fluctuations during periods of slow rotation
may severely suppress alignment.
Lazarian \& Draine (1999a) introduced the concepts of thermal flipping and
thermal trapping.
For an oblate grain, internal thermal fluctuations cause the angle $\gamma$
between $\mathbf{J}$ and $\hat{a}_1$ to vary stochastically (see
eq.~\ref{eq:q}). Whenever $\gamma$ crosses
$\pi/2$ (a ``thermal flip''),
$\mathbf{\Gamma}_{\rm sys} \cdot \mathbf{J}$ changes
sign. If flips occur rapidly, then the grain can only achieve suprathermal
rotation if it reaches, by random walk, a sufficiently high $J$ that the
flipping timescale (which increases with $J$) exceeds the spin-up timescale.
Grains for which suprathermal rotation is thereby suppressed are
``thermally trapped''.
Purcell (1979) considered only the contribution of electron paramagnetism
to Barnett dissipation. Lazarian \& Draine (1999b) showed that nuclear
paramagnetism can yield much larger dissipation rates for thermally-rotating
grains. When including ``nuclear relaxation'' in their analysis, they found
that grains with size up to $1 \micron$ are trapped.
Thus, the Purcell (1979) scenario of Davis-Greenstein alignment of
suprathermally rotating grains appears to fail, unless the grains contain
superparamagnetic inclusions (Jones \& Spitzer 1967).
Radiative torques (Harwit 1970a, b; Dolginov 1972; Draine \& Weingartner
1996, 1997; Weingartner \& Draine 2003; Hoang \& Lazarian 2008;
Lazarian \& Hoang 2007, 2008),
which are not fixed in grain body coordinates, have the potential to rapidly
align $\mathbf{J}$ with $\mathbf{B}$. Weingartner \& Draine (2003)
found that radiative torques can drive grains into various alignment states,
some characterized by thermal rotation and some by suprathermal rotation.
For the former states, the grains were thought to undergo rapid flipping.
This result was confirmed by additional calculations in Hoang \&
Lazarian (2008), who also noted that thermally rotating, aligned grains
may ultimately reach aligned states characterized by
suprathermal rotation, due to random gas
atom impacts.
Thermal flipping appears to play a critical role in grain alignment theory,
precluding the Purcell scenario (i.e., Davis-Greenstein alignment with
suprathermal rotation suppressing disalignment)
and affecting the aligned grain states in
the radiative torque scenario. Thus, a quantitative estimate of the flipping
rate is needed. This can be accomplished with the use of the Langevin
and/or Fokker-Planck equations. (Gardiner 2004 provides an excellent
introduction to stochastic methods.) In \S 2, I will
show that thermal flipping as described by Lazarian \& Draine (1999a) is,
in fact, not possible.
\section{\label{sec:langevin_eqn} The Langevin Equation for Internal
Relaxation}
The Langevin equation is a stochastic differential equation describing
the time evolution of the grain rotational energy:
\be
\label{eq:langevin}
dq = A(q) \, dt + \sqrt{D(q)} \, dw~~~,
\ee
where $dw$ is a Gaussian random variable with variance $dt$.
For Barnett dissipation (in the Purcell 1979 approximation),
the drift coefficient $A(q)$ is given by the right hand side of equation
(\ref{eq:dqdt_low_J}). Ideally, the diffusion
coefficient $D(q)$ would also be derived from the model for Barnett
relaxation, but no model has been developed with sufficient detail for
this to be possible. Instead, $D(q)$ can be determined (to within a
constant of integration) by demanding that
the probability current $S(q)$ vanish at all $q$ when thermal
equilibrium obtains. If $f(q) dq$ is the probability that the dimensionless
energy lies between $q$ and $q+dq$, then
\be
\label{eq:prob_current}
S(q) = A f - \frac{1}{2} \frac{d(fD)}{dq}
\ee
(eq.~5.2.8 in Gardiner 2004).
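For illustration, equation (\ref{eq:langevin}) can be integrated with
the Euler-Maruyama scheme. The sketch below uses the drift of equation
(\ref{eq:dqdt_low_J}), with time in units of $\tau_{\rm Bar}$, and a
toy diffusion coefficient that merely has the $(r_2-q)^2$ falloff near
$q=r_2$ derived below; it is not the exact $D(q)$, and the parameter
values are illustrative:
\begin{verbatim}
# Euler-Maruyama integration of dq = A dt + sqrt(D) dw for
# the dimensionless rotational energy q, in units of tau_Bar.
# A(q) is the Purcell drift; D(q) is a toy coefficient with
# the (r2 - q)^2 falloff near q = r2 (not the exact form);
# the 1/k factor crudely mimics weaker fluctuations at
# large k (cold dust and/or fast rotation).
import numpy as np

rng = np.random.default_rng(0)
r2, k = 1.5, 20.0  # illustrative I1/I2 and J^2/(2 I1 kB Td)

def A(q):
    return -(q - 1.0) * (r2 - q)

def D(q):
    return (4.0 / 3.0) * (r2 - 1.0) * (r2 - q) ** 2 / k

q, dt = 1.2, 1.0e-3
for _ in range(200000):
    dw = rng.normal(0.0, np.sqrt(dt))
    q += A(q) * dt + np.sqrt(D(q)) * dw
    q = min(max(q, 1.0), r2)  # numerical safeguard only
print("final q:", q)
\end{verbatim}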
Weingartner \& Draine (2003) defined the quantity
\be
\label{eq:s}
s \equiv 1 - \frac{2}{\pi} \int_0^{\alpha_{\rm max}} d\alpha \left[
\frac{I_3 (I_1 - I_2 q) + I_1 (I_2 - I_3) \cos^2
\alpha}{I_3 (I_1 - I_2) + I_1 (I_2 - I_3) \cos^2 \alpha} \right]^{1/2}~~~,
\ee
where
\be
\alpha_{\rm max} \equiv \cases{\pi/2 &, $q \le I_1/I_2$\cr
\cos^{-1} \left[ \frac{I_3 (I_2 q -I_1)}{I_1 (I_2 - I_3)} \right]^{1/2}
&, $q > I_1/I_2$\cr}~~~,
\ee
and showed that the density of energy states is constant in $s$. This
holds for grains with arbitrary $I_1$, $I_2$, $I_3$. For oblate grains
($I_2 = I_3$),
\be
s = 1 - \left( \frac{r_2 - q}{r_2 - 1} \right)^{1/2}~~~.
\ee
Thus, for oblate grains, the thermal equilibrium distribution function is
\be
f_{\rm TE}(q) \propto \exp(-kq) \frac{ds}{dq} \propto \exp(-kq)
(r_2 - q)^{-1/2}~~~,
\ee
where
\be
k \equiv \frac{J^2}{2 I_1 k_B T_d}~~~.
\ee
The thermal equilibrium distribution function is more complicated for grains
lacking dynamic symmetry, but still depends on $k$.
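As a numerical check, the general expression for $s$ in equation
(\ref{eq:s}) can be evaluated by quadrature and compared with the
closed form for oblate grains; a short sketch with illustrative
moments of inertia:
\begin{verbatim}
# Check that the general s(q) integral reduces to
# 1 - sqrt((r2 - q)/(r2 - 1)) when I2 = I3 (oblate case).
import numpy as np
from scipy.integrate import quad

I1, I2, I3 = 1.0, 0.6, 0.6  # oblate: I2 = I3 (illustrative)
r2 = I1 / I2

def s_general(q):
    # For I2 = I3, q <= I1/I2 always, so alpha_max = pi/2.
    def integrand(alpha):
        c2 = np.cos(alpha) ** 2
        num = I3 * (I1 - I2 * q) + I1 * (I2 - I3) * c2
        den = I3 * (I1 - I2) + I1 * (I2 - I3) * c2
        return np.sqrt(num / den)
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return 1.0 - (2.0 / np.pi) * val

for q in (1.0, 1.2, 1.5):
    s_oblate = 1.0 - np.sqrt((r2 - q) / (r2 - 1.0))
    print(q, s_general(q), s_oblate)  # the two values agree
\end{verbatim}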
The impossibility of thermal flipping can be simply demonstrated
by examining the relaxation at $q=r_2$ in the limit that the dust temperature
$T_d \rightarrow 0$. In this limit,
$k \rightarrow \infty$. As $T_d \rightarrow 0$, fluctuations cease to
contribute to the probability current $S(q)$, implying that
$d(fD)/dq \rightarrow 0$. This limiting behavior must hold for all $q$
(including $q=r_2$) and for any physically realizable probability
distribution $f(q)$. Normalization of $f(q)$ requires that any divergence
at $q=r_2$ be shallower than $f(q) \propto (r_2 -q)^{-1}$, unless
$f(q) = \delta(q-r_2)$.
Of course, the contribution of drift to the current,
$A(q) f(q)$, must also vanish at $q=r_2$. Evidently, it is necessary that
$A(q)$ falls off linearly or faster with $(r_2-q)$ near $q=r_2$. Note that
the Barnett dissipation rate of equation
(\ref{eq:dqdt_low_J}) does satisfy this condition.
Equation (\ref{eq:tau_Bar}) suggests that $\tau_{\rm Bar} \rightarrow 0$ as
$T_d \rightarrow 0$, since $\chi_0 \propto T_d^{-1}$. However, this cannot
be correct, since $A(q = r_2)$ would be undefined rather than
zero as $T_d \rightarrow 0$.
Equation (\ref{eq:tau_Bar}) does not hold for $T_d$ lower than the
Curie temperature. For such low temperatures, the material is ferromagnetic,
suggesting that $\tau_{\rm Bar}$ approaches a non-zero constant as
$T_d \rightarrow 0$.
Focusing now on the fluctuating term at $q=r_2$,
\be
\label{eq:lim}
\lim_{(k^{-1}, r_2-q) \rightarrow (0,0)} \frac{d[f(k, q) D(k,q)]}{dq} = 0~~~.
\ee
The limit only exists if it takes the same value for all paths along which
$(k^{-1}, r_2-q) \rightarrow (0,0)$. Since there exist paths for which
$(r_2-q) \rightarrow 0$ arbitrarily more rapidly than $k^{-1} \rightarrow 0$,
$d(fD)/dq$ may not contain any divergences with respect to $q=r_2$.
Thus, for $(r_2 -q) \ll 1$, $fD$ must either (1) be independent of $(r_2-q)$
or (2) fall off linearly or faster with $(r_2-q)$. If condition (1)
holds for a particular distribution $f_1(q)$, then it will not hold for
another distribution $f_2(q)$ having a different dependence on $(r_2-q)$ near
$q=r_2$. Thus, condition (2) must generally obtain, implying that
$D$ must fall off as $(r_2-q)^2$ or faster near
$q=r_2$. This implies that $D$ and $dD/dq$ both vanish at $q=r_2$.
If the diffusion coefficient is smooth with respect to $k$
(in the sense that $dD/dk$ exists for all $k$), then these conditions must
be satisfied for all $k$.
Thus, $A$, $D$, and $dD/dq$ all vanish at $q=r_2$, making this point a
``natural boundary'' (see \S 5.2.1e of Gardiner 2004).
A system can never reach a natural boundary if it begins at a different
point (i.e., with a different value of $q$). However, the system must
reach $q=r_2$ and return to lower $q$ (with a different sign
for $\cos \gamma$) in order for a flip to occur (see \S 2.5.2 of Weingartner
\& Draine 2003). Consequently, thermal
flipping is prohibited. This conclusion does not
depend on the form of $A(q)$, except that $A(q)$ decreases as $(r_2-q)$ or
faster for $q$ near $r_2$. It holds for any type of internal relaxation
and for grains with or without dynamic symmetry, so long as $dD/dk$ exists
for all $k$.
If thermal flipping is truly prohibited, then this result must obtain
regardless of the choice of variable. Although the current $S$ is
independent of variable, the two terms composing it, representing drift
and diffusion, are not.
When transforming variables in stochastic differential equations, the
ordinary rules of calculus only apply for linear transformations. Otherwise,
Ito's formula must be used (see \S 4.3.3 of Gardiner 2004). When the
Langevin equation (\ref{eq:langevin}) is transformed to variable $y(q)$,
the result is
\be
\label{eq:ito}
dy = \left[ A(q) \frac{dy}{dq} + \frac{1}{2} D(q) \frac{d^2 y}{dq^2} \right] dt
+ \sqrt{D(q)} \frac{dy}{dq} dw~~~.
\ee
Note the additional contribution to the drift coefficient when the Langevin
equation is written in the new variable.
Ito's formula, along with the relation $f(q) dq = f(y) dy$, yields
\be
\label{eq:diff_term}
\frac{d}{dy} \left[ f(y) D(y) \right] = \frac{d}{dq} \left[ f(q) D(q)
\right] + f(q) D(q) \frac{d^2 y/dq^2}{dy/dq}~~~.
\ee
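As a check of this identity: equation (\ref{eq:ito}) shows that the diffusion coefficient
transforms as $D(y) = D(q)\, (dy/dq)^2$, while $f(y) = f(q)\, (dy/dq)^{-1}$; hence
$f(y) D(y) = f(q) D(q)\, dy/dq$, and since $d/dy = (dy/dq)^{-1}\, d/dq$,
\be
\frac{d}{dy} \left[ f(y) D(y) \right] = \frac{1}{dy/dq}\, \frac{d}{dq} \left[ f(q) D(q)\,
\frac{dy}{dq} \right] = \frac{d}{dq} \left[ f(q) D(q) \right] + f(q) D(q)\,
\frac{d^2 y/dq^2}{dy/dq}~~~,
\ee
in agreement with equation (\ref{eq:diff_term}).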
If $y(q) \propto (r_2 -q)^p$ (with $p \ne 0$) for $q$ near $r_2$, then
\be
\frac{d^2 y/dq^2}{dy/dq} \propto (r_2 -q)^{-1}~~~.
\ee
Since $D(q) f(q) \propto (r_2 -q)^n$ with $n > 1$, the second
term in equation (\ref{eq:diff_term}) vanishes at $q=r_2$. Thus, if the diffusion
contribution to the current vanishes at the point $q=r_2$ for the variable $q$,
then it does so for an arbitrary variable.
To illustrate the above arguments in a concrete setting, I will now
discuss the diffusion coefficient $D(q)$ for an oblate grain and the
approximate dissipation rate of equation (\ref{eq:dqdt_low_J}).
Setting the probability current equal to zero for thermal equilibrium yields
\be
\label{eq:D_q}
D(q) = \frac{1}{f_{\rm TE}(q)} \left[ D(1) f_{\rm TE}(1) + 2
\int_1^q A(q^{\prime}) f_{\rm TE}(q^{\prime}) dq^{\prime} \right]~~~.
\ee
Upon integrating,
\begin{eqnarray}
\nonumber
\label{eq:D}
k^2 \tau_{\rm Bar} D(q) & = & [3 + 2 k (q-1)] (r_2 - q) + C(k) (r_2 - q)^{1/2}
\exp(kq)\\
& & - k^{-1/2} [3 + 2 k (r_2 -1)] (r_2 - q)^{1/2} \exp[-k(r_2 -q)]
\int_0^{\sqrt{k (r_2 -q)}} \exp(x^2) dx
\end{eqnarray}
with
\begin{eqnarray}
\nonumber
C(k) & = & k^2 \exp(-k) (r_2 -1)^{-1/2} D(q=1, k) \tau_{\rm Bar} - 3 \exp(-k)
(r_2 -1)^{1/2}\\
& & + k^{-1/2} [3 + 2 k (r_2 -1)] \exp(-k r_2)
\int_0^{\sqrt{k (r_2 -1)}} \exp(x^2) dx~~~.
\label{eq:C}
\end{eqnarray}
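For numerical evaluation, it is convenient to note that the combination of factors
appearing in equation (\ref{eq:D}) is a Dawson integral:
\be
\exp[-k(r_2 - q)] \int_0^{\sqrt{k (r_2 -q)}} \exp(x^2)\, dx =
F\!\left[\sqrt{k (r_2 - q)}\right]~~~,
\ee
where $F(z) \equiv \exp(-z^2) \int_0^z \exp(x^2)\, dx$ is the Dawson function; written this
way, the two exponentially large factors never appear separately.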
For $(r_2 - q) \ll 1$,
\be
k^2 \tau_{\rm Bar} D(q, k) \approx C(k) \exp(kq) (r_2 - q)^{1/2} +
\frac{4}{3} k^2 (r_2 -1) (r_2 -q)^2~~~~,~~(r_2 - q) \ll 1~~~.
\ee
The term containing $C(k)$ does not fall off sufficiently quickly with
$(r_2 -q)$. Thus, $C(k) = 0$ identically (for all $k$, if $dD/dk$ exists
for all $k$). The remaining term varies as $(r_2-q)^2$, the shallowest
permissible dependence. Note that the term containing $C(k)$ satisfies
condition (1) following equation (\ref{eq:lim}) for the thermal equilibrium
distribution $f_{\rm TE}(q)$, but not the required general condition (2).
Given the above general argument prohibiting thermal flipping induced by
internal relaxation, one may ask how Lazarian \& Draine (1999a) concluded
that thermal flipping is possible. Their analysis was highly approximate
and did not employ any diffusion coefficient. Nevertheless, their
estimate of the flipping rate agreed well with the detailed analysis of
Roberge \& Ford (1999), which made use of the Barnett relaxation
diffusion coefficient calculated by Lazarian \& Roberge (1997).
Lazarian \& Roberge (1997) solved a modified version of equation
(\ref{eq:prob_current}) for the diffusion coefficient, in which they
used the angle $\gamma$ rather than $q$ as the variable. They considered
oblate grains and the approximate Barnett dissipation rate
in equation (\ref{eq:dqdt_low_J}). Using equation (\ref{eq:q}) to
substitute for $q$ in terms of $\gamma$ in equation (\ref{eq:dqdt_low_J}),
they adopted
\be
\label{eq:A_LR97}
A(\gamma) = - \frac{r_2 -1}{2 \tau_{\rm Bar}} \sin \gamma \cos \gamma
\ee
(see their eqs.~1, 2, 4, and 16).
In Purcell's (1979) heuristic derivation of the Barnett dissipation rate,
he obtained the rate at which the rotational energy $E$ decreases.
In other words, he obtained the drift coefficient $A(E)$.
Since $q \propto E$, there is no additional contribution to the drift
coefficient arising from Ito's formula (\ref{eq:ito}) when the
Langevin equation is written in the variable $q$. However, the variable
in Lazarian \& Roberge (1997) is the angle $\gamma$, which is a non-linear
function of $E$ (eq.~\ref{eq:q}). The additional contribution to
the drift coefficient was not included in their analysis. Despite this
error, the above general argument should still yield vanishing $D$ and
$dD/d\gamma$ at $\gamma = \pi/2$, and thus no thermal flipping.
In the vicinity of $\gamma = \pi/2$ (corresponding to $q=1$), the
diffusion coefficient calculated by Lazarian \& Roberge (1997; their
eq.~18) is
\be
\tau_{\rm Bar} D(\gamma) = \tau_{\rm Bar} D(\gamma = \pi/2) + \left\{ 1 -
\left[ \left(r_2 -1 \right) k - \frac{1}{2} \right] \tau_{\rm Bar}
D(\gamma = \pi/2) \right\} (\gamma -\pi/2)^2~~~.
\ee
If $D(\gamma = \pi/2)$ is taken to be zero, then
$D \propto (\gamma -\pi/2)^2$, as required. In this case, thermal flipping
does not occur. Lazarian \& Roberge (1997) argued that $D$ should be
smooth with respect to $\gamma$ and demanded that $d^2 D/d\gamma^2$ exist for
all $k$. This condition is satisfied if $D(\gamma = \pi/2) = 0$ or if
$D \propto k^n$ with $n \le -1$. Lazarian \& Roberge (1997) chose
$D \propto k^{-1}$, which admits thermal flipping but is inconsistent with the
requirement that $D(\gamma)$ falls off at least as quickly as
$(\gamma -\pi/2)^2$ for $\gamma$ near $\pi/2$. [There appear to be some
typographical errors in Lazarian \& Roberge 1997. In their eq.~19,
$D \propto k^{-1/2}$ rather than $k^{-1}$. In their eq.~18,
$D(\gamma = \pi/2) = 1$ rather than falling off as $k^{-1/2}$ or $k^{-1}$.]
Lazarian \& Roberge (1997) tested their result for the diffusion coefficient
by numerically evolving their Langevin equation for a large number of
Barnett timescales and computing the average value of the internal
alignment factor
\be
\label{eq:Q_X}
Q_X \equiv \frac{3}{2} \left[ \langle \cos^2 \gamma \rangle - \frac{1}{3}
\right]~~~.
\ee
This can also be evaluated by simple integration for a thermal distribution
(their eq.~10). The results of their simulations agreed to high accuracy
with the direct calculations. They adopted the wrong Langevin equation
but the correct thermal equilibrium
distribution function. Their success with the test
indicated that they solved equation (\ref{eq:prob_current}) correctly
given their drift coefficient, but this drift coefficient does not
describe Barnett dissipation when angle $\gamma$ is taken as the variable.
As a confidence-building check on the conclusion that thermal flipping is
prohibited, I numerically evolved
the Langevin equation for the case that $k=1$ and $r_2 = 1.5$, taking
$C=0$. A fixed time step size is attempted at each step. Sometimes
this results in overshooting $q=1$; in these cases, smaller steps are
tried until the resulting $q$ exceeds 1. These overshooting incidents
become fractionally less common as the base step size is decreased (from
$10^{-2} \tau_{\rm Bar}$ to $10^{-5} \tau_{\rm Bar}$). The total duration
of a simulation is about $10^5 \tau_{\rm Bar}$. At no time, for any
of the base step sizes, did $q$ ever overshoot $r_2$. Incidentally, the
simulations yielded the correct value for the alignment factor $Q_X$, although
the convergence was slower
than for the simulations in Lazarian \& Roberge (1997).
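For concreteness, the following is a minimal sketch (in Python) of such an integration. The
diffusion coefficient is evaluated directly from equation (\ref{eq:D}) with $C = 0$, using the
Dawson-function form noted after equation (\ref{eq:C}). The drift is recovered numerically from
the zero-current condition $A f_{\rm TE} = \frac{1}{2}\, d(f_{\rm TE} D)/dq$; the form
$f_{\rm TE}(q) \propto (r_2 - q)^{-1/2} \exp(-kq)$ adopted below is read off from the
homogeneous term of equation (\ref{eq:D}) and is an assumption of this sketch rather than a
quoted result. The alignment factor $Q_X$ is not computed here, since that requires the
relation between $q$ and $\gamma$ of equation (\ref{eq:q}).
\begin{verbatim}
import numpy as np
from scipy.special import dawsn

K, R2 = 1.0, 1.5      # the case k = 1, r_2 = 1.5 considered in the text
TAU = 1.0             # time is measured in units of tau_Bar

def D(q):
    # Diffusion coefficient of eq. (eq:D) with C(k) = 0, written with the
    # Dawson function F(z) = exp(-z^2) int_0^z exp(x^2) dx.
    eps = R2 - q
    return ((3.0 + 2.0 * K * (q - 1.0)) * eps
            - K ** -0.5 * (3.0 + 2.0 * K * (R2 - 1.0)) * np.sqrt(eps)
            * dawsn(np.sqrt(K * eps))) / (K ** 2 * TAU)

def f_te(q):
    # ASSUMED thermal equilibrium distribution (unnormalized), read off
    # from the homogeneous term C(k) (r_2-q)^{1/2} exp(kq) of eq. (eq:D):
    # f_TE(q) propto (r_2 - q)^{-1/2} exp(-k q).
    return (R2 - q) ** -0.5 * np.exp(-K * q)

def A(q):
    # Drift from the zero-current condition A f_TE = (1/2) d(f_TE D)/dq,
    # via a central difference (adequate for a sketch).
    h = min(1.0e-7, 0.5 * (R2 - q))
    g = lambda x: f_te(x) * D(x)
    return (g(q + h) - g(q - h)) / (2.0 * h) / (2.0 * f_te(q))

def evolve(q0=1.25, n_steps=200_000, dt0=1.0e-3, seed=0):
    """Euler-Maruyama with the retry rule described in the text: when a
    trial step would carry q below 1, smaller steps are tried until the
    resulting q exceeds 1.  Returns the largest q reached."""
    rng = np.random.default_rng(seed)
    q = qmax = q0
    for _ in range(n_steps):
        dt = dt0
        while True:
            dq = A(q) * dt + np.sqrt(D(q) * dt) * rng.standard_normal()
            if q + dq > 1.0:
                break
            dt *= 0.5            # overshot q = 1; retry with a smaller step
        q += dq
        qmax = max(qmax, q)
        assert q < R2            # the text reports this never fails
    return qmax

print("largest q reached:", evolve(), "(natural boundary at r_2 =", R2, ")")
\end{verbatim}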
The above argument that $D$ and $dD/dq$ both vanish at $q=r_2$ made no
reference to the form of $A(q)$. Thus, this conclusion also holds when
more realistic Barnett dissipation rates are adopted,
and even for grains lacking dynamic symmetry. In all of these cases,
$A(q=r_2) = 0$, since the grain is in steady rotation when $q=r_2$.
Thus, $q=r_2$ is a natural boundary for the most general treatment of
Barnett relaxation, if $dD/dk$ exists for all $k$. Since a grain lacking
dynamic symmetry must reach (and, in general, cross)
$q=r_2$ in order to flip (see \S 2.5.2 of Weingartner \& Draine 2003),
thermal flipping associated with Barnett relaxation is ruled out
generally. The only caveat is that $dD/dk$ must exist for all $k$.
Although this seems natural, a detailed model of Barnett relaxation would
be needed to confirm that this condition is indeed satisfied.
The discussion here has focused on Barnett relaxation, since Barnett
dissipation appears to dominate inelastic dissipation for most thermally
rotating grains (Lazarian \& Efroimsky 1999), especially when nuclear
paramagnetism contributes. However, the argument against thermal flipping
applies equally well for inelastic relaxation. Lazarian \&
Efroimsky (1999) did not constrain the form of the dissipation rate
near $q=r_2$, but Sharma et al.~(2005) found the same form as in
equation (\ref{eq:dqdt_low_J}) for the special case of an oblate spheroid.
\section{Conclusion}
In conclusion, it appears that thermal flipping is not possible, so long as
$dD/dk$ exists for all $k$ and the inertia tensor does not vary with time.
A detailed model of Barnett relaxation is needed to examine the behavior
of $dD/dk$.
Because of grain vibrations, the inertia tensor exhibits continual, small
variations. As a result, the location of the natural boundary at
$q = r_2 \equiv I_1/I_2$ wanders slightly (B.T. Draine, private communication).
Further work is needed to examine whether this can give rise to flips and, if
so, at what rate.
External processes (e.g., gas atom impacts) may
also induce flips (with accompanying changes in $\mathbf{J}$), but have
recently been neglected in comparison with
internal relaxation (e.g., Weingartner \& Draine 2003). These now merit
further scrutiny as well.
\acknowledgements
I am grateful to Bruce Draine, Wayne Roberge, and Alex Lazarian for
illuminating discussions and comments on the manuscript.
JCW is a Cottrell Scholar of Research Corporation.
Support for this work, part of the Spitzer Space Telescope Theoretical
Research Program, was provided by NASA through a contract issued by the
Jet Propulsion Laboratory, California Institute of Technology under a
contract with NASA.
\section{Introduction}
Let $X$ be a projective variety over the complex numbers $\mathbb{C}.$ Let $Aut^0(X)$ be the connected component, containing the identity automorphism, of the group of all algebraic automorphisms of $X.$ Then $Aut^0(X)$ has the structure of an algebraic group (see \cite[Theorem 3.7, p.17]{MO}). Further, the Lie algebra of this automorphism group is isomorphic to the space of all tangent vector fields on $X,$ that is, the space $H^0(X,\Theta_{X})$ of all global sections of the tangent sheaf $\Theta_{X}$ of $X$ (see \cite[Lemma 3.4, p.13]{MO}).
Let $G$ be a simple algebraic group of adjoint type over $\mathbb{C}.$ Demazure \cite{Dem aut} studied the automorphism group of a partial flag variety, i.e., a homogeneous variety of the form $G/P,$ where $P$ is a parabolic subgroup of $G.$ Further, Demazure proved that all the higher cohomology groups of the tangent bundle of a partial flag variety vanish. As a particular case of his result, it follows that the connected component containing the identity automorphism of the group of all algebraic automorphisms of a full flag variety (i.e., a homogeneous variety of the form $G/H,$ where $H$ is a Borel subgroup of $G$) is identified with $G.$
By Kodaira-Spencer theory, the vanishing of the first cohomology group of the tangent bundle of a partial flag variety implies that partial flag varieties admit no local deformation of their complex structure. In other words, if $X_{y}$ is a continuous family of complex varieties parameterized by a complex variety $Y$ such that $X_{y}$ is topologically isomorphic to $X$ for all $y$ and $X_{0}$ is analytically isomorphic to $X,$ then $X_{y}$ is analytically isomorphic to $X$ for all $y$ in a neighborhood of $0\in Y.$
Let $B$ be a Borel subgroup of $G.$ Let $F$ be a projective $B$-variety. Consider the variety
\begin{equation*}
E:=G\times_{B} F=G\times F/\sim,
\end{equation*}
where the action of $B$ on $G\times F$ is given by $b\cdot(g, f)=(gb^{-1}, bf)$ for all $g\in G, b\in B,$ $f\in F,$ and $``\sim"$ denotes the equivalence relation defined by the action. The equivalence class of $(g, f)$ is denoted by $[g, f].$ Note that there is a natural action of $G$ on $E$ given by $g'\cdot[g,f]=[g'g,f],$ where $g'\in G, [g,f]\in E.$ Then $E$ is a projective variety together with a $G$-action on it; we call it a $G$-twisted variety.
In this article, we study the connected component containing the identity automorphism of the group of all algebraic automorphisms of certain particular $G$-twisted varieties.
Let $V$ be a $B$-module. Let $\mathcal{L}(V)$ be the associated homogeneous vector bundle on $G/B$ corresponding to the $B$-module $V.$ We denote the cohomology modules $H^j(G/B, \mathcal{L}(V))~(j\ge 0)$ by $H^j(G/B, V)~(j\ge 0)$ for short (see \ref{sec2}).
Our main results of this article are the following.
\begin{theorem}[See Theorem \ref{thm 3.3}]\label{thm 1.1}
Let $F$ be an irreducible projective $B$-variety.
Let $E=G \times_{B} F$ be the $G$-twisted variety associated to $F.$ Let $\Theta_{E}$ (respectively, $\Theta_{F}$) be the
tangent sheaf of $E$ (respectively, of $F$). Then we have
\begin{itemize}
\item [(i)] $Aut^{0}(E)=G,$ if $H^{0}(G/B, H^{0}(F, \Theta_{F}))=0.$
\item [(ii)] Assume that $H^j(F,\mathcal{O}_{F})$ vanish for all $j\ge 1,$ where $\mathcal{O}_{F}$ denotes the structure sheaf on $F.$ Then $H^{1}(E, \Theta_{E})=H^0(G/B, H^{1}(F, \Theta_{F})),$ if $H^j(G/B, H^0(F,\Theta_{F}))=0$ for $j = 1, 2.$
\end{itemize}
\end{theorem}
Let $T$ be a maximal torus of $G$ and let $R$ be the set of roots with respect to $T.$ Let $R^{+}\subset R$ be a set of positive roots. Let $B^{+}$ be the Borel subgroup of $G$ containing $T,$ corresponding to $R^{+}.$ Let $B$ be the Borel subgroup of $G$ opposite to $B^{+}$ determined by $T.$ Let $W=N_{G}(T)/T$ denote the Weyl group of $G$ with respect to $T.$ For $w\in W,$ let $X(w):= \overline{BwB/B}$ denote the Schubert variety in $G/B$ corresponding to $w.$
Consider the diagonal action of $G$ on $G/B\times G/B.$ Then there is a $G$-equivariant isomorphism $$\xi : G\times_{B} G/B \longrightarrow G/B\times G/B$$ given by $$[g , g'B]\mapsto (gB, gg'B),$$ where $g,g'\in G.$
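One checks directly that $\xi$ is well defined on equivalence classes: for $b\in B,$ $$\xi([gb^{-1}, bg'B])=(gb^{-1}B,\ gb^{-1}bg'B)=(gB, gg'B)=\xi([g, g'B]).$$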
Thus the closed irreducible $G$-stable subsets of $G/B\times G/B$ are precisely those of the form $\xi(G\times_{B}X(w))$ for $w\in W$ (see \cite[Definition 2.2.6, p.69-70]{BK}).
For $w\in W,$ let $\mathcal{X}(w):=\xi (G\times_{B}X(w)).$ Then $\mathcal{X}(w)$ is equipped with the structure of a closed subvariety of $G/B\times G/B;$ this $G$-twisted variety is called the $G$-Schubert variety associated to $w.$ From now on we omit $\xi$ and simply write $\mathcal{X}(w)$ for $G\times_{B} X(w).$ Then
we prove
\begin{proposition}[See Proposition \ref{prop 4.3}]\label{prop 1.3}
Assume that $G$ is simply-laced. Let $w\in W$ be
such that $w\neq w_{0},$ where $w_{0}$ denotes the longest element of $W.$ Let $\Theta_{\mathcal{X}(w)}$ (respectively, $\Theta_{X(w)}$) be the tangent sheaf of $\mathcal{X}(w)$ (respectively, of $X(w)$). Then we have
\begin{itemize}
\item [(i)] $Aut^0(\mathcal{X}(w))= G.$
\item [(ii)] $H^1(\mathcal{X}(w),\Theta_{\mathcal{X}(w)})=H^0(G/B, H^1(X(w) ,\Theta_{X(w)})).$
\end{itemize}
\end{proposition}
Thus if $G$ is simply-laced and $w\neq w_{0}\in W,$ then by Proposition \ref{prop 1.3}, we conclude that the vanishing of the first cohomology group of the tangent sheaf on $X(w)$ implies the vanishing of the first cohomology group of the tangent sheaf of $\mathcal{X}(w).$
Let $w=s_{i_1}s_{i_2}\cdots s_{i_r}$ be a reduced expression and let $\underline{
i}:=(i_1, \ldots, i_r).$ Let $Z(w,\underline{
i})$ be the Bott-Samelson-Demazure-Hansen variety (a natural desingularization of $X(w)$) associated to $(w, \underline{i}).$ It was first introduced by Bott and Samelson in a differential geometric and topological context (see \cite{BS}). Demazure \cite{Dem1} and Hansen \cite{Han} independently adapted the construction to the algebro-geometric situation, which explains the reason
for the name. For the sake of simplicity, we write BSDH-variety instead of Bott-Samelson-Demazure-Hansen variety.
There is a natural left action of $B$ on $Z(w,\underline{i}).$ Let $\mathcal{Z}(w, \underline{i})= G\times_{B}Z(w, \underline{i}).$ Then the $G$-twisted variety $\mathcal{Z}(w,\underline{i})$ is a smooth projective variety and a natural desingularization of $\mathcal{X}(w)$ (see \cite[Corollary 2.2.7, p.70]{BK}); we call it the $G$-Bott-Samelson-Demazure-Hansen variety ($G$-BSDH-variety for short). Then we prove
\begin{proposition}[See Proposition \ref{prop 4.9}]\label{prop 1.5}
Assume that $G$ is simply-laced. Let $\Theta_{\mathcal{Z}(w,\underline{i})}$ be the
tangent sheaf on $\mathcal{Z}(w, \underline{i}).$ Then we have
\begin{itemize}
\item [(i)] $Aut^0(\mathcal{Z}(w, \underline{i}))=G.$
\item [(ii)] $H^{j}(\mathcal{Z}(w, \underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})=0$ for $j\ge 1.$
\end{itemize}
\end{proposition}
Thus if $G$ is simply-laced, then by Proposition \ref{prop 1.5}, we conclude that $Aut^0(\mathcal{Z}(w, \underline{i}))$ and $H^j(\mathcal{Z}(w,\underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})$ ($j\ge 1$) are independent of the choice of reduced expression $\underline{i}$ of $w.$
By Proposition \ref{prop 1.5}(ii), $H^2(\mathcal{Z}(w, \underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})=0.$ Hence by \cite[p.273]{Huy}, we conclude that $\mathcal{Z}(w, \underline{i})$ has unobstructed deformations for a simply-laced group $G.$
Further, by Proposition \ref{prop 1.5}(ii), $H^1(\mathcal{Z}(w, \underline{
i}), \Theta_{\mathcal{Z}(w,\underline{
i})})= 0.$ Hence by \cite[Proposition 6.2.10, p.272]{Huy}, we conclude that $G$-BSDH-varieties are locally rigid for a simply-laced group $G.$
It should be mentioned here that if $G$ is not simply-laced, then $H^1(\mathcal{Z}(w, \underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})$
might be non-zero (see Example \ref{ex4.12}).
In view of the above results, the following questions are open:
{\bf Open problems:}
\begin{itemize}
\item[(1)] Assume that $G$ is not simply-laced. What is the connected component containing the identity automorphism of the group of all algebraic automorphisms of $\mathcal{X}(w)$ or $\mathcal{Z}(w, \underline{i})?$
\vspace{.1cm}
\item [(2)] What is the group of all algebraic automorphisms of $\mathcal{X}(w)$ or $\mathcal{Z}(w, \underline{i})?$
\end{itemize}
The organization of the paper is as follows. In Section 2, we set up notation and recall some preliminaries. In Section 3, we prove Theorem \ref{thm 1.1}. In Section 4, we prove Proposition \ref{prop 1.3} and Proposition \ref{prop 1.5}.
{\it Acknowledgement.}
The first named author would like to thank the Infosys Foundation for the partial financial support. The second and third named authors would like to thank Department of Atomic Energy, Government of India [project no. 12-R\&D-TFR-5.01-0500] for the funding.
\section{Notation and Preliminaries} \label{sec2}
In this section, we set up some notation and preliminaries. We refer to \cite{BK}, \cite{Hum1}, \cite{Hum2}, \cite{Jan} for preliminaries in algebraic groups and Lie algebras.
Let $G,$ $T,$ $B,$ $R,$ $R^{+},$ and $W$ be as in the introduction. Let $S=\{\alpha_{1},\ldots, \alpha_{n}\}$ denote the set of simple roots in $R^{+},$ where $n$ is the rank of $G.$ For $\beta \in R^{+},$ we use the notation $\beta>0.$ The simple reflection
in $W$ corresponding to $\alpha_{i}$ is denoted by $s_{i}$. For $w\in W,$ let $\ell(w)$ denote the length of $w.$ For a subset $J\subset S,$ let $W_{J}$ be the subgroup of $W$ generated by $\{s_{\alpha}: \alpha \in J\}.$ For a subset $J\subseteq S,$ let $P_{J}$ be the standard parabolic subgroup of $G,$ i.e., $P_{J}$ is generated by $B$ and $n_{w},$ where $w\in W_{J}$ and $n_{w}$ is a representative of $w$ in $G.$ The subgroup $W_{J}\subseteq W$ is called the Weyl group of $P_{J}.$ For a simple root $\alpha_{i},$ we denote the corresponding parabolic subgroup simply by $P_{\alpha_{i}};$ it is called the minimal parabolic subgroup corresponding to $\alpha_{i}.$
Let $\mathfrak{g}$ be the Lie algebra of $G.$ Let $\mathfrak{h}\subset \mathfrak{g}$ be the Lie algebra of $T$ and $\mathfrak{b}\subset \mathfrak{g}$ be the Lie algebra of $B.$
Let $X(T)$ denote the group of all characters of $T.$ We have $X(T)\otimes_{\mathbb{Z}} \mathbb{R}=Hom_{\mathbb{R}}(\mathfrak{h}_{\mathbb{R}}, \mathbb{R}),$ the dual of the real form of $\mathfrak{h}.$ The positive definite $W$-invariant form on $Hom_{\mathbb{R}}(\mathfrak{h}_{\mathbb{R}}, \mathbb{R})$ induced by the Killing form of $\mathfrak{g}$ is denoted by $(- , -).$ We use the notation $\langle -,- \rangle,$ to denote $\langle \mu, \alpha\rangle=\frac{2(\mu, \alpha)}{(\alpha, \alpha)}$ for every $\mu\in X(T)\otimes_{\mathbb{Z}} \mathbb{R}$ and $\alpha \in R.$
We denote by $X(T)^{+}$ the set of dominant characters of $T$ with respect to $B^{+}.$ Let $\rho$ denote
the half sum of all positive roots of $G$ with respect to $T$ and $B^{+}.$ For any simple root $\alpha,$ we denote the fundamental weight corresponding to $\alpha$ by $\omega_{\alpha}.$ For $1\le i \le n,$ let $h(\alpha_{i})\in \mathfrak{h}$ be the fundamental co-weight corresponding to $\alpha_{i}.$ That is, $\alpha_{i}(h(\alpha_{j})) = \delta_{ij},$ where $\delta_{ij}$ is the Kronecker delta.
We recall that the BSDH-variety corresponding to a reduced expression $\underline{i}= (i_1, i_2, \ldots, i_r)$ of $w =s_{i_{1}}s_{i_{2}}\cdots s_{i_{r}}$ is defined by
\begin{equation*}
Z(w,\underline{i})=\frac{P_{\alpha_{i_{1}}}\times P_{\alpha_{i_{2}}}\times \cdots \times P_{\alpha_{i_{r}}}}{B\times B\times \cdots \times B},
\end{equation*}
where the action of $B\times B \times \cdots \times B$ on $P_{\alpha_{i_{1}}}\times P_{\alpha_{i_{2}}}\times \cdots \times P_{\alpha_{i_{r}}}$ is given by $(p_1, p_2,\ldots , p_r)(b_1, b_2, \ldots , b_r) = (p_1\cdot b_1, b_{1}^{-1}\cdot p_2 \cdot b_2, \ldots , b^{-1}_{r-1}\cdot p_r\cdot b_r),$ $p_j\in P_{\alpha_{i_{j}}},$ $b_j \in B$ (see \cite[Definition 1, p.73]{Dem1}, \cite[Definition 2.2.1, p.64]{BK}). The equivalence class of $(p_1,...,p_{r})$ is denoted by $[p_1,...,p_r].$
Note that $Z(w, \underline{i})$ is a smooth projective variety. The BSDH-varieties are equipped with a $B$-equivariant morphism
\begin{equation*}
\phi_{w}: Z(w,\underline{i})\longrightarrow G/B
\end{equation*}
defined by $$[p_1,...,p_r]\mapsto p_1\cdots p_rB.$$ Then $\phi_{w}$ is the natural birational surjective morphism from $Z(w, \underline{i})$ to $X(w).$
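For instance, in the smallest case $r=1$ we have $Z(s_{i}, (i)) = P_{\alpha_{i}}/B \simeq \mathbb{P}^{1},$ and $\phi_{s_{i}}$ is an isomorphism onto the Schubert curve $X(s_{i}) = \overline{Bs_{i}B/B}.$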
Let $f_{r}:Z(w, \underline{i})\longrightarrow Z(ws_{i_{r}}, \underline{i}')$ denote the map induced by the projection
$P_{\alpha_{i_{1}}}\times P_{\alpha_{i_{2}}}\times \cdots \times P_{\alpha_{i_{r}}}\longrightarrow P_{\alpha_{i_{1}}}\times P_{\alpha_{i_{2}}}\times \cdots \times P_{\alpha_{i_{r-1}}},$ where $\underline{i}'= (i_1, i_2, \ldots, i_{r-1}).$ Then we observe that $f_r$ is a $P_{\alpha_{i_{r}}}/B \simeq \mathbb{P}^{1}$-fibration.
For a $B$-module $V,$ let $\mathcal{L}(w,V)$ denote the restriction of the associated homogeneous vector bundle on $G/B$ to $X(w).$ By abuse of notation, we denote the pull back of $\mathcal{L}(w, V)$ via $\phi_{w}$ to $Z(w, \underline{i})$ also by $\mathcal{L}(w,V),$ when there is no confusion. Since for any $B$-module $V$ the vector bundle $\mathcal{L}(w,V)$ on $Z(w, \underline{i})$ is the pull back of the homogeneous vector bundle from $X(w),$ we conclude that the cohomology modules $H^j(Z(w, \underline{i}),\mathcal{L}(w,V))\simeq H^j(X(w),\mathcal{L}(w,V))$ for all $j\ge 0$ (see \cite[Theorem 3.3.4(b)]{BK}) are independent of the choice of reduced expression $\underline{i}.$ Hence we denote $H^j(Z(w,\underline{i}),\mathcal{L}(w,V))$ by $H^j(w,V).$ In particular, if $\lambda$ is a character of $B,$ then we denote the cohomology modules $H^j(Z(w, \underline{i}),\mathcal{L}(w,\lambda))$ by $H^j(w,\lambda).$
For $\lambda\in X(T),$ let $\mathbb{C}_{\lambda}$ denote the one dimensional $B$-module associated to $\lambda.$ Here, we recall the following result due to Demazure \cite[p.271]{Dem1} on short exact sequence of $B$-modules:
\begin{lemma} \label{lemma 2.1}
Let $\alpha$ be a simple root and $\lambda\in X(T)$ be such that $\langle \lambda , \alpha \rangle \ge 0.$ Let $ev:H^0(s_{\alpha}, \lambda)\longrightarrow \mathbb{C}_{\lambda}$ be the evaluation map. Then we have
\begin{enumerate}
\item[(1)] If $\langle \lambda, \alpha \rangle =0,$ then $H^0(s_{\alpha}, \lambda)\simeq \mathbb{C}_{\lambda}.$
\item[(2)] If $\langle \lambda , \alpha \rangle \ge 1,$ then $\mathbb{C}_{s_{\alpha}(\lambda)}\hookrightarrow H^0(s_{\alpha}, \lambda) $, and there is a short exact sequence of $B$-modules:
$$0\rightarrow H^0(s_{\alpha}, \lambda-\alpha)\longrightarrow H^0(s_{\alpha}, \lambda)/\mathbb{C}_{s_{\alpha}(\lambda)}\longrightarrow \mathbb{C}_{\lambda}\rightarrow 0.$$ Furthermore, $H^{0}(s_{\alpha}, \lambda- \alpha)=0$ when $\langle\lambda , \alpha \rangle=1.$
\item[(3)] Let $n=\langle \lambda ,\alpha \rangle.$ As a $B$-module, $H^0(s_{\alpha}, \lambda)$ has a composition series
$$0\subseteq V_{n}\subseteq V_{n-1}\subseteq \dots \subseteq V_{0}=H^0(s_{\alpha},\lambda)$$
such that $V_{i}/V_{i+1}\simeq \mathbb{C}_{\lambda - i\alpha}$ for $i=0,1,\dots,n-1$ and $V_{n}=\mathbb{C}_{s_{\alpha}(\lambda)}.$
\end{enumerate}
\end{lemma}
We define the dot action by $w\cdot \lambda= w(\lambda + \rho)-\rho.$ As a consequence of exact sequences of Lemma \ref{lemma 2.1}, we have the following.
Let $w\in W$, $\alpha$ be a simple root, and set $v=ws_{\alpha}$.
\begin{lemma} \label{lemma 2.2}
If $\ell(w) =\ell(v)+1$, then we have
\begin{enumerate}
\item If $\langle \lambda , \alpha \rangle \geq 0$, then
$H^{j}(w , \lambda) = H^{j}(v, H^0({s_\alpha, \lambda}) )$ for all $j\geq 0$.
\item If $\langle \lambda ,\alpha \rangle \geq 0$, then $H^{j}(w , \lambda ) = H^{j+1}(w , s_{\alpha}\cdot \lambda)$ for all $j\geq 0$.
\item If $\langle \lambda , \alpha \rangle \leq -2$, then $H^{j+1}(w , \lambda ) = H^{j}(w ,s_{\alpha}\cdot \lambda)$ for all $j\geq 0$.
\item If $\langle \lambda , \alpha \rangle = -1$, then $H^{j}( w ,\lambda)$ vanishes for every $j\geq 0$.
\end{enumerate}
\end{lemma}
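We record an elementary identity for later use: since $s_{\alpha}(\rho) = \rho - \alpha,$ we have $$s_{\alpha}\cdot \lambda = s_{\alpha}(\lambda + \rho) - \rho = \lambda - (\langle \lambda, \alpha \rangle + 1)\alpha.$$ In particular, $\langle \lambda, \alpha \rangle = -1$ if and only if $s_{\alpha}\cdot \lambda = \lambda,$ which is exactly the case of vanishing in (4) above.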
The following consequence of Lemma \ref {lemma 2.2} will be used to compute the cohomology modules in this paper.
Now onwards we will denote the Levi subgroup of $P_{\alpha}$($\alpha \in S$) containing $T$ by $L_{\alpha}$ and the subgroup $L_{\alpha}\cap B$ by $B_{\alpha}.$
\begin{lemma}\label{lemma 2.3}
Let $V$ be an irreducible $L_{\alpha}$-module. Let $\lambda$
be a character of $B_{\alpha}$. Then we have
\begin{enumerate}
\item As $L_{\alpha}$-modules, $H^j(L_{\alpha}/B_{\alpha}, V \otimes \mathbb C_{\lambda})\simeq V \otimes
H^j(L_{\alpha}/B_{\alpha}, \mathbb C_{\lambda})$ for every $j\ge 0.$
\item If
$\langle \lambda , \alpha \rangle \geq 0$, then
$H^{0}(L_{\alpha}/B_{\alpha} , V\otimes \mathbb{C}_{\lambda})$
is isomorphic as $L_{\alpha}$-module to the tensor product of $V$ and
$H^{0}(L_{\alpha}/B_{\alpha} , \mathbb{C}_{\lambda})$. Further, we have
$H^{j}(L_{\alpha}/B_{\alpha} , V\otimes \mathbb{C}_{\lambda}) =0$
for every $j\geq 1$.
\item If
$\langle \lambda , \alpha \rangle \leq -2$, then
$H^{0}(L_{\alpha}/B_{\alpha} , V\otimes \mathbb{C}_{\lambda})=0$,
and $H^{1}(L_{\alpha}/B_{\alpha} , V\otimes \mathbb{C}_{\lambda})$
is isomorphic to the tensor product of $V$ and $H^{0}(L_{\alpha}/B_{\alpha} ,
\mathbb{C}_{s_{\alpha}\cdot\lambda})$.
\item If $\langle \lambda , \alpha \rangle = -1$, then
$H^{j}( L_{\alpha}/B_{\alpha} , V\otimes \mathbb{C}_{\lambda}) =0$
for every $j\geq 0$.
\end{enumerate}
\end{lemma}
\begin{proof} Proof of (1).
By \cite[Proposition 4.8, p.53, I]{Jan} and \cite[Proposition 5.12, p.77, I]{Jan},
for all $j\geq 0$, we have the following isomorphism of $L_{\alpha}$-modules:
$$H^j(L_{\alpha}/B_{\alpha}, V \otimes \mathbb C_{\lambda})\simeq V \otimes
H^j(L_{\alpha}/B_{\alpha}, \mathbb C_{\lambda}).$$
Proof of (2), (3) and (4) follows from Lemma \ref{lemma 2.2} by taking $w=s_{\alpha}$ and
the fact that $L_{\alpha}/B_{\alpha} \simeq P_{\alpha}/B$.
\end{proof}
Let $p: \widetilde{G}\longrightarrow G$ be the universal cover. Let $\widetilde{L}_{\alpha}$ (respectively, $\widetilde{B}_{\alpha}$) be the inverse image of $L_{\alpha}$ (respectively, $B_{\alpha}$).
Recall the structure of indecomposable $\widetilde{B}_{\alpha}$ and
$B_{\alpha}$-modules (see \cite[Corollary 9.1, p.130]{BKS}).
\begin{lemma}\label{lemma 2.4}
\begin{enumerate}
\item
Any finite dimensional indecomposable $\widetilde{B}_{\alpha}$-module $V$ is isomorphic to
$V^{\prime}\otimes \mathbb{C}_{\lambda}$ for some irreducible representation
$V^{\prime}$ of $\widetilde{L}_{\alpha}$ and for some character $\lambda$ of $\widetilde{B}_{\alpha}$.
\item
Any finite dimensional indecomposable $B_{\alpha}$-module $V$ is isomorphic to
$V^{\prime}\otimes \mathbb{C}_{\lambda}$ for some irreducible representation
$V^{\prime}$ of $\widetilde{L}_{\alpha}$ and for some character $\lambda$ of $\widetilde{B}_{\alpha}.$
\end{enumerate}
\end{lemma}
\section{Connected Automorphism group of a $G$-twisted variety}
In this section we study the connected component containing the identity automorphism of the group of all algebraic automorphisms of a $G$-twisted variety.
Let $F$ be an irreducible projective $B$-variety and $E=G\times_{B} F$ be the $G$-twisted variety associated to $F.$ Consider the natural projection map $$\pi: E\longrightarrow G/B,$$ given by $$[g,f]\mapsto gB,$$ where $g\in G, f\in F.$ Then for the natural action of $G$ on $G/B,$ $\pi$ is a $G$-equivariant fibration over $G/B$ with fiber $F.$
{\bf Observation:} For a $G$-twisted variety $E,$ if the action of $B$ on $F$ extends to an action of $G$ on $F,$ then the map $$\psi: G\times F\longrightarrow G/B\times F,$$ given by $$(g,f)\mapsto [gB, gf],$$ where $g\in G$ and $f\in F,$ induces $G$-equivariant isomorphism $G\times_{B}F \longrightarrow G/B\times F,$ where $G$ acts diagonally on $G/B \times F.$
\begin{proposition}\label{prop 3.1}
The map $\pi$ induces a surjective homomorphism $\pi_{*} :Aut^0(E)\longrightarrow G$ of algebraic groups. In particular, $Aut^0(E)=\ker(\pi_{*})\rtimes G,$ where $\ker(\pi_{*})$ denotes the kernel of $\pi_{*}.$
\end{proposition}
\begin{proof}
Since $F$ is an irreducible projective variety, we have $\pi_{*}\mathcal{O}_{E} =\mathcal{O}_{G/B},$ where $\mathcal{O}_{E}$ and $\mathcal{O}_{G/B}$ denote the structure sheaves of $E$ and $G/B$ respectively. Therefore, by \cite[Corollary 2.2, p.45]{Bri}, $\pi$ induces an algebraic group homomorphism $\pi_{*}: Aut^0(E)\longrightarrow Aut^0(G/B).$ Further, since $Aut^0(G/B)=G$ (see \cite{Dem aut}, \cite[Theorem 2, p.75]{Akh}), we have $\pi_{*}: Aut^0(E) \longrightarrow G.$
Let $\sigma: G\longrightarrow Aut^0(E)$ be the map induced by the natural action of $G$ on $E.$ Note that $\sigma$ is not a trivial map as the action of $G$ on $E$ is effective. Since $G$ is simple, $\ker\sigma$ is a finite central subgroup of $G.$ Moreover, since $G$ is of adjoint type, $\ker\sigma$ is trivial.
Thus, $\sigma :G \longrightarrow Aut^0(E)$ is an injective homomorphism of algebraic groups. Hence, $\pi_{*}$ is a surjective homomorphism of algebraic groups. Therefore, we have $Aut^0(E) = \ker(\pi_{*})\rtimes G.$
\end{proof}
It is an interesting question to ask when there exists an isomorphism between $E$ and $G/B\times F.$
We have already observed that if the action of $B$ on $F$ extends to an action of $G$ on $F,$ then there is an isomorphism between $E$ and $G/B \times F.$
Here, we give another sufficient condition under which there is an isomorphism between $E$ and $G/B\times F.$
\begin{proposition}
Assume that there exists a $B$-equivariant morphism $\Phi: E\longrightarrow F$ such that $\Phi_{*}\mathcal{O}_{E}=\mathcal{O}_{F}.$ Then we have
\begin{itemize}
\item [(i)] $E\simeq G/B\times F.$
\item [(ii)] $Aut^0(E)=G\times Aut^0(F).$
\end{itemize}
\end{proposition}
\begin{proof}
Proof of (i):
Since $\Phi_{*}\mathcal{O}_{E} =\mathcal{O}_{F},$ by \cite[Corollary 2.2, p.45]{Bri} $\Phi$ induces an algebraic group homomorphism $\Phi_{*}: Aut^0(E)\longrightarrow Aut^0(F).$ Note that by Proposition \ref{prop 3.1}, $G\subset Aut^0(E).$ Thus $G$ acts on $F$ via the map $\Phi_{*},$ i.e., the action of $G$ on $F$ is given by $g*f=\Phi(g\cdot z),$ where $g\in G, f\in F$ and $z\in \Phi^{-1}(f)$ (see \cite[Proof of Proposition 2.1, p.42]{Bri}). Further, since $\Phi$ is $B$-equivariant, this action of $G$ on $F$ is an extension of the $B$-action on $F.$ Therefore, by the above observation we have $E\simeq G/B\times F$ as $G$-varieties.
Proof of (ii): By using (i) and \cite[Corollary 2.3, p.46]{Bri}, we have $Aut^0(E)= Aut^0(G/B)\times Aut^0(F).$ Moreover, since $Aut^0(G/B)=G$ (see \cite{Dem aut}), we have $Aut^0(E)= G\times Aut^0(F).$
\end{proof}
\begin{theorem}\label{thm 3.3}
Let $F,$ $E$ be as before. Let $\Theta_F$ (respectively, $\Theta_{E}$) be the tangent sheaf of $F$ (respectively, of $E$). Then we have
\begin{itemize}
\item [(i)] $Aut^0(E)=G,$ if $H^0(G/B, H^0(F, \Theta_{F}))= 0.$
\item[(ii)] Assume that $F$ satisfies $H^j(F, \mathcal{O}_{F})=0$ for all $j\ge 1.$ Then $H^1(E, \Theta_{E})=H^0(G/B, H^1(F, \Theta_{F})),$ if $H^j(G/B,H^0(F, \Theta_{F}))=0$ for $j = 1, 2.$
\end{itemize}
\end{theorem}
\begin{proof}
Proof of (i): Recall that $\pi: E\longrightarrow G/B$ is the natural projection given by $[g, f]\mapsto gB,$ where $g\in G,$ and $f\in F.$
Consider the exact sequence of $\mathcal{O}_E$-modules
\begin{equation}\label{eq3.1}
0\longrightarrow \mathcal{R}\longrightarrow \Theta_{E}\longrightarrow \pi^*\Theta_{G/B}\longrightarrow 0,
\end{equation}
where $\mathcal{R}$ denotes the relative tangent sheaf with respect to the map $\pi.$
Since $H^0(F, \mathcal{O}_F)=\mathbb{C}$ and $\pi$ is a projective morphism, we have
\begin{equation}\label{eq3.2}
\pi_*(\pi^{*}\mathcal{O}_{G/B})=\pi_{*}\mathcal{O}_E=\mathcal{O}_{G/B}.
\end{equation}
Therefore, \eqref{eq3.1} induces the following long exact sequence
\begin{equation}\label{eq3.4}
0\rightarrow H^0(E, \mathcal{R})\rightarrow H^0(E, \Theta_{E})\rightarrow H^0(E, \pi^{*}\Theta_{G/B})\rightarrow H^1(E, \mathcal{R})\rightarrow H^1(E, \Theta_{E})\rightarrow \cdots
\end{equation}
of $B$-modules.
Now by using projection formula (see \cite[Chapter III, Ex 8.3, p.253]{Har}) and \eqref{eq3.2}, we have
\begin{equation}\label{eq3.3}
\pi_{*}(\pi^*\Theta_{G/B}) = \Theta_{G/B}\otimes \pi_{*}\mathcal{O}_{E} = \Theta_{G/B}.
\end{equation}
Further, since $H^0(G/B, \Theta_{G/B})=\mathfrak{g}$ (see \cite{Dem aut},\cite[Theorem 2, p.75 and Theorem 1, p.130]{Akh}), we have $H^0(E, \pi^*\Theta_{G/B})=\mathfrak{g}.$
On the other hand, by using the argument as in the proof of Proposition \ref{prop 3.1}, we see that $\sigma: G\longrightarrow Aut^0(E)$ is an injective homomorphism of algebraic groups. Since Lie($Aut^0(E)$) = $H^0(E,\Theta_{E})$
(see \cite[Lemma 3.4, p.13]{MO}), the differential $d\sigma :\mathfrak{g}\longrightarrow H^0(E, \Theta_{E})$ is an injective
homomorphism of Lie algebras.
Therefore, \eqref{eq3.4} gives the following short exact sequence
\begin{equation}\label{eq3.5}
0\longrightarrow H^0(E, \mathcal{R})\longrightarrow H^0(E, \Theta_{E})\longrightarrow H^0(E, \pi^{*}\Theta_{G/B})\longrightarrow 0
\end{equation}
of $B$-modules.
Now, since the restriction of $\mathcal{R}$ to $F$ coincides with the tangent sheaf $\Theta_{F}$ of $F,$ it follows that $H^0(E, \mathcal{R})=H^0(G/B, H^0(F, \Theta_{F})).$ Thus, we have $H^0(E, \mathcal{R})= 0,$ as $H^0(G/B, H^0(F, \Theta_{F} )) = 0.$
Therefore, by using \eqref{eq3.5}, we have $H^0(E, \Theta_E)=\mathfrak{g},$ as $H^0(E, \pi^*\Theta_{G/B}) = \mathfrak{g}.$ Hence, $Aut^0(E)=G.$
Proof of (ii): Since $H^j(F,\mathcal{O}_F)= 0$ for $j \ge 1,$ we have
\begin{equation}\label{eq3.6}
R^j\pi_{*}(\pi^{*}\mathcal{O}_{G/B})=R^j\pi_{*}\mathcal{O}_{E}= 0 \text{~for~} j\ge 1.
\end{equation}
Therefore, by using projection formula (see \cite[Chapter III, Ex 8.3, p.253]{Har})
and \eqref{eq3.6}, we have
\begin{equation}\label{eq3.7}
R^j\pi_{*}(\pi^*\Theta_{G/B})= \Theta_{G/B}\otimes R^j\pi_{*}(\mathcal{O}_E)= 0 \text{~for all~} j\ge 1.
\end{equation}
The $E_{2}^{i,j}$ term of the Leray spectral sequence for $\pi$ and $\pi^{*}\Theta_{G/B}$ is
\begin{equation}
E_{2}^{i,j}= H^i(G/B, R^j\pi_{*}(\pi^{*}\Theta_{G/B}))
\end{equation}
Since $R^j\pi_{*}(\pi^{*}\Theta_{G/B})=0$ for all $j\ge 1$ (see \eqref{eq3.7}), we have $E_{2}^{i,j}= 0$ for $j\ge 1.$ Therefore, by using the degenerate case of the Leray spectral sequence and \eqref{eq3.3}, we have $$H^j(E, \pi^*\Theta_{G/B})=H^j(G/B, \pi_{*}(\pi^*\Theta_{G/B}))=H^j(G/B, \Theta_{G/B})$$ for $j\ge 1.$
Now, since $H^j(G/B, \Theta_{G/B})=0$ for all $j\ge 1$ (see \cite{Dem aut},\cite[Theorem 2, p.75 and Theorem 1, p.130]{Akh}), we have $H^j(E,\pi^*\Theta_{G/B}) = 0$ for $j\ge 1.$
Therefore, \eqref{eq3.4} induces the following exact sequence
\begin{equation*}
0\rightarrow H^0(E, \mathcal{R})\rightarrow H^0(E, \Theta_{E})\rightarrow H^0(E, \pi^{*}\Theta_{G/B})\rightarrow H^1(E, \mathcal{R})\rightarrow H^1(E, \Theta_{E})\rightarrow 0
\end{equation*}
of $B$-modules
and
\begin{equation}\label{eq3.9}
H^j(E, \mathcal{R})\simeq H^j(E, \Theta_{E}) \text{~for~} j \ge 2.
\end{equation}
Moreover, by using \eqref{eq3.5} and \eqref{eq3.9}, we have
\begin{equation}
H^j(E, \mathcal{R})\simeq H^j(E, \Theta_{E}) \text{~for~} j\ge 1.
\end{equation}
Now, since $H^j(G/B, H^0(F, \Theta_F )) = 0$ for $j = 1, 2,$ by using the five term exact sequence associated to the spectral sequence, we have $H^1(E, \Theta_{E})= H^0(G/B, H^1(F, \Theta_{F})).$
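Explicitly, the five term exact sequence used here reads
\begin{equation*}
0\rightarrow H^1(G/B, \pi_{*}\mathcal{R})\rightarrow H^1(E, \mathcal{R})\rightarrow H^0(G/B, R^1\pi_{*}\mathcal{R})\rightarrow H^2(G/B, \pi_{*}\mathcal{R})\rightarrow H^2(E, \mathcal{R}),
\end{equation*}
where $\pi_{*}\mathcal{R}$ (respectively, $R^1\pi_{*}\mathcal{R}$) is the homogeneous sheaf on $G/B$ associated to $H^0(F, \Theta_{F})$ (respectively, $H^1(F, \Theta_{F})$); the vanishing of the two outer $G/B$-terms gives the asserted isomorphism.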
\end{proof}
\begin{corollary}
Let $F,$ $E$ be as in Theorem \ref{thm 3.3}, and suppose that $F$ satisfies $H^0(G/B, H^0(F,\Theta_{F}))=0$ but $H^0(F, \Theta_{F})\neq 0.$ Then $E$ is not isomorphic to $G/B\times F.$ In particular, the action of $B$ on $F$ cannot be extended to an action of $G$ on $F.$
\end{corollary}
\begin{proof}
If $E\simeq G/B\times F,$ then by \cite[Corollary 2.3, p.46]{Bri}, $Aut^0(E)=Aut^0(G/B)\times Aut^0(F).$ Since $Aut^0(G/B)=G$ (see \cite{Dem aut}), we have $Aut^0(E)= G \times Aut^0(F).$ Further, since $H^0(F, \Theta_{F})\neq 0,$ by \cite[Lemma 3.4, p.13]{MO}, we conclude that $Aut^0(F)$ is not a trivial group. Therefore, $Aut^0(E)\neq G,$ which contradicts Theorem \ref{thm 3.3}(i).
\end{proof}
{Note:} For any $G$-twisted variety $E,$ we have the following observations from the proof of Theorem \ref{thm 3.3}.
\begin{itemize}
\item[(1)] Let $\pi_{*}: Aut^0(E)\longrightarrow G$ be as in Proposition \ref{prop 3.1}. Then we have Lie($\ker(\pi_*))=H^0(E,\mathcal{R})=H^0(G/B,H^0(F,\Theta_{F})).$
\end{itemize}
\begin{itemize}
\item[(2)] Lie($Aut^0(E)$) fits into the exact sequence
\begin{equation*}
0\rightarrow H^0(E,\mathcal{R})\rightarrow H^0(E, \Theta_{E})\rightarrow \mathfrak{g} \rightarrow 0
\end{equation*}
of $G$-modules. Since $G$ is simple, the above exact sequence splits, i.e., $H^0(E, \Theta_{E})=H^0(E, \mathcal{R})\oplus \mathfrak{g}$ as $G$-modules.
\end{itemize}
\begin{remark}
All the results of this section hold in a more general and natural setting, where $G$ is a connected reductive group over an algebraically closed field of characteristic zero and $B$ is a Borel subgroup of $G.$ One just has to replace $G$ with the adjoint group $G_{ad}$ at appropriate places.
\end{remark}
\section{$G$-Schubert variety and $G$-BSDH-variety}
Throughout this section we assume $G$ to be simply-laced. In this section we study the connected component containing the identity automorphism of the group of all algebraic automorphisms of a $G$-Schubert variety and $G$-BSDH-variety.
\subsection{Connected automorphism group of a $G$-Schubert variety:}
We recall some results on the automorphism group of a Schubert variety (see \cite[Theorem 4.2(1), p.772]{Kan}). In \cite{Kan}, Theorem 4.2 is stated for smooth Schubert varieties, but the proof goes through for any Schubert variety. Here we give a brief sketch of the proof.
Let $\alpha_{0}$ denote the highest root of $G$ with respect to $T$ and $B^{+}.$ For the left action of $G$ on $G/B,$ let $P={\text{Stab}}_{G}(X(w))$ denote the stabilizer of $X(w)$ in $G.$ Then $P= P_{I(w)}$ for some subset $I(w)\subseteq S.$ Let $\varphi_{w}: P_{I(w)}\longrightarrow Aut^0(X(w))$ be the natural map induced by the natural left action of $P_{I(w)}$ on $X(w).$ Let $\mathfrak{p}_{I(w)}$ denote the Lie algebra of $P_{I(w)}.$
\begin{lemma}\label{lemma 4.1}
The natural map $\varphi_{w}: P_{I(w)}\longrightarrow Aut^0(X(w))$ is a surjective homomorphism of algebraic groups.
\end{lemma}
\begin{proof}
Recall from \cite[Lemma 3.4, p.13]{MO} that Lie($Aut^0(X(w))$)= $H^0(X(w),\Theta_{X(w)}).$
To prove $\varphi_{w}$ is surjective it is enough to prove that $d\varphi_{w}: \mathfrak{p}_{I(w)}\longrightarrow H^0(X(w), \Theta_{X(w)})$ is
surjective.
Let $\Theta_{G/B}$ be the tangent sheaf of $G/B.$ Then note that $\Theta_{G/B}$ is the sheaf corresponding to the tangent bundle $\mathcal{L}(\mathfrak{g/b})$ of $G/B.$ Further, we
have $H^0(X(w), \Theta_{X(w)})\subseteq H^0(X(w), \Theta_{G/B}|_{X(w)})= H^0(w,\mathfrak{g/b}).$
By \cite[Lemma 3.5, p.770]{Kan}, the restriction map $H^0(G/B, \mathfrak{g/b})\longrightarrow H^0(w,\mathfrak{g/b})$ is surjective.
Thus for $D'\in H^0(X(w), \Theta_{X(w)})\subseteq H^0(w, \Theta_{G/B}|_{X(w)})= H^0(w, \mathfrak{g/b}),$ there exists $D\in H^0(G/B, \Theta_{G/B})$ whose image under the restriction map is $D'.$ Consequently, $D$ preserves the ideal sheaf of $X(w)$ in $G/B,$ and hence $D\in$ Lie$(\text{Stab}_G(X(w)))=\mathfrak{p}_{I(w)}.$ This proves the lemma.
\end{proof}
We recall some definitions and facts which we use later (see \cite{Bot},\cite{Akh},\cite{Snow}).
Let $\lambda \in X(T).$ Then $\lambda$ is called singular if $\langle \lambda, \alpha \rangle = 0$ for some $\alpha \in R^{+},$ otherwise it is called non-singular.
The index of $\lambda$ is defined to be ${\rm ind}(\lambda):=\min\{\ell(w) : w(\lambda)\in X(T)^{+}\}.$
Fact 1. If $G$ is simply-laced and $\beta\in R$ is such that $\beta+\rho$ is non-singular, then either $\beta=\alpha_{0}$ or $\beta$ is the negative of a simple root.
Fact 2. If $G$ is simply-laced and $\beta \in R$ is such that $\beta+\rho$ is non-singular, then the index of $\beta+\rho$ is either $0$ or $1$ (see \cite[p.47-48]{Snow}). Further, if the index of $\beta+\rho$ is $0$ (respectively, $1$), then $\beta= \alpha_{0}$ (respectively, $\beta$ is the negative of a simple root).
Now we prove
\begin{lemma}\label{lemma 4.2}
Assume that $w\neq w_{0} \in W.$ Then we have $H^j(G/B, H^0(X(w), \Theta_{X(w)}))=0$
for all $j\ge 0.$
\end{lemma}
\begin{proof}
By Lemma \ref{lemma 4.1}, we have the following short exact sequence
\begin{equation}\label{eq4.1}
0\longrightarrow \mathcal{K}_{w}\longrightarrow \mathfrak{p}_{I(w)}\longrightarrow H^0(X(w), \Theta_{X(w)})\longrightarrow 0
\end{equation}
of $B$-modules, where $\mathcal{K}_{w}$ denotes the kernel of $\varphi_{w}.$
Since $w\neq w_{0},$ by the argument as in \cite[Proof of Theorem 4.1, p.771]{Kan}, we have
$H^0(G/B, \mathfrak{p}_{I(w)})=0.$ Further, by using \cite[Lemma 3.4, p.770]{Kan}, we have $H^j(G/B, \mathfrak{p}_{I(w)})=0$ for $j\ge 1.$
Therefore, the long exact sequence associated to \eqref{eq4.1}, gives $$H^j(G/B, H^0(X(w), \Theta_{X(w)}))=H^{j+1}(G/B, \mathcal{K}_{w})$$ for $j\ge 0.$
Let $J=\{\alpha \in S : (\mathcal{K}_{w})_{-\alpha}\neq 0\}.$ Then by \eqref{eq4.1} we observe that $\mathbb{C}h(\alpha)\oplus \mathbb{C}_{-\alpha}\subseteq \mathcal{K}_{w}$ for all $\alpha \in J.$ Indeed, for $\alpha \in J,$ if $\mathbb{C}h(\alpha)\nsubseteq \mathcal{K}_{w},$ then by using \eqref{eq4.1} we have $\mathbb{C}h(\alpha)\subseteq H^0(X(w), \Theta_{X(w)}).$ Hence, $\mathbb{C}h(\alpha)\oplus \mathbb{C}_{-\alpha}\subseteq H^0(X(w), \Theta_{X(w)})$ as $H^0(X(w), \Theta_{X(w)})$ is a $B$-module.
Now consider the natural projection map $p: G/B \longrightarrow G/P_{J}.$ Note that $P_{J}/B \simeq L_{J}/B_{J}= B_{J}w_{0,J}B_{J}/B_{J},$ where $w_{0,J}$ denotes the longest element of $W_{J}.$ Thus we have $H^j(P_{J}/B, \mathcal{K}_{w})\simeq H^j(L_{J} /B_{J}, \mathcal{K}_{w})$ for $j\ge 0,$ where $L_{J}$ is the Levi subgroup of $P_{J}$ and $B_{J}=B\cap L_{J}.$
Fix a reduced expression $s_{i_{1}}s_{i_{2}}\cdots s_{i_{r}}$ of $w_{0,J}.$ We use this reduced
expression to compute $H^j(L_{J}/B_{J},\mathcal{K}_{w})$ for $j\ge 0.$
Since $\alpha_{i_{r}}\in J,$ by \cite[Lemma 3.3, p.767]{Kan}, $\mathbb{C}h(\alpha_{i_{r}})\oplus \mathbb{C}_{-\alpha_{i_{r}}}$ is an indecomposable
$B_{\alpha_{i_{r}}}$-summand of $\mathcal{K}_{w}.$ Thus by using Lemma \ref{lemma 2.4}, $\mathbb{C}h(\alpha_{i_{r}})\oplus\mathbb{C}_{-\alpha_{i_{r}}}=V\otimes \mathbb{C}_{-\omega_{i_{r}}},$ where $V$ is the standard two dimensional irreducible $\widetilde{L}_{\alpha_{i_{r}}}$-module. Thus by Lemma \ref{lemma 2.3}(4), we have $H^j(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathbb{C}h(\alpha_{i_{r}})\oplus \mathbb{C}_{-\alpha_{i_{r}}})= 0$ for $j= 0,1.$
Let $V_{1}$ be an indecomposable $B_{\alpha_{i_{r}}}$-summand of $\mathcal{K}_{w}$ other than $\mathbb{C}h(\alpha_{i_{r}})\oplus \mathbb{C}_{-\alpha_{i_{r}}}.$ Then by Lemma \ref{lemma 2.4}, we have $V_{1}=V'\otimes \mathbb{C}_{a\omega_{i_{r}}},$ where $V'$ is an irreducible representation of $\widetilde{L}_{\alpha_{i_{r}}}$ and $a\in \mathbb{Z}.$ Since $G$ is simply-laced and the non-zero weights of $V_{1}$ are roots, we have $a=-1, 0, 1.$ Hence, by using Lemma \ref{lemma 2.3}, we get $H^1(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, V_{1})=0.$
Therefore, $H^1(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w})=0,$ and neither the weight space $\mathbb{C}h(\alpha_{i_{r}})$ nor $\mathbb{C}_{-\alpha_{i_{r}}}$ appears
in $H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w}).$
Next we compute $H^j(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}, H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w}))$ for $j= 0, 1.$
By \cite[Lemma 3.3, p.767]{Kan}, $\mathbb{C}h(\alpha_{i_{r-1}})\oplus \mathbb{C}_{-\alpha_{i_{r-1}}}$ is an indecomposable $B_{\alpha_{i_{r-1}}}$-
submodule of $\mathcal{K}_{w}.$ So, $\mathbb{C}h(\alpha_{i_{r-1}})\oplus \mathbb{C}_{-\alpha_{i_{r-1}}}$ is also an indecomposable $B_{\alpha_{i_{r-1}}}$-submodule of $H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w}).$
Thus by using the argument similar to above, we get
\begin{itemize}
\item $H^1(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}
, H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w}))=0$
\item neither $\mathbb{C}h(\alpha_{i_{r-1}})$ nor $\mathbb{C}_{-\alpha_{i_{r-1}}}$ appear in $H^0(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}, H^0
(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w})).$
\end{itemize}
Now we compute $H^j(\widetilde{L}_{\alpha_{i_{r-2}}}/\widetilde{B}_{\alpha_{i_{r-2}}}, H^0
(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}
, H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w})))$ for $j = 0, 1.$
Now if $\alpha_{i_{r-2}}= \alpha_{i_{r}},$ then clearly $\mathbb{C}h(\alpha_{i_{r-2}})\oplus \mathbb{C}_{-\alpha_{i_{r-2}}}\nsubseteq H^0(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}, H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}},\mathcal{K}_{w}))$ as $\mathbb{C}h(\alpha_{i_{r}})\oplus \mathbb{C}_{-\alpha_{i_{r}}}\nsubseteq H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w}).$
Thus by using the argument similar to above, we get
\begin{itemize}
\item $H^1(\widetilde{L}_{\alpha_{i_{r-2}}}/\widetilde{B}_{\alpha_{i_{r-2}}}, H^0
(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}
, H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w})))= 0$
\item neither $\mathbb{C}h(\alpha_{i_{r-2}})$ nor $\mathbb{C}_{-\alpha_{i_{r-2}}}$ appears in $$H^0(\widetilde{L}_{\alpha_{i_{r-2}}}/\widetilde{B}_{\alpha_{i_{r-2}}}, H^0(\widetilde{L}_{\alpha_{i_{r-1}}}/\widetilde{B}_{\alpha_{i_{r-1}}}, H^0(\widetilde{L}_{\alpha_{i_{r}}}/\widetilde{B}_{\alpha_{i_{r}}}, \mathcal{K}_{w}))).$$
\end{itemize}
On the other hand, if $\alpha_{i_{r-2}}\neq \alpha_{i_{r}},$ then we proceed as we have proceeded before for $\alpha_{i_{r-1}}.$
Then proceeding successively and using arguments similar to the above, we finally get that zero weights and negatives of simple roots do not occur in $H^0(L_{J}/B_{J},\mathcal{K}_{w}),$ and that $H^j(L_{J}/B_{J}, \mathcal{K}_{w})=0$ for all $j\ge 1.$
Thus by using the Leray spectral sequence in the degenerate case, we obtain $H^j(G/B, \mathcal{K}_{w})=H^j(G/P_{J}, H^0(P_{J}/B, \mathcal{K}_{w}))$ for $j\ge 1.$
Note that $H^0(P_{J}/B, \mathcal{K}_{w})$ is a $B$-module whose non-zero weights are roots other than negatives of simple roots. Thus by using Fact 2 and \cite[Lemma 1.1, p.353]{MS}, we have $H^j(G/P_{J}, H^0(P_{J}/B, \mathcal{K}_{w}))= 0$ for $j\ge 1.$
Therefore, $H^j(G/B, H^0(X(w), \Theta_{X(w)})) = 0$ for $j\ge 0.$
\end{proof}
\begin{proposition}\label{prop 4.3}
Assume that $w\neq w_{0} \in W.$ Then we have
\begin{itemize}
\item [(i)] $Aut^0(\mathcal{X}(w))=G.$
\item[(ii)] $H^1(\mathcal{X}(w), \Theta_{\mathcal{X}(w)})=H^0(G/B, H^1(X(w), \Theta_{X(w)})).$
\end{itemize}
\end{proposition}
\begin{proof}
Proof of (i): By Lemma \ref{lemma 4.2}, we have $H^0(G/B, H^0(X(w), \Theta_{X(w)})) = 0.$ Therefore, by Theorem \ref{thm 3.3}(i), we have $Aut^0(\mathcal{X}(w))= G.$
Proof of (ii): By Lemma \ref{lemma 4.2}, we have $H^j(G/B, H^0(X(w), \Theta_{X(w)}))= 0$ for all $j\ge 1.$ Therefore, by Theorem \ref{thm 3.3}(ii), we have $H^1(\mathcal{X}(w), \Theta_{\mathcal{X}(w)})= H^0(G/B, H^1(X(w), \Theta_{X(w)})).$
\end{proof}
\begin{corollary}
Let $w\in W$ be such that $w\neq id, w_{0},$ where $id$ denotes the identity element of $W.$ Then $\mathcal{X}(w)$ is not isomorphic to $G/B \times X(w).$
\end{corollary}
\begin{proof}
If $\mathcal{X}(w)=G/B \times X(w),$ then by \cite[Corollary 2.3, p.46]{Bri}, $Aut^0(\mathcal{X}(w))=Aut^0(G/B)\times Aut^0(X(w)).$ Since $Aut^0(G/B)= G$ (see \cite{Dem aut}), we have $Aut^0(\mathcal{X}(w))= G\times Aut^0(X(w)).$ Further, since $w\neq id,$ by using \cite[Theorem 4.2(2), p.772]{Kan}, it follows that $Aut^0(X(w))$ is not a trivial group. Hence, $Aut^0(\mathcal{X}(w))\neq G,$ which contradicts Proposition \ref{prop 4.3}(i), as $w\neq w_{0}.$
\end{proof}
\begin{remark}
Note that if $w= w_{0},$ then $\mathcal{X}(w)= G/B\times G/B.$ Thus $Aut^0(\mathcal{X}(w))=G\times G$ and $H^j(\mathcal{X}(w), \Theta_{\mathcal{X}(w)}) = 0$ $(j\ge 1)$ for $w=w_{0}.$
\end{remark}
\subsection{Connected automorphism group of a $G$-BSDH variety:}
We recall the following result on automorphism group of a BSDH-variety which we use later (see \cite{CKP}).
Let $w= s_{i_{1}}s_{i_{2}}\cdots s_{i_{r}}$ be a reduced expression and $\underline{i}:=(i_1,\ldots, i_r).$ Let $J(w, \underline{i})= \{\alpha_{i_{j}}: \langle \alpha_{i_{j}}, \alpha_{i_{k}}\rangle =0 \text{ for all } 1 \le k\le j-1\}.$ Let $\mathfrak{p}_{J(w,\underline{i})}$ denote the Lie algebra of $P_{J(w,\underline{i})}.$
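To illustrate the definition: for $w = s_{1}s_{2}$ and $\underline{i} = (1,2),$ we have $J(w, \underline{i}) = \{\alpha_{1}\}$ if $\langle \alpha_{2}, \alpha_{1}\rangle \neq 0,$ and $J(w, \underline{i}) = \{\alpha_{1}, \alpha_{2}\}$ if $\alpha_{1}$ and $\alpha_{2}$ are orthogonal (the condition on $\alpha_{i_{1}}$ is vacuous, so $\alpha_{i_{1}} \in J(w, \underline{i})$ always).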
\begin{lemma}\label{lemma 4.6}
Let $\Theta_{Z(w,\underline{i})}$ be the tangent sheaf of $Z(w, \underline{i}).$ Then $H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})})$ is a quotient of $\mathfrak{p}_{J(w,\underline{i})}.$ Furthermore, $H^j(Z(w,\underline{i}), \Theta_{Z(w,\underline{i})})=0$ for all $j\ge 1.$
\end{lemma}
\begin{proof}
See \cite[Theorem 7.1(4), p.690]{CKP} and \cite[Proposition 3.1(2), p.673]{CKP}.
\end{proof}
\begin{remark}
\begin{itemize}
\item[(i)] If $G$ is simply-laced, then $H^j(Z(w,\underline{i}),\Theta_{Z(w,\underline{i})})$ ($j\ge 1$) are independent of the choice of reduced expression $\underline{i}$ of $w.$
\end{itemize}
\begin{itemize}
\item[(ii)] If $G$ is not simply-laced, then $H^1(Z(w,\underline{i}), \Theta_{Z(w,\underline{i})})$ depends on the choice of a reduced expression $\underline{i}$ of $w$ (see \cite{CKP}, \cite{CK}, \cite{KS1}, and \cite{KS2}).
\end{itemize}
\end{remark}
\begin{lemma}\label{lemma 4.8}
$H^j(G/B, H^0(Z(w,\underline{i}), \Theta_{Z(w,\underline{i})}))=0$ for $j\ge0.$
\end{lemma}
\begin{proof}
By Lemma \ref{lemma 4.6}, we have the following short exact sequence
\begin{equation}\label{eq4.2}
0\longrightarrow \mathcal{K}_{w,\underline{i}}\longrightarrow \mathfrak{p}_{J(w,\underline{i})}\longrightarrow H^0(Z(w, \underline{i}),\Theta_{Z(w,\underline{i})})\longrightarrow 0
\end{equation}
of $B$-modules, where $\mathcal{K}_{w,\underline{i}}$ denotes the kernel.
Since $J(w,\underline{i})\neq S,$ by using \cite[Proof of Theorem 4.1, p.771]{Kan} and \cite[Lemma 3.4, p.770]{Kan}, we have $H^j(G/B, \mathfrak{p}_{J(w,\underline{i})})=0$ for all $j\ge 0.$
Therefore, the long exact sequence associated to \eqref{eq4.2} gives
$$H^j(G/B, H^0(Z(w,\underline{i}), \Theta_{Z(w,\underline{i})}))=H^{j+1}(G/B, \mathcal{K}_{w,\underline{i}})$$ for $j\ge 0.$
Let $J= \{\alpha\in S: (\mathcal{K}_{w,\underline{i}})_{-\alpha}\neq 0\}.$ Then by \eqref{eq4.2} we observe that $\mathbb{C}h(\alpha)\oplus \mathbb{C}_{-\alpha}\subseteq \mathcal{K}_{w,\underline{i}}$ for all $\alpha\in J.$ Indeed, for $\alpha \in J,$ if $\mathbb{C}h(\alpha)\nsubseteq \mathcal{K}_{w,\underline{i}},$ then by using \eqref{eq4.2} we have $\mathbb{C}h(\alpha)\subseteq H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})}).$ Hence, $\mathbb{C}h(\alpha)\oplus \mathbb{C}_{-\alpha}\subseteq H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})})$ as $H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})})$ is a $B$-module.
Now consider the natural projection map $p: G/B\longrightarrow G/P_{J}.$ Note that $P_{J}/B\simeq L_{J} /B_{J}= B_{J}w_{0,J}B_{J}/B_{J},$ where $w_{0,J}$ denotes the longest element of $W_{J}.$ Thus we have $H^j(P_{J}/B, \mathcal{K}_{w,\underline{i}})\simeq H^j(L_{J} /B_{J} , \mathcal{K}_{w,\underline{i}})$ for $j\ge 0,$ where $L_{J}$ is the Levi
subgroup of $P_{J}$ and $B_{J}=B \cap L_{J}.$
Fix a reduced expression $s_{i_{1}}s_{i_2}\cdots s_{i_r}$ of $w_{0,J}.$ We use this reduced expression to compute $H^j(L_{J} /B_{J} , \mathcal{K}_{w,\underline{i}})$ for $j \ge 0.$
The rest of the argument is similar to the proof of Lemma \ref{lemma 4.2}.
\end{proof}
\begin{proposition}\label{prop 4.9}
\begin{itemize}
\item [(i)] $Aut^0(\mathcal{Z}(w,\underline{i}))= G.$
\item [(ii)] $H^j(\mathcal{Z}(w, \underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})= 0$ for $j\ge 1.$
\end{itemize}
\end{proposition}
\begin{proof}
Proof of (i): By Lemma \ref{lemma 4.8}, we have $H^0(G/B, H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})})) = 0.$ Therefore, by Theorem \ref{thm 3.3}(i), we have $Aut^0(\mathcal{Z}(w,\underline{i}))= G.$
Proof of (ii): Recall that $\pi : \mathcal{Z}(w,\underline{i})\longrightarrow G/B$ is the natural projection map. Then by the proof of Theorem \ref{thm 3.3}, we have $H^j(\mathcal{Z}(w,\underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})}) = H^j(\mathcal{Z}(w, \underline{i}), \mathcal{R})$ for $j \ge 1,$ where
$\mathcal{R}$ denotes the relative tangent sheaf with respect to $\pi.$ Since the restriction of $\mathcal{R}$ to $Z(w,\underline{i})$ is $\Theta_{Z(w,\underline{i})}$, by Lemma \ref{lemma 4.6}, we have $H^j(Z(w, \underline{i}), \mathcal{R}|_{Z(w,\underline{i})}) = H^j(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})}) = 0$ for $j \ge 1.$ Hence, by applying the Leray spectral sequence in the degenerate case, we obtain $H^j(\mathcal{Z}(w, \underline{i}), \mathcal{R})=H^j(G/B, H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})})).$ On the other hand, by Lemma \ref{lemma 4.8}, $H^j(G/B, H^0(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})}))= 0$ for all $j\ge 1.$ Hence, the proof of (ii) follows.
\end{proof}
\begin{corollary}
Assume that $w\neq id.$ Then $\mathcal{Z}(w, \underline{i})$ is not isomorphic to $G/B \times Z(w, \underline{i}).$
\end{corollary}
\begin{proof}
If $\mathcal{Z}(w, \underline{i})=G/B \times Z(w, \underline{i}),$ then by\cite[Corollary 2.3, p.46]{Bri}, we have $Aut^0(\mathcal{Z}(w, \underline{
i}))=Aut^0(G/B) \times Aut^0(Z(w, \underline{i})).$ Since $Aut^0(G/B)=G$ (see \cite{Dem aut}), we have $Aut^0(\mathcal{Z}(w,\underline{i}))=G \times Aut^0(Z(w, \underline{
i})).$ Further, since $w\neq id,$ by using \cite[Theorem 7.1(4), p.690]{CKP}, it follows that $Aut^0(Z(w, \underline{i}))$ is not a trivial group. Hence, $Aut^0(\mathcal{Z}(w, \underline{i}))\neq G,$ which contradicts Proposition \ref{prop 4.9}(i).
\end{proof}
\begin{corollary}
$Aut^0(\mathcal{Z}(w,\underline{i}))$ and $H^j(\mathcal{Z}(w,\underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})$ ($j\ge 1$) are independent of the choice of reduced expression $\underline{i}$ of $w.$
\end{corollary}
\begin{proof}
Follows from Proposition \ref{prop 4.9}.
\end{proof}
We conclude this article by giving an example which shows that if $G$ is not simply-laced, then $H^1(\mathcal{Z}(w,\underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})$ might not vanish.
\begin{example}\label{ex4.12}
Let $G=SO(5,\mathbb{C}).$ Let $w= s_1s_2s_1,$ $\underline{i}=(1, 2, 1),$ and $w_1= s_1s_2,$ $\underline{i}_{1}=(1, 2).$ Then $H^1(\mathcal{Z}(w, \underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})\neq 0.$
\end{example}
\begin{proof}
Recall that we have the following short exact sequence
\begin{equation*}
0\longrightarrow \mathcal{L}(\alpha_{1})\longrightarrow \Theta_{Z(w,\underline{i})}\longrightarrow f_{3}^{*}\Theta_{Z(w_{1},\underline{i}_{1})}\longrightarrow 0.
\end{equation*}
Note that $H^1(w, \alpha_{1})=\mathbb{C}_{\alpha_{1}+\alpha_{2}}\oplus \mathbb{C}_{\alpha_{2}}$ and $H^0(Z(w_1, \underline{i}_{1}), \Theta_{Z(w_1,\underline{i}_{1})})_{\mu} = 0$ for $\mu = \alpha_1+\alpha_2,\alpha_2.$
Therefore, by using the long exact sequence associated to the above short exact sequence, we have
\begin{equation*}
0 \longrightarrow H^1(w, \alpha_{1})\longrightarrow H^1(Z(w, \underline{i}), \Theta_{Z(w,\underline{i})})\longrightarrow H^1(Z(w_1,\underline{i}_{1}), \Theta_{Z(w_1,\underline{i}_1)})\longrightarrow 0.
\end{equation*}
On the other hand, we note that $H^1(Z(w_1, \underline{i}_{1}), \Theta_{Z(w_1,\underline{i}_{1})})= 0.$
Therefore, $H^1(Z(w,\underline{i}), \Theta_{Z(w,\underline{i})})= H^1(w,\alpha_{1}) = \mathbb{C}_{\alpha_{1}+\alpha_{2}}\oplus \mathbb{C}_{\alpha_{2}}.$
Note that $w_0= s_1s_2s_1s_2.$ Then $H^0(G/B, H^1(w,\alpha_{1}))= \mathbb{C}_{\alpha_{1}+\alpha_{2}}\oplus \mathbb{C}_{\alpha_{2}}\oplus \mathbb{C}h(\alpha_2)\oplus \mathbb{C}_{-\alpha_{2}}\oplus \mathbb{C}_{-(\alpha_{1}+\alpha_{2})}=V(\omega_1),$ where $V(\omega_1)$ denotes the finite dimensional irreducible $G$-module with highest weight $\omega_1.$ Hence, arguing as in the proof of Proposition \ref{prop 4.9}(ii), the Leray spectral sequence for $\pi$ gives $H^1(\mathcal{Z}(w, \underline{i}), \Theta_{\mathcal{Z}(w,\underline{i})})\cong H^0(G/B, H^1(w,\alpha_{1}))=V(\omega_1)\neq 0.$
\end{proof}
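In the above example we used the standard Bourbaki labelling for the root system of type $B_2$, in which $\alpha_1=\epsilon_1-\epsilon_2$ is long and $\alpha_2=\epsilon_2$ is short; this is the labelling consistent with the weights displayed above, since the positive roots are then
\[
\alpha_1,\quad \alpha_2,\quad \alpha_1+\alpha_2,\quad \alpha_1+2\alpha_2,
\]
and the weights of the $5$-dimensional vector representation $V(\omega_1)$ are exactly $\pm(\alpha_1+\alpha_2)=\pm\epsilon_1,$ $\pm\alpha_2=\pm\epsilon_2,$ and $0.$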
{\bf Declarations}
{\bf Conflict of Interest} The authors declare that they have no conflict of interest.
\section*{Acknowledgements}
We acknowledge the financial support of the Royal Society (S.B.D.) and
the UK EPSRC, and invaluable discussions with Igor Mazin, Michelle Johannes and Zahid Hasan.
This experiment was performed with the approval of the Japan
Synchrotron Radiation Research Institute (JASRI, proposal nos. 2005A0092-ND3a-np and 2005B0182-ND3a-np).
This work was partially supported by a Grant-in-Aid for Scientific Research (No.18340111) from the Ministry of Education,
Culture, Sports, Science and Technology, Japan.
\section{Introduction}
Imagine we have $N$ conditionally independent draws of $\boldsymbol{X}$ from a model and we would like to consider a variational approximation under the variational posterior $q(\cdot|\eta)$. In a number of interesting cases a variational bound is not available directly, but a bound becomes available if we augment the model with $N$ additional variational parameters $\zeta_1, ..., \zeta_N$. Specifically, we are interested in cases where the lower bound has the following form:
\[
\log p(\boldsymbol{X}_1, ..., \boldsymbol{X}_N) \ge \mathcal{H}(\eta) + \sum_n^N \mathcal{F}(\boldsymbol{X}_n,\zeta_n, \eta) = \mathcal{L}
\]
Typically the term $\mathcal{F}(\boldsymbol{X}_n,\zeta_n, \eta)$, summed over the data, is a bound on the likelihood, and $\mathcal{H}(\eta)$ is the negative Kullback Leibler divergence.
This bound contains variational parameters $\zeta_1,...,\zeta_N$ that must be optimized, and whose number grows in proportion to the number of records. This growth in dimension can make large data inference intractable; instead we consider using a variational auto-encoder $\zeta_n = f_\Xi(\boldsymbol{X}_n)$, which reduces the dimension of the optimization problem to that of $(\eta,\Xi)$. Alternatively, if a variational EM algorithm exists, then there may be a tractable known expression for $\zeta_n = f(\boldsymbol{X}_n)$ which does not require learning the parameters of an auto-encoder, simplifying the learning process.
Due to the sum structure, it is also possible to use Stochastic Gradient Descent (SGD), or the Robbins-Monro algorithm, by considering a noisy version of the bound. Combining these two steps results in the following noisy (but fast) objective:
\[
\hat{\mathcal{L}}(\boldsymbol{X}_n,\eta,\Xi) = \frac{1}{N} \mathcal{H}(\eta) + \mathcal{F}(\boldsymbol{X}_n, f_\Xi(\boldsymbol{X}_n), \eta)
\]
While this method uses a variational auto-encoder $f_\Xi(\cdot)$ (if no EM step is available), unlike \cite{kingma2013auto,kingma2015variational} it uses an analytical lower bound in place of the re-parameterization trick. Analytical bounds are available for the output layers of many deep neural networks, including binary classifiers, categorical classifiers and Poisson count models, and they do not require Monte Carlo samples to be drawn from the variational distribution.
The key advantage of this method is that training may be done by simply replacing the output layer with the auto-encoding analytical bound. This can then be trained using standard SGD, without the re-parameterization trick and with less noise in the gradients. Our method does however employ additional analytical bounds that the Kingma and Welling algorithm does not, meaning an additional approximation is used.
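To make the training procedure concrete, the following is a minimal sketch of the resulting objective; the helper names \texttt{encoder}, \texttt{analytic\_bound} and \texttt{neg\_kl}, and the use of PyTorch, are illustrative assumptions rather than part of the method:
\begin{verbatim}
import torch

def noisy_lower_bound(x_batch, eta, encoder, analytic_bound, neg_kl, N):
    # zeta_n = f_Xi(X_n): amortize the per-record variational parameters
    zeta = encoder(x_batch)
    # (1/N) H(eta) plus the minibatch average of F(X_n, zeta_n, eta);
    # in expectation this equals L / N
    return neg_kl(eta) / N + analytic_bound(x_batch, zeta, eta).mean()

# training maximizes the bound with plain SGD on (eta, Xi):
# loss = -noisy_lower_bound(x_batch, eta, encoder, analytic_bound, neg_kl, N)
# loss.backward(); optimizer.step()
\end{verbatim}
No sampling from the variational distribution is required, so the gradients are exact given the minibatch.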
Models that have this form include fully Bayesian treatments of latent variable models that are immediately recognizable as candidates for variational auto-encoders, but there are also models, including Bayesian logistic regression, which can be put into this form; the use of a variational auto-encoder in this setting doesn't have the usual ``auto-encoding'' interpretation of reconstructing the input data.
The methodology can also be applied to solve integrated maximum likelihood problems for latent variable models of the following form:
\[
p(\boldsymbol{X}_n|\theta) = \int p(\boldsymbol{X}_n,z_n|\theta) dz_n \ge e^{\mathcal{F}(\zeta_n,\theta)}.
\]
Using a variational auto-encoder $\zeta_n = f_\Xi(\boldsymbol{X}_n)$, we bound the dimension to the size of $\theta,\Xi$ and can obtain noisy estimates of the bound:
\[
\hat{\mathcal{L}}(\boldsymbol{X}_n,\theta,\Xi) = \mathcal{F}(f_\Xi(\boldsymbol{X}_n),\theta)
\]
\section{Binary Classification Output Layer}
We consider the logistic regression model:
\[
\boldsymbol{\beta} \sim \mathcal{N}(\boldsymbol{\mu}_{\beta},\boldsymbol{\Sigma}_{\beta}), \hspace{2cm} y_n|\boldsymbol{X}_n,\boldsymbol{\beta} \sim {\rm Bernoulli}(\sigma(\boldsymbol{X}_n^T \boldsymbol{\beta})).
\]
We can also view $\boldsymbol{X}$ as the outputs from the second-to-last layer of a deep network, and $\boldsymbol{\beta}$ as the weights of the final layer. We can lower bound the log marginal likelihood using both the ELBO and the Jaakola and Jordan bound \cite{jaakkola1997variational} \cite{ormerod2010explaining}, with respect to a variational distribution $q(\cdot)$ which we take to be a normal distribution of the form $\boldsymbol{\beta} \sim \mathcal{N}(\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q)$:
\begin{align*}
\rm ELBO & = - {\rm KL}(\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q,\boldsymbol{\mu}_{\beta},\boldsymbol{\Sigma}_\beta) + \mathop{\mathbb{E}_q[\log p(D|\beta)]}\\
& = - {\rm KL}(\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q,\boldsymbol{\mu}_{\beta},\boldsymbol{\Sigma}_\beta) + \sum_n^N y_n \boldsymbol{X}_n^T \boldsymbol{\mu}_q - \mathop{\mathbb{E}_q[\log(1 + \exp(\boldsymbol{X}_n^T \boldsymbol{\beta}))]}\\
& \ge - {\rm KL}(\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q,\boldsymbol{\mu}_{\beta},\boldsymbol{\Sigma}_\beta) + \sum_n^N y_n \boldsymbol{X}_n^T \boldsymbol{\mu}_q - \frac{1}{2} \boldsymbol{X}_n^T \boldsymbol{\mu}_q + \max_{\zeta_n} ( A(\zeta_n) ( (\boldsymbol{X}_n^T \boldsymbol{\mu}_q)^2 + \boldsymbol{X}_n^T \boldsymbol{\Sigma}_q X_n ) + C(\zeta_n))
\end{align*}
\newpage
where:
\[
A(\zeta) = -\tanh(\zeta/2)/(4 \zeta) \hspace{2cm} C(\zeta) = \zeta/2 -\log(1+e^\zeta) + \zeta \tanh(\zeta/2) /4.
\]
and
\[
{\rm KL}(\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q,\boldsymbol{\mu}_{\beta},\boldsymbol{\Sigma}_\beta) = { 1 \over 2 }\log \frac{| \boldsymbol{\Sigma}_{\beta} |}{| \boldsymbol{\Sigma}_q |} + { 1 \over 2 } \operatorname{tr} \left( \boldsymbol{\Sigma}_{\beta}^{-1} \boldsymbol{\Sigma}_q \right) + { 1 \over 2 } \left( \boldsymbol{\mu}_{\beta} - \boldsymbol{\mu}_q\right)^{\rm T} \boldsymbol{\Sigma}_{\beta}^{-1} ( \boldsymbol{\mu}_{\beta} - \boldsymbol{\mu}_q ) - { k \over 2 }
\]
We can simplify the problem by finding a function $\zeta_n = f_\Xi(\boldsymbol{X}_n,\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q)$; while we could learn such an ``auto-encoding'' function with a deep net, in this case we can simply substitute the appropriate update from the variational EM algorithm, giving:
\[
\zeta_n = f(\boldsymbol{X}_n) = \sqrt{\boldsymbol{X}_n^T \boldsymbol{\Sigma}_q \boldsymbol{X}_n + (\boldsymbol{X}_n^T \boldsymbol{\mu}_q)^2}
\]
\begin{align*}
& \mathcal{L}(\boldsymbol{X},\boldsymbol{y},\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q) = - {\rm KL}(\boldsymbol{\mu}_q,\boldsymbol{\Sigma}_q,\boldsymbol{\mu}_{\beta},\boldsymbol{\Sigma}_\beta) \\
& + \sum_n^N y_n \boldsymbol{X}_n^T \boldsymbol{\mu}_q + A( f(\boldsymbol{X}_n) ) ( ( \boldsymbol{X}_n^T \boldsymbol{\mu}_q)^2 + \boldsymbol{X}_n^T \boldsymbol{\Sigma}_q \boldsymbol{X}_n ) - \frac{1}{2} \boldsymbol{X}_n^T \boldsymbol{\mu}_q + C( f(\boldsymbol{X}_n) ),\\
\end{align*}
Our likelihood lower bound thus becomes deterministic and can easily be optimized by any SGD-based method.
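For concreteness, the per-record terms of this bound can be computed in closed form; the following is a minimal sketch, in which PyTorch and the restriction to a diagonal $\boldsymbol{\Sigma}_q$ are illustrative assumptions made for brevity:
\begin{verbatim}
import torch

def A(z):
    # A(zeta) = -tanh(zeta/2) / (4 zeta)
    return -torch.tanh(z / 2.0) / (4.0 * z)

def C(z):
    # C(zeta) = zeta/2 - log(1 + e^zeta) + zeta * tanh(zeta/2) / 4
    return (z / 2.0 - torch.nn.functional.softplus(z)
            + z * torch.tanh(z / 2.0) / 4.0)

def jj_likelihood_bound(X, y, mu_q, sigma2_q):
    # X: (N, D), y: (N,) in {0, 1}, mu_q: (D,), sigma2_q: (D,) diagonal
    m = X @ mu_q                   # X_n^T mu_q
    v = (X ** 2) @ sigma2_q        # X_n^T Sigma_q X_n for diagonal Sigma_q
    zeta = torch.sqrt(v + m ** 2)  # closed-form variational EM update
    return torch.sum(y * m - 0.5 * m + A(zeta) * (m ** 2 + v) + C(zeta))
\end{verbatim}
The full objective is obtained by subtracting the closed-form Gaussian Kullback Leibler term, and evaluating the sum on a minibatch of rows of $\boldsymbol{X}$ gives the corresponding noisy estimate.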
\section{Multiclass Latent Variable Model}
Consider the following latent variable model, which models the behavior of $U$ user sessions interacting with $P$ products, where the number of events for the session of user $u$ is denoted $T_u$. The purpose of the model is to identify products that are of similar type that are often viewed together in the same session. The model over a single session has the following form:
\[
\boldsymbol{\omega}_u \sim \mathcal{N}(\boldsymbol{0},\boldsymbol{I}), \hspace{2cm} v_{u,1},...,v_{u,T_u} \sim {\rm categorical}({\rm softmax}(\boldsymbol{\Psi} \boldsymbol{\omega}_u + \boldsymbol{\rho})).
\]
The log probability can be written:
\begin{align*}
\log ~ & p(\boldsymbol{v},\boldsymbol{\omega}|\boldsymbol{\Psi},\boldsymbol{\rho}) = \sum_u^U \Big[ \Big( \sum_t^{T_u} \boldsymbol{\Psi}_{v_{u,t}}
\boldsymbol{\omega}_u + \boldsymbol{\rho}_{v_{u,t}} \Big) \\
& -T_u \log\Big\{ \sum_p^P \exp( \boldsymbol{\Psi}_p \boldsymbol{\omega}_u +\boldsymbol{\rho}_p)\Big\} - \frac{K}{2} \log( 2 \pi )-
\frac{1}{2} \boldsymbol{\omega}_{u}^T
\boldsymbol{\omega}_{u} \Big],
\end{align*}
we can bound the integrated log likelihood using the Bouchard bound \cite{bouchard2007efficient} \cite{rohde2019latent} with respect to a variational distribution $q(\cdot)$, which we parameterize as a normal distribution such that $\boldsymbol{\omega}_u \sim \mathcal{N}(\boldsymbol{\mu}_{q_u},\boldsymbol{\Sigma}_{q_u})$:
\begin{align*}
& \mathcal{L} = \sum_u - \frac{K}{2} \log( 2 \pi )- \frac{1}{2} \{ \boldsymbol{\mu}_{q_u}^T \boldsymbol{\mu}_{q_u} + {\rm trace} (\boldsymbol{\Sigma}_{q_u}) \} + \frac{1}{2} \log |2 \pi e \boldsymbol{\Sigma}_{q_u} |\\
& + \sum_u^U \left( \sum_t^{T_u} \boldsymbol{\Psi}_{v_{u,t}} \boldsymbol{\mu}_{q_u} + \boldsymbol{\rho}_{v_{u,t}} \right) -T_u [
a_u + \sum_p^P \frac{\boldsymbol{\Psi}_p \boldsymbol{\mu}_{q_u} +\boldsymbol{\rho}_p - a_u - \xi_{u,p}}{2} \\
& + \lambda_{\rm JJ}(\xi_{u,p})
\{(\boldsymbol{\Psi}_p \boldsymbol{\mu}_{q_u}+\boldsymbol{\rho}_p-a_u)^2 + \boldsymbol{\Psi}_p \boldsymbol{\Sigma}_{q_u} \boldsymbol{\Psi}_p^T - \xi_{u,p}^2 \} + \log(1 + e^{\xi_{u,p} })
],\\
\end{align*}
where
\[
\lambda_{\rm JJ}(\xi) = \frac{1}{2\xi} \left( \frac{1}{1+e^{-\xi}} - \frac{1}{2} \right).
\]
We then use the following variational auto-encoders:
\[
\boldsymbol{\mu}_{q_u} = g_\Xi^\mu (\boldsymbol{v}_u), \hspace{1cm} \boldsymbol{\Sigma}_{q_u} = g_\Xi^\Sigma (\boldsymbol{v}_u), \hspace{1cm} a_u = g_\Xi^a (\boldsymbol{v}_u).
\]
For $\xi_{u,p}$, rather than using the auto-encoder, we can use an explicit update (derived from the variational EM algorithm \cite{rohde2019latent}):
\[
\xi_{u,p} = g^\xi(\boldsymbol{v}_u,p) = \sqrt{\boldsymbol{\Psi}_p \boldsymbol{\Sigma}_{q_u} \boldsymbol{\Psi}_p^T + (\boldsymbol{\Psi}_p \boldsymbol{\mu}_{q_u} + \boldsymbol{\rho}_p-a_u)^2 }
\]
Substituting the auto-encoders and the update into the lower bound causes the optimization problem to be written as a finite sum over each of the $U$ time-lines, thus allowing SGD to be applied. More remarkably, it also causes the denominator of the softmax to decompose into a sum over $UP$ terms. This allows not only a fast computation of the bound by sampling individual records, but an even faster (and noisier) bound to be computed by also sampling a small subset of the $P$ items involved in the partition function. This can accelerate learning when $P$ is large, which otherwise requires heuristics such as the famous but non-probabilistic word2vec (skipgram with negative sampling) algorithm \cite{word2vec}. Our proposed method is similar to \cite{raman2016ds}, but our use of an auto-encoder means that plain SGD is all that is required.
The noisy lower bound becomes:
\begin{align*}
& \hat{\mathcal{L}}(v_{u,1},...,v_{u,T_u} ,s_1,...s_S,\Xi,\boldsymbol{\Psi}) \\
& = - \frac{K}{2 U} \log( 2 \pi )- \frac{1}{2 U} \{ g_\Xi^\mu (\boldsymbol{v}_u)^T g_\Xi^\mu (\boldsymbol{v}_u) + {\rm trace} (g_\Xi^\Sigma (\boldsymbol{v}_u) ) \} + \frac{1}{2 U} \log |2 \pi e g_\Xi^\Sigma (\boldsymbol{v}_u) |\\
& + \left( \sum_t^{T_u} \boldsymbol{\Psi}_{v_{u,t}} g_\Xi^\mu (\boldsymbol{v}_u) + \boldsymbol{\rho}_{v_{u,t}} \right) - T_u [
g_\Xi^a (\boldsymbol{v}_u) + \frac{P}{S} \sum_s^S \frac{\boldsymbol{\Psi}_{p_s} g_\Xi^\mu (\boldsymbol{v}_u) +\boldsymbol{\rho}_{p_s} - g_\Xi^a (\boldsymbol{v}_u) - g^\xi(\boldsymbol{v}_u,p_s) }{2} \\
& + \lambda_{\rm JJ}(g^\xi(\boldsymbol{v}_u,p_s) )
\{(\boldsymbol{\Psi}_{p_s} g_\Xi^\mu (\boldsymbol{v}_u) +\boldsymbol{\rho}_{p_s}-g_\Xi^a (\boldsymbol{v}_u) )^2 + \boldsymbol{\Psi}_{p_s} g_\Xi^\Sigma (\boldsymbol{v}_u) \boldsymbol{\Psi}_{p_s}^T - g^\xi(\boldsymbol{v}_u,p_s)^2 \} \\
& + \log(1 + e^{ g^\xi(\boldsymbol{v}_u,p_s) })],\\
\end{align*}
where $v_{u,1},...,v_{u,T_u}$ are the items associated with session $u$ and $s_1,...,s_S$ are $S<P$ negative items randomly sampled.
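As an illustration, the sampled partition-function terms can be computed as follows; this is a minimal sketch in which PyTorch, uniform sampling of the negative items, and a diagonal $\boldsymbol{\Sigma}_{q_u}$ are illustrative assumptions:
\begin{verbatim}
import torch

def sampled_partition_terms(psi, rho, mu_u, sigma2_u, a_u, S):
    # psi: (P, K) item embeddings, rho: (P,) biases, mu_u: (K,) session
    # mean, sigma2_u: (K,) diagonal of Sigma_{q_u}, a_u: scalar, S < P
    P = psi.shape[0]
    idx = torch.randint(0, P, (S,))          # uniform "negative" samples
    psi_s, rho_s = psi[idx], rho[idx]
    m = psi_s @ mu_u + rho_s - a_u           # Psi_p mu_u + rho_p - a_u
    v = (psi_s ** 2) @ sigma2_u              # Psi_p Sigma_{q_u} Psi_p^T
    xi = torch.sqrt(v + m ** 2)              # closed-form update for xi
    lam = torch.tanh(xi / 2.0) / (4.0 * xi)  # lambda_JJ(xi)
    terms = ((m - xi) / 2.0 + lam * (m ** 2 + v - xi ** 2)
             + torch.nn.functional.softplus(xi))
    return (P / S) * terms.sum()             # unbiased estimate of the sum
\end{verbatim}
Scaling the sampled sum by $P/S$ keeps the estimate of the full sum over the $P$ items unbiased, which is what permits the word2vec-like speedup inside a fully probabilistic objective.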
\section{Experiments}
\subsection{Jaakola and Jordan Logistic Regression SGD}
In order to test the accuracy of our method, we simulated a logistic regression dataset of size 900 where $\boldsymbol{X}$ has 50 features; 100 samples are held out for validation. We compute an approximate posterior using the Stan probabilistic programming language \cite{carpenter2017stan}, which we take to be the gold standard; we also compute posteriors using the variational EM algorithm (the original use of the Jaakola and Jordan bound) (VB EM), the Local Re-parameterization Trick (LRT) and our proposed method, Jaakola and Jordan SGD (JJ SGD). We take a kernel density estimate of the MCMC samples and plot the marginal posteriors for $\beta_0,...,\beta_5$ in Figure \ref{posterior_visualisation}. It is apparent that all variational methods capture the mean well but underestimate the posterior variance, and the method that best captures the posterior variance is the local re-parameterization trick. JJ SGD and the variational Bayes EM algorithm, both of which use the Jaakola and Jordan bound, underestimate the variance by a similar amount; there is no obvious benefit from the full covariance matrix used by VB EM, as its fit to the true posterior is similar to that of our proposed JJ SGD. In summary, JJ SGD captures the mean well but is worse than LRT at capturing the variance, performing similarly to VB EM.
\begin{figure}[H]
\centering
\subfloat{
\includegraphics[width=60mm]{post_beta0}
}
\subfloat{
\includegraphics[width=60mm]{post_beta1}
}
\hspace{0mm}
\subfloat{
\includegraphics[width=60mm]{post_beta2}
}
\subfloat{
\includegraphics[width=60mm]{post_beta3}
}
\hspace{0mm}
\subfloat{
\includegraphics[width=60mm]{post_beta4}
}
\subfloat{
\includegraphics[width=60mm]{post_beta5}
}
\caption{Posterior approximations for $\beta_0,..,\beta_5$.}
\label{posterior_visualisation}
\end{figure}
In order to test the speed of the method we consider a harder problem, benchmarking only against LRT as the most scalable alternative. We again simulate a logistic regression problem, this time with 9000 records and 2000 features. The cost per iteration of the two methods is quite similar; running each on a CPU we get about 1 iteration per second. The loss curves of the two methods are shown in Figure \ref{loss}; due to the Monte Carlo noise in the loss, LRT is slightly more difficult to optimize than JJ SGD, and JJ SGD typically reaches its (higher) loss in fewer epochs.
\begin{figure}[H]
\centering
\includegraphics[width=70mm]{loss_epoch.png}
\caption{Loss curves of LRT and JJ SGD}
\label{loss}
\end{figure}
Broadly we conclude that JJ SGD is less accurate than LRT; it iterates at the same speed and, because it does not use Monte Carlo methods, it has a less noisy loss, which in some situations allows faster convergence, although LRT may also be made less noisy by the use of Polyak-Ruppert averaging.
\subsection{Bouchard Softmax Latent Variable Model Variational Autoencoder}
We evaluate the session based recommendation algorithm using data simulated from the RecoGym simulator \cite{rohde2018recogym}. RecoGym is an environment for testing recommendation algorithms in an interactive setting, applying reinforcement learning and bandit style evaluation to recommendation. We use the simulator with 1000 products, sampling 200 user time-lines for training and 100 for testing. We train our model both using the noisy softmax partition function approximation (sampling 200 products) and without the noisy softmax approximation (i.e. summing over all 1000 products without sampling). It is notable that the speedups from approximating the partition function are limited by the fixed costs of processing the numerator of the softmax, i.e. the ``positive examples''; for large numbers of products the ``negative sampling'' variant iterates approximately three times faster. We use a latent factor size of 200; the variational auto-encoder is linear, with the means unconstrained and the variances and $a$ parameters coming from a softplus transform. The covariance matrix is constrained to be diagonal. We compare the method with some simple recommendation baselines: popularity (Pop), a non-personalized recommendation strategy that recommends the most popular products to everybody, and item k-nearest neighbors (Itemknn), where we estimate the empirical correlation matrix and then recommend the five items most correlated with the most recently viewed items. Finally, we also present results from training the model using the classic Kingma and Welling algorithm, which does not employ the Bouchard bound. All results are shown in Table \ref{table2}. The metrics presented are recall at 5 and truncated discounted cumulative gain at 5 (see \cite{liang2018variationalold} for a definition). We see that the re-parameterization trick performs best, closely followed by the models utilizing the Bouchard bound with the full partition function and with the noisy approximation of the partition function, respectively.
\begin{table}[H]
\center
\begin{tabular}{lrr}
\toprule
Algorithm & Recall@5 & TDCG@5 \\
\midrule
Itemknn & 0.088 & 0.116 \\
Pop & 0.090 & 0.090 \\
Bouch/AE & 0.179 & 0.201\\
Bouch/AE/NS & 0.165 & 0.191\\
RT & 0.208 & 0.233 \\
\bottomrule
\end{tabular}
\vspace{10pt}
\caption{Results for a model trained on 200 RecoGym user time-lines with 1000 products. Test set size is 1000 user time-lines.}
\label{table2}
\end{table}
\section{Conclusion}
In this paper, we have studied the use of analytical variational bounds in a modern deep learning setting, using auto-encoders or analytical EM steps to write the model as a sum so it can be trained using SGD. It is noteworthy that in this setting the variational auto-encoder doesn't necessarily have a classic ``auto-encoding'' interpretation; rather, it is a dimensionality reduction technique that restricts the dimension of a variational bound whose number of parameters would otherwise grow with the dataset size. The method applies both to latent variable models and to models that are not normally viewed in a latent variable setting, such as logistic regression.
A significant advantage of the proposed method is that a Bayesian approximation requires nothing more than SGD-based optimization. Both the Jaakola and Jordan bound and the auto-encoding Bouchard bound were shown to be viable approximations for Bayesian logistic regression and the latent variable session model respectively. The circumstances where the proposed method performs well or badly with respect to alternatives, such as the local re-parameterization trick, remain a subject of further work. Clearly the re-parameterization trick and the local re-parameterization trick provide very strong baselines in terms of both accuracy and speed.
A further advantage of the use of the Bouchard bound is the ability to compute a fast approximation of the partition function with an algorithm that resembles the ``negative sampling'' heuristic but is motivated in a fully probabilistic setting. This method shows promise for applying fully probabilistic models to categorical variables with a large number of classes.
\bibliographystyle{ACM-Reference-Format}
\section{The Brauer Group for Trivial Actions}\label{sec-4}
In this section we want to give a precise description
of the set $\mathcal{E}_G(X)$
of exterior equivalence classes of $C_0(X)$-linear
actions $\beta:G\to \Aut\(C_0(X,\K)\)$ in the case where $G$ is
smooth,
$G_{\subab}$ is compactly
generated, and $X$ is a second countable locally compact
Hausdorff space. This analysis also allows a
description of the Brauer group $\Br_G(X)$ of
\cite{ckrw}
for a trivial $G$-space $X$,
and, more generally, a description
of the set of exterior equivalence classes of
locally inner actions $\alpha:G\to \Aut(A)$
when $A\in \MyMath{\ewcr}(X)$.
A \cs-dynamical system $(A,G,\alpha)$ is called a
\emph{$C_0(X)$-system} if $A$ is a \cox-algebra{} and each $\alpha_s$ is
\MyMath{C_0(X)}-linear. Two systems $(A,G,\alpha)$ and $(B,G,\beta)$ are
Morita equivalent if there is a pair $({\modulefont X},\mu)$ consisting of an $A
\,\mathord{\mathop{\text{--}}\nolimits_{\relax}}\, B$-im\-prim\-i\-tiv\-ity bi\-mod\-u\-le{} ${\modulefont X}$ and a strongly continuous action $\mu$ of $G$ on
${\modulefont X}$ by linear transformations such that
\[
\alpha_s\(\lip A<x,y>\)=
\blip A<\mu_s(x),\mu_s(y)>
\quad\text{and}\quad
\beta_s\(\rip B<x,y>\)=
\brip B<\mu_s(x),\mu_s(y)>
\]
for all $x,y\in
{\modulefont X}$ and $ s\in G$.
The actions of $A$ and $B$ on ${\modulefont X}$ extend to the multiplier algebras
$M(A)$ and $M(B)$. In particular, if $A$ and $B$ are \cox-algebra s, then
${\modulefont X}$ is both a left and a right \MyMath{C_0(X)}-module. We say that two
\MyMath{C_0(X)}-systems $(A,G,\alpha)$ and $(B,G,\beta)$ are \emph{\MyMath{C_0(X)}-Morita
equivalent} if they are Morita equivalent and if $f\cdot x=x\cdot f$
for all $x\in{\modulefont X}$ and $f\in\MyMath{C_0(X)}$.
If we let $G$ act trivially on $X$, then the equivariant Brauer group $\br_G(X)$
of \cite{ckrw} is the collection of $\MyMath{C_0(X)}$-Morita equivalence classes
of $\MyMath{C_0(X)}$-systems $(A,G,\alpha)$ where $A$
is a separable continuous-trace $\cs$-algebra with spectrum
$X$.
Then $\Br_G(X)$ forms an
abelian group \cite[Theorem~3.6]{ckrw}.
Recall that the group multiplication is defined using the
balanced tensor product
\begin{equation}
\label{eq-prod*}
[A,\alpha][B,\beta] = [A\otimes_X B,\alpha\otimes_X \beta].
\end{equation}
The identity is the class of $\(\MyMath{C_0(X)},\id\)$ and the inverse of
$[A,\alpha]$ is given by the class of the conjugate system $(\overline
A,\bar\alpha)$.
The collection of $[A,\alpha]$ in $\br_G(X)$ such that the
Dixmier-Douady class of $A$ is zero is a subgroup. Note that each such
element has a representative of the form $\(C_0(X,\K),\alpha\)$.
That we can identify this subgroup with
\MyMath{\E_G(X)}{} follows from the next proposition. In particular, \MyMath{\E_G(X)}{} is also an
abelian
group with multiplication given by \eqref{eq-prod*} after identifying
$C_0(X,\K)\tensor_XC_0(X,\K)$ with $C_0(X,\K)$ via a \MyMath{C_0(X)}-isomorphism.
\begin{prop}
\label{prop-help}
Suppose that $\alpha,\gamma:G\to\Aut\(C_0(X,\K)\)$ are \MyMath{C_0(X)}-actions such
that $[C_0(X,\K),\alpha]=[C_0(X,\K),\gamma]$ in $\br_G(X)$. Then $\alpha$ and
$\gamma$ are exterior equivalent.
\end{prop}
\begin{proof}
Since $C_0(X,\K)$ is stable, it follows from \cite[Lemma~3.1]{ckrw} that
$\gamma$ is exterior equivalent to an action $\beta$ of the form
$\beta_s=\Phi\circ \alpha_s\circ \Phi^{-1}$ for some \MyMath{C_0(X)}-linear automorphism $\Phi$ of
$C_0(X,\K)$. Thus it will suffice to see that $\beta$ is exterior
equivalent to~$\alpha$.
Since a $C_0(X)$-linear automorphism
of $C_0(X,\K)$ is locally inner \cite{pr1},
we may find an open cover $\set{U_i}_{i\in I}$
of $X$ and continuous functions $u_i$ from $U_i$ to $U(\mathcal{H})$
for each $i\in I$ such that
$\Phi(f)(x)=u_i(x)f(x)u_i(x)^*$ for all $f\in C_0(U_i,\K)$.
Moreover, on each overlap $U_{ij}$, there exist continuous functions
$\chi_{ij}\in C(U_{ij},\mathbb{T})$
such that $u_j(x)=\chi_{ij}(x)u_i(x)$ for all $x\in U_{ij}$.
Since $\beta_s$ is a \MyMath{C_0(X)}-automorphism, there are automorphisms
$\beta^x_s$ for each $x\in X$ such that $\beta_s(f)(x)=\beta^x_s\(f(x)\)$.
Since $\alpha_s=\Phi^{-1}\circ \beta_s\circ \Phi$, it follows that for all $x\in U_i$,
\[
\alpha_s(f)(x)=u_i(x)^*\bar\beta^x_s\(u_i(x)\)\beta^x_s\(f(x)\)\bar\beta^x_s\(u_i(x)^*\)u_i(x),
\]
where $\bar\beta_s^x$ is the canonical extension of $\beta_s^x$ to
$M\(A(x)\)$.
If $x\in U_{ij}$, then
\[
u_j(x)^*\bar\beta^x_s\(u_j(x)\)=\overline{\chi_{ij}(x)}u_i(x)^*\bar\beta^x_s\(\chi_{ij}(x)u_i(x)\)=
u_i(x)^*\bar\beta^x_s\(u_i(x)\).
\]
Consequently, we can define a map $v$ from $G$ into $\mathcal{U}M\(C_0(X,\K)\)$
by $v_s(x)=u_i(x)^*\bar\beta^x_s\(u_i(x)\)$ for $x\in U_i$.
Moreover, $s\mapsto v_s$ is strictly continuous, and we have
$\alpha=\Ad v\circ \beta$. Thus we only need to verify that $v$ is a
$1$-cocycle.
For all $x\in
X$ we get
\begin{align*}
v_{st}(x)&=u_i(x)^*\bar\beta^x_{st}\(u_i(x)\)=u_i(x)^*\bar
\beta^x_s\(\bar\beta^x_t\(u_i(x)\)\)
\\
&=u_i(x)^*\bar\beta^x_s\(u_i(x)\)\bar\beta^x_s\(u_i(x)^*\bar
\beta^x_t\(u_i(x)\)\)=v_s(x)\bar\beta
^x_s\(v_t(x)\),
\end{align*}
which implies $v_{st}=v_s\beta_s(v_t)$.
\end{proof}
\begin{remark}
\label{rem-help}
Suppose that $\gamma:G\to \Aut\(C_0(X,\K)\)$ is a \MyMath{C_0(X)}-automorphism group.
In the sequel, we will write $\gamma^o$ for the ``inverse''
automorphism group in $\MyMath{\E_G(X)}$. That is,
$[C_0(X,\K),\gamma]^{-1}:=[C_0(X,\K),\gamma^o]$. \propref{prop-help} implies
that $\gamma^o$ is unique up to exterior equivalence and that
$\gamma^o\otimes_X\gamma$ is exterior equivalent to $\id\otimes_X\id$.
\end{remark}
The next lemma is a mild strengthening of \cite[Lemma~3.3]{doir} to our
setting.
\begin{lem}
\label{lem-fix}
Suppose that $\beta:G\to\Aut\(C_0(X,\K)\)$ is a \MyMath{C_0(X)}-linear action and
that $[\omega_x]$ is the Mackey obstruction for the induced
automorphism group $\beta^x$ on the fibre over $x$. Then the
\emph{Mackey obstruction map} $\varphi^\beta:X\to H^2(G,\T)$ given by
$\varphi^\beta(x):=[\omega_x]$ is continuous.
\end{lem}
\begin{proof}
Fix $x_0\in X$ and suppose that $\set{x_n}$ is a sequence converging
to $x_0$ in $X$. It will suffice to show that $[\omega_{x_n}]$
converges to $[\omega_{x_0}]$ in $H^2(G,\T)$. Let $M$ be the compact
set $\set{x_n}_{n=1}^\infty\cup \set{x_0}$. Let $\beta^M$ be the
induced action on $C(M,\K)$. Since $\beta$ and $\beta^M$ induce the
same action on the fibres, we have $\varphi^{\beta^M}=\varphi^\beta\restr
M$. Thus it will suffice to see that the former is continuous. But
$H^2(M;\Z)$ is trivial; any principal $\T$-bundle over $M$ is locally
trivial and therefore trivial. It follows from the Phillips-Raeburn
exact sequence \cite[Theorem~2.1]{pr1}
that $\beta^M$ is inner. As in Remark~\ref{rem-fix}, there is an
obstruction to $\beta^M$ being unitary given by a cocycle $\zeta\in
Z^2\(G,C(M,\T)\)$, and $\varphi^{\beta^M}(x)=[\zeta(x)]$. Since for each
$s,t\in G$, $\zeta(s,t)$ is continuous, it follows that $\zeta(x_n)$
converges to $\zeta(x)$ pointwise. Therefore $\zeta(x_n)\to\zeta(x)$
in $Z^2(G,\T)$ \cite[Proposition~6]{moore3}. Since $H^2$ has the
quotient topology, the result follows.
\end{proof}
Using the above lemma, the discussion in the introduction shows
that there is a homomorphism{} $\Phi$ from
\MyMath{\E_G(X)}{} to $C\(X,H^2(G,\T)\)$ which assigns to each $[\alpha]$ in \MyMath{\E_G(X)}{} its
``Mackey obstruction map'' $\varphi^\alpha$.
\begin{thm}
\label{brauer}
Suppose that $G$ is smooth. Then
the homomorphism
$\Phi: \mathcal{E}_G(X)\to C\(X,H^2(G,\mathbb{T})\)$ given by $[\beta]\mapsto
\varphi^{\beta}$ is surjective and the short exact sequence
\[
1 \arrow{e} \ker\Phi\arrow{e} \mathcal{E}_G(X)\arrow{e}
C\(X,H^2(G,\mathbb{T})\)\arrow{e} 1
\]
splits. If, in addition, $G_{\subab}$ is compactly generated, then
\[
\mathcal{E}_G(X)\cong H^1(X,\widehat{\sheaf G}_{\subab})\oplus
C\(X,H^2(G,\mathbb{T})\)\]
as abelian groups.
\end{thm}
\begin{proof}
We have to construct a splitting homomorphism
$\Phi^*:C\(X, H^2(G,\mathbb{T})\)\to \mathcal{E}_G(X)$ for $\Phi$.
Recall from \cite[Theorem 5.1]{ckrw} and
\cite[Proposition 3.1]{horr} that there is a canonical
homomorphism
$\mu:H^2\(G,C(X,\mathbb{T})\)\to
\mathcal{E}_G(X)$ defined as follows: Let $\sigma\in Z^2\(G,
C(X,\mathbb{T})\)$, and let $L^{\sigma(x)}$ denote
the left regular $\sigma(x)$-representation, where
$\sigma(x)$ denotes evaluation of $\sigma$ at $x\in X$.
A representative for the class $\mu([\sigma])\in \mathcal
E_G(X)$ is then given by the action $\beta^{\sigma}:G\to
\Aut\(C_0(X,\K\(L^2(G)\))\)$ defined by
$$\beta^\sigma_s(f)(x)=\Ad L^{\sigma(x)}_s\(f(x)\),\quad f\in
C_0(X,\K(L^2(G))).$$
(If $G$ is finite
we have to stabilize this action in order to get an action on
$C_0\(X,\K\(L^2(G)\)\)\otimes \K\cong C_0(X,\K)$.)
By Lemma~\ref{cocycle} we know that
$H^2(G,\mathbb{T})$ is locally compact and that
there exists an element $\zeta\in
Z^2(G,\specnp{H^2(G,\mathbb{T})})$ such that evaluation of $\zeta$
at a point
$[\omega]\in H^2(G,\mathbb{T})$ is a cocycle representing $\omega$.
If we give $C(H^2(G,\mathbb{T}),\mathbb{T})$ the compact-open topology, then we can
view $\specnp{H^2(G,\mathbb{T})}$ as a subset of $C(H^2(G,\mathbb{T}),\mathbb{T})$.
Furthermore, if $\varphi\in C\(X,H^2(G,\mathbb{T})\)$, then
$\zeta\circ \varphi(s,t)(x):=\zeta(s,t)\(\varphi(x)\)$ defines a
Borel cocycle $\zeta\circ \varphi\in Z^2\(G,C(X,\mathbb{T})\)$.
We claim that
$\Phi^*(\varphi):=\mu([\zeta\circ \varphi])$
defines a splitting
homomorphism for $\Phi$. To see that it is a homomorphism
just notice that if $\varphi,\psi\in C\(X,H^2(G,\mathbb{T})\)$, then since
$\zeta(s,t)\in \specnp{H^2(G,\mathbb{T})}$,
$\zeta(s,t)\(\varphi(x)\)\zeta(s,t)\(\psi(x)\)=
\zeta(s,t)\(\varphi(x)\psi(x)\)$.
Thus $\varphi\mapsto
[\zeta\circ \varphi]$ is a homomorphism of $C\(X,
H^2(G,\mathbb{T})\)$ into $H^2\(G, C(X,\mathbb{T})\)$.
By the construction of
$\mu$ we can choose a representative $\beta$ for
$\mu([\zeta\circ\varphi])$ such that $\beta^x$ is implemented
by a
$\zeta\(\varphi(x)\)$-representation $V:G\to U(\mathcal{H})$.
Since $\zeta\(\varphi(x)\)$ is a representative for
$\varphi(x)$, it follows that $\Phi\circ \Phi^*=\id$.
We have shown that if $G$ is smooth, then
$$1\arrow{e} \ker\Phi\arrow{e} \mathcal{E}_G(X)\arrow{e}
C\(X,H^2(G,\mathbb{T})\)\arrow{e} 1$$
is a split short exact sequence. If, in addition,
$G_{\subab}$ is compactly generated, then we know from
Corollary~\ref{cor-point} that the
Phillips-Raeburn obstruction
$\beta\mapsto \zeta(\beta)\in H^1(X,\widehat{\sheaf G}_{\subab})$ of Proposition
\ref{prop-PRobs} defines a bijection of $\ker\Phi$ onto
$H^1(X,\widehat{\sheaf G}_{\subab})$, which by Lemma~\ref{lem-PR3.10} is
multiplicative. This completes the proof.
\end{proof}
Let $F:\Br_G(X)\to H^3(X,\mathbb{Z})$ denote the
forgetful homomorphism described in the introduction.
It admits a natural splitting map,
which assigns to an element
$\delta\in H^3(X,\mathbb{Z})$
the (equivalence class of the) system $(A_\delta,G,\id)$,
where $A_\delta$ is
the unique stable continuous-trace $\cs$-algebra with
Dixmier-Douady
invariant $\delta$ and $\id$ denotes the trivial action of
$G$ on $A_\delta$. Since $\ker F$ is naturally isomorphic
to $\mathcal E_G(X)$ by Proposition \ref{prop-help}, we
obtain the following as an immediate corollary.
\begin{cor}\label{cor-brauer}
Suppose that $G$ is smooth and that $G_{\subab}$ is
compactly generated. Then, for any trivial $G$-space $X$, we have a
group isomorphism
$$\br_G(X)\cong
H^1(X,\widehat{\sheaf G}_{\subab})\oplus C\(X,H^2(G,\mathbb{T})\)\oplus H^3(X;\mathbb{Z}),$$
where $H^3(X;\mathbb{Z})$ denotes third integral \v Cech cohomology.
\end{cor}
We conclude this section with a discussion of some
special cases.
\begin{example}
\label{ex-brauer}
If $G$ is connected and $H^2(X,\mathbb{Z})$ is countable
then it follows from \cite[\S6.3]{ckrw} that the
homomorphism $\mu:H^2\(G,C(X,\mathbb{T})\)\to \mathcal{E}_G(X)$
described in the proof of Theorem~\ref{brauer} is
actually an isomorphism (in particular, all $C_0(X)$-actions are
inner). Under this isomorphism, the Mackey obstruction map
$\Phi:
\mathcal{E}_G(X)\to C\(X, H^2(G,\mathbb{T})\)$ corresponds to the
evaluation map
$H^2\(G,C(X,\mathbb{T})\)\to C\(X, H^2(G,\mathbb{T})\)$ and the kernel
of $\Phi$ corresponds to the subgroup $H^2_{\text{pt}}\(G, C(X,\mathbb{T})\)$
of pointwise trivial elements in $H^2\(G, C(X,\mathbb{T})\)$.
\end{example}
If $G$ is not connected, there are usually lots of
$C_0(X)$-linear actions of $G$ on
$C_0(X,\K)$ which are not inner; for instance, if $G=\mathbb{Z}^n$,
then $H^2\(\mathbb{Z}^n, C(X,\mathbb{T})\)\cong C\(X, H^2(\mathbb{Z}^n,\mathbb{T})\)$ by
\cite[Corollary 1.5]{judymc}, but $H^1(X,\widehat{\mathbb{Z}}^n)$
is often nontrivial (for instance for $G=\mathbb{Z}$ and
$X=S^2$). In any case, if $G$ is smooth and if $\zeta$ is as in
\lemref{cocycle}, then the map
$\varphi\mapsto
\zeta\circ \varphi$ from $ C\(X, H^2(G,\mathbb{T})\)$ to $H^2\(G, C(X,\mathbb{T})\)$ is a
splitting homorphism for the exact sequence
$$1\arrow{e} H_{\text{pt}}^2\(G, C(X,\mathbb{T})\)\arrow{e} H^2\(G,
C(X,\mathbb{T})\)\arrow{e}
C\(X, H^2(G,\mathbb{T})\)\arrow{e} 1.$$
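For instance, for $G=\mathbb{Z}$ and $X=S^2$ the nontriviality can be computed
directly: $\widehat{\mathbb{Z}}=\mathbb{T}$, and the exponential sheaf sequence
gives
$$H^1(S^2,\widehat{\mathbb{Z}})=H^1(S^2,\sheaf T)\cong H^2(S^2;\mathbb{Z})\cong
\mathbb{Z},$$
so that $\mathcal{E}_{\mathbb{Z}}(S^2)\cong \mathbb{Z}$, even though
$H^2\(\mathbb{Z}, C(S^2,\mathbb{T})\)$ is trivial.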
\begin{example}
\label{ex-two}
If $G$ is smooth and $G_{\subab}$ is a vector group
(i.e., $G_{\subab}$ is isomorphic to some $\mathbb{R}^l$ for $l\geq 0$)
then $H^1(X,\widehat{\sheaf G}_{\subab})=0$ and $\mathcal{E}_G(X)\cong
C\(X,H^2(G,\mathbb{T})\)$. This applies to all simply connected
and connected Lie groups.
\end{example}
\begin{example}
If $G$ is any second countable
locally compact group such that $H^2(G,\mathbb{T})$ is trivial
(e.g., if $G=\mathbb{R}$, $\mathbb{T}$, $\mathbb{Z}$, or any connected and simply
connected semisimple Lie group), then
$G$ serves as a representation group for itself.
Hence, if
$G_{\subab}$ is also compactly generated, then $\mathcal{E}_G(X)\cong
H^1(X,\widehat{\sheaf G}_{\subab})$. If, in addition,
$G_{\subab}$ is a vector group, then it follows from Example~\ref{ex-two} that
$\mathcal{E}_G(X)$ is trivial.
\end{example}
\begin{example}
It follows from the previous example, that if
$G$ is any connected and simply connected
Lie group with $H^2(G,\mathbb{T})$ trivial, then $\mathcal
E_G(X)$ is trivial. Since $H^2\(G, C(X,\mathbb{T})\)$ imbeds injectively
into $\mathcal{E}_G(X)$ by \cite[\S6.3]{ckrw},
it follows that for such groups
$H^2\(G, C(X,\mathbb{T})\)$ is trivial for all $X$. For compact $X$
this was shown in \cite[Theorem 2.6]{heros}.
\end{example}
\section{Locally Inner Actions on \MyMath{\ewcr}-Algebras}\label{sec-5}
In this section we want to use our
description of $\mathcal{E}_G(X)$ to describe
locally inner actions on elements of $\MyMath{\ewcr}(X)$.
The next lemma provides an
analogue for the Mackey-obstruction map $\beta\mapsto
\varphi^{\beta}\in C\(X,H^2(G,\mathbb{T})\)$ of Theorem~\ref{brauer} in the
case of locally inner actions on general elements of $\MyMath{\ewcr}(X)$.
\begin{lem}\label{lem-function}
Let $G$ be a second countable locally compact group,
$A\in \MyMath{\ewcr}(X)$, and let $\alpha:G\to \Aut(A)$ be locally
inner. For each $x\in X$ let $U$ be an open neighborhood
of $x$ such that the restriction
$\alpha^{U}:G\to\Aut(A_{U})$ of $\alpha$ is inner, and let
$[\sigma]\in H^2\(G, C(U,\mathbb{T})\)$ be the obstruction for
$\alpha^{U}$ being unitary. Then
$\varphi^{\alpha}(x):=[\sigma(x)]$
determines a well defined continuous map
$\varphi^{\alpha}:X\to H^2(G,\mathbb{T})$. If $\beta$ is exterior equivalent
to $\alpha$, then $\varphi^\alpha=\varphi^\beta$.
\end{lem}
\begin{proof}
We have to show that if $U_1$ and $U_2$ are two
open neighborhoods of $x$ such that, for $i=1,2$,
there exist cocycles $\sigma^i\in Z^2\(G, C(U_i,\mathbb{T})\)$ and
$\sigma^i$-homomorphisms $V^i:G\to \mathcal{U}M(A_{U_i})$ which
implement $\alpha^i:=\alpha^{U_i}$, then
$[\sigma^1(x)]=[\sigma^2(x)]$ in $H^2(G,\mathbb{T})$.
Let $U_{ij}:=U_1\cap U_2$ and let $V^{ij}_s$ denote the image
of $V^i_s$ in $\mathcal{U}M(A_{U_{ij}})$, $i,j\in \{1,2\}$, and let
$\sigma^{ij}$ denote the restriction of $\sigma^i$ to
$U_{ij}$; that is,
$\sigma^{ij}(s,t)(x)=\sigma^i(s,t)(x)$ for all $x\in U_1\cap
U_2$. Then $V^{ij}$ is a $\sigma^{ij}$-homomorphism which
implements the restriction
$ \alpha^{U_{ij}}:G\to \Aut(A_{U_{ij}})$.
Thus it follows that
$[\sigma^{12}]=[\sigma^{21}]\in H^2\(G, C(U_{ij},\mathbb{T})\)$,
which in particular implies that
$[\sigma^1(x)]=[\sigma^2(x)]$. Thus
$\varphi^{\alpha}$ is well defined. The continuity of
$\varphi^{\alpha}$ follows from the continuity of the
evaluation map $x\mapsto[\sigma(x)]$ on $U$ as shown in the proof of
\lemref{lem-fix}.
Finally, suppose that $\beta=\Ad(w)\circ\alpha$ for some $1$-cocycle
$w$. Since $\varphi^\alpha$ is defined locally, we can assume that
$\alpha=\Ad(V)$ for some $\sigma$-homomorphism $V$. Then
$\beta=\Ad(wV)$, and it is easily checked that $wV$ is a
$\sigma$-homomorphism.
\end{proof}
Notice that if $\beta:G\to \Aut\(C_0(X,\K)\)$ is a
$C_0(X)$-linear action, then the element $\varphi^{\beta}$
constructed above is the same as the map $\varphi^{\beta}$
which appeared in Theorem~\ref{brauer}. Notice also that
$\varphi^{\alpha}=0$ if and only if all of the induced actions
$\alpha^x$ of $G$ on the fibres $A(x)$ of $A$ are unitary.
\begin{prop}\label{prop-locuni}
Let $G$ be a smooth group such that $G_{\subab}$ is compactly
generated, and let $A\in \MyMath{\ewcr}(X)$. Let
$\Phi^*:C\(X,H^2(G,\mathbb{T})\)\to \mathcal{E}_G(X)$ denote the
splitting homomorphism for the short exact sequence
$$1\arrow{e} H^1(X,\widehat{\sheaf G}_{\subab})\arrow{e}\mathcal{E}_G(X)\arrow{e}
C\(X,H^2(G,\mathbb{T})\)\arrow{e}
1$$
as constructed in the proof of Theorem~\ref{brauer}.
If $\alpha:G\to \Aut(A)$ is locally inner and
$[\gamma]=\Phi^*(\varphi^{\alpha})$, then
$\alpha\otimes_X{\gamma^o}$ is locally unitary.
\end{prop}
\begin{proof}
By the construction of $\Phi^*$ we know that there
exists a Borel cocycle $\sigma\in H^2\(G,C(X,\mathbb{T})\)$ and
a $\sigma$-homomorphism $W:G\to \mathcal{U}M\(C_0(X,\K)\)$ such that
${\gamma^o}=\Ad W$ and such that
$[\sigma(x)]=\varphi^{\alpha}(x)^{-1}$ in $H^2(G,\mathbb{T})$ for
all $x\in X$. Since $\alpha$ is locally inner, we know
further that for each $x\in X$ there exists a neighborhood
$U$ of $x$, a cocycle $\omega\in Z^2\(G, C(U,\mathbb{T})\)$, and
an $\omega$-homomorphism $V:G\to \mathcal{U}M(A_U)$ such that
$\alpha^U=\Ad V$. Then
$\varphi^{\alpha}(x)=[\omega(x)]$ for all $x\in U$ by Lemma
\ref{lem-function}. It follows that the restriction
$(\alpha\otimes_X\gamma^o)^U$ of $\alpha\otimes_X\gamma^o$
to
$\(A\otimes_X C_0(X,\K)\)_U\cong
A_U\otimes_{C_0(U)}C_0(U,\K)$ is implemented by the
$\omega\cdot\sigma$-homomorphism $s\mapsto V_s\otimes_UW_s$.
Since $[\omega\cdot\sigma(x)]=0$ for all $x\in U$, it follows
now from \cite[Theorem 2.1]{ros2} that there exists a
neighborhood $U_1\subseteq U$ of $x$ such that
the restriction of $\omega\cdot\sigma$ to $U_1$ is trivial
in $H^2\(G, C(U_1,\mathbb{T})\)$. But this implies that
the restriction of $\alpha\otimes_X{\gamma^o}$ to
$\(A\otimes_X C_0(X,\K)\)_{U_1}$ is unitary.
This completes the proof.
\end{proof}
\begin{thm}\label{thm-locinner}
Let $G$ be a smooth group such that $G_{\subab}$ is compactly
generated.
Suppose that $A\in \MyMath{\ewcr}(X)$, and that $\alpha:G\to \Aut(A)$
is locally inner.
Then there exists a $C_0(X)$-linear action
$\beta^{\alpha}:G\to \Aut\(C_0(X,\K)\)$,
unique up to exterior equivalence, such that the stabilized
action $\alpha\otimes_X\id$ on $A\otimes_X C_0(X,\K)\ (\cong
A\otimes\K)$ is
exterior equivalent to the diagonal action $\id\otimes_X\beta^\alpha$
of $G$ on $A\otimes_X C_0(X,\K)$.
Moreover, if $\MyMath{\mathcal{LI}}_G(A)$ denotes the set of exterior
equivalence classes of locally inner $G$-actions on $A$,
then $\alpha\mapsto\beta^{\alpha}$ factors through a
well defined injective map
$[\alpha]\mapsto [\beta^{\alpha}]$ of $\MyMath{\mathcal{LI}}_G(A)$ into
$\mathcal{E}_G(X)$, which is a bijection if $A$ is stable.
\end{thm}
\begin{proof} Let $\alpha:G\to \Aut(A)$ be locally inner,
let $\Phi^*:C\(X,H^2(G,\mathbb{T})\)\to \mathcal{E}_G(X)$ denote
the splitting homomorphism of Theorem~\ref{brauer},
and let $[\gamma]=\Phi^*(\varphi^{\alpha})$.
Then, by Proposition~\ref{prop-locuni},
$\alpha\otimes_X{\gamma^o}$ is a locally unitary action of
$G$ on $A\otimes\K$.
Let $\zeta(\alpha\otimes_X{\gamma^o})\in H^1(X,\widehat{\sheaf G}_{\subab})$
denote the Phillips-Raeburn obstruction (see Proposition
\ref{prop-PRobs}). If $\delta:G\to \Aut\(C_0(X,\K)\)$ is locally
unitary with
$\zeta(\delta)=\zeta(\alpha\otimes_X{\gamma^o})$, then
it also follows from Proposition~\ref{prop-PRobs} that
$\alpha\otimes_X{\gamma^o}$ is exterior equivalent to
the diagonal action $\id\otimes_X\delta$ of
$G$ on $A\otimes_X C_0(X,\K)$.
Since taking diagonal actions on balanced tensor products
preserves exterior equivalence by Lemma~\ref{lem-PR3.10}, and
since ${\gamma^o}\otimes_X\gamma$ is exterior equivalent
to the trivial action $\id\otimes_X\id$ (Remark~\ref{rem-help}), it
follows that for
$\beta'=\delta\otimes_X\gamma$ on $C_0(X,\K)\otimes_XC_0(X,\K)$,
\begin{align*}
\alpha\otimes_X\id\otimes_X \id&\sim
\alpha\otimes_X({\gamma^o}\otimes_X\gamma)
\sim(\alpha\otimes_X{\gamma^o})\otimes_X\gamma
\sim (\id\otimes_X\delta)\otimes_X\gamma \\
&\sim \id\otimes_X\beta',
\end{align*}
where $\sim$ denotes exterior equivalence. Since
$C_0(X,\K)\otimes_XC_0(X,\K)\cong C_0(X,\K)$, it follows that
$\alpha\otimes_X\id\sim \id\otimes_X\beta$ for some $\beta$ on $C_0(X,\K)$.
We have to show that $\beta$ is unique up to exterior
equivalence. For this observe that
$\alpha\otimes_X\id\sim\id\otimes_X\beta$ implies that
$\varphi^{\alpha}=\varphi^{\alpha\otimes_X\id} =
\varphi^{\id\otimes_X\beta}=\varphi^{\beta}$ (\lemref{lem-function}).
Thus, if $\beta'$ is another $C_0(X)$-action of $G$ on
$C_0(X,\K)$ such that $\alpha\otimes_X\id\sim
\id\otimes_X\beta'$, then it follows that
$\varphi^{\beta'}=\varphi^{\alpha}=\varphi^{\beta}$.
Thus if $[\gamma]=\Phi^*(\varphi^{\alpha})$ is as
above, then
$$\id\otimes_X(\beta'\otimes_X{\gamma^o})\sim
\alpha\otimes_X\id\otimes_X{\gamma^o}\sim
\id\otimes_X(\beta\otimes_X{\gamma^o}),$$
which implies that the Phillips-Raeburn obstructions
$\zeta(\beta'\otimes_X{\gamma^o})$,
and
$\zeta(\beta\otimes_X{\gamma^o})$ coincide (\lemref{lem-PR3.10}).
But then it follows from \propref{prop-PRobs} that
$\beta'\otimes_X{\gamma^o}\sim \beta\otimes_X{\gamma^o}$
which, via multiplication with $\gamma$, implies that
$\beta\sim \beta'$.
It follows that there is a well defined
map $[\alpha]\mapsto[\beta^{\alpha}]$ from
$\MyMath{\mathcal{LI}}_G(A)$ into $\mathcal{E}_G(X)$ which is determined by
the property that
$\alpha\otimes_X\id\sim \id\otimes_X\beta^{\alpha}$.
Since $\beta\sim\beta'$ implies that $\id\otimes_X\beta\sim
\id\otimes_X\beta'$, this map is injective.
Finally, if $A$ is stable, then we can define
an inverse by choosing a fixed \MyMath{C_0(X)}-isomorphism
$\Theta: A\otimes\K\to A$ (\cite[Lemma~4.3]{pr2}), and defining
the map $[\beta]\mapsto [\Theta\circ(\id\otimes_X\beta)\circ\Theta^{-1}]$
from $\mathcal{E}_G(X)$ onto $\MyMath{\mathcal{LI}}_G(A)$.
\end{proof}
We immediately get the following corollary.
\begin{cor}\label{cor-locinner}
Let $G$ be a smooth group such that $G_{\subab}$ is
compactly generated. Let $X$ be a second countable
locally compact space and let $A\in \MyMath{\ewcr}(X)$.
Then $\alpha:G\to\Aut(A)$ is locally inner if and only if
there exists $[\beta]\in \mathcal{E}_G(X)$ such that
the stabilized action $\alpha\otimes_X\id$ is exterior equivalent
to $\id\otimes_X\beta$.
\end{cor}
\section{Introduction}
One of the original motivations for the study of \cs-algebras arose
from the desire to understand the representation theory of locally
compact groups. As is eloquently described in Rosenberg's survey
article \cite[\S3]{ros5}, the modern Mackey-Green machine shows that
to make further progress in this direction, it will be necessary to
have detailed knowledge of certain twisted crossed product
\cs-algebras --- either those studied by Green \cite{green1}, or more
generally, Busby-Smith twisted crossed products as studied by Packer
and Raeburn \cite{para1}. In view of the Packer-Raeburn Stabilization
Trick \cite[Theorem~3.4]{para1}, it really suffices to consider
ordinary crossed products $A\rtimes_\alpha G$. Of course any sort of
general classification of crossed products is out of the question;
however, as Rosenberg outlines in his ``Research Problem 1'' from
\cite{ros5}, it would be quite valuable and interesting to obtain
detailed information in the
special case of an action with a ``single orbit type'' acting on a
continuous-trace \cs-algebra. There is a considerable volume of
work in this direction --- for example, \cite{pr2,rw,rr,doir, echros,
ech4} and other references cited in \cite[\S3]{ros5}. Notice that
the case of nonvanishing Mackey obstructions is treated only in
\cite{echros, ech4},
and that in the majority of the published results it is assumed
that the group acting is abelian.
In this article, we consider a general family of dynamical systems
which include all spectrum fixing actions of a wide class of locally
compact groups acting on continuous-trace \cs-algebras.
Since the crossed product by any action of $G$ on a stable continuous-trace
\cs-algebra $A$ with single orbit type and constant stabilizer $N$
can be decomposed, via the stabilization trick, into a spectrum fixing
action of $N$ and an action of $G/N$ on $A\rtimes_\alpha N$ with
$G/N$ acting freely on $\specnp{(A\rtimes_{\alpha} N)}$, a detailed
description of spectrum fixing actions and their crossed products provides a
major step towards a general solution to Rosenberg's ``Research
Problem 1.'' Following ideas developed in \cite{pr2, 90a, ckrw,
judymc, prw} (and others) we are going to describe our actions in
terms of topological invariants living in Moore's group cohomology and
certain sheaf cohomology groups. In a forthcoming paper \cite{ew3}, we
will use these topological invariants to give a precise
bundle-theoretic description of the corresponding crossed products.
Since our methods require separability in a number of essential ways,
we will assume from the onset that {\em all automorphism groups are
second countable and that the \cs-algebras on which they act are
separable}.
Unlike much of the earlier work, we do not
assume that our groups are abelian, nor do we assume
that the associated Mackey obstructions vanish as in
\cite{pr2}, nor do we assume that they are constant as in
\cite{echros}. Moreover, it turns out that one
basic reason for actions on
continuous-trace \cs-algebras being more manageable than
actions on arbitrary \cs-algebras is that
for suitable $G$ (e.g., if the
abelianization $G_{\subab}=G/\overline{[G,G]}$ is compactly generated),
any spectrum fixing action of $G$ on a continuous-trace \cs-algebra $A$ is
\emph{locally inner}. (This follows from
the proof of \cite[Corollary~2.2]{ros2}.)
This means that each point in the spectrum has
an open neighborhood $U$ such that the action on the corresponding ideal
$A_U$ of $A$ is {\em inner\/} in the sense that for all $s\in G$ there is a
unitary $u_s\in\mathcal{U}M(A_U)$ such that
$\alpha_s|_{A_U}=\Ad u_s$. Thus it is natural to try to classify locally
inner actions on arbitrary \cs-algebras rather than restricting ourselves
to actions on continuous-trace algebras. This turns out to
be possible for a large class of \cs-algebras, namely those
whose primitive ideal space $\Prim(A)$ has a second countable
locally compact complete regularization $X$ as described in
\secref{sec-1}.
This class of algebras includes all unital \cs-algebras, all
\cs-algebras with Hausdorff primitive ideal spaces, and all
quasi-standard \cs-algebras in the sense of Archbold and Somerset
\cite{as}. If $X$ is given, then we will denote by $\MyMath{\ewcr}(X)$ the class
of
\cs-algebras for which the complete regularization of $\Prim(A)$ is
homeomorphic to $X$.
From the point of view of dynamical systems, two actions $\alpha$ and
$\beta$ of $G$ on $A$ are considered to be ``the same'' if they are
\emph{exterior equivalent} (see \S 2 for the precise definition). For
instance, if $\alpha$ and $\beta$ are exterior equivalent, then the
corresponding crossed products are isomorphic. An action $\alpha:G\to
\Aut(A)$ is exterior equivalent to the trivial action if and only if
it is \emph{unitary}; that is, if there is a strictly continuous
\emph{homomorphism} $u:G\to\mathcal{U}M(A)$ such that $\alpha_s=\Ad(u_s)$ for all $s\in
G$. As will become apparent, the crucial step in describing locally
inner actions in general turns out to be describing the collection
$\MyMath{\E_G(X)}$ of exterior equivalence classes of $\MyMath{C_0(X)}$-linear actions on
$A=C_0(X,\K)$. (Here and in the sequel, $\K$ will denote the compact
operators on a separable infinite dimensional Hilbert space $\mathcal{H}$.) In
fact, $\MyMath{\E_G(X)}$ is an abelian group. It is a special case of
\cite[Theorem~3.6]{ckrw} that the collection of Morita equivalence
classes of \MyMath{C_0(X)}-systems $(A,G,\alpha)$, where $A$ is a
continuous-trace \cs-algebra with spectrum $X$, forms a group
$\br_G(X)$ with respect to the operation
\begin{equation*}
[A,\alpha][B,\beta]:=[A\otimes_X B,\alpha\otimes_X\beta],
\end{equation*}
where $A\otimes_X B$ denotes the balanced tensor product $A\otimes_{C(X)} B$
and $\alpha\otimes_X \beta$ is the action on the quotient induced by
$\alpha\otimes \beta$.
In this situation, two actions on $C_0(X,\K)$ are Morita equivalent if and
only if they are exterior equivalent (\propref{prop-help}).
Therefore we can
identify \MyMath{\E_G(X)}{} with the subgroup of $\br_G(X)$ equal to the
kernel of the forgetful homomorphism{}
$F:\br_G(X)\to{\operatorname{Br}}(X)\cong H^3(X;\Z)$, which maps the class of
$[A,\alpha]$ to the Dixmier-Douady class $\delta(A)$ of $A$ in the
third cohomology of $X$. Since $G$ acts trivially here, $F$ is
clearly surjective and admits a natural splitting map so that the
equivariant Brauer group
$\br_G(X)$ is isomorphic to $ \MyMath{\E_G(X)}\oplus H^3(X;\Z)$.
To fix ideas and notation, consider the
(well known) case $X=\{pt\}$.
If $\beta:G\to \Aut\(\K(\mathcal{H})\)$ is an action, then, from the
short exact sequence
of polish groups
\[1\arrow{e} \mathbb{T}\arrow{e} U(\mathcal{H})\arrow{e,t}{\Ad} \Aut(\K)\arrow{e} 1\]
we obtain an obstruction $[\omega_{\beta}]\in
H^2(G,\mathbb{T})$ (Moore's group cohomology) to lifting
$\beta$ to a strongly continuous homomorphism $V:G\to U(\mathcal{H})$;
thus, $[\omega_{\beta}]=0$ if and only if $\beta$ is unitary.
This obstruction is
called the {\em Mackey obstruction\/} of $\beta$.
Notice that if $\omega\in Z^2(G,\mathbb{T})$, then $[\omega]=[\omega_{\beta}]$
if and only if there exists an $\omega$-representation
$V:G\toU(\mathcal{H})$ which implements $\beta$; that is,
$V$ is a Borel map satisfying
$V_e=1$, $V_sV_t=\omega(s,t)V_{st}$ for $s,t\in G$, and
$\beta_s=\Ad V_s$ for all $s\in G$.
Using this it is not hard to show that
$[\beta]\mapsto[\omega_{\beta}]$ is indeed a group isomorphism
of $\mathcal{E}_G(\{pt\})$ onto $H^2(G,\mathbb{T})$; surjectivity follows
from the fact that for each $\omega\in Z^2(G,\mathbb{T})$ there
is at least one $\omega$-representation, namely the left
regular $\omega$-representation on $L^2(G)$ defined by
$(L^\omega_s\xi)(t)=\omega(s,s^{-1}t)\xi(s^{-1}t)$.
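(That $L^\omega$ is an $\omega$-representation is a direct computation: for
$\xi\in L^2(G)$,
\[
(L^\omega_sL^\omega_t\xi)(r)=\omega(s,s^{-1}r)\omega(t,t^{-1}s^{-1}r)\xi(t^{-1}s^{-1}r)
=\omega(s,t)(L^\omega_{st}\xi)(r),
\]
where the second equality is the cocycle identity
$\omega(s,t)\omega(st,u)=\omega(s,tu)\omega(t,u)$ with $u=t^{-1}s^{-1}r$.)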
Let $\beta:G\to \Aut\(C_0(X,\K)\)$ be a $C_0(X)$-linear action
and let $[\omega_x]\in H^2(G,\mathbb{T})$ denote the Mackey-obstruction
for the induced automorphism group $\beta^x$ acting on the fibre over
$x$. The map $\varphi^{\beta}:X\to H^2(G,\mathbb{T})$ given by $
\varphi^{\beta}(x)=[\omega_x]$ is continuous (when
$G$ is compactly generated, this follows from \cite[Lemma 3.3]{doir}; for
the general case see \lemref{lem-fix} below.)
Since the evaluation map $\mathcal{E}_G(X)\to\mathcal
E_G(\{x\})$ sending $ [\beta]\mapsto [\beta^x]$ is a homomorphism for
each $x\in X$, it follows from the discussion above
that $[\beta]\mapsto \varphi^{\beta}$ is a group
homomorphism
$$\Phi:\mathcal{E}_G(X)\to C\(X,H^2(G,\mathbb{T})\).$$
Notice that $\varphi^{\beta}=0$ if and
only if
each irreducible representation of $C_0(X,\K)$ can be extended
to a covariant representation of
$\big(C_0(X,\K),G,\beta\big)$; that is, if and only if $\beta$ is
{\em pointwise unitary}. Thus $\ker \Phi$
consists of all exterior equivalence
classes of pointwise unitary actions of $G$ on $C_0(X,\K)$.
In \cite{judymc} (see also \cite{prw}), Packer observed that if $G$ is an
elementary abelian group (i.e., $G$ is of the form
$\R^n\times\T^m\times\Z^k\times F$,
$F$ finite abelian), then
$\Phi$ is surjective and admits a splitting map. Moreover, a theorem of
Rosenberg (see \thmref{thm-ros}) implies that under these hypotheses,
$\ker \Phi$ coincides with the (equivalence classes of) locally
unitary actions. Therefore the Phillips-Raeburn obstruction map (see
\corref{cor-point}) gives an isomorphism of $\ker \Phi$ with the
isomorphism classes of principal $\widehat G$-bundles over $X$, or
equivalently, with the sheaf cohomology group $H^1(X,\widehat{\sheaf
G})$. (If $G$ is an abelian group, we will use the corresponding
calligraphic letter $\sheaf G$ to denote the sheaf of germs of
continuous $G$-valued functions.) Thus as abelian groups, $\MyMath{\E_G(X)}\cong
H^1(X,\widehat{\sheaf G}) \oplus C\(X,H^2(G,\T)\)$, and $\br_G(X)\cong
H^1(X,\widehat{\sheaf G}) \oplus C\(X,H^2(G,\T)\) \oplus H^3(X;\Z)$. Notice
that the critical steps in Packer's argument are to find a splitting
map for $\Phi$ and to identify $\ker \Phi$ using Rosenberg's theorem.
Our first main result is to produce a splitting map for $\Phi$ in the
case that $G$ has a representation group in the sense of Moore (see
Definition~\ref{def-3.1}). Such groups are called \emph{smooth} and
comprise a large class of locally compact groups including all compact
groups, all discrete groups, and all compactly generated abelian
groups (see Remark~\ref{rem-rep} and \corref{corcompgen}). Smooth
groups $G$ have the property that $H^2(G,\T)$ is locally compact and
Hausdorff. Thus if $G_{\subab}$ is compactly generated, then $G$ satisfies
the hypotheses of Rosenberg's theorem, which allows us to identify
$\ker \Phi$ with the locally unitary actions of $G$. A suitable
modification of the Phillips-Raeburn theory (see \secref{sec-2}) gives
an isomorphism of $\ker \Phi$ with $H^1(X,\widehat{\sheaf G}_{\subab})$
(\corref{cor-point}). Thus we obtain the following result.
\begin{thmnn}
[{\thmref{brauer} \& \corref{cor-brauer}}]
Suppose that $G$ is smooth and that $G_{\subab}$ is compactly generated.
Then
\begin{equation*}
\mathcal{E}_G(X)\cong H^1(X,\widehat{\sheaf G}_{\subab})\oplus C\(X,H^2(G,\T)\),
\end{equation*}
and for any trivial $G$-space $X$,
\begin{equation*}
\br_G(X)\cong H^1(X,\widehat{\sheaf G}_{\subab})\oplus C\(X,H^2(G,\T)\) \oplus H^3(X;\Z).
\end{equation*}
\end{thmnn}
This gives
new information even if $G$ is abelian, since
by Corollary~\ref{corcompgen} our result applies not only to
elementary abelian groups, but to all
second countable compactly generated abelian groups.
Our final result is a consequence of the above theorem and the
observation that if $\alpha$ is a locally inner action of a smooth
group $G$ on $A\in\MyMath{\ewcr}(X)$, then there is a $[\gamma]\in\MyMath{\E_G(X)}$ such that
$\alpha\otimes_X \gamma$ is locally unitary on $A\otimes_XC_0(X,\K)$
(\propref{prop-locuni}).
\begin{thmnn}
[{\thmref{thm-locinner}}]
Suppose that $A\in \MyMath{\ewcr}(X)$ and $\alpha:G\to\Aut(A)$ is a locally inner
action of a smooth group on $A$. If $G_{\subab}$ is compactly
generated, then there is a unique $[\beta^\alpha]\in\MyMath{\E_G(X)}$ such that
$\alpha\otimes_X
\id$ is exterior equivalent to $\id\otimes_X \beta^\alpha$ on
$A\otimes_X C_0(X,\K)$. In fact, the map $[\alpha]\mapsto [\beta^\alpha]$
is a well-defined injective map from the collection of exterior
equivalence classes of locally inner actions on $A$ to \MyMath{\E_G(X)}. This
correspondence is bijective if $A$ is stable.
\end{thmnn}
It follows that if $A\in\MyMath{\ewcr}(X)$ and if $(A,G,\alpha)$ is locally
inner, then the crossed product $A\rtimes_\alpha G$ is Morita
equivalent to one of the special form $\big(A\otimes_X
C_0(X,\K)\big)\rtimes_{\id\otimes_X\beta} G$, where $\beta$ is in \MyMath{\E_G(X)}.
Having this, it is possible to describe the crossed product in
terms of the invariants associated to $\beta$ and a representation
group for $G$; this we do in \cite{ew3}.
Our work is organized as follows. \secref{sec-1} is devoted to some
preliminary results on \MyMath{C_0(X)}-algebras and on the complete regularization
of \MyMath{\Prim(A)}. In \secref{sec-2} we consider locally unitary actions on
$\MyMath{\ewcr}(X)$-algebras, and extend the Phillips-Raeburn
classification scheme to this setting. Since smooth groups play such
an important r\^ole in the sequel, we devote \secref{sec-3} to
developing some basic results about
representation groups. The chief result connecting representation
groups to the splitting of $\Phi$ is the characterization of smooth
groups given in \lemref{cocycle}. In \secref{sec-4}, we prove the
first of our main results (\thmref{brauer}) which describes \MyMath{\E_G(X)}. Our
description of locally inner actions is given in \secref{sec-5}.
Since Rosenberg's theorem plays a key r\^ole, we provide a discussion
of possible extensions of his theorem in \secref{sec-appendix}. We
give examples which show that the hypotheses are sharp --- that is,
the major assumptions that $H^2(G,\T)$ is Hausdorff and that $G_{\subab}$ is
compactly
generated are both necessary in general. On the other hand,
we also show that
Rosenberg's theorem holds for a strictly larger class of groups if we
restrict ourselves to actions on continuous-trace \cs-algebras with
locally connected spectrum (\thmref{append}). This class of groups
contains all connected nilpotent Lie groups and all
$[\mathrm{FD}]^-${} groups (\corref{cor-FD}) --- a
class of groups which contains all known examples of groups $G$ for which
$\cs(G)$ is a continuous-trace
\cs-algebra.
\section{Locally Unitary Actions}\label{sec-2}
In this section we want to see that the Phillips-Raeburn
classification scheme for locally unitary actions
of abelian groups can be extended to
all second countable locally compact groups acting on
\MyMath{\ewcr}-algebras.
Suppose that $A\in\MyMath{\ewcr}(X)$ and that $\alpha:G\to\Aut(A)$ is locally
unitary. Then there is a cover $\boldsymbol{U}=\set{U_i}_{i\in I}$ such that
$\alpha^{U_i}= \Ad(u^i)$ for strictly continuous homomorphisms
$u^i:G\to\mathcal{U}M(A_{U_i})$. For convenience, we will write $U_{ij}$ for
$U_i\cap U_j$, $A_{ij}$ in place of $A_{U_{ij}}$, $\alpha^{ij}$ for
$\alpha^{U_{ij}}$, and $u^{ij}$ for $(u^i)^{U_{ij}}$. Even though
$u^{ij}\not= u^{ji}$, both $u^{ij}$ and $u^{ji}$ implement
$\alpha^{ij}$. It follows that for each $s\in G$,
$u_s^{ij}(u_s^{ji})^*$ belongs to $\mathcal{Z}\UM(A_{ij})$. In order to
identify $\mathcal{Z}\UM(A_{ij})$ with $C(U_{ij},\T)$ (with the compact-open
topology), we need the following lemma.
\begin{lem}
\label{lem-crrestr}
Suppose that $A\in\MyMath{\ewcr}(X)$ and that $U$ is open in $X$. Then $A_U \in
\MyMath{\ewcr}(U)$.
\end{lem}
\begin{proof}
Let $q:\MyMath{\Prim(A)}\to X$ be the quotient map. Then we can identify
$\Prim(A_U)$ with $q^{-1}(U)$, and $\Glimm(A_U)$ is the
quotient of the latter with topology induced by $C^b\(q^{-1}(U)\)$.
We have to show that $\Glimm(A_U)$ can be identified with $U$ with the
relative topology. However since any $f\in\cb\(\prima\)$ restricts to an
element of $C^b\(q^{-1}(U)\)$, it is clear that $P\sim Q$ in
$q^{-1}(U)$ implies that $P\sim Q$ in \MyMath{\Prim(A)}. On the other hand,
suppose that $P\sim Q$ in \MyMath{\Prim(A)}{} and that $f\in C^b\(q^{-1}(U)\)$.
Since $X$ is locally compact and $U$ is an open \nbhd{} of
$q(P)=q(Q)$, there is a $g\in C_c^+(X)$ with $g\(q(P)\)=1$ and
$\supp(g) \subseteq U$. Therefore we may view $h= f(g\circ q)$ as an
element of $\cb\(\prima\)$. Since $h(P)=h(Q)$ by assumption, we must have
$f(P)=f(Q)$. Thus the two equivalence relations coincide on
$q^{-1}(U)$ and we can identify $U$ with $\Glimm(A_U)$ at least as a
set.
Let $\tau_r$ be the relative topology on $U$. A similar
argument to that in the previous paragraph shows that any element of
$C^b\(q^{-1}(U)\)$ agrees at least locally with an element of $\cb\(\prima\)$.
Thus $C^b(U,\tau_r)$ and $C^b(U,\tau_{\text{cr}})$ coincide. Since both
topologies on $U$ are completely regular (Hausdorff) topologies, and
therefore are determined by the zero sets of $C^b(U)$
\cite[Theorem~3.7]{gill-jer}, the topologies coincide.
\end{proof}
Now the previous lemma allows us to conclude that
$A_{ij}\in\MyMath{\ewcr}(U_{ij})$ so that we may identify $\mathcal{Z}\UM(A_{ij})$ with
$C(U_{ij}, \T)$ as claimed. Notice that $u^{ij}_s(x)=u^i_s(x)$ for all
$x\in U_{ij}$ and $s\in G$.
Since
$C(U_{ij},\T)$ has the compact open
topology and $s\mapsto u_s^{ij}(u_s^{ji})^*$ is continuous, it follows
that
\[
(x,s)\mapsto u_s^{i}(x)u_s^{j}(x)^*
\]
is jointly continuous from $U_{ij}\times G$ to $\T$.
In particular, for each $x\in U_{ij}$, $s\mapsto
u_s^{i}(x)u_s^{j}(x)^*$ is a continuous character $\gamma_{ij}(x)$
on $G$, and
\begin{equation}
\label{eq-*}
u_s^{i}(x)=\gamma_{ij}(x)(s)u^{j}_s(x).
\end{equation}
Since any character has to kill the closure of the commutator subgroup
$[G,G]$, we will always view $\gamma_{ij}(x)$ as a character on the
abelian group $G_{\subab}:=G/\overline{[G,G]}$.
The group $G_{\subab}$ is a locally compact abelian group usually called the
\emph{abelianization} of $G$. Notice that the joint continuity
implies that the functions $\gamma_{ij}:U_{ij}\to\widehat{G}_{\subab}$ are continuous
when $\widehat{G}_{\subab}$ is given the usual locally compact dual topology (of
uniform convergence on compacta). A straightforward computation using
the definition of the $\gamma_{ij}$'s shows that if $x\in U_{ijk}$,
then
\[
\gamma_{ij}(x)\gamma_{jk}(x)=\gamma_{ik}(x).
\]
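Indeed, since each $u^j_s(x)$ is unitary, \eqref{eq-*} gives, for all
$s\in G$,
\[
\gamma_{ij}(x)(s)\gamma_{jk}(x)(s)
=u^i_s(x)u^j_s(x)^*\,u^j_s(x)u^k_s(x)^*
=u^i_s(x)u^k_s(x)^*
=\gamma_{ik}(x)(s).
\]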
Thus the collection $\gamma=\set{\gamma_{ij}}$ defines a $1$-cocycle
in\footnote{If $G$ is an abelian group,
we use the calligraphic letter $\sheaf G$ to denote the corresponding
sheaf of germs of continuous $G$-valued functions on $X$.}
$Z^1(\boldsymbol{U},\widehat{\sheaf G}_{\subab})$ and therefore a class $\zeta$ in $H^1(X,\widehat{\sheaf G}_{\subab})$.
We claim this class depends only on
$(A,G,\alpha)$. Suppose we had taken a different cover
$\set{V_j}_{j\in J}$ and homomorphisms $v^j$. Passing to a common refinement
allows us to assume that $I=J$ and that $U_i=V_i$. Then since $u^i$
and $v^i$ both implement $\alpha^i$ over $U_i$, an argument similar to
that above implies they differ by a central multiplier $\lambda_i:U_i\to
\widehat{G}_{\subab}$. Then it is easy to see that we get cohomologous cocycles.
One usually writes $\zeta(\alpha)$ for the class $\zeta$, and
$\zeta(\alpha)$ is called the \emph{Phillips-Raeburn obstruction}.
\begin{remark}
\label{rem-a}
When $G$ is abelian and $A$ is type~I with spectrum $X$, then
$\zeta(\alpha)$ is the classical Phillips-Raeburn obstruction of
\cite{pr2}. That is, $\zeta(\alpha)$ is the class of the principal
$\widehat G$-bundle given by the restriction map $p:\spec{\acg} \to X$
as in \cite[Theorem~2.2]{pr2}.
\end{remark}
\begin{prop}
\label{prop-PRobs}
Let $X$ be a second countable locally compact Hausdorff space.
Suppose that $A\in\MyMath{\ewcr}(X)$, that $G$ is a second countable
locally compact group, and that $\alpha:G\to\Aut(A)$ is a locally
unitary automorphism group. Then the transition functions
\eqref{eq-*} define a class $\zeta(\alpha)$ in $H^1(X,\widehat{\sheaf G}_{\subab})$ which
depends only on $(A,G,\alpha)$. If $(A,G,\beta)$ is another such
system, then $\zeta(\alpha)=\zeta(\beta)$ if and only if $\alpha$ and
$\beta$ are exterior equivalent. In particular, $\alpha$ is unitary
if and only if $\zeta(\alpha)=1$\footnote{We are writing the product
in $H^1$ multiplicatively; therefore $1$ denotes the trivial
element.}.
Furthermore if $A$ is stable, then every
class in $H^1(X,\widehat{\sheaf G}_{\subab})$ is equal to $\zeta(\alpha)$ for some locally
unitary action $\alpha:G\to\Aut(A)$.
\end{prop}
\begin{proof}
[Proof of all but the last assertion]
We have already seen that $\zeta(\alpha)$ depends only on
$(A,G,\alpha)$. Now suppose that $(A,G,\beta)$ is another locally
unitary action with
$\zeta(\beta)=\zeta(\alpha)$.
Then we can find a cover
$\set{U_i}_{i\in I}$ and $u^i,v^i:G\to \mathcal{U}M(A_{U_i})$ such that $u^i$
implements $\alpha^i$, $v^i$ implements $\beta^i$, and such that
\begin{equation}
\label{eq-**}
u^{i}_s(x)u^{j}_s(x)^*=\gamma_{ij}(x)(s)=v^{i}_s(x)v^{j}_s(x)^*.
\end{equation}
Let $w^i_s(x):= u^i_s(x)v^i_s(x)^*$. Then $s\mapsto w^i_s(\cdot)$ is
a strictly continuous map of $G$ into $\mathcal{U}M(A_{U_i})$, and it is
easy to see that $\alpha^i$ is exterior equivalent to $\beta^i$ via $w^i$.
However, if $x\in U_{ij}$, then \eqref{eq-**} implies that
\begin{equation*}
w^i_s(x)w^j_s(x)^* = u^i_s(x)v^i_s(x)^*v^j_s(x)u^j_s(x)^* = 1.
\end{equation*}
Consequently, we can define $w_s(x)=w^i_s(x)$ if $x\in U_i$. Since
each $w^i$ defines a strictly continuous map into $\mathcal{U}M(A_{U_i})$ and
since $x\mapsto\|a(x)\|$ vanishes at infinity for each $a\in A$, it is
not hard to see that $w$ is a strictly continuous map from $G$ into
$\mathcal{U}M(A)$. Therefore, $\alpha$ and $\beta$ are exterior equivalent.
Conversely, if $\alpha$ and $\beta$ are exterior equivalent via
$w:G\to\mathcal{U}M(A)$, then with $\set{U_i}_{i\in I}$ and $u^i,v^i:G\to
\mathcal{U}M(A_{U_i})$ as above, we must have unimodular scalars
$\lambda_i(x)(s)$ for all $x\in U_i$ and $s\in G$ such that
$\lambda_i(x)(s) = u^i_s(x)^* w_s(x)v^i_s(x)$. As above, we may view
these as continuous functions from $U_i$ to $\widehat{G}_{\subab}$. Also, if $x\in
U_{ij}$, then
\begin{align*}
u^i_s(x)^*u^j_s(x)& =
u^i_s(x)^*w_s(x)v^i_s(x)v^i_s(x)^*v^j_s(x)\(u^j_s(x)^*w_s(x)v^j_s(x)\)^*
\\
&= \lambda_i(x)(s)\overline{\lambda_j(x)(s)}v^i_s(x)^*v^j_s(x).
\end{align*}
It follows that $\zeta(\alpha)=\zeta(\beta)$.
\end{proof}
To prove that every class in $H^1(X,\widehat{\sheaf G}_{\subab})$ arises when $A$ is
stable, we want to recall some facts about balanced tensor products.
Suppose that $A$ and $B$ are \cox-algebra s. Let $I$ be the ideal in
$A\mtensor B$ generated by
\[
\set{a\cdot f\otimes b - a\otimes f\cdot b: \text{$a\in A$, $b\in B$,
and $f\in \MyMath{C_0(X)}$}}.
\]
The \emph{(maximal) \MyMath{C_0(X)}-balanced tensor product} of $A$ and $B$ is
defined to be the quotient
\[
A\tensor_X B:= (A\mtensor B)/I.
\]
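To see what the balancing accomplishes, consider the simplest case
$A=B=\MyMath{C_0(X)}$. Then $A\mtensor B\cong C_0(X\times X)$, the ideal
$I$ consists of the functions vanishing on the diagonal $\Delta\cong X$,
and therefore $C_0(X)\tensor_X C_0(X)\cong C_0(X)$, with $f\tensor_X g$
corresponding to the pointwise product $fg$.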
\begin{remark}
Balanced tensor products have been studied by several authors, and
quite recently by Blanchard \cite{blanchard,blanchard2}. In
particular if $X$ is compact, then $A\tensor_X B$ coincides with Blanchard's
$A\otimes_X^M B$. Moreover, $A\tensor_X B$ is a \cox-algebra{} and, writing
$a\tensor_X b$ for the image of $a\otimes b$ in $A\tensor_X B$, we have
\begin{equation}
\label{eq-(a)}
f\cdot(a\tensor_X b)=f\cdot a\tensor_X b = a\tensor_X f\cdot
b\quad\text{for all $f\in\MyMath{C_0(X)}$.}
\end{equation}
We intend to discuss these and other properties of $\tensor_X$
elsewhere \cite[\S2]{ew3}.
Here we will be satisfied with the special cases
outlined below.
\end{remark}
In this work, we shall always assume that $A$ and $B$ are separable,
and that $B$ is nuclear --- in fact, it will suffice to consider only
the case where $B=C_0(X,\K)$. Then \cite[Lemma~1.1]{rw} applies and
we can identify $\Prim(A\tensor_X B)$ with
\begin{equation}
\label{eq-(b)}
\set{(P,Q)\in\Prim(A)\times\Prim(B):\sigma_A(P)=\sigma_B(Q)}.
\end{equation}
In this case, \eqref{eq-(a)} is a straightforward
consequence of \eqref{eq-(b)} and the definition of $I$. Moreover,
for all $x\in X$,
\begin{equation}
\label{eq-fibres}
(A\tensor_X B)(x)\cong A(x)\otimes B(x),
\end{equation}
and $(a\tensor_X b)(x)=a(x)\otimes b(x)$.
(Note that we write simply $\otimes$ when one of the factors is
nuclear.)
If $B=C_0(X,\K)$, then
$\Prim\(A\tensor_X C_0(X,\K)\)$ can be identified with $\Prim(A)$.
Moreover since $C_0(X,\K)\cong C_0(X)\otimes \K$, the map
$a\tensor_X(f\otimes T)\mapsto a\cdot f\otimes T$ extends to a
\MyMath{C_0(X)}-linear isomorphism of $A\tensor_XC_0(X,\K)$ onto $A\otimes \K$. Notice
that if $U$ is open in $X$, then this isomorphism identifies
$(A\tensor_XC_0(X,\K))_U$ with $A_U\otimes \K$. Furthermore, if $A$ is
stable, then we can choose an isomorphism of $A\otimes \K$ and $A$
which induces the identity map on the primitive ideal spaces (assuming
$\Prim(A\otimes\K)$ has been identified with \MyMath{\Prim(A)})
\cite[Lemma~4.3]{pr2}. Then $A\tensor_XC_0(X,\K)$ is isomorphic to $A$
and $(A\tensor_XC_0(X,\K))_U$ is identified with $A_U$.
\begin{lem}[cf., {\cite[Proposition~3.10]{pr2}}]
\label{lem-PR3.10}
Suppose that $X$ is a second countable locally compact Hausdorff space
and that $A\in\MyMath{\ewcr}(X)$. Then $A\tensor_XC_0(X,\K)\in\MyMath{\ewcr}(X)$. If
$\alpha:G\to\Aut(A)$ and $\beta:G\to\Aut\(C_0(X,\K)\)$ are
$\MyMath{C_0(X)}$-automorphism groups, then the diagonal action
$\alpha\otimes\beta$ on $A\otimes C_0(X,\K)$ induces an action
$\alpha\tensor_X\beta$ on $A\tensor_XC_0(X,\K)$. If $\gamma$ is exterior
equivalent to $\alpha$ and $\delta$ is exterior equivalent to $\beta$,
then $\alpha\tensor_X\beta$ is exterior equivalent to
$\gamma\tensor_X\delta$. Finally if $\alpha$ and $\beta$ are locally
unitary, then so is $\alpha\tensor_X \beta$; moreover,
\[
\zeta\(\alpha\tensor_X \beta\)=\zeta(\alpha)\zeta(\beta)\quad
\text{in $H^1(X,\widehat{\sheaf G}_{\subab})$}.
\]
\end{lem}
\begin{remark}
If $B$ is an arbitrary element of $\MyMath{\ewcr}(X)$, then it seems to be
difficult to decide whether $A\tensor_X B$ is in $\MyMath{\ewcr}(X)$ --- even if
$A$ and $B$ are both nuclear. However if one of the algebras is nuclear
and $A$ or $B$ has Hausdorff primitive ideal space, then one can replace
$C_0(X,\K)$ by
$B$ in the above and obtain the same results.
\end{remark}
\begin{proof}
Since $\Prim(A\tensor_XC_0(X,\K))$ can be identified with \MyMath{\Prim(A)},
$A\tensor_XC_0(X,\K)$ is certainly in $\MyMath{\ewcr}(X)$. If $\alpha_s$ and
$\beta_s$ are \MyMath{C_0(X)}-linear, then $\alpha_s\otimes\beta_s$ maps the
balancing ideal into itself
and $\alpha\tensor_X\beta$ is a well-defined action on $A\tensor_X B$.
Now suppose that $u:G\to\mathcal{U}M(A)$ and $v:G\to\mathcal{U}M\(C_0(X,\K)\)$ are strictly
continuous $1$-cocycles such that
$
\alpha_s(a)=u_s\gamma_s(a)u_s^*$ and $\beta_s(b)=v_s\delta_s(b)
v_s^*$ for all $s\in G$, $a\in A$, and $b\in C_0(X,\K)$.
Since the image of \MyMath{C_0(X)}{} sits in the center of the respective
multiplier algebras, it is clear that each $u_s$ and $v_s$ commutes
with the \MyMath{C_0(X)}-actions. Therefore $u_s\otimes v_s$ induces a
well-defined element $w_s:=u_s\tensor_X v_s$ in $\mathcal{U}M\(A\tensor_XC_0(X,\K)\)$. The
continuity of $s\mapsto w_s(t)$ is clear for $t$ in the algebraic
tensor product $A\odot C_0(X,\K)$. This suffices to show strict continuity
as each $w_s$ has norm one. Routine calculations show that $w_s$ is a
$1$-cocycle implementing an exterior equivalence between
$\alpha\tensor_X \beta$ and $\gamma\tensor_X\delta$.
Finally, suppose that $\alpha$ and $\beta$ are locally unitary. Then
we can find a cover $\boldsymbol{U}=\set{U_i}$ such that $\alpha^{U_i}$ is
implemented by a homomorphism{} $u^i:G\to\mathcal{U}M(A_{U_i})$ and $\beta^{U_i}$ by a
homomorphism{} $v^i:G\to\mathcal{U}M\(C_0(U_i,\K)\)$. Let $\gamma=\set{\gamma_{ij}}$ and
$\eta=\set{ \eta_{ij}}$ be the corresponding cocycles representing
$\zeta(\alpha)$ and $\zeta(\beta)$. As above, we obtain a homomorphism{} $w^i =
u^i\otimes_{U_i}v^i$ which implements $(\alpha\tensor_X\beta)^{U_i}=
\alpha^{U_i} \otimes_{U_i} \beta^{U_i}$ on $A_{U_i}\otimes_{U_i}
C_0(U_i,\K) \cong \(A\tensor_X C_0(X,\K)\)_{U_i}$. Thus
$\alpha\tensor_X\beta$ is locally unitary. Moreover since
$w^{i}(x)= u^{i}(x)\otimes v^{i}(x)$,
\begin{align*}
w^i_s(x)&=\gamma_{ij}(x)(s)u^j_s(x)\otimes \eta_{ij}(x)(s)v^j_s(x) \\
&= \(\gamma_{ij}(x)(s)\eta_{ij}(x)(s)\)w^j_s(x).
\end{align*}
The result follows.
\end{proof}
\begin{proof}
[Proof of the final assertion in \propref{prop-PRobs}]
Let $\zeta_0\in H^1(X,\widehat{\sheaf G}_{\subab})$.
As we remarked above, when $A$ is stable there is an isomorphism of
$A$ and $A\tensor_XC_0(X,\K)$ carrying $A_U$ onto $\(A\tensor_XC_0(X,\K)\)_U$.
Thus it will suffice to produce a locally unitary action $\alpha$ on
$A\tensor_XC_0(X,\K)$ with $\zeta(\alpha)=\zeta_0$.
It follows from \cite[Theorem~3.8]{pr2} and Remark~\ref{rem-a}
that there is a locally
unitary action $\tilde\beta:G_{\subab}\to\Aut\(C_0(X,\K)\)$ with
$\zeta(\tilde\beta)=\zeta_0$.
Now we simply lift $\tilde\beta$ to an action $\beta$ of $G$; that is,
$\beta_s:= \tilde\beta_{sH}$, where $H:=\overline{[G,G]}$.
It is straightforward to check that $\zeta(\beta)=\zeta(\tilde\beta)
=\zeta_0$.
Now the result follows by applying \lemref{lem-PR3.10} to
$\alpha:=\id\tensor_X \beta$.
\end{proof}
\begin{remark}
Since two actions with the same Phillips-Raeburn obstruction are
exterior equivalent, the above argument makes it clear that \emph{any}
locally unitary action of $G$ on a stable \MyMath{\ewcr}-algebra $A$ is lifted from
an action of $G_{\subab}$. In fact, there is a one-to-one correspondence
between exterior equivalence classes of locally unitary actions of $G$ on
$A$ and exterior equivalence classes of locally unitary actions of
$G_{\subab}$ on $A$.
\end{remark}
We end this section with a short discussion of locally unitary actions
on continuous-trace \cs-algebras. Assume
that $A$ is a separable continuous trace $\cs$-algebra with
spectrum $X$. An action $\alpha:G\to \Aut(A)$ is called
{\em pointwise unitary} if $\alpha$ is
$C_0(X)$-linear and the action on each fibre $A(x)$ is unitary,
or, equivalently,
if each $\rho\in \hat A$
can be extended to a covariant representation $(\rho, V)$
of $(A,G,\alpha)$. In general, a pointwise unitary action need not be
locally unitary (see \secref{sec-appendix}). Despite this, pointwise
unitary actions are locally unitary under mild additional hypotheses.
The strongest result in this direction is due to Rosenberg.
\begin{thm}[{\cite[Corollary 1.2]{ros2}}]\label{thm-ros}
Let $A$ be a separable continuous-trace $\cs$-algebra with
spectrum $X$ and let $G$ be a second countable locally
compact group such that $G_{\subab}$ is compactly generated and
$H^2(G,\mathbb{T})$ is Hausdorff. Then every pointwise unitary action
of $G$ on $A$ is locally unitary.
\end{thm}
Thus as a direct corollary of this and \propref{prop-PRobs}, we obtain:
\begin{cor}\label{cor-point}
Let $A$ and $G$ be as above and assume in addition that $A$
is stable. Then the Phillips-Raeburn obstruction map
$\alpha\to \zeta(\alpha)$ induces a bijection
between the exterior equivalence classes of pointwise unitary
actions
of $G$ on $A$ and $H^1(X,\widehat{\sheaf G}_{\subab})$.
\end{cor}
\section{Preliminaries}\label{sec-1}
If $A$ is a \cs-algebra, then we will write $\MyMath{\Prim(A)}$ for the space
of primitive ideals of $A$ with the Jacobson topology. This topology
is badly behaved in general and may satisfy only the $T_0$ separation
axiom. On the other hand, \MyMath{\Prim(A)}{} is always locally
compact\footnote{We do not require that compact or locally compact
spaces be Hausdorff.}, and \MyMath{\Prim(A)}{} is second countable whenever $A$
is separable \cite[\S3.3]{dix}.
The Jacobson topology on \MyMath{\Prim(A)}{} not
only describes the ideal structure of $A$, but also allows us to
completely describe the center $\mathcal{Z}\M(A)$ of the multiplier algebra
$M(A)$ of $A$. If $a\in A$, then we will write
$a(P)$ for the image of $a$ in the quotient $A/P$. The
Dauns-Hofmann Theorem then allows us to identify
$\cb\(\prima\)$ with $\mathcal{Z}\M(A)$ as follows: if $f\in\cb\(\prima\)$ and if $a\in A$, then
$f\cdot a$ is the unique element of $A$ satisfying $(f\cdot a)(P) =
f(P)a(P)$ for all $P\in\MyMath{\Prim(A)}$,
and every element of $\mathcal{Z}\M(A)$ is of this form (cf.,
\cite[Corollary~4.4.8]{ped} or \cite{may}). Note that
$A$ is a nondegenerate central Banach $\cb\(\prima\)$-module.
Since the topology on \MyMath{\Prim(A)}{} can be awkward to deal with, a natural
alternative is to use the following definition.
\begin{definition}
Suppose that $X$ is a locally compact Hausdorff space. A
\emph{\cox-algebra} is a \cs-algebra $A$ together with a $*$-homomorphism{}
$\Phi_A:\MyMath{C_0(X)}\to\mathcal{Z}\M(A)$ which is \emph{nondegenerate} in the
sense that
\[
\Phi_A\(\MyMath{C_0(X)}\)\cdot A:=\sp\set{\Phi_A(f)a:\text{$f\in \MyMath{C_0(X)}$ and $a\in A$}}
\]
is dense in $A$.
\end{definition}
\cox-algebra s have enjoyed a considerable amount of attention recently and
there are a number of good treatments available
\cite{blanchard,blanchard2,may1}. We recall some of the basic
properties here.
If $(A,\Phi_A)$ is a \cox-algebra, then there is a continuous map
$\sigma_A:\MyMath{\Prim(A)}\to X$ such that $\Phi_A(f)=f\circ\sigma_A$. (Here
and in the sequel, we \emph{identify} $\mathcal{Z}\M(A)$ with $\cb\(\prima\)$ via the
Dauns-Hofmann Theorem.)
As the converse is clear, $A$ is a \cox-algebra{} if and only if there is a
continuous map from $\MyMath{\Prim(A)}$ to $X$. We will usually suppress
$\Phi_A$ and $\sigma_A$ and write $f\cdot a$ in place of $\Phi_A(f)a$
or $(f\circ\sigma_A)\cdot a$. Notice that $A$ is a nondegenerate
central Banach \MyMath{C_0(X)}-module satisfying
\begin{equation}
\label{eq-//}
(f\cdot a)^*=a^*\cdot \bar f.
\end{equation}
Furthermore, any nondegenerate central \MyMath{C_0(X)}-module satisfying
\eqref{eq-//} is a \cox-algebra.
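The basic example to keep in mind is $A=C_0(X,D)$ for a \cs-algebra $D$,
with $\Phi_A(f)$ acting by pointwise multiplication:
$\(\Phi_A(f)a\)(x)=f(x)a(x)$. In this case $\sigma_A$ may be identified
with the canonical projection of $\Prim\(C_0(X,D)\)\cong
X\times\Prim(D)$ onto the first factor.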
Suppose that $U$ is open in $X$ and that $J$ is the ideal of functions
in \MyMath{C_0(X)}{} vanishing off $U$. Then the Cohen factorization theorem
(\cite{cohen}, \cite[Proposition~1.8]{blanchard}) implies that
\begin{equation*}
\overline{J\cdot A}:=\overline\sp\set{f\cdot a:\text{$f\in J$ and
$a\in A$}}
=\set{f\cdot a:\text{$f\in J$ and
$a\in A$} }.
\end{equation*}
(The point being that it is unnecessary to take either the closure or the
span in the final set.)
Anyway, $J\cdot A$ is an ideal of $A$ which will be denoted by $A_U$.
For each $x\in X$, we write $A(x)$ for the quotient of $A$ by
$A_{X\setminus\set x}$. If $a\in A$, then we write $a(x)$ for the
image of $a$ in $A(x)$. We refer to $A(x)$ as \emph{the fibre of $A$
over $x$}. Notice that it is possible that $A(x)=\set0$.
Even so, we often view elements of $A$ as ``fields'' in
$\bigoplus_{x\in X} A(x)$. This point of view is justified by the
following.
\begin{lem}
[{\cite{blanchard,may1}}]
Suppose that $A$ is a \cox-algebra. For each $a\in A$, the map $x\mapsto
\|a(x)\|$ is upper semicontinuous; that is, $\set{x\in
X:\|a(x)\|\ge\epsilon}$ is closed for all $\epsilon\ge0$.
Furthermore,
\[
\|a\|=\sup_{x\in X}\|a(x)\|.
\]
\end{lem}
\begin{remark}
The map
$x\mapsto
\|a(x)\|$ is continuous for all $a\in A$ if and only if
$\sigma_A$ is
open \cite{lee2,may1}. In this case,
$A$ is the section algebra of a
\cs-bundle over the image of $\sigma_A$ \cite[\S1]{fell4}.
\end{remark}
\begin{remark}
Notice that if $A$ is a \cox-algebra, then each $m\inM(A)$ defines a
multiplier $m(x)\in M\(A(x)\)$. If $m\inM(A)$
and $a\in A$, then $ma(x)=m(x)a(x)$.
\end{remark}
Given a \cs-algebra $A$, it is natural to look for a nice space $X$
which makes $A$ a \cox-algebra. Since $X$ will be the image of \MyMath{\Prim(A)}{}
by a continuous map, it is reasonable to look for a
``Hausdorffication
of $\MyMath{\Prim(A)}$''. Regrettably, there are a
horrifying number of alternatives to choose from (cf., e.g.,
\cite[Chap.~III
\S3]{dauns-hofmann}). For our purposes, the appropriate
notion is the \emph{complete regularization}. If $P$ and $Q$
belong to \MyMath{\Prim(A)}, then we define
$P\sim Q$ if $f(P)=f(Q)$ for all $f\in \cb\(\prima\)$. Then $\sim$ is an
equivalence relation and the set $\MyMath{\Prim(A)}/\!\!\sim$ is denoted by
$\ifmmode\Glimm(A)\else$\Glimm(A)$\fi$
\cite{as}. If we give \ifmmode\Glimm(A)\else$\Glimm(A)$\fi{} the weak topology $\tau_{\text{cr}}$ induced by the
functions in $\cb\(\prima\)$, then $\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi,\tau_{\text{cr}}\)$ is a completely regular
space \cite[Theorem~3.7]{gill-jer}. The quotient map
$q:\MyMath{\Prim(A)}\to\ifmmode\Glimm(A)\else$\Glimm(A)$\fi$ is called the \emph{complete regularization map}.
It is not clear that $\tau_{\text{cr}}$ coincides with the quotient
topology\footnote{These topologies do differ in general
\cite[3J.3]{gill-jer}; however, we know of no examples where they
differ for $\ifmmode\Glimm(A)\else$\Glimm(A)$\fi$.} $\tau_{\text{q}}$
on $\ifmmode\Glimm(A)\else$\Glimm(A)$\fi$, although one certainly has $\tau_{\text{cr}}\subseteq \tau_{\text{q}}$. In
particular, $q$ is continuous; moreover the map $f\mapsto f\circ q$ is
an isomorphism of $C^b\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi)$ and $\cb\(\prima\)$
\cite[Theorem~3.9]{gill-jer}.
Furthermore, $\tau_{\text{cr}}$ is the only completely regular topology on \ifmmode\Glimm(A)\else$\Glimm(A)$\fi{}
such that the functions induced by $\cb\(\prima\)$ are continuous
\cite[Theorem~3.6]{gill-jer}.
Here it will be necessary to have the complete regularization $\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi,\tau_{\text{cr}}\)$
be locally compact. Regrettably, this can
fail to be the case
\cite[Example~9.2]{dauns-hofmann}. Even if the complete
regularization is locally compact, we have been unable to show that it
must be second countable if $A$ is separable. Consequently, we must
include both these assumptions in our applications.
\begin{definition}
We will call a separable \cs-algebra a \emph{\MyMath{\ewcr}-algebra} if the complete
regularization $X:=\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi,\tau_{\text{cr}}\)$ of \MyMath{\Prim(A)}{} is a second countable
locally compact Hausdorff space. If $X$ is a second countable
locally compact Hausdorff space, then we will write $\MyMath{\ewcr}(X)$ for the
collection of \MyMath{\ewcr}-algebras with complete regularization homeomorphic
to $X$.
\end{definition}
Despite the pathologies mentioned above, the class of \MyMath{\ewcr}-algebras is
quite large. It clearly contains all separable $C^*$-algebras with
Hausdorff primitive ideal space \MyMath{\Prim(A)}{}. If $A$ is unital,
then \MyMath{\Prim(A)}{} is compact. Since the complete regularization map is continuous,
\ifmmode\Glimm(A)\else$\Glimm(A)$\fi{} is compact. Since
$\cb\(\prima\)=C^b\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi\)$ is actually a closed subalgebra of $A$ in this
case, $C^b\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi\)$ is separable and $\ifmmode\Glimm(A)\else$\Glimm(A)$\fi$ is second
countable\footnote{There is an embedding of \ifmmode\Glimm(A)\else$\Glimm(A)$\fi{} into
$\specnp{C^b\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi\)}$ which is the Stone-\v Cech compactification
$\beta\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi\)$.
Since a subset of a
locally compact Hausdorff space is locally compact if and only if it
is open in its closure,
$\ifmmode\Glimm(A)\else$\Glimm(A)$\fi$ is locally compact exactly when it is open in
its Stone-\v Cech compactification.}.
Thus
\emph{every unital \cs-algebra is a \MyMath{\ewcr}-algebra}.
Another large class of \MyMath{\ewcr}-algebras is provided by the
\emph{quasi-standard \cs-algebras} studied in \cite{as}.
Recall that a \cs-algebra is called quasi-standard if (1)~defining
$P\approx Q$ when $P$ and $Q$ cannot be separated by open sets in
\MyMath{\Prim(A)}{} is an equivalence relation on \MyMath{\Prim(A)}, and~(2) the
corresponding quotient map is open. If $A$ is quasi-standard, then
$\sim$ and $\approx$ coincide and $A$ is \MyMath{\ewcr}{}
\cite[Proposition~3.2]{as}. In fact a number of interesting group
\cs-algebras
turn out to be quasi-standard \cite{arch-kan,kan-sch-tay}.
Let $M(A)$ be the multiplier algebra of $A$.
Recall that a net $\set{T_i}$ converges to $T$ in the strict topology
on $M(A)$ if
and only if $T_ia\to Ta$ and $T_i^*a\to T^*a$ for all $a\in A$. If
the net is bounded, then it suffices to take $a$ in the unit ball of
$A$. In fact, if each $T_i$ is unitary, then it suffices to check only
that $T_ia\to Ta$ for $a\in A$ with $\|a\|\le 1$.
Consequently if $A$ is separable, then the unitary group
$\mathcal{U}M(A)$ is a second countable topological group in the strict
topology
which admits a
complete metric (compatible with the topology). That is, $\mathcal{U}M(A)$ is
a Polish group. Since $\mathcal{Z}\UM(A)$ is closed in $\mathcal{U}M(A)$, it too is a
Polish group.
For notational convenience, let $X=\(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi,\tau_{\text{cr}}\)$ be the complete
regularization of $\MyMath{\Prim(A)}$. Then we can identify $\mathcal{Z}\M(A)$ with
$C^b\(X)$, and $\mathcal{Z}\UM(A)$ with $C(X,\T)$. However it is not
immediately obvious how to describe the strict topology on $C(X,\T)$.
Our next result says that when $X$ is second countable and locally
compact, then
the strict topology on $C(X,\T)$ coincides with the compact-open
topology (the topology of uniform convergence on compacta).
\begin{lem}
\label{lem-co}
Suppose that $X$ is a second countable locally compact Hausdorff space
and that $A\in\MyMath{\ewcr}(X)$. Then $\mathcal{Z}\UM(A)$ with the strict topology is
homeomorphic to $C(X,\T)$ with the compact-open topology.
\end{lem}
\begin{remark}
The lemma holds for $X=(\ifmmode\Glimm(A)\else$\Glimm(A)$\fi,\tau_{\text{cr}})$ whenever $C(X,\T)$ is a Polish
group in the compact-open topology.
In general,
$X$ is a
$\sigma$-compact, completely regular space. If $\tau_{\text{q}}=\tau_{\text{cr}}$, then $X$
is compactly generated (or a $k$-space) by \cite[43H(3)]{willard}, and
at least in a compactly generated space, the limit of continuous
functions in the compact-open topology is continuous. In order that
$C(X,\T)$ be metric, it seems to be necessary that $X$ be
``hemicompact'' \cite[43G(3)]{willard}. In any case, if $X$ is
hemicompact, then the compact open topology is metric and complete.
In this case, $C(X,\T)$ is Polish, at least
\emph{provided} that
$X$ is second countable --- so that $C(X,\T)$ is separable.
However we have been unable to show that $X$ is second countable ---
even if $X$ is locally compact.
\end{remark}
\begin{proof}
Suppose that $f_n\to f$ uniformly on compacta in $C(X,\T)$. If $a\in
A$ is nonzero and if $\epsilon>0$, then the image of
\[
C=\set{P\in\MyMath{\Prim(A)}:\|a(P)\|\ge\epsilon/2}
\]
is compact in $X$ \cite[\S3.3]{dix}. Thus there is an $N$ such that
$n\ge N$ implies that $|f_n\(q(P)\)-f\(q(P)\)|<\epsilon/\|a\|$ for all
$P\in C$. If $P\notin C$, then $\|a(P)\|<\epsilon/2$, so that for all $n$,
\[
\|f_n\cdot a(P)-f\cdot
a(P)\|\le 2\|a(P)\|\le \epsilon.
\]
This proves that $f_n\to f$ strictly.
Since $X$ is second countable and locally compact, there is a sequence
of compact sets $\set{K_n}$ in $X$ such that $X=\bigcup_n K_n$ and
such that every compact set $K$ in
$X$ is contained in some $K_n$. Then if $\set{V_n}$ is a countable
basis for the topology of $\T$, we get a sub-basis $\set{U_{n,m}}$
for the
compact-open topology on $C(X,\T)$ by setting
\[
U_{n,m}:=\set{f\in C(X,\T):f(K_n)\subseteq V_m}.
\]
It follows that $C(X,\T)$ is second countable in the compact-open
topology. Using the $K_n$'s, it is easy to construct a complete
metric on $C(X,\T)$ compatible with the compact-open topology. Thus,
$C(X,\T)$ is a Polish group in the compact open topology. Since the
first part of the proof shows that the identity map is continuous from
the compact-open topology to the strict topology, the result follows
from the Open Mapping Theorem \cite[Proposition~5(b)]{moore3}.
\end{proof}
An automorphism $\alpha$ of a \cs-algebra $A$
is called \emph{inner} if there is a $u\in\mathcal{U}M(A)$ such that $\alpha =
\Ad(u)$. (Recall that $\Ad(u)(a):=uau^*$.) An action
$\alpha:G\to\Aut(A)$ is called \emph{inner} if $\alpha_s$ is inner for
each $s\in G$. An action is called \emph{unitary} if there is a
strictly continuous homomorphism{} $u:G\to\mathcal{U}M(A)$ such that $\alpha_s=\Ad(u_s)$
for all $s\in G$.
Unitary actions are considered trivial; for example, if $\alpha$ is
unitary, then the crossed product $A\rtimes_\alpha G$ is isomorphic to
$A\tensor_{\text{max}} \cs(G)$. Also two actions $\alpha:G\to\Aut(A)$ and
$\beta:G\to\Aut(A)$ are called \emph{exterior equivalent} if there is
a strictly continuous map $w:G\to\mathcal{U}M(A)$ such that
\begin{enumerate}
\item
$\alpha_s(a) =
w_s\beta_s(a) w_s^*$ for all $a\in A$ and $s\in G$, and
\item
for all
$s,t\in G$, $w_{st}=w_s\beta_s(w_t)$.
\end{enumerate}
In this event, we call $w$ a $1$-cocycle. Actions
$\alpha:G\to\Aut(A)$ and $\beta:G\to\Aut(B)$ are called \emph{outer
conjugate} if there is a $*$-isomorphism $\Phi:A\to B$ such that
$\beta$ and $\Phi\circ\alpha\circ\Phi^{-1}$ are exterior equivalent.
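For example, taking $\beta$ to be the trivial action shows that $\alpha$
is exterior equivalent to $\id$ if and only if $\alpha$ is unitary: in
that case the cocycle condition reduces to $w_{st}=w_sw_t$, so that $w$
is a strictly continuous homomorphism{} with $\alpha=\Ad w$.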
Although unitary actions are trivial from the point of view of
dynamical systems,
inner actions can be quite interesting. Another class of interesting
actions are those which are locally inner or even unitary.
\begin{definition}
\label{def-2.3}
Let $X$ be a second countable locally compact Hausdorff space and $G$
a second countable locally compact group.
Suppose that $A\in\MyMath{\ewcr}(X)$ and that $\alpha:G\to\Aut(A)$ is an action.
Then $\alpha$ is called \emph{locally unitary} (\emph{locally
inner})
if every point in $X$
has a \nbhd{} $U$ such that $A_U$ is invariant under $\alpha$ and the
restriction $\alpha^U$ of $\alpha$ to $A_U$ is unitary (inner).
\end{definition}
\begin{remark}
If $A$ has Hausdorff spectrum $X$, then the
above definition coincides with the usual notion of a locally unitary
action (cf., \cite[\S1]{pr2} and \cite[\S1]{ros2}).
\end{remark}
Recall from \lemref{lem-co} that if $A\in \MyMath{\ewcr}(X)$ then the group
$\mathcal{Z}\UM(A)$ of $A$, equipped with the strict
topology, is isomorphic to the Polish group $C(X,\mathbb{T})$
equipped with the compact-open topology. Thus we obtain a short exact
sequence
of Polish groups
$$1\arrow{e} C(X,\mathbb{T})\arrow{e} \mathcal{U}M(A)\arrow{e} \Inn(A)\arrow{e} 1.$$
If $\alpha:G\to \Aut(A)$ is inner, then $\alpha$ defines a continuous
homomorphism{} of $G$ into $\Inn(A)$ with its Polish topology
\cite[Corollary~0.2]{rr}.
Thus, we can choose a
Borel map $V:G\to \mathcal{U}M(A)$ such that $V_e=1$ and such that
$\alpha=\Ad V$. (For example, if $c:\Inn(A)\to \mathcal{U}M(A)$ is a Borel
section such that $c(\id)=1$, then $V=c\circ \alpha$ will do
the job.) Then $V$ determines a Borel cocycle $\sigma\in
Z^2\(G,C(X,\mathbb{T})\)$ via the equation $V_sV_t=\sigma(s,t)V_{st}$
for $s,t\in G$. The class $[\sigma]\in H^2\(G, C(X,\mathbb{T})\)$
only depends on $\alpha$ and is the unique obstruction for
$\alpha$ being unitary (see \cite[Corollary 0.12]{rr}). In what
follows we will refer to $V$ as a \emph{$\sigma$-homomorphism} of
$G$ into
$\mathcal{U}M(A)$ which implements~$\alpha$.
\begin{remark}
\label{rem-fix}
Suppose that $\alpha:G\to\Aut\(C_0(X,\K)\)$ is an inner automorphism group
which is implemented by a $\sigma$-homomorphism{} as above. Then the Mackey
obstruction for the induced action $\alpha^x$ on the fibre over $x$ is
the class of $\sigma(x)$, where $\sigma(x)$ is the cocycle in
$Z^2(G,\T)$ obtained by evaluation at $x$:
$\sigma(x)(s,t):=\sigma(s,t)(x)$.
\end{remark}
\section{Representation Groups}\label{sec-3}
In this section we want to discuss the notion of
representation groups of second countable locally compact
groups as introduced by Moore in \cite{moore2}.
Recall that if
$1\to Z\to H\to G\to 1$
is a second countable locally compact
central extension of $G$ by the abelian group
$Z$, then the transgression map
$\operatorname{tg}:\widehat{Z}= H^1(Z,\mathbb{T})\to H^2(G,\mathbb{T})$ is defined as follows:
Let $c:G\to H$ be a Borel section
for the quotient map $H\to G$ such that $c(eZ)=e$. Then
$\sigma(s,t):=c(s)c(t)c(st)^{-1}$
is a cocycle in $Z^2(G,Z)$ (Moore-cohomology with values
in the trivial $G$-module $Z$).
If $\chi\in \widehat{Z}$, then
$\sigma_{\chi}(s,t):=\chi(\sigma(s,t))$
defines a cocycle
$\sigma_{\chi}\in Z^2(G,\mathbb{T})$ and then
$\operatorname{tg}(\chi):=[\sigma_{\chi}]$
is the cohomology class of $\sigma_{\chi}$ in $H^2(G,\mathbb{T})$.
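As a simple illustration, consider the real Heisenberg group
$H=\mathbb{R}^3$ with multiplication
$(s_1,t_1,u)(s_2,t_2,v)=(s_1+s_2,t_1+t_2,u+v+s_1t_2)$, viewed as a
central extension $1\to\mathbb{R}\to H\to\mathbb{R}^2\to 1$. The continuous section
$c(s,t)=(s,t,0)$ yields $\sigma\((s_1,t_1),(s_2,t_2)\)=s_1t_2$, so that
for $\chi_\lambda(u)=e^{i\lambda u}$ we get
$\operatorname{tg}(\chi_\lambda)=[\sigma_{\chi_\lambda}]$ with
$\sigma_{\chi_\lambda}\((s_1,t_1),(s_2,t_2)\)=e^{i\lambda s_1t_2}$
(compare Example~\ref{ex-rn} below).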
\begin{definition}[Moore]
\label{def-3.1}
Let $G$ be a second countable locally compact group and let
$H$ be a central extension of $G$ by some abelian group $Z$
such that the transgression map
$\operatorname{tg}:\widehat{Z}\to H^2(G,\mathbb{T})$ is bijective.
Then $H$ (or rather the extension $1\to Z\to H\to G\to 1$)
is called a {\it
representation group} for
$G$. A group $G$ is called {\em smooth} if it has a representation group
$H$.
\end{definition}
\begin{remark}\label{rem-rep}
The question of
which groups have a representation group was studied extensively
by Moore in \cite{moore2}. If $H$ is a representation group for $G$,
then the transgression map
$\operatorname{tg}: \widehat{Z}\to
H^2(G,\mathbb{T})$ is actually a homeomorphism by
\cite[Theorem 2.2]{moore2} and
\cite[Theorem 6]{moore4}, so that in this case $H^2(G,\mathbb{T})$ is always
locally compact and Hausdorff. Conversely, if $G$ is almost connected
(i.e., $G/G_0$ is compact) and $H^2(G,\mathbb{T})$ is Hausdorff, then $G$ is smooth
by \cite[Proposition 2.7]{moore2}. Since
\cite[Theorem A and following remark~(2)]{moore1} implies that
$H^2(G,\mathbb{T})$ is isomorphic to $\mathbb{R}^k$ for some $k\geq 0$ whenever $G$
is a connected and simply connected Lie group, it follows that such groups
are
smooth.
If $G$ is a connected semisimple Lie group, then the universal covering group
$H$ of $G$ is a representation group for $G$ by \cite[Proposition 3.4]{moore2}.
Finally, every compact group is smooth
(see the discussion preceding
\cite[Proposition 3.1]{moore2}), and every
discrete group is smooth by \cite[Theorem~3.1]{moore2} (see also
\cite[Corollary 1.3]{para2}).
\end{remark}
We will see below that, in addition to the above, every second countable
compactly generated abelian group is smooth (\corref{corcompgen}).
To prove our results, we need the
following characterization of smooth groups.
\begin{lem}
\label{cocycle}
Let $G$ be a second countable locally compact group.
Then $G$ is smooth if and only if there exists a
second countable locally compact Hausdorff topology on $H^2(G,\mathbb{T})$
and a Borel cocycle $\zeta\in Z^2\(G, \specnp{H^2(G,\mathbb{T})}\)$ such
that for each $[\omega]\in H^2(G,\mathbb{T})$ evaluation of $\zeta$ at
$[\omega]$ gives an element in $Z^2(G,\mathbb{T})$ representing $[\omega]$.
\end{lem}
\begin{proof}
Assume $1\to Z\to H\to G\to 1$ is a
representation group for
$G$.
Since $\operatorname{tg}$ is an isomorphism, we can define an isomorphism
$\operatorname{tg}_*:Z\to \specnp{H^2(G,\mathbb{T})}$ by $ \operatorname{tg}_*(z)([\omega])=\operatorname{tg}^{-1}([\omega])(z)$.
Let $c:G\to H$ be a Borel cross section with
$c(e_G)=e_H$ ($e_G$ denoting the unit in $G$).
For $s,t\in G$, define $\zeta(s,t):=\operatorname{tg}_*(c(s)c(t)c(st)^{-1})$. Then
$\zeta\in Z^2(G, \specnp{H^2(G,\mathbb{T})})$, and if
$[\omega]\in H^2(G,\mathbb{T})$, then
we obtain $\zeta(s,t)([\omega])=\operatorname{tg}^{-1}([\omega])(c(s)c(t)c(st)^{-1})$.
Thus, by the definition of $\operatorname{tg}$ (see the discussion above)
$(s,t)\mapsto \zeta(s,t)([\omega])$ is a cocycle representing $[\omega]$.
This is what we wanted.
For the converse, assume that there is a second countable
locally compact Hausdorff topology on $H^2(G,\mathbb{T})$ and
let $\zeta\in Z^2(G,\specnp{H^2(G,\mathbb{T})})$ be such that evaluation at
each $[\omega]\in H^2(G,\mathbb{T})$ gives a representative for $[\omega]$.
Let $H$ denote the extension $G\times_\zeta \specnp{H^2(G,\mathbb{T})}$ of $G$
by $\specnp{H^2(G,\mathbb{T})}$ given by
$\zeta$. This is the set $G\times \specnp{H^2(G,\mathbb{T})}$
with multiplication defined by
$$(s,\chi)(t,\mu)=(st, \zeta(s,t)\chi\mu),$$
and equipped with the unique locally compact group topology inducing
the product Borel structure on $G\times \specnp{H^2(G,\mathbb{T})}$
(see \cite[Theorem 7.1]{m5}).
Then $$1\to \specnp{H^2(G,\T)}\to H\to G\to 1$$ is a central
extension of $G$ by $\specnp{H^2(G,\T)}$.
Let $c:G\to H$ denote the canonical section $c(s)= (s, 1)$.
Then the transgression map is a map from $H^2(G,\mathbb{T})\cong
\specnp{(\specnp{H^2(G,\T)})}$
to itself, and for each $[\omega]\in H^2(G,\T)$ we obtain a representative
for $\operatorname{tg}([\omega])$ by taking the cocycle
\[\nu(s,t)=(c(s)c(t)c(st)^{-1})([\omega])=\zeta(s,t)([\omega]).\]
Therefore the transgression map $\operatorname{tg}:H^2(G,\mathbb{T})\to H^2(G,\mathbb{T})$
is the identity, and $H$ is a representation group for $G$ as required.
\end{proof}
\begin{remark}\label{rem-isom}
If $1\to Z\to H\to G\to 1$ is a representation group
for $G$ and if $\zeta\in Z^2(G,\specnp{H^2(G,\mathbb{T})})$ is
constructed as above such that the evaluation map is
the identity on $H^2(G,\mathbb{T})$, then $G\times_{\zeta}\specnp{H^2(G,\mathbb{T})}$ is
actually isomorphic to $H$.
An isomorphism is given by $(s,[\sigma])\mapsto c(s)(\operatorname{tg}_*)^{-1}([\sigma])$,
where $c:G\to H$ denotes the Borel section defining $\zeta$ as above.
\end{remark}
We now show that, under some weak additional
assumptions, the direct product of
smooth groups is again smooth.
\begin{prop}\label{represent}
Suppose that $G_1$ and $G_2$ are smooth and let
$B(G_1,G_2)$ denote the group of continuous bicharacters $\chi:G_1\times G_2\to
\mathbb{T}$. If $B(G_1,G_2)$ is locally compact with respect to the compact open
topology, then $G_1\times G_2$ is smooth. In particular, if
the abelianizations $(G_1)_{\text{\normalfont ab}}$, $(G_2)_{\text{\normalfont ab}}$ of $G_1$
and $G_2$ are compactly generated, then $G_1\times G_2$ is smooth.
\end{prop}
\begin{proof}
In fact we are going to construct a representation group for
$G_1\times G_2$ as follows: Choose central extensions
$$1\to \specnp{H^2(G_i,\mathbb{T})}\to H_i\stackrel{q_i}{\to}G_i\to 1$$
for $i=1,2$ such that the respective transgression maps are both equal
to the identity
(see Lemma~\ref{cocycle} and Remark~\ref{rem-isom}).
By assumption $B(G_1,G_2)$ is locally compact, so the dual group
$\specnp{B(G_1,G_2)}$ is also a locally compact group.
For each pair $(s_1,s_2)\in G_1\times G_2$ define $\eta(s_1,s_2)\in
\specnp{B(G_1,G_2)}$ by $\eta(s_1,s_2)(\chi)=\chi(s_1,s_2)$,
$\chi\in B(G_1,G_2)$.
Let
$$H=H_1\times H_2\times \specnp{B(G_1,G_2)}$$
with multiplication defined by
\begin{equation}
\label{eq-extra}
(h_1, h_2, \mu)(l_1, l_2,\nu)=
\big(h_1l_1, h_2l_2,\mu\nu\eta(q_1(h_1),q_2(l_2))\big).
\end{equation}
Then clearly
$$Z= \specnp{H^2(G_1,\mathbb{T})}\times
\specnp{H^2(G_2,\mathbb{T})}\times\specnp{B(G_1,G_2)}$$
is a central subgroup of $H$, and we obtain a short exact sequence
$$1\to Z\to H\to G_1\times G_2\to 1.$$
We claim that $H$ is a representation group for $G_1\times G_2$.
For this recall that if
$\omega_1\in Z^2(G_1,\mathbb{T})$, $\omega_2\in Z^2(G_2,\mathbb{T})$ and $\chi\in B(G_1,G_2)$,
then $\omega_1\otimes\omega_2\otimes\chi$
defined by
\[
\omega_1\otimes\omega_2\otimes\chi((s_1,s_2),(t_1,t_2))=
\omega_1(s_1,t_1)\omega_2(s_2,t_2)\chi(s_1,t_2)
\]
is a cocycle in $Z^2(G_1\times G_2,\mathbb{T})$.
By \cite[Theorem~9.6]{m4} and
\cite[Propositions~1.4 and 1.6]{klepp} we
know that the map
$$([\omega_1],[\omega_2],\chi)\mapsto [\omega_1\otimes\omega_2\otimes\chi]$$
is an (algebraic) isomorphism of $H^2(G_1,\mathbb{T})\times H^2(G_2,\mathbb{T})\times
B(G_1,G_2)$ onto $H^2(G_1\times G_2,\mathbb{T})$, from which it follows that
$Z$ is isomorphic to $\specnp{H^2(G_1\times G_2,\mathbb{T})}$.
Now choose Borel sections $c_i:G_i\to H_i$ and define
$\zeta_i\in Z^2(G_i, \specnp{H^2(G_i,\mathbb{T})})$ by
$\zeta_i(s,t)=c_i(s)c_i(t)c_i(st)^{-1}$, for $s,t\in G_i$.
Since the transgression maps are both equal to the identity map, we see that
evaluation of $\zeta_i$ at $[\omega_i]\in H^2(G_i,\mathbb{T})$ is a cocycle
representing $[\omega_i]$. Defining $c:G_1\times G_2\to H_1\times H_2\times
\specnp{B(G_1,G_2)}$ by $c(s_1,s_2)=(c_1(s_1), c_2(s_2), 1)$, we easily compute
$$
\zeta((s_1,s_2),(t_1,t_2)):=c(s_1,s_2)c(t_1,t_2)c(s_1t_1, s_2t_2)^{-1}=
(\zeta_1(s_1,t_1),\zeta_2(s_2,t_2),
\eta(s_1,t_2)).$$
Thus, evaluating $\zeta$ at
$[\omega_1\otimes\omega_2\otimes\chi]\in H^2(G_1\times G_2,\mathbb{T})$ gives a
cocycle representing this class. This proves the claim.
The final assertion now follows from the fact that
$B(G_1, G_2)=B((G_1)_{\text{\normalfont ab}},(G_2)_{\text{\normalfont ab}})$ and
\cite[Theorem 2.1]{klepp}.
\end{proof}
\begin{cor}\label{corcompgen}
Every second countable compactly generated abelian group is smooth.
\end{cor}
\begin{proof}
By the structure theorem for compactly generated abelian groups
(\cite[Theorem~9.8]{hr}), we know
that $G\cong \mathbb{R}^n\times K\times \mathbb{Z}^m$ for some $n,m\geq 0$
and some compact group $K$. By the results mentioned in Remark~\ref{rem-rep},
it follows that $\mathbb{R}^n,K$ and $\mathbb{Z}^m$ are smooth.
Now apply the proposition.
\end{proof}
The example given at the bottom of \cite[p.\ 85]{moore2} shows that
there are nonsmooth abelian locally compact groups.
The group constructed there is a direct product of $\mathbb{R}$
with an infinite direct sum of copies of $\mathbb{Z}$. Thus it also provides
an example of two smooth groups whose direct product is not
smooth; in particular, the assumption on $B(G_1,G_2)$ in
Proposition~\ref{represent} is certainly not superfluous.
It is certainly interesting to see specific examples of
representation groups. Some explicit constructions can be found in
\cite{moore2} and
\cite[Corollary 1.3 and Examples 1.4]{para2}. For instance,
if $G=\mathbb{R}^2$ (resp.\ $G=\mathbb{Z}^2$), then the three
dimensional Heisenberg group (resp.\ discrete Heisenberg group)
is a representation group for $G$. In the following example,
we use Proposition~\ref{represent} to construct representation groups
for $\mathbb{R}^n$.
\begin{example}\label{ex-rn}
Let $G=\mathbb{R}^n$ and, as a set, let $H_n=\mathbb{R}^{{n(n+1)}/{2}}$.
We write an element of $H_n$ as
$\mathfrak s= (s_i, s_{j,k})$, $1\leq i\leq n$, ${1\leq j<k\leq n}$.
Define
multiplication on $H_n$ by
$\mathfrak s\mathfrak t=((st)_i, (st)_{j,k})$ with
$$(st)_i:=s_i+t_i\quad\text{and}\quad
(st)_{j,k}:=s_{j,k}+t_{j,k}+s_jt_k.$$
Then $H_n$ is clearly a central extension of $\mathbb{R}^n$ by
$\mathbb{R}^{{(n-1)n}/{2}}$.
We claim that $H_n$ is a representation group for $\mathbb{R}^n$.
Since $H^2(\mathbb{R},\mathbb{T})$ is trivial, this is certainly true for $n=1$.
For the step $n\to n+1$ assume that $H_n$ is a representation group for
$\mathbb{R}^n$. For $\vec s=(s_1,\ldots,s_n)\in \mathbb{R}^n$ define $\chi_{{\vec{s}}}
\in B(\mathbb{R}^n,\mathbb{R})$ by
$$\chi_{\vec{s}}((t_1,\ldots, t_n), r):=e^{ir(s_1t_1+\cdots+s_nt_n)}.$$
Since $\chi_{\vec{s}}(\vec t,r)=\chi_{r{\vec{s}}}(\vec t,1)$,
$\vec s\mapsto \chi_{\vec s}$
is an isomorphism of $\mathbb{R}^n$ onto $B(\mathbb{R}^n,\mathbb{R})$, and we
see that $\specnp{B(\mathbb{R}^n,\mathbb{R})}$ is isomorphic to $\mathbb{R}^n$ via
the map $\vec t\mapsto \eta_{\vec t}$
defined by $\eta_{\vec t}(\chi_{\vec{s}})=\chi_{\vec{s}}(\vec t,1)$, for $\vec t\in \mathbb{R}^n$.
Moreover, if $({\vec{s}},r)\in \mathbb{R}^n\times \mathbb{R}$, and
$\eta({\vec{s}},r)\in \specnp{B(\mathbb{R}^n,\mathbb{R})}$ is defined by
$\eta({\vec{s}},r)(\chi_{\vec t})=\chi_{\vec t}({\vec{s}},r)$,
then we get the identity $\eta({\vec{s}},r)=\eta_{r{\vec{s}}}$.
It follows now from Proposition~\ref{represent} and \eqref{eq-extra}
that
$H'=H_n\times \mathbb{R}\times \mathbb{R}^n$ with multiplication defined by
$$\big((s_i, s_{j,k}), r, \vec t\big)\big((s_i', s_{j,k}'), r', \vec t'\big)=
\big((s_i, s_{j,k})(s_i', s_{j,k}'), r+r',\vec t+\vec t'+r'{\vec{s}}\big)$$
is a representation group for $\mathbb{R}^{n+1}$. Putting $s_{n+1}=r$
and $s_{j,n+1}=t_j$, $1\leq j\leq n$, we see that this formula coincides with
the multiplication formula for $H_{n+1}$.
\end{example}
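For instance, for $n=2$ the group $H_2$ is $\mathbb{R}^3$ with multiplication
\[
(s_1,s_2,s_{1,2})(t_1,t_2,t_{1,2})
=(s_1+t_1,\,s_2+t_2,\,s_{1,2}+t_{1,2}+s_1t_2),
\]
which is exactly the three dimensional Heisenberg group mentioned above.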
Using similar arguments, it is not hard to show that a representation group for
$\mathbb{Z}^n$ is given by the integer subgroup of $H_n$ constructed above,
i.e., assuming that all $s_i$ and $s_{j,k}$ are integers.
Notice that the group $H_n$ constructed above is isomorphic to the
connected and simply connected two-step nilpotent Lie group
corresponding to the universal two-step nilpotent
Lie algebra generated by $X_1, \ldots, X_n$ and the commutators $[X_j,X_k]$,
$1\leq j<k\leq n$. Note that any connected two-step
nilpotent Lie group is a quotient of one of these
groups (e.g., see \cite[p.~409]{brown}).
We conclude this section with a discussion of which conditions will
imply that all representation groups of a given group $G$ are
isomorphic. Schur observed that even finite groups can have
nonisomorphic representation groups \cite{schur}, and Moore considers
the case for $G$ compact or discrete in \cite[\S 3]{moore2}. Here we
give a sufficient condition for the uniqueness of the representation
group (up to isomorphism) valid for all smooth $G$.
\begin{prop}\label{prop-unique}
Let $G$ be smooth and let $Z:=\specnp{H^2(G,\mathbb{T})}$.
Then the representation groups
of $G$ are unique (up to isomorphism of groups) if
every abelian extension $1\to Z\to H\to G_{\subab}\to 1$
splits.
In particular,
if
$G_{\subab}$ is isomorphic to
$\mathbb{R}^n\times
\mathbb{Z}^m$ or if $Z$ is isomorphic to $\mathbb{R}^n\times \mathbb{T}^m$, for some $n,m\geq 0$,
then all representation groups of $G$ are isomorphic.
\end{prop}
\begin{proof}
Let $1\to Z\to H_1\to G\to 1$ and $1\to Z\to H_2\to G\to 1$
be two representation groups of $G$.
By Lemma~\ref{cocycle} and Remark~\ref{rem-isom} we
may assume that (up to isomorphism) both extensions are given by
cocycles $\zeta_1,\zeta_2\in Z^2(G,Z)$ such that the
transgression maps $H^2(G,\mathbb{T})=\widehat{Z}\to H^2(G,\mathbb{T})$ induced by
$\zeta_1$ and $\zeta_2$ are the identity maps.
Now let $\sigma=\zeta_1\cdot \zeta_2^{-1}\in Z^2(G,Z)$ and let
$1\to Z\to L\to G\to 1$ denote the extension defined by $\sigma$.
We want to show that this extension splits (then $\sigma\in B^2(G,Z)$
and $[\zeta_1]=[\zeta_2]\in H^2(G,Z)$).
Since $\chi\circ \sigma=(\chi\circ \zeta_1)\cdot (\chi\circ \zeta_2^{-1})$
and $[\chi\circ \zeta_1]=[\chi\circ \zeta_2]\in H^2(G,\mathbb{T})$, it follows
that the transgression map $\widehat{Z}\to H^2(G,\mathbb{T})$ induced by
$\sigma$ is trivial. But this implies that
any character of $Z$ can be extended to a character of $L$,
which implies that
$\widehat{L}_{\text{\normalfont ab}}$ is an extension
$1\to \widehat{G}_{\subab}\to \widehat{L}_{\text{\normalfont ab}}\to \widehat{Z}\to 1$.
By assumption (using duality), this extension splits.
Thus we find an injective homomorphism
$\chi\mapsto \mu_{\chi}$ from $\widehat{Z}\to\widehat{L}_{\text{\normalfont ab}}$
such that
each $\mu_{\chi}$ is an extension of $\chi$ to $L$.
Let $\tilde{G}=\{s\in L: \mu_{\chi}(s)=1\;\text{for all}\;\chi\in
\widehat{Z}\}$.
Then $\tilde{G}\cap Z=\{e\}$ and $\tilde{G}\cdot Z =L$.
To see the latter, let $l\in L$; since $\chi\mapsto\mu_{\chi}(l)$ is a
character of $\widehat{Z}$, Pontryagin duality gives a $z\in Z$ such that
$\mu_{\chi}(l)=\chi(z)$ for all $\chi\in \widehat{Z}$. Then
$lz^{-1}\in \tilde{G}$. It follows that the quotient map $q:L\to G$
restricts to an isomorphism $\tilde{G}\to G$. This proves all but the
final statement.
By duality, it suffices to prove the final assertion only for $G_{\subab}$
isomorphic to $\R^n\times\Z^m$. By induction, it suffices to consider
only the cases $\Z$ and $\R$. Since the first is straightforward,
we will show only that if $G$ is abelian, then any continuous open
surjection $q:G\to\R$ has a continuous section. By
\cite[Theorem~24.30]{hr}, we may assume that $G=\R^m\times H$, where
$H$ has a compact, open subgroup. It follows that $q\restr{\R^m}$ is
surjective. Thus there is an $x\in\R^m$ such that $q(x)=1$. Then we
can define a continuous section $q^*:\R\to\R^m\subseteq G$ by
$q^*(\lambda)=\lambda\cdot x$.
\end{proof}
\section{Rosenberg's Theorem}\label{sec-appendix}
One of the important ingredients for the
proof of our results was Rosenberg's theorem
(see Theorem~\ref{thm-ros}) which implies that if
(a)~$G_{\subab}$ is compactly generated and if (b)~$H^2(G,\mathbb{T})$ is
Hausdorff, then any pointwise unitary action
on a separable continuous-trace $\cs$-algebra
$A$ is automatically locally unitary.
Our interest in smooth groups is partially explained by the fact that
all smooth groups with
$G_{\subab}$ compactly generated satisfy these
assumptions.
We give examples below which
show that neither of conditions (a)~and (b) can be weakened in
general. On the other hand, if we assume that $A$ has continuous trace
with locally connected spectrum, then the
class of
groups with the property that pointwise unitary actions on $A$
are automatically locally unitary is significantly larger than the
class of groups which satisfy the conditions of Rosenberg's
theorem (\thmref{append}).
\begin{example}
Suppose that $G$ is a second countable locally compact abelian
group acting freely and properly on
a separable locally compact space $X$ such that $X$ is {\em not\/} a
locally trivial principal $G$-bundle.
Although Palais's Slice Theorem \cite[Theorem~4.1]{palais} implies that
$G$ cannot be a Lie group, we can, for example,
take $G=\prod_{n=1}^{\infty}\{1,-1\}$,
$X=\prod_{n=1}^{\infty}\mathbb{T}$, and let $G$ act by translation on $X$.
Let $\alpha$ denote the corresponding
action of $G$ on $C_0(X)$ and let
$A=C_0(X)\rtimes_{\alpha} G$.
Then $A$ has continuous trace by \cite[Theorem 17]{green2}, and
the dual action $\widehat{\alpha} $ of $\widehat{G}$
is pointwise unitary \cite[Proof of Theorem~3.1]{doir}.
If $\widehat{\alpha}$ were locally unitary, then
$\spec{(C_0(X)\rtimes_{\alpha}G)\rtimes_{\widehat{\alpha}}
\widehat{G}}\cong X$
would be a locally trivial principal $G$-bundle with respect to the
double dual action
$\widehat{\widehat{\alpha}}$ \cite{pr2}.
But by the Takesaki--Takai duality theorem,
this implies that $X$ is a locally trivial
principal $G$-bundle with respect to the
original action, contradicting our assumption.
\end{example}
Note that since $\widehat G$ is discrete, it is smooth and therefore
$H^2(\widehat G,\T)$ is Hausdorff. Of course, $\widehat G$ is not
compactly generated as
required in Rosenberg's theorem.
We are now going to construct in Example~\ref{ex-3} a
pointwise unitary action
of a compactly generated group $G$ on a continuous-trace algebra $A$
which is not locally unitary. Of course, $H^2(G,\mathbb{T})$ will fail to be
Hausdorff.
Before we start, recall that if $N$ is a closed central
subgroup of a second countable locally compact group $G$, then the
inflation-restriction sequence
\[
H^1(N,\mathbb{T})^G \arrow{e,t}{\operatorname{tg}} H^2(G/N,\mathbb{T}) \arrow{e,t}{\inf} H^2(G,\mathbb{T})
\]
is exact at $H^2(G/N,\mathbb{T})$, where $H^1(N,\mathbb{T})^G$ denotes the group
of $G$-invariant characters of $N$ and $\inf$ denotes
the inflation map
\cite[p.\ 53]{moore1}.
\begin{example}[cf., {\cite[p.\ 85]{moore2}}]
Our group $G$ will be a central extension of $\R^2$ by $\T^2$.
For each $\lambda \in \mathbb{R}$ let $\omega_{\lambda}$ denote
the two-cocycle
\[
\omega_{\lambda}\((s_1,t_1),(s_2,t_2)\)=e^{i\lambda s_1t_2}
\]
on $\mathbb{R}^2$. Since the real Heisenberg group is a representation group
for $\R^2$ (Example~\ref{ex-rn}), $\lambda
\mapsto [\omega_{\lambda}]$ is an isomorphism between
$\mathbb{R}$ and $H^2(\mathbb{R}^2,\mathbb{T})$. Let
$\theta$ be any irrational number and let $\omega_1$ and
$\omega_{\theta}$ denote the cocycles in $Z^2(\mathbb{R}^2,\mathbb{T})$ corresponding to $1$ and
$\theta$, respectively.
Let $G_1=\mathbb{R}^2\times_{\omega_1}\mathbb{T}$ be the central extension of $\mathbb{R}^2$ by
$\mathbb{T}$ corresponding to $\omega_1$, and let
$G=(\mathbb{R}^2\times_{\omega_1}\mathbb{T})\times_{\omega_{\theta}}\mathbb{T}$ denote the
central extension of $G_1$ by $\mathbb{T}$
corresponding to the inflation of $\omega_{\theta}$ to $G_1$.
Then $G$ is a central extension of $\mathbb{R}^2$ by $\mathbb{T}^2$, and is
therefore a
connected two-step nilpotent group of dimension four.
Since the cocycles involved are continuous,
$G$ is homeomorphic to
the direct product $\mathbb{R}^2\times\mathbb{T}^2$ with multiplication
given by
$$(s_1,t_1, z_1, w_1)(s_2,t_2,z_2,w_2)=
(s_1+s_2, t_1+t_2, e^{is_1t_2}z_1z_2,e^{i\theta s_1t_2}w_1w_2).$$
There is a natural continuous section $c$ from
$\R^2\cong G/\T^2$ to
$G$ given by
$c(s_1,s_2):=(s_1,s_2,1,1)$.
Using the formula for the transgression map
\[
\operatorname{tg}:\mathbb{Z}^2\cong H^1(\mathbb{T}^2,\mathbb{T})\to H^2(\mathbb{R}^2,\mathbb{T}),
\]
a straightforward computation shows that
$\operatorname{tg}(l,m)=[\omega_{l+\theta m}]$.
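For the reader's convenience: with the section $c$ above, the
$\mathbb{T}^2$-valued cocycle of the extension is
\[
c(s_1,t_1)c(s_2,t_2)c(s_1+s_2,t_1+t_2)^{-1}
=\(0,0,e^{is_1t_2},e^{i\theta s_1t_2}\),
\]
so applying the character $(l,m):(z,w)\mapsto z^lw^m$ of $\mathbb{T}^2$ gives
\[
\operatorname{tg}(l,m)\((s_1,t_1),(s_2,t_2)\)
=e^{i(l+\theta m)s_1t_2}
=\omega_{l+\theta m}\((s_1,t_1),(s_2,t_2)\).
\]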
Since $\mathbb{Z}+\theta \mathbb{Z}$ is dense in $\mathbb{R}$ and $\inf$
is continuous, the identity is not closed in $H^2(G,\T)$; in other
words, $H^2(G,\T)$ is not Hausdorff.
\end{example}
\begin{example}
\label{ex-3}
We shall construct a pointwise unitary action of
the group
$G$ from the previous example
which is not locally unitary.
Let $X=\{\frac{1}{n}:n\in\mathbb{N}\}\cup \{0\}$. We define
$\alpha:G\to\Aut\(C(X,\K)\)$ as follows.
Since $\mathbb{Z}+\theta\mathbb{Z}$ is dense in
$\mathbb{R}$, we find a sequence $(\lambda_n)_{n\in\mathbb{N}}\subseteq
\mathbb{Z}+\theta\mathbb{Z}$
such that $\lambda_n\to 0$ in $\mathbb{R}$ and such that the $\lambda_n$,
$n\in\mathbb{N}$, are pairwise distinct and nonzero.
Putting $\lambda_0=0$ and $\lambda_{1/n}:=\lambda_n$ we obtain
a continuous map $x\mapsto\lambda_x$ of $X$ to $ \mathbb{Z}+\theta\mathbb{Z}\subseteq \mathbb{R}$.
For each $x\in X$ let $\dot V_x:\mathbb{R}^2\to U\(L^2(\mathbb{R}^2)\)$ denote
the regular $\omega_{\lambda_x}$-representation of $\mathbb{R}^2$, which is given by
the formula
\[
(\dot V_{x}(s,t)\xi)(s',t')=e^{i\lambda_x s(t'-t)}\xi(s'-s,t'-t),
\]
and let $V_x:G\to U\(L^2(\mathbb{R}^2)\)$ denote the inflation of
$\dot V_x$ to $G$.
Since $x\mapsto\lambda_x$ is continuous, it follows that we obtain
a strongly continuous action $\alpha:G\to\Aut\(C(X,\K )\)$ with
$\K=\K\(L^2(\mathbb{R}^2)\)$ given by defining
\[
\alpha_g(a)(x)=V_x(g)a(x)V_x(g)^*.
\]
Since $V_x$ is an $\inf(\omega_{\lambda_x})$-representation for each $x\in X$
and $\alpha$ is implemented pointwise by the representations $V_x$,
and since each $[\omega_{\lambda_x}]$ lies in the
range of the transgression map, it follows that $\alpha$ is pointwise unitary.
We claim that $\alpha$ is not locally unitary.
Since $X$ has only one accumulation point,
$\alpha$ is locally unitary if and only if it is unitary.
So assume that there were a strictly continuous
homomorphism $U:G\to U\(C(X,\K)\)$ which implements $\alpha$.
Thus, for each $x\in X$ and $g\in G$ we would obtain
$$U_x(g)a(x)U_x( g)^*= V_x(g)a(x)V_x(g)^*,$$
from which it follows that
$$V_x^*(g)U_x(g)=\gamma_x(g)1$$
for some $\gamma_x(g)\in \mathbb{T}$.
Since,
by construction, the maps $(x,g)\to V_x(g)$ and $ (x,g)\to U_x(g)$ are
strongly continuous,
$(x,g)\to \gamma_x(g)$ defines a continuous map $\gamma:X\times G\to\mathbb{T}$.
Moreover, since $V_x|_{\mathbb{T}^2}\equiv 1$, it follows
that $\chi_x=\gamma_x|_{\mathbb{T}^2}$ is a character of $\mathbb{T}^2$ for all
$x\in X$.
By continuity we have $\chi_{\frac{1}{n}}\to \chi_{0}$ in
${\widehat{\T}}^2$.
Moreover, since $V_{0}$ is a unitary representation,
it follows that $V_{0}$ and $U_0$ are both unitary
representations which implement $\alpha$ at the point $0$.
But this implies that $\gamma_{0}$ is a character of $G$.
Thus, multiplying each $U_x$ by $\overline{\gamma}_{0}$, we may assume that
$U_0=V_0$. In particular, this implies
that $\chi_{0}$ is the trivial character of $\mathbb{T}^2$.
We finally show that $\chi_{\frac{1}{n}}$ is nontrivial for every
$n\in \mathbb{N}$. Since ${\widehat{\T}}^2\cong\Z^2$ is discrete,
this will contradict
the
fact that $\chi_{\frac{1}{n}}\to\chi_{0}$.
Assume that $\chi_{\frac{1}{n}}$ is trivial for some $n\in \mathbb{N}$.
Then $U_{\frac{1}{n}}|_{\mathbb{T}^2}\equiv 1$, from which it follows that
$U_{\frac{1}{n}}$ is actually inflated from some unitary representation
$\dot U_{\frac{1}{n}}:\mathbb{R}^2 \to U\(L^2(\mathbb{R}^2)\)$.
Since, by construction, $V_{\frac{1}{n}}$ is inflated from the regular
$\omega_{\lambda_n}$-representation, say $\dot V_{\frac{1}{n}}$
of $\mathbb{R}^2$, it follows that $\dot U_{\frac{1}{n}}$ and $\dot V_{\frac{1}{n}}$
implement the same action of $\mathbb{R}^2$ on $\K\(L^2(\mathbb{R}^2)\)$,
which contradicts the fact that $[\omega_{\lambda_n}]$,
the Mackey-obstruction for the action implemented by
$V_{\frac{1}{n}}$, is non-trivial
in $H^2(\mathbb{R}^2,\mathbb{T})$.
\end{example}
Note that the space $X$ in the above example is totally disconnected;
in particular, the point $0$ has no connected \nbhd s in $X$.
The following theorem shows that this lack of connectedness plays
a crucial r\^ole in our counterexample.
\begin{thm}
\label{append}
Suppose that $N$ is a closed
normal subgroup of a second countable locally compact group $G$ such that
\begin{enumerate}
\item
$G/N$ is compactly generated,
\item $H^2(G/N,\mathbb{T})$ is Hausdorff, and
\item $H^2(N,\mathbb{T})$ is Hausdorff and $N_{\subab}:=
N/\overline{[N,N]}$ is compact.
\end{enumerate}
Suppose further that $A$ is a separable continuous-trace $\cs$-algebra
such that $\hat A$ is locally connected.
Then any pointwise unitary action of $G$ on $A$ is
automatically locally unitary.
\end{thm}
\begin{proof}
Let $\alpha$ be a pointwise unitary action of $G$ on $A$.
Since the properties of being
unitary and locally unitary are preserved under Morita equivalence of
systems (\cite[Proposition~3]{ech5}),
we can replace $(A,\,G,\,\alpha)$ with $(A\otimes\K,\,G,\,\alpha\otimes\id)$
and assume
that $A$ is stable.
Clearly, $(A,\,N,\,\alpha\restr N)$ is pointwise unitary, so by
Rosenberg's theorem, it is locally unitary. Since $G$ must act trivially on
$\hat A$, we can replace $A$ by an ideal and assume that $A=C_0(X,\K)$
and that $\alpha\restr N=\Ad(u)$ for a strictly continuous
homomorphism $u:N\to\mathcal{U}M\(C_0(X,\K)\)$.
Localizing further if necessary,
we claim that
$u$ is a Green twisting map; that is,
$\alpha_s(u_n)=u_{sns^{-1}}$ for all $s\in G$
and $n\in N$.
Since $M\(C_0(X,\K)\)$ can be
identified with the bounded strictly continuous functions from $X$ to
$B(H)$ \cite[Corollary~3.4]{apt}, and since the strict
topology on $U(\mathcal{H})$ coincides with the strong topology, it is not
hard to see that we may view $u$ as a strongly continuous function
from $X\times N$ to $U(\mathcal{H})$ such that for all $n\in N$ and $x\in X$,
we have $\alpha_n(a)(x)=u(x,n)a(x)u(x,n)^*$.
In order to show that $u$ defines a Green twisting map for
$\alpha$, we need to show that
$\alpha_s\(u(\cdot,n)\)=u(\cdot,sns^{-1})$ for all $s\in G$ and $n\in
N$.
However, by assumption, for each $x\in X$ there is a unitary
representation $V_x:G\to B(\mathcal{H})$ such that $\alpha_s(a)(x)=V_x(s)a(x)
V_x(s)^*$ for all $a\in A$ and $s\in G$. Since both $V_x$ and
$u(x,\cdot)$ implement the same automorphism of $\K$, there is a
character $\gamma_x$ of $N_{\subab}$ such that $u(x,n)=\gamma_x(n)V_x(n)$ for
all $n\in N$. Now if we abuse notation slightly and write
$\alpha_s(u)(x,n)$ for $\alpha_s\(u(\cdot,n)\)(x)$, then
\begin{align*}
\alpha_s(u)(x,n) & = V_x(s)u(x,n)V_x(s)^* =
\gamma_x(n)V_x(s)V_x(n)V_x(s^{-1}) \\
&=\gamma_x(n) V_x(sns^{-1}) = \gamma_x(n)\overline{\gamma_x(sns^{-1})}
u(x,sns^{-1}).
\end{align*}
For each $x\in X$, $sN\in G/N$, and $n\in N$,
define $\lambda(x,sN)(n) = \gamma_x(n)\overline{\gamma_x(sns^{-1})} =
\gamma_x(n) \overline{s\cdot\gamma_x(n)}$,
where $s\cdot\gamma:=\gamma(s\cdot
s^{-1})$.
Clearly, $u$ will be a twisting map exactly when we can arrange for
$\lambda$ to be identically one\footnote{This invariant was also
studied in \cite[\S5]{90a}. The definition was slightly different there
--- partly to make equations such as \eqref{eq-lamb'} more
attractive
than we require here.}.
Since $A=C_0(X,\K)$, it
is not hard to see that the map
\[
(x,sN,n)\mapsto u(x,sns^{-1})^*\alpha_s(u)(x,n) = \lambda(x,sN)(n)1_A
\]
is continuous. Consequently, we can view $\lambda$ as a continuous
function from $X\times G/N$ into $\widehat{N}_{\subab}$. Notice that
\begin{equation}
\label{eq-lamb'}
\lambda(x,stN) =
t\cdot\lambda (x,sN)\lambda(x,tN).
\end{equation}
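Indeed, inserting $\gamma_x(tnt^{-1})\,\overline{\gamma_x(tnt^{-1})}=1$
into the definition of $\lambda$ gives
\begin{align*}
\lambda(x,stN)(n)&=\gamma_x(n)\,\overline{\gamma_x\(stn(st)^{-1}\)}\\
&=\gamma_x(tnt^{-1})\,\overline{\gamma_x\(s(tnt^{-1})s^{-1}\)}
\cdot\gamma_x(n)\,\overline{\gamma_x(tnt^{-1})}\\
&=\(t\cdot\lambda(x,sN)\)(n)\;\lambda(x,tN)(n).
\end{align*}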
Fix $x_0\in X$. Since we may pass to still another ideal of $A$, it
will suffice to produce a \nbhd{} $U$ of $x_0$ in
$X$ such that $\lambda(x,sN)(n)=1$ for all $sN\in G/N$, $n\in N$, and
$x\in U$. Of course, replacing $u(x,n)$ by
$\overline{\gamma_{x_0}(n)}u(x,n)$, we may assume that
$\lambda(x_0,sN)(n)=1$ for all $s\in G$ and $n\in N$.
In fact, since $\widehat{N}_{\subab}$ is discrete, given any $t\in G$, there is a
\nbhd{} $U_{t}\times V_t\subseteq X\times G/N$ of $(x_0,tN)$ such that
$\lambda(x,sN)=1$ provided $(x,sN)\in U_t\times V_t$. In view of
\eqref{eq-lamb'}, $\lambda(x,sN)=1$ for all $x\in U_t$ and $sN$ in the
subgroup of $G/N$ generated by $V_t$. By condition~(1), there is
a compact set $K$ which generates $G/N$. We can choose
$t_1,\dots,t_n$ and \nbhd s $(U_{t_1}\times V_{t_1}),\dots,(U_{t_n}\times
V_{t_n})$
such that $K\subseteq \bigcup_i V_{t_i}$, and let
$U=\bigcap_{i=1}^n U_{t_i}$. Then $\lambda(x,sN)(n)=1$ for all $x\in U$,
$s\in G$, and $n\in N$. Hence, after passing to the ideal of $A$
corresponding to $U$, we can indeed assume that $u$ is a Green
twisting map.
Now let $\rho_{x_0}$ be the element of $\hat A$
corresponding to the point $x_0$ as chosen above.
Then by the above constructions, there exists
a covariant representation $(\rho_{x_0}, V_0)$ of
$(A,\,G,\,\alpha)$ such that $V_0|_N=\rho_{x_0}\circ u$,
which just means that
$(\rho_{x_0}, V_0)$ preserves the twist $u$ in the sense of Green.
Since $A$ is stable, it follows from \cite[Corollary~1]{ech5}
that $(A,\,G,\,\alpha,\,u)$ is exterior equivalent to $(A,\,G,\,\beta,\,1)$,
for some action $\beta$ of $G$ on $A$. Then
$\beta$ is inflated from an
action $\dot\beta$ of $G/N$
(see Remark~1 on page~176 of \cite{ech5}). We are now going to show
that $\dot\beta$ is also pointwise unitary.
Since $\hat A$ is
locally connected we may localize further in order to assume that
$\hat A$ is connected, and we may also assume that $\hat A$
is compact.
Let $(\rho_{x_0},U_0)$ denote the representation of
$(A,\,G,\,\beta)$ corresponding to $(\rho_{x_0}, V_0)$ via the
exterior equivalence between $(A,\,G,\,\alpha,\,u)$ and
$(A,\,G,\,\beta,\,1)$. Then $(\rho_{x_0},U_0)$ preserves $1$,
since $(\rho_{x_0}, V_0)$ preserves $u$. Thus it follows that
$U_0$ is inflated from a representation $\dot U_0$ of $G/N$.
Now, by assumption, $\beta$ is pointwise unitary, which implies that
$\dot\beta$ induces the trivial action of $G/N$ on $\hat A$.
For each $\rho\in \hat A$ let $[\omega_{\rho}]\in H^2(G/N,\mathbb{T})$
denote the Mackey obstruction to extend $\rho$ to a covariant
representation of $(A,\,G/N,\,\dot\beta)$. Then $[\omega_{\rho_{x_0}}]=0$
and
the map $\rho\mapsto [\omega_\rho]$ is continuous by \cite[Lemma~3.3]{doir}
(or Lemma 5.3 above). Since $\hat A$ is connected, it follows that
its image, say $M$, is a compact and connected subset of $H^2(G/N,\mathbb{T})$.
But since $\beta$ is pointwise unitary, it follows that
$M$ lies in the kernel of the inflation map $\inf:H^2(G/N,\mathbb{T})\to
H^2(G, \mathbb{T})$, and hence in the image of the transgression map
$\operatorname{tg}: H^1(N,\mathbb{T})^G \to H^2(G/N,\mathbb{T})$.
By assumption, $N_{\subab}$ is compact, so
$H^1(N,\mathbb{T})^G$
is discrete and countable (by the separability assumptions).
Thus $M$ is a countable and connected compact Hausdorff space,
which implies that
$M$ consists of a single point.
(Indeed, Baire's Theorem implies that a countable compact
Hausdorff space has an isolated point, and a connected space with an
isolated point is a singleton.)
But since $[\omega_{\rho_{x_0}}]$
is trivial, it follows that $[\omega_{\rho}]$ is trivial for all
$\rho\in \hat A$; in other words, $\dot\beta$ is pointwise unitary.
Now we can apply Rosenberg's theorem to the system $(A,\,G/N,\,\dot\beta)$,
from which it follows that $\dot\beta$ is locally unitary.
But this implies that $\beta$, and hence also $\alpha$, is
locally unitary.
\end{proof}
We now recall that $G$ is a $\text{[FD]}\;\bar{}${}~\emph{group} if
$\overline{[G,G]}$ is compact and $G/\overline{[G,G]}$ is abelian.
These groups are of particular interest since every
type~I $\text{[FD]}\;\bar{}${} group has a continuous-trace group $\cs$-algebra
\cite[Lemma 6]{echkan} and there
are no known examples of groups with continuous-trace group $\cs$-algebra which
are not $\text{[FD]}\;\bar{}${} groups.
\begin{cor}\label{cor-FD}
Suppose that $G$ is a separable compactly generated
$\text{[FD]}\;\bar{}${}-group, or that $G$ is a connected nilpotent Lie group.
Then every pointwise unitary action of $G$ on a separable
continuous-trace algebra with locally connected spectrum is
locally unitary.
\end{cor}
\begin{proof}
If $G$ is a separable compactly generated $\text{[FD]}\;\bar{}$-group,
then the theorem applies with the normal subgroup $N=\overline{[G,G]}$.
So assume that $G$ is a connected nilpotent Lie group.
Then there exists a maximal torus $T$ in the center of $G$ such that
$G/T$ is simply connected\footnote{Any connected Lie group
is a quotient of a simply connected Lie group by some central discrete
subgroup; thus the center of a connected nilpotent Lie group is of the form
$\mathbb{R}^l\times \mathbb{T}^m$, and the quotient of $G$ by $\mathbb{T}^m$ is a
simply connected nilpotent group.},
and
hence
$H^2(G/T,\mathbb{T})$ is Hausdorff, since
$G/T$ is smooth.
\end{proof}
\section{Introduction}
\label{sec:intro}
Planning collision-free paths for multiple robots, an easily stated yet difficult problem, has been actively studied for decades \cite{ErdLoz86, LavHut98b, LunBer11, Rya08, Sil05, StaKor11, Sur09, BerOve05, BerSnoLinMan09, Zel92}. The hardness of the problem lies mainly in the coupling between the robots' paths, which leads to an enormous state space and branching factor. As such, algorithms that are both complete and (distance) optimal, such as the A$^*$ \cite{HarNilRap68} algorithm and its variants, do not perform well on tightly coupled problems beyond very small ones. On the other hand, faster algorithms for finding the paths generally do not provide optimality guarantees: Sifting through all feasible path sets for optimal ones greatly increases the search space, which often makes these problems intractable.
In this paper, we investigate the problem of planning optimal paths for multiple robots with individual goals. The robots have identical but non-negligible sizes, are confined to some arbitrary connected graph, and are capable of moving from one vertex to an adjacent vertex in one time step. Collisions between robots are not allowed; a collision may occur when two robots attempt to move to the same vertex or to traverse the same edge in opposite directions. For this general setting, we propose a network flow based integer linear programming (ILP) model for finding robot paths that are time optimal or distance optimal. Our time optimality criterion seeks to minimize the number of time steps until the last robot reaches its goal; distance optimality seeks to minimize the total distance (each edge has unit distance) traveled by the robots. Taking advantage of state-of-the-art ILP solvers (Gurobi is used in this paper), our method can plan time optimal, collision-free paths for several dozen robots on graphs with hundreds of vertices within minutes.
As a universal subroutine, collision-free path planning for multiple robots finds applications in tasks spanning assembly \cite{HalLatWil00, Nna92}, evacuation \cite{RodAma10}, formation control \cite{BalArk98, PodSuk04, ShuMurBen07, SmiEgeHow08, TanPapKum04}, localization \cite{FoxBurKruThr00}, object transportation \cite{MatNilSim95, RusDonJen95}, search and rescue \cite{JenWheEva97}, and so on. Given its importance, path planning for multi-robot systems has remained a subject of intense study for many decades. Owing to the vast size of the available literature, we will only mention related research on discrete $\mpp$ and refer the readers to \cite{ChoLynHutKanBurKavThr05, Lat91, Lav06} and the references therein for a more comprehensive review of the subject.
From an algorithmic perspective, discrete $\mpp$ is a natural extension of the single robot path planning problem: One may combine the state spaces of all robots and treat the problem as a planning problem for a single robot. A$^*$ algorithm can then be used to compute distance optimal solutions to these problems. However, since naive A$^*$ scales poorly due to the curse of dimensionality, additional heuristic methods were proposed to improve the computational performance. One of the first such heuristics, Local Repair A$^*$ (LRA$^*$) \cite{Zel92}, plans robot paths simultaneously and performs local repairs when conflicts arise. Focusing on fixing the (locality) shortcomings of LRA$^*$, Windowed Hierarchical Cooperative A$^*$ (WHCA$^*$) \cite{Sil05} proposed to use a space-time window to allow more choices for resolving local conflicts while limiting the search space size at the same time. For additional heuristics exploring various specific local and global features, see \cite{LunBer11, Rya08, Sur09}.
Formulations of $\mpp$ problems with optimality guarantees have also been studied. The most general optimality criterion is the total path length traveled by all robots, which is consistent with the distance heuristic used by the A$^*$ algorithm. Since A$^*$ is the best possible among all such algorithms for finding distance optimal solutions, one should not expect complete and truly optimal algorithms to exist that perform much better than the basic A$^*$ algorithm in all cases. Nevertheless, this does not prevent algorithms from quickly solving certain instances optimally. One such algorithm that is also complete, MGS$x$, is presented in \cite{StaKor11} (note that the grid world formulation in \cite{StaKor11}, which allows diagonal moves in general, even in the presence of diagonal obstacles, does not carry over to general graphs or geometric models in robotics). For time optimality, for a version of the $\mpp$ problem that resembles our formulation more closely, it was shown that finding a time optimal solution is NP-hard \cite{Sur10}, implying that our formulation is also intractable \cite{YuArxiv-1205-5263}. Finally, it was shown that finding the least number of moves for the $N\times N$-generalization of the 15-puzzle is NP-hard \cite{RatWar90}. Here, time optimality equals distance optimality, which is not the case in general.
The main contributions of this paper are twofold. First, adapting the constructions from \cite{YuLav12WAFR-A}, we develop ILP models for solving time optimal and distance optimal $\mpp$ problems. The resulting algorithms are shown to be complete. Our approach is quite general and easily accommodates other formulations of the $\mpp$ problems, including that of \cite{StaKor11}. Second, we provide thorough computational evaluations of our models' performance: With a state-of-the-art ILP solver, our models are capable of solving large problem instances with a few dozen robots fairly fast. Such a result is in some sense the best we can hope for because the best possible algorithm for such problems cannot run in polynomial time unless $P = NP$. As an added bonus, we also show that the (time optimal) algorithm works well as a subroutine for quickly solving $\mpp$ problems (non-optimally)\footnote{The software (written in Java, including a programming interface), as well as all examples used in our evaluation, are available at \texttt{http://msl.cs.uiuc.edu/{\texttildelow}jyu18/pe/mapp.html}.}.
The rest of the paper is organized as follows. We provide problem definitions in Section \ref{sec:definition}, along with a motivating example. Section \ref{sec:planning-and-flow} relates $\mpp$ to multiflow, establishing the equivalence between the two problems. In Section \ref{sec:algorithm}, ILP models are provided for obtaining time optimal and distance optimal solutions, respectively. Section \ref{sec:puzzle} is devoted to briefly discussing basic properties of the $n^2$-puzzle, which is an interesting benchmark problem on its own. We evaluate the computational performance of our algorithm in Section \ref{sec:evaluation} and conclude in Section \ref{sec:conclusion}.
\section{Multi-robot Path Planning on Graphs}\label{sec:definition}
\subsection{Problem Formulation}
Let $G = (V, E)$ be a connected, undirected, simple graph (i.e., no multi-edges), in which $V = \{v_i\}$ is its vertex set and $E = \{(v_i, v_j)\}$ is its edge set. Let $R = \{r_1, \ldots, r_n\}$ be a set of robots that move with unit speeds along the edges of $G$, with initial and goal locations on $G$ given by the injective maps $x_I, x_G: R \to V$, respectively. The set $R$ is effectively an index set. A {\em path} or {\em scheduled path} is a map $p_i: \mathbb Z^+ \to V$, in which $\mathbb Z^+ := \mathbb N \cup \{0\}$. Intuitively, the domains of the paths are discrete time steps. A path $p_i$ is {\em feasible} for a single robot $r_i$ if it satisfies the following properties: 1. $p_i(0) = x_I(r_i)$; 2. For each $i$, there exists a smallest $k_i^{\min} \in \mathbb Z^+$ such that for all $k \ge k_i^{\min}$, $p_i(k) \equiv x_G(r_i)$; 3. For any $0 \le k < k_i^{\min}$, $(p_i(k), p_i(k+1)) \in E$ or $p_i(k) = p_i(k+1)$. We say that two paths $p_i, p_{j}$ are in {\em collision} if there exists $k \in \mathbb Z^+$ such that $p_i(k) = p_{j}(k)$ (collision on a vertex, or {\em meet}) or $(p_i(k), p_i(k+1)) = (p_j(k+1), p_j(k))$ (collision on an edge, or {\em head-on}). If $p(k) = p(k+1)$, then the robot stays at vertex $p(k)$ between the time steps $k$ and $k+1$.
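To make the two collision types concrete, the following minimal Java sketch (an illustration of the definition, not part of any formulation below) tests a pair of scheduled paths; here a path is encoded as an array of vertex indices, padded with its goal vertex so that both arrays have a common length.
\begin{verbatim}
// Sketch: pairwise collision test for scheduled paths.
// p[k] is the vertex occupied at time step k.
static boolean inCollision(int[] pi, int[] pj) {
  for (int k = 0; k < pi.length; k++) {
    if (pi[k] == pj[k]) return true;            // meet: same vertex
    if (k + 1 < pi.length && pi[k] == pj[k + 1]
        && pi[k + 1] == pj[k]) return true;     // head-on: edge swap
  }
  return false;
}
\end{verbatim}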
\begin{pro}[$\mpp$ on Graphs]\label{pimpp} Given $(G, R, x_I, x_G)$, find a set of paths $P = \{p_1, \ldots, p_n\}$ such that $p_i$'s are feasible paths for respective robots $r_i$'s and no two paths $p_i, p_j$ are in collision.
\end{pro}
A natural criterion for measuring path set optimality is the number of time steps until the last robot reaches its goal. This is sometimes called the {\em makespan}, which can be computed from $\{k_i^{\min}\}$ for a feasible path set $P$ as
\begin{displaymath}
T_P = \max_{1 \le i \le n}k_i^{\min}.
\end{displaymath}
Another frequently used objective is distance optimality, which counts the total number of edges traveled by the robots. We point out that distance optimality and time optimality cannot be satisfied at the same time in general: In Fig. \ref{fig:optimality}, let the dotted straight line have length $t$ and the dotted arc have length $1.5t$ for some large even number $t$. The four solid line segments are edges with unit length. Assume that robots 1 and 2 are to move from the locations marked with solid circles to the locations marked with gray dotted circles. Time optimal paths take $1.5t + 2$ time steps with a total distance of $2.5t + 4$; distance optimal paths take $2t + 3$ time steps with a total distance of $2t + 4$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.16\textwidth]{optimality.eps}
\end{center}
\vspace*{-1mm}
\caption{\label{fig:optimality} Time optimality and distance optimality cannot be satisfied simultaneously for this setup.}
\end{figure}
\vspace*{-1mm}
In this paper, we work with graphs on which the only possible collisions are meet or head-on collisions. This assumption is a mild one: For example, a 2D grid with unit edge lengths is such a graph for robots with radii of no more than $\sqrt{2}/4$. As a last note, our formulation allows multiple robots to move at the same time step as long as no collision occurs. On a graph, this allows robots on any cycle to ``rotate''.
\vspace*{-1mm}
\subsection{A Motivating Example}
\begin{figure}[htp]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.08\textwidth]{8-puzzle-1.eps} & \hspace{10mm} &
\includegraphics[width=0.08\textwidth]{8-puzzle-2.eps} \\
(a) && (b)\\
\end{tabular}
\end{center}
\vspace*{-1mm}
\caption{\label{fig:example} a) A 9-puzzle problem. b) The desired goal state.}
\end{figure}
\vspace*{-1mm}
To better characterize what we solve in this paper, consider the example in Fig. \ref{fig:example}. We call this problem a 9-puzzle, which is a variant of the 15-puzzle \cite{RatWar90}; it is also related to the ``H'' example in \cite{LavHut98b}. Given the robots as numbered in Fig. \ref{fig:example}(a), we want to get them into the {\em state} ({\em configuration} is also used in this paper to refer to the same, depending on the context) given in Fig. \ref{fig:example}(b) (such a configuration is often referred to as {\em row major} ordering). Coming up with a feasible solution for such a highly constrained problem is non-trivial, let alone solving it with an optimality guarantee. The time optimal algorithm we present in this paper solves this problem instance in under 0.1 seconds. The solution is given in Fig. \ref{fig:puzzle-8-sol}. The time optimality of the solution is evident: It takes at least four steps for robot 9 to reach its goal.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.35\textwidth]{8-puzzle-sol.eps}
\end{center}
\vspace*{-1mm}
\caption{\label{fig:puzzle-8-sol} A 4-step solution from our algorithm. The directed edges show the moving direction of the robots at the tail of the edges.}
\end{figure}
\vspace*{-1mm}
\section{Multi-robot Path Planning and Multiflow}\label{sec:planning-and-flow}
\subsection{Network Flow}
In this subsection we provide a summary of the network flow problem formulation pertinent to the introduction of our algorithm. For surveys on network flow, see \cite{Aro89, ForFul62}. A {\em network} $\mathcal N = (G, c_1, c_2, S)$ consists of a directed graph $G = (V, E)$ with $c_1, c_2: E \to \mathbb Z^+$ as the maps defining the capacities and costs on edges, respectively, and $S \subset V$ as the set of sources and sinks. We let $S = S^+ \cup S^-$, with $S^+$ denoting the set of sources and $S^-$ denoting the set of sink vertices. For a vertex $v \in V$, let $\delta^+(v)$ (resp. $\delta^-(v)$) denote the set of edges of $G$ going to (resp. leaving) $v$. A feasible (static) $S^+, S^-$-flow on this network $\mathcal N$ is a map $f: E \to \mathbb Z^+$ that satisfies edge capacity constraints,
\begin{equation}\label{c1}
\forall e \in E, \quad f(e) \le c_1(e),
\end{equation}
the flow conservation constraints at non terminal vertices,
\begin{equation}\label{c2}
\forall v \in V \backslash S, \quad \displaystyle\sum_{e\in \delta^+(v)} f(e)\,\,\, - \sum_{e\in \delta^-(v)} f(e) = 0,
\end{equation}
and the flow conservation constraints at terminal vertices,
\begin{equation}\label{flow-value}
\begin{array}{ll}
F(f) &= \displaystyle\sum_{v \in S^+} (\sum_{e\in \delta^-(v)} f(e)\,\,\, - \sum_{e\in \delta^+(v)} f(e)) \\
& = \displaystyle\sum_{v \in S^-} (\sum_{e\in \delta^+(v)} f(e)\,\,\, - \sum_{e\in \delta^-(v)}f(e)).
\end{array}
\end{equation}
The quantity $F(f)$ is called the {\em value} of the flow $f$. The classic (single-commodity) {\em maximum flow} problem asks the question: Given a network $\mathcal N$, what is the maximum $F(f)$ that can be pushed through the network? The {\em minimum cost maximum flow} problem further requires the flow to have minimum total cost among all maximum flows. That is, we want to find a flow among all maximum flows that also minimizes the quantity
\begin{equation}\label{min-cost-max-flow}
\sum_{e \in E} c_2(e)\cdot f(e).
\end{equation}
The above formulation concerns a single commodity, which corresponds to all robots being interchangeable. For $\mpp$, the robots are not interchangeable and must be treated as different commodities. {\em Multi-commodity flow} or {\em multiflow} captures the problem of flowing different types of commodities through a network. Instead of having a single flow function $f$, we have a flow function $f_i$ for each commodity $i$. The constraints (\ref{c1}), (\ref{c2}), and (\ref{flow-value}) become
\begin{equation}\label{c1m}
\forall\, e \in E, \quad \sum_i \,\, f_i(e) \le c_1(e),
\end{equation}
\begin{equation}\label{c2m}
\forall \, i, \forall \, v \in V \backslash S, \quad \displaystyle\sum_{e\in \delta^+(v)} f_i(e)\,\,\, - \sum_{e\in \delta^-(v)} f_i(e) = 0,
\end{equation}
\begin{equation}\label{flow-value-m}
\begin{array}{lll}
\forall i, & &\displaystyle\sum_{v \in S^+} (\sum_{e\in \delta^-(v)} f_i(e)\,\,\, - \sum_{e\in \delta^+(v)} f_i(e)) \\
&=& \displaystyle\sum_{v \in S^-} (\sum_{e\in \delta^+(v)} f_i(e)\,\,\, - \sum_{e\in \delta^-(v)}f_i(e)).
\end{array}
\end{equation}
Again, maximum flow and minimum cost flow problems can be posed for a multiflow setup.
\subsection{Equivalence between $\mpp$ and multiflow}
Viewing robots as commodities, we may connect $\mpp$ and multiflow. This relationship (Theorem \ref{t:mpp}) was stated in \cite{YuLav12WAFR-A} without full proof, which is provided here for completeness. To make the presentation clear, we use as an example the simple graph $G$ in Fig. \ref{fig:pimpp}(a), with initial locations $\{s_i^+\}, i = 1, 2$ and goal locations $\{s_i^-\}, i = 1, 2$. An instance of Problem \ref{pimpp} is given by $(G, \{r_1, r_2\}, x_I: r_i \mapsto s^+_i, x_G: r_i \mapsto s^-_i)$. We now convert this problem to a network flow problem, $\mathcal N' = (G', c_1, c_2, S^+ \cup S^-)$. Given the graph $G$ and a natural number \begin{figure}[htp]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=0.16\textwidth]{network-pimpp.eps} &
\includegraphics[height=0.16\textwidth]{gadget-pimpp.eps} \\
(a) & (b)\\
\end{tabular}
\end{center}
\vspace*{-2mm}
\caption{\label{fig:pimpp} a) A simple $G$. b) A gadget for splitting an undirected edge through time steps.}
\end{figure}
\vspace*{-2mm}
$T$, we create $2T+1$ copies of vertices from $G$, with indices $0, 1, 1', \ldots$, as shown in Fig. \ref{fig:pimpp-n}. For each vertex $v \in G$, denote these copies $v(0) = v(0)', v(1), v(1)', v(2), \ldots, v(T)'$. For each edge $(u, v) \in G$ and time steps $t, t+1$, $0 \le t < T$, add the gadget shown in Fig. \ref{fig:pimpp}(b) between $u(t)', v(t)'$ and $u(t+1), v(t+1)$ (arrows from the gadget are omitted from Fig. \ref{fig:pimpp-n} since they are small). For the gadget, we assign unit capacity to all edges, unit cost to the horizontal middle edge, and zero cost to the other four edges. This gadget ensures that two robots cannot travel in opposite directions on an edge in the same time step. To finish the construction of Fig. \ref{fig:pimpp-n}, for each vertex $v \in G$, we add one edge between every two successive copies (i.e., we add the edges $(v(0),v(1)), (v(1), v(1)'), \ldots, (v(T), v(T)')$). These correspond to the green and blue edges in Fig. \ref{fig:pimpp-n}. For all green edges, we assign them unit capacity and cost; for all blue edges, we assign them unit capacity and zero cost.
\vspace*{-1mm}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.35\textwidth]{t-network-pimpp.eps}
\end{center}
\vspace*{-4mm}
\caption{\label{fig:pimpp-n} The time-expanded network ($T = 2$).}
\end{figure}
\vspace*{-1mm}
Fig. \ref{fig:pimpp-n} (with the exception of edges $e_1$ and $e_2$, which are not relevant until Section \ref{sec:algorithm}), called a {\em time-expanded network} \cite{Aro89}, is the desired $G'$. For the set $S$, we may simply let $S^+ = \{v(0): v \in \{s^+_i\} \}$ and $S^- = \{v(T)': v \in \{s^-_i\}\}$. The network $\mathcal N' = (G', c_1, c_2, S^+ \cup S^-)$ is now complete; we have reduced Problem \ref{pimpp} to an integer maximum multiflow problem on $\mathcal N'$ with each robot from $R$ as a single type of commodity.
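The construction is mechanical to implement. The following Java sketch (our own illustration; the integer-array edge encoding is ad hoc) enumerates the edges of $G'$ for a given $T$: the blue edges $v(t)\to v(t)'$, the green edges $v(t)'\to v(t+1)$, and one five-edge gadget per original edge and time step. Each edge is stored as \texttt{\{tail, head, capacity, cost\}}; for simplicity, $v(0)$ and $v(0)'$ are kept as distinct copies joined by a blue edge, and the loopback edges of Section \ref{sec:algorithm} are omitted.
\begin{verbatim}
import java.util.*;

class TimeExpandedNetwork {
  // index of the copy v(t) (primed = false) or v(t)' (primed = true)
  static int id(int v, int t, boolean primed, int nV) {
    return (2 * t + (primed ? 1 : 0)) * nV + v;
  }
  // edges of G', each stored as {tail, head, capacity, cost}
  static List<int[]> build(int nV, List<int[]> E, int T) {
    List<int[]> edges = new ArrayList<>();
    int aux = 2 * (T + 1) * nV;        // fresh ids for gadget vertices
    for (int t = 0; t <= T; t++)       // blue edges: v(t) -> v(t)'
      for (int v = 0; v < nV; v++)
        edges.add(new int[]{id(v,t,false,nV), id(v,t,true,nV), 1, 0});
    for (int t = 0; t < T; t++)        // green edges: v(t)' -> v(t+1)
      for (int v = 0; v < nV; v++)
        edges.add(new int[]{id(v,t,true,nV), id(v,t+1,false,nV), 1, 1});
    for (int t = 0; t < T; t++)        // one gadget per edge and step
      for (int[] e : E) {
        int a = aux++, b = aux++;      // endpoints of the middle edge
        edges.add(new int[]{id(e[0],t,true,nV), a, 1, 0});
        edges.add(new int[]{id(e[1],t,true,nV), a, 1, 0});
        edges.add(new int[]{a, b, 1, 1});  // middle edge: unit cost
        edges.add(new int[]{b, id(e[0],t+1,false,nV), 1, 0});
        edges.add(new int[]{b, id(e[1],t+1,false,nV), 1, 0});
      }
    return edges;
  }
}
\end{verbatim}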
\begin{theorem}\label{t:mpp}Given an instance of Problem \ref{pimpp} with input parameters $(G, R, x_I, x_G)$, there is a bijection between its solutions (with maximum number of time steps up to $T$) and the integer maximum multiflow solutions of flow value $n$ on the time-expanded network $\mathcal N'$ constructed from $(G, R, x_I, x_G)$ with $T$ time steps.
\end{theorem}
{\sc Proof.} (Injectivity) Assume that $P = \{p_1, \ldots, p_n\}$ is a solution to an instance of Problem \ref{pimpp}. For each $p_i$ and every time step $t = 0, \ldots, T$, we mark the copy of $p_i(t)$ and $p_i(t)'$ (recall that $p_i(t)$ corresponds to a vertex of $G$) at time step $t$ in the time-expanded graph $G'$. Connecting these vertices of $G'$ sequentially (there is only one way to do this) yields one unit of flow $f_i$ on $\mathcal N'$ (after connecting to appropriate source and sink vertices in $S^+, S^-$, which is trivial). It is straightforward to see that if two paths $p_i, p_{j}$ are not in collision, then the corresponding flows $f_i, f_j$ on $\mathcal N'$ are vertex disjoint paths and therefore do not violate any flow constraint. Since any two paths in $P$ are not in collision, the corresponding set of flows $\{f_1, \ldots, f_n\}$ is feasible and maximal on $\mathcal N'$.
(Surjectivity) Assume that $\{f_1,\ldots, f_n\}$ is an integer maximum multiflow on the network $\mathcal N'$ with $|f_i| =1$. First we establish that any pair of flows $f_i, f_j$ are vertex disjoint. To see this, we note that $f_i, f_j$ (both are unit flows) cannot share the same source or sink vertices due to the unit capacity structure of $\mathcal N'$ enforced by the blue edges. If $f_i, f_j$ share some non-sink vertex $v$ at time step $t > 0$, both flows must then pass through the same blue edge (see Fig. \ref{fig:pimpp}(b)) with $v$ being either the head or tail vertex, which is not possible. Thus, $f_i, f_j$ are vertex disjoint on $\mathcal N'$. We can readily convert each flow $f_i$ to a corresponding path $p_i$ (after deleting the extra source and sink vertices, the vertices in the middle of the gadgets, and the tail vertices of blue edges) with the guarantee that no $p_i, p_j$ will collide due to a meet collision. By construction of $\mathcal N'$, the gadget we used ensures that a head-on collision is also impossible. The set $\{p_1, \ldots, p_n \}$ is then a solution to Problem \ref{pimpp}. ~\qed
\subsection{Accommodating other formulations}
Our network flow based approach for encoding the $\mpp$ problem is fairly general; we illustrate this using two examples. The first is the grid world formulation from \cite{StaKor11}, which allows (single) diagonal crossings. That is, for vertices $v_1, \ldots, v_4$ on the four corners of a square cell with $v_1, v_3$ and $v_2, v_4$ diagonal to each other, respectively, it is possible for a robot to move from $v_1$ to $v_3$ provided that $v_3$ is unoccupied and the $v_2$-$v_4$ diagonal is not used in the same time step. To include this constraint in the ILP model, we may simply add the gadget structure in Fig. \ref{fig:gadget2} to the time-expanded network construction. The inclusion of the gadget will allow a single diagonal crossing; the extra paths do not create an issue since no two robots can go through a single vertex at the same time step (enforced by the blue dotted edges in Fig. \ref{fig:pimpp-n}).
\vspace*{-2mm}
\begin{figure}[htp]
\begin{center}
\includegraphics[height=0.16\textwidth]{cell-gadget.eps}
\end{center}
\vspace*{-4mm}
\caption{\label{fig:gadget2} A gadget for allowing diagonal crossings.}
\end{figure}
\vspace*{-2mm}
For a second example, in some $\mpp$ formulations, head-on collisions may be allowed. For instance, two adjacent CPUs may exchange two units of data in parallel but no single CPU may hold multiple units of data. To allow this, we simply do not use the gadget from Fig. \ref{fig:pimpp}(b) when the time-expanded network is constructed.
\section{Algorithmic Solutions for Optimal Multi-robot Path Planning}\label{sec:algorithm}
Given the time-expanded network $\mathcal N' = (G', c_1, c_2, S^+ \cup S^-)$, it is straightforward to create an integer linear programming (ILP) model with different optimality objectives. We investigate two objectives in this section: Time optimality or makespan (the time when the last robot reaches its goal) and distance optimality (the total distance traveled by all robots).
\subsection{Time optimality}
Time optimal solutions to Problem \ref{pimpp} can be obtained using a maximum multiflow formulation. As a first step, we introduce a set of $n$ {\em loopback} edges to $G'$ by connecting each pair of corresponding goal and start vertices in $S$, from the goal to the start. For convenience, denote these loopback edges as $\{e_1, \ldots, e_n\}$ (e.g., edges $e_1, e_2$ in Fig. \ref{fig:pimpp-n}). These edges have unit capacity and zero cost. Next, for each edge $e_j \in G'$, create $n$ binary variables $x_{1, j}, \ldots, x_{n,j}$ corresponding to the flow through that edge, one for each robot. Here, $x_{i, j} = 1$ if and only if robot $r_i$ passes through $e_j$ in $G'$. The variables $x_{i,j}$ must satisfy two edge capacity constraints and one flow conservation constraint,
\begin{equation}\label{to1}
\begin{array}{cc}
\forall\, e_j \in G', & \displaystyle\sum_{i=1}^n x_{i,j} \le 1\\
\forall\, 1 \le i, j \le n, \,i \ne j, & \displaystyle x_{i, j} = 0,
\end{array}
\end{equation}
\begin{equation}\label{to2}
\forall\, v \in G' \textrm{ and } 1 \le i \le n, \displaystyle\sum_{e_j \in \delta^+(v)} x_{i,j} = \sum_{e_j \in \delta^-(v)} x_{i,j}.
\end{equation}
The objective function is
\begin{equation}\label{to3}
\max \sum_{1 \le i \le n} x_{i,i}.
\end{equation}
For each fixed $T$, an objective value of $n$ for the above ILP means that a feasible solution to Problem \ref{pimpp} is found. We are to find the minimal $T$ that yields such a feasible solution. To do this, we start with $T$ equal to the maximum, over all robots, of the shortest possible path length for each robot, ignoring all other robots. We then build the ILP model for this $T$ and test for a feasible solution. If the model is not feasible, we increase $T$ and try again. The first feasible $T$ is the optimal $T$. The robots' paths can be extracted based on the proof of Theorem \ref{t:mpp}. The algorithm is complete: Since the problem is discrete, there is only a finite number of possible states. Therefore, for some sufficiently large $T$, there must either be a feasible solution or we can pronounce that none can exist. Calling this algorithm \tompp\, (time optimal $\mpp$), we have shown the following.
\begin{proposition}\label{p:time}Algorithm \tompp\, is complete and returns a solution with minimum makespan to Problem \ref{pimpp} if one exists.
\end{proposition}
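Setting the model up with an ILP solver is straightforward. The following condensed Java sketch shows the feasibility test for one fixed $T$ using the Gurobi API, to the best of our reading of that API; the data layout is our own: \texttt{edges} is assumed to contain the $n$ loopback edges at indices $0,\dots,n-1$ followed by the edges of $G'$, and \texttt{inEdges}/\texttt{outEdges} index the edges by head/tail vertex.
\begin{verbatim}
// Feasibility test for a fixed T (sketch; data layout is ours).
// Requires: import gurobi.*; import java.util.List;
static boolean feasible(int n, int nVert, List<int[]> edges,
    List<List<Integer>> inEdges, List<List<Integer>> outEdges)
    throws GRBException {
  GRBModel model = new GRBModel(new GRBEnv());
  int m = edges.size();
  GRBVar[][] x = new GRBVar[n][m];
  for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++)
      x[i][j] = model.addVar(0, 1, 0, GRB.BINARY, "x_" + i + "_" + j);
  for (int j = 0; j < m; j++) {       // capacity: sum_i x[i][j] <= 1
    GRBLinExpr cap = new GRBLinExpr();
    for (int i = 0; i < n; i++) cap.addTerm(1.0, x[i][j]);
    model.addConstr(cap, GRB.LESS_EQUAL, 1.0, "cap" + j);
  }
  for (int i = 0; i < n; i++)         // robot i may not use loopback e_j
    for (int j = 0; j < n; j++)
      if (i != j) x[i][j].set(GRB.DoubleAttr.UB, 0.0);
  for (int v = 0; v < nVert; v++)     // flow conservation, per robot
    for (int i = 0; i < n; i++) {
      GRBLinExpr flow = new GRBLinExpr();
      for (int j : inEdges.get(v)) flow.addTerm(1.0, x[i][j]);
      for (int j : outEdges.get(v)) flow.addTerm(-1.0, x[i][j]);
      model.addConstr(flow, GRB.EQUAL, 0.0, "cons" + v + "_" + i);
    }
  GRBLinExpr obj = new GRBLinExpr();  // maximize sum_i x[i][i]
  for (int i = 0; i < n; i++) obj.addTerm(1.0, x[i][i]);
  model.setObjective(obj, GRB.MAXIMIZE);
  model.optimize();
  return model.get(GRB.DoubleAttr.ObjVal) > n - 0.5;
}
\end{verbatim}
The outer loop of \tompp\, then increases $T$, starting from the largest single-robot shortest path length, and extracts the paths from the first feasible model.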
\subsection{Distance optimality}
The distance optimality objective can be encoded using minimum cost maximum multiflow. Constraints (\ref{to1}) and (\ref{to2}) remain; to force a maximum flow, let $x_{i,i} = 1$ for $1 \le i \le n$. The objective is given by
\begin{equation}\label{to4}
\min \sum_{e_j \in G', j > n,\, 1 \le i \le n} c_2(e_j) \cdot x_{i,j}.
\end{equation}
The value given by (\ref{to4}), when feasible, is the total distance of all robots' paths. Let $T_t$ denote the optimal $T$ produced by \tompp\, (if one exists); then a distance optimal solution exists in a time-expanded network with $T = nT_t$ steps. Calling this algorithm \dompp\, (distance optimal $\mpp$), we have
\begin{proposition}Algorithm \dompp\, is complete and returns a solution with minimum total path length to Problem \ref{pimpp} if one exists.
\end{proposition}
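Only the loopback constraints and the objective change for \dompp. Continuing the sketch above (again with our hypothetical data layout, where the last entry of an edge record holds its cost $c_2(e_j)$):
\begin{verbatim}
// DOMPP delta (sketch): force a maximum flow, minimize total cost.
for (int i = 0; i < n; i++) {
  GRBLinExpr reach = new GRBLinExpr();
  reach.addTerm(1.0, x[i][i]);
  model.addConstr(reach, GRB.EQUAL, 1.0, "reach" + i);
}
GRBLinExpr dist = new GRBLinExpr();
for (int j = n; j < edges.size(); j++)    // skip the n loopback edges
  for (int i = 0; i < n; i++)
    dist.addTerm(edges.get(j)[3], x[i][j]); // [3]: edge cost
model.setObjective(dist, GRB.MINIMIZE);
\end{verbatim}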
Due to the large number of steps needed in the time-expanded network, \dompp, in its current form, is not very fast in solving problems with many robots. Therefore, our evaluation in this paper focuses on \tompp\, which, on the other hand, is fairly fast in solving some very difficult problems. \dompp, however, still proves useful in providing time optimal and near distance optimal solutions using the outputs of \tompp, as shown in Subsection \ref{subsec:dompp}.
\section{Properties of the $n^2$-puzzle}\label{sec:puzzle}
The example problem from Fig. \ref{fig:example} easily extends to an $n \times n$ grid; we call this class of problems the $n^2$-puzzle. Such problems are highly coupled: No robot can move without at least three other robots moving at the same time. At each step, all robots that move must move synchronously in the same direction (per cycle) on one or more disjoint cycles (see e.g., Fig. \ref{fig:puzzle-8-sol}). To put into perspective the computational results on $n^2$-puzzles that follow, we characterize the state structure of the $n^2$-puzzle for $n \ge 3$ (the case of $n=2$ is trivial).
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.35\textwidth]{6-puzzle-sol.eps}
\end{center}
\vspace*{-2mm}
\caption{\label{fig:6-puzzle} A 3-step procedure for exchanging robots 8 and 9.}
\end{figure}
\vspace*{-2mm}
\begin{proposition}\label{p:state}All states of the 9-puzzle are connected via legal moves.
\end{proposition}
{\sc Proof}. We show that any state of a 9-puzzle can be moved into the state shown in Fig. \ref{fig:example}(b). From any state, robot 5 can be easily moved into the center of the grid. We are left to show that we can exchange two robots on the border without affecting other robots. This is possible due to the procedure illustrated in Fig. \ref{fig:6-puzzle}.
~\qed
Larger puzzles can be solved recursively: We may first solve the top and right sides of the puzzle and then the leftover smaller square puzzle. For a 16-puzzle, Fig. \ref{fig:16-puzzle} outlines the procedure, consisting of six main steps (a code sketch of the resulting recursion follows the list):
\begin{enumerate}
\item Move robots 1 and 2 to their respective goal locations, one robot at a time (first 1, then 2).
\item Move robots 3 and 4 (first 3, then 4) to the lower left corner (top-middle figure in Fig. \ref{fig:16-puzzle}).
\item Move robots 3 and 4 to their goal location together via counterclockwise rotation along the cycle indicated in the top-middle figure in Fig. \ref{fig:16-puzzle}.
\item Move robot 8 to its goal location.
\item Move robots 12 and then 16 to the lower left corner.
\item Rotate robots 12 and 16 to their goal locations.
\end{enumerate}
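The recursion just outlined can be summarized as follows (a sketch; all helper names are hypothetical placeholders for the constructions in the proofs of Propositions \ref{p:state} and \ref{c:state}):
\begin{verbatim}
// Recursive scheme (sketch; helpers are placeholders).
static void solve(Puzzle p, int n) {
  if (n == 3) {                     // base case: the 9-puzzle
    solveByBorderExchanges(p);      // center one robot, fix the border
    return;
  }
  placeTopRowAndRightColumn(p, n);  // the six steps illustrated above
  solve(p.innerPuzzle(n - 1), n - 1);
}
\end{verbatim}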
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.35\textwidth]{16-puzzle-1.eps}
\end{center}
\vspace*{-2mm}
\caption{\label{fig:16-puzzle} A solution scheme for solving the top and right sides of the 16-puzzle.}
\end{figure}
\vspace*{-2mm}
It is straightforward to see that larger puzzles can be solved similarly. We have thus outlined the essential steps for proving Proposition \ref{c:state} below; a more generic proof can be written using generators of permutation groups, which we omit here due to its length. Proposition \ref{c:state} implies that, for $n \ge 3$, all instances of $n^2$-puzzles are solvable. The constructive proofs of Propositions \ref{p:state} and \ref{c:state} lead to recursive algorithms for solving any $n^2$-puzzle (clearly, the solution is not time/distance optimal in general).
\begin{proposition}\label{c:state}All states of an $n^2$-puzzle, $n \ge 3$ are connected via legal moves.
\end{proposition}
\begin{corollary}\label{c:solvable}All instances of the $n^2$-puzzle, $n \ge 3$, are solvable.
\end{corollary}
By Proposition \ref{c:state}, since all states of an $n^2$-puzzle for $n \ge 3$ are connected via legal moves, the state space of an $n^2$-puzzle has size $(n^2)!$. For the 16-puzzle and the 25-puzzle, $16! > 10^{13}$ and $25! > 10^{25}$, respectively. The large state space is one of three reasons that make finding a time optimal solution to the $n^2$-puzzle a difficult problem. The second difficulty comes from the large branching factor at each step. For a 9-puzzle, there are 13 unique cycles, yielding a branching factor of 26 (clockwise and counterclockwise rotations). For the 16-puzzle, the branching factor is around 500. This number balloons to over $10^4$ for the 25-puzzle. This suggests that on typical commodity personal computer hardware (assuming a 1GHz processor), a basic breadth-first search algorithm will not be able to go beyond a depth of 3 for the 16-puzzle and a depth of 2 for the 25-puzzle in a reasonable amount of time. Moreover, enumerating these cycles is a non-trivial task. The third difficulty is the lack of obvious heuristics: The Manhattan distances of the robots to their respective goals prove to be a poor heuristic. For example, given the initial configuration in Fig. \ref{fig:example}(a), the first step in the optimal plan from Fig. \ref{fig:puzzle-8-sol} gets robots 1, 3, 4, 6, 8, 9 closer to their respective goals while moving robots 2, 7 farther. On the other hand, rotating counterclockwise along the outer cycle takes robots 1, 3, 4, 5, 6, 8, 9 closer and only moves robot 7 farther. However, if we instead take this latter first step, the optimal plan afterwards will take 5 more steps.
\section{Solutions and Evaluation}\label{sec:evaluation}
Our experimentation in this paper focuses on \tompp\, with the main goal of evaluating the comparative efficiency of our approach rather than pushing for the best computational performance. As such, our implementation is Java based and did not directly take advantage of multi-core technology. We note that Gurobi, the ILP solver used in our implementation, can engage multiple cores automatically for hard problems. We ran our code on an Intel Q6600 quad-core machine with a 4GB JavaVM.
\subsection{Time optimal solution to $n^2$-puzzles}
The first experiment we performed was evaluating the efficiency of the algorithm \tompp\, for finding time optimal solutions to the $n^2$-puzzle for $n = 3, 4, 5,$ and $6$. We ran Algorithm \tompp\, on 100 randomly generated $n^2$-puzzle instances for $n = 3, 4, 5$. For the 9-puzzle, computation on all instances completed successfully with an average computation time of 1.36 seconds per instance. To compare the computational result, we implemented an (optimal) BFS algorithm. The BFS algorithm is heavily optimized: For example, cycles of the grid are precomputed and hard coded to save computation time. Since the state space of the 9-puzzle is small, the BFS algorithm is capable of optimally solving the same set of 9-puzzle instances with an average computation time of about 0.89 seconds per instance.
Once we move to the 16-puzzle, the power of general ILP solvers becomes evident. \tompp\, solved all 100 randomly generated 16-puzzle instances with an average computation time of 18.9 seconds. On the other hand, the BFS algorithm with a priority queue that worked for the 9-puzzle ran out of memory after a few minutes. As our results show that an optimal solution for the 16-puzzle generally requires 6 time steps, it seems natural to also try bidirectional search, which cuts down the total number of states stored in memory. To complete such a search, one side of the bidirectional search generally must reach a depth of 3, which requires storing about $3 \times 10^7$ states, each taking 64 bits of memory. This turns out to be too much for a 4GB JavaVM: A bidirectional search ran out of memory after about 10 minutes in general. To be sure, we also coded part of the same search algorithm in C++ with STL. Reaching a search depth of 3 on one side takes about a minute with a memory footprint of 1.5GB, suggesting a minimum running time of more than one minute.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.12\textwidth]{25-puzzle-2.eps}
\end{center}
\vspace*{-2mm}
\caption{\label{fig:25-puzzle} An instance of a 25-puzzle problem solved by \tompp.}
\end{figure}
\vspace*{-2mm}
For the 25-puzzle, without a good heuristic, bidirectional search can explore only a tiny fraction of the fully connected state space of about $10^{25}$ states. On the other hand, \tompp\, again consistently solves the 25-puzzle, with an average computational time under 2 hours over 100 randomly created problems. Fig. \ref{fig:25-puzzle} shows one of the solved instances with a 7-step solution given in Fig. \ref{fig:25-puzzle-sol}. Note that 7 steps is obviously the least \begin{figure}[htp]
\begin{center}
\includegraphics[width=0.35\textwidth]{25-puzzle-sol.eps}
\end{center}
\caption{\label{fig:25-puzzle-sol} An optimal 7-step solution (from left to right, then top to bottom) to the 25-puzzle problem from Fig. \ref{fig:25-puzzle}, by \tompp\, in about 30 minutes.}
\end{figure}
possible since it takes at least 7 steps to move robot 10 to its desired goal. We also briefly tested \tompp\, on the 36-puzzle. While we had some success here, \tompp\, generally does not seem to solve a randomly generated instance of the 36-puzzle within 24 hours, which has $3.7 \times 10^{41}$ states and a branching factor of well over $10^6$.
\subsection{Time optimal solutions for grid graphs}
For problems in which not all graph vertices are occupied by robots, \tompp\, can handle much larger instances. In a first set of tests on this subject, a grid size of $20 \times 15$ is used with varying percentages of obstacles (simulated by removed vertices) and robots for evaluating the effect of these factors. A typical setup is illustrated in Fig. \ref{fig:20x15}. The computation time (in seconds) and the average number of optimal time steps (in parentheses) are listed in Table \ref{tab:20x15}. The numbers are \begin{figure}[htp]
\begin{center}
\includegraphics[width=0.4\textwidth]{20x15-obs.eps}
\end{center}
\vspace*{-2mm}
\caption{\label{fig:20x15} A $20 \times 15$ grid with 20\% of the vertices removed (modeling obstacles) and 30 start/goal pairs. The start locations are marked with strings beginning with ``S'' and the goal locations are marked with strings beginning with ``G''.}
\end{figure}
\begin{table}[htp]
\begin{center}
\caption{\label{tab:20x15}}
\vspace*{-1mm}
\begin{tabular}{cccccc}
\hline\hline
\multirow{2}*{\% obs} & \multicolumn{5}{c}{Number of robots} \\
\cline{2-6}
& 10 & 20 & 30 & 40 & 50 \\
\hline
5 & 2.5(22) & 7.3(24) & 16.7(27) & 23.6(26) & 70.7(27) \\
\hline
10 & 2.1(21) & 7.8(24) & 13.1(26) & 20.4(26) & 48.6(26) \\
\hline
15 & 3.9(25) & 6.2(24) & 13.8(26) & 32.8(27) & 126(28) \\
\hline
20 & 2.4(24) & 7.7(27) & 21.9(28) & 39.3(26) & 173(27) \\
\hline
25 & 2.7(27) & 8.1(28) & 24.8(30) & 68.0(28) & $253(30)^4$ \\
\hline
30 & 3.0(31) & $29.9(34)^9$ & $234(44)^5$ & $80.6(29)^3$ & N/A \\
\hline\hline
\end{tabular}
\vspace*{-2mm}
\end{center}
\end{table}
averages over 10 randomly created instances. For each run, a maximum of 1000 seconds is allowed (such limits, somewhat arbitrary, were chosen to manage the expected running time of the entire set of experiments; our complete algorithms should terminate eventually). Entries with superscript numbers indicate that the 10 runs did not all finish within the given time. The superscript numbers represent the successful runs on which the statistics were computed. ``N/A'' means no instance finished within the allowed time. From the results, we observe that the percentage of randomly placed obstacles does not affect the problem difficulty, as measured by computational time, in a monotonic way. On one hand, more obstacles remove more vertices from the grid, making the problem size smaller, reducing the computational difficulty. On the other hand, as more obstacles are introduced, the reduced connectivity of the graph makes the problem harder. In particular, the $20 \times 15$ grid setting suddenly becomes a hard problem with 30\% obstacles. The difficulty is also reflected by the average number of steps in an optimal solution: a longer optimal plan indicates reduced availability of alternative paths.
\begin{table}[htp]
\begin{center}
\caption{\label{tab:32x32p}}
\vspace*{-1mm}
\begin{tabular}{cccccc}
\hline\hline
\multirow{2}*{\% obs} & \multicolumn{5}{c}{Number of robots} \\
\cline{2-6}
& 10 & 20 & 30 & 40 & 50 \\
\hline
20 & 14.4(41) & 34.6(45) & 43.7(44) & 87.5(47) & $402(49)^9$ \\
\hline\hline
\end{tabular}
\vspace*{-2mm}
\end{center}
\end{table}
In a second test on even larger problems, $32 \times 32$ grids with 20\% obstacles were tried. For between 10 and 50 robots with an increment of 10, 10 random instances each were created; each instance is allowed to run a maximum of half an hour. The statistics, compiled in the same manner as those in Table \ref{tab:20x15}, are listed in Table \ref{tab:32x32p}. We observe that the problem is similar in difficulty to the $20 \times 15$ grid setting with 25\% obstacles, but much simpler than that with 30\% obstacles.
\subsection{Distance optimality of time optimal solutions}\label{subsec:dompp}
Although \dompp\, is not yet practical for computing distance optimal solutions alone, it can be used for computing distance optimal solutions for a fixed time expansion length $T$. That is, we first find a time optimal solution, which gives us the smallest time-expanded network containing feasible solutions. We then run \dompp\, on this network. For evaluation, we used the same $20 \times 15$ instances with 5-25\% obstacles and 10-30 robots (\dompp\, could not finish most instances with 30\% obstacles or 40+ robots in 200 seconds, the cutoff time). We used the first 5 of every 10 instances for each obstacle/robot combination. For each fixed number of obstacles, instances of different numbers of robots are combined. The result is listed in Table \ref{tab:20x15d}. We allow \dompp\, to run for at most 200 seconds per instance. Note that unlike \tompp, even when \dompp\, does not find the optimal solution, it generally produces a feasible solution which is sometimes near optimal. These are included in the result. ``Time'' entries are the average time, in seconds, used by \dompp. ``Disjoint'' entries are the average path lengths for all robots if we were to plan each shortest path ignoring other robots. The distance optimal solutions must produce a length no less than this. The next two lines are the average path lengths from the \tompp\, and \dompp\, algorithms. As we can see, \tompp\, alone yields path lengths up to 50\% longer than optimal; \dompp, on the other hand, provided time optimal solutions that are near distance optimal ($< 1\%$ difference). For more than half of the instances, \dompp\, produced truly distance optimal solutions. In fact, \dompp\, produced truly distance optimal solutions for 42 out of the 45 instances with 5-15\% obstacles.
\begin{table}[htp]
\begin{center}
\caption{\label{tab:20x15d}}
\vspace*{-1mm}
\begin{tabular}{cccccccccc}
\hline\hline
\multirow{2}*{} & \multicolumn{9}{c}{\% obs} \\
\cline{2-10}
& 5 && 10 && 15 && 20 && 25 \\
\hline
Time & 26.3 && 23.3 && 42.7 && 57.2 && 81.6 \\
\hline
Disjoint & 12.20 && 11.75 && 12.03 && 12.80 && 12.84 \\
\hline
\dompp & 12.20 && 11.75 && 12.05 && 12.85 && 12.92 \\
\hline
\tompp & 16.47 && 16.60 && 17.59 && 18.83 && 19.33 \\
\hline\hline
\end{tabular}
\vspace*{-2mm}
\end{center}
\end{table}
\vspace*{-2mm}
\subsection{Using {\sc Tompp} as a generic heuristic}
In the last experiment, we exploit \tompp\, as a {\em generic} heuristic for locally resolving path conflicts for large problem instances. By {\em generic}, we mean that the heuristic is not coded to any specific robot/grid setting. In our algorithm, paths are first planned for single robots (ignoring other robots). Afterwards, the robots are moved along these paths until no further progress can be made. We then detect where on the graph progress is stalled and resolve the conflicts locally using \tompp. For every conflict, we apply \tompp\, to its neighborhood of distance 2. The above steps are repeated until a solution is found (a pseudocode sketch is given below, after Table \ref{tab:32x32}). The process can be made into a complete algorithm by allowing the local neighborhood to grow gradually. For evaluation, we ran the above algorithm on a $32\times 32$ grid with 20\% obstacles. We allow each instance to run a maximum of 30 seconds. The results, each as an average over 100 runs for a certain number of robots, are listed in Table \ref{tab:32x32} (keep \begin{table}[htp]
\begin{center}
\caption{\label{tab:32x32}\tompp-based heuristic on a $32 \times 32$ grid with 20\% obstacles.}
\begin{tabular}{rrrrrrr}
\hline\hline
& \multicolumn{6}{c}{Number of Robots} \\
\cline{2-7}
& 25 & 50 & 75 & 100 & 125 & 150\\
\hline
Running time (s) & 0.04 & 0.15 & 0.32 & 1.37 & 3.85 & 10.3 \\
\hline
Fully solved & 100 & 100 & 100 & 100 & 98 & 95 \\
\hline
\% goals reached & 100.0 & 100.0 & 100.0 & 100.0 & 99.4 & 98.6 \\
\hline\hline
\end{tabular}
\vspace*{-2mm}
\end{center}
\end{table}
(Keep in mind that our implementation is Java based and should see a speedup if reimplemented in C++.) While we did not make side-by-side comparisons with the literature due to seemingly small but important differences in problem formulation, the computation time and completion rate of our algorithm appear comparable with state-of-the-art results from other authors.
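A minimal sketch of the heuristic loop described above follows (Python-style pseudocode; all helper names are hypothetical, and bookkeeping such as handling goals inside a neighborhood is elided).
\begin{verbatim}
def tompp_heuristic(graph, starts, goals, radius=2):
    # Plan each robot's shortest path while ignoring all others.
    paths = {r: shortest_path(graph, starts[r], goals[r])
             for r in starts}
    positions = dict(starts)
    while positions != goals:
        moved = advance_along_paths(graph, positions, paths)
        if not moved:
            # Progress is stalled: resolve each conflict locally by
            # running TOMPP on its distance-`radius` neighborhood.
            for conflict in detect_stalls(graph, positions, paths):
                sub = neighborhood(graph, conflict, radius)
                local_plan = solve_tompp_local(sub, positions, paths)
                splice_into(paths, local_plan)
    return paths
\end{verbatim}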
\section{Conclusion and Open Problems}\label{sec:conclusion}
In this paper, we introduced a multiflow based ILP algorithm for planning optimal, collision-free paths for multiple robots on graphs. We provided complete ILP algorithms for solving time optimal and distance optimal $\mpp$ problems. Our experiments confirmed that \tompp\, is a feasible method for planning time optimal paths for tightly coupled problems as well as for larger problems with more free space. Moreover, we showed that \tompp\, can serve as a good heuristic for solving large problem instances efficiently. For distance optimality, \dompp, when combined with \tompp, produces time optimal solutions that are often near distance optimal.
Many interesting open problems on optimal $\mpp$ remain; we mention two here. First, the ILP algorithms have ample room for performance improvements. On one hand, the ILP model can be made leaner. For example, it is clear that some $x_{i,j}$'s can never be set to 1; these should be removed from the model. On the other hand, our use of the Gurobi solver is fairly rudimentary: we simply feed the model to the solver as a mixed integer program (MIP) without specifying any other optimization options. It would therefore not be surprising if tuning the solver's parameters greatly improved its performance on $\mpp$ problems. Second, while \tompp\, can solve hard $\mpp$ problems such as the 25-puzzle, ILP solvers are not tailored to such problems. We thus expect tailored methods, such as heuristic-based search, to solve problems like the $n^2$-puzzle even faster. Looking closely at how ILP solvers handle these problems should provide insights that help in building such heuristics.
\bibliographystyle{plain}
\newcommand{\g}{\mathfrak{g}}
\newcommand{\lcf}{\lbrack\! \lbrack}
\newcommand{\rcf}{\rbrack\! \rbrack}
\newcommand{\todo}[1]{\vspace{5 mm}\par \noindent
\marginpar{\textsc{ToDo}}
\framebox{\begin{minipage}[c]{0.86\textwidth} \tt #1
\end{minipage}}\vspace{5 mm}\par}
\newcommand{\note}[1]{
\begin{minipage}[c]{0.86\textwidth} \tiny {\bf Note:} #1
\end{minipage}}
\newcommand{\Om}{\Omega}
\newcommand{\Xnh}{\mbox{$X_{\textup{nh}}$}}
\def\W{\mathcal{W}}
\def\M{\mathcal{M}}
\def\U{\mathcal{U}}
\def\V{\mathcal{V}}
\def\S{\mathcal{S}}
\def\C{\mathcal{C}}
\def\Ham{\mathcal{H}}
\def\Lag{\mathcal{L}}
\def\R{\mathbb{R}}
\def\D{\mathcal{D}}
\def\F{\mathcal{F}}
\def\RR{\mathcal{R}}
\def\L{\mbox{Leg}}
\def\red{{\mbox{\tiny{red}}}}
\def\nh{{\mbox{\tiny{nh}}}}
\def\kin{{\mbox{\tiny{kin}}}}
\def\B{{\mbox{\tiny{$B$}}}}
\def\subW{{\mbox{\tiny{$\W$}}}}
\def\subC{{\mbox{\tiny{$\C$}}}}
\def\subS{{\mbox{\tiny{$\S$}}}}
\def\subM{{\mbox{\tiny{$\M$}}}}
\def\O{\Omega}
\def\RR{\mathcal{R}}
\def\vecOm{\boldsymbol{\Omega}}
\def\I{\mathbb{I}}
\def\a{\alpha}
\def\b{\beta}
\def\vecom{\boldsymbol{\omega}}
\newcommand{\SO}{\mbox{$\textup{SO}$}}
\def\so{\mathfrak{so}}
\def\se{\mathfrak{se}}
\def\vecep{\boldsymbol{\epsilon}}
\def\vecL{\boldsymbol{\lambda}}
\def\vecR{\boldsymbol{\rho}}
\def\vecgamma{\boldsymbol{\gamma}}
\def\vecalpha{\boldsymbol{\alpha}}
\def\vecbeta{\boldsymbol{\beta}}
\begin{document}
\maketitle
\begin{abstract}
In this paper we study the Jacobiator (the cyclic sum that vanishes
when the Jacobi identity holds) of the almost Poisson brackets
describing nonholonomic systems. We revisit the local formula for
the Jacobiator established by Koon and Marsden in \cite{MarsdenKoon}
using suitable local coordinates and explain how it is related to
the global formula obtained in \cite{paula}, based on the choice of
a complement to the constraint distribution. We use an example to
illustrate the benefits of the coordinate-free viewpoint.
\end{abstract}
\begin{center} {\it Dedicated to the memory of J.E. Marsden}
\end{center}
\tableofcontents
\section{Introduction} \label{S:Intro}
The geometric approach to nonholonomic systems was among the many
research interests of J. E. Marsden, and his contributions to this
area were fundamental. A system with nonholonomic constraints can be
geometrically described by an almost Poisson bracket
\cite{IbLeMaMa1999,Marle1998,SchaftMaschke1994}, whose failure to
satisfy the Jacobi identity, measured by the so-called Jacobiator,
is precisely what encodes the nonholonomic nature of the system.
There is a vast literature on the study of such nonholonomic
brackets and their properties, starting with the early work of
Chaplygin \cite{Chapligyn_reducing_multiplier}, see e.g.
\cite{BS93,IbLeMaMa1999,MovingFrames,Fernandez,BorisovMamaev2008,Naranjo2008,JovaChap}. An explicit formula
for the Jacobiator of nonholonomic brackets, expressed in suitable
local coordinates, was obtained by Koon and Marsden in their 1998
paper \cite{MarsdenKoon}. In the present paper, we revisit the
Koon-Marsden formula of \cite{MarsdenKoon} and explain how it can be
derived from the coordinate-free Jacobiator formula for nonholonomic
brackets obtained in \cite{paula}.
We organize the paper as follows. In Section~\ref{S:NHsystem}, we
recall the hamiltonian viewpoint to systems with nonholonomic
constraints. For a nonholonomic system on a configuration manifold
$Q$, determined by a lagrangian $L:TQ \to \R$ and a nonintegrable
distribution $D$ on $Q$ (the {\it constraint distribution}, defining
the permitted velocities of the system), we consider the induced
{\it nonholonomic bracket} $\{ \cdot, \cdot \}_\nh$ defined on the
submanifold $\M:=\L(D)$ of $T^*Q$, where $\L:T^*Q \to TQ$ is the
Legendre transform (see Section~\ref{Sub:nhb}). In
Section~\ref{Sub:jac} (see Theorem~\ref{T:MarsdenKoon}) we recall
the global formula for the Jacobiator of $\{\cdot,\cdot\}_\nh$ from
\cite{paula}, which depends on the choice of a complement $W$ of the
constraint distribution $D$ such that $TQ=D \oplus W$. As shown in
\cite{paula}, this formula is useful to provide information about
properties of reduced nonholonomic brackets in the presence of
symmetries.
In Section~\ref{S:adapted} we recall the choice of coordinates,
suitably adapted to the constraints, used by Koon and Marsden in
\cite{MarsdenKoon}, and in terms of which their Jacobiator formula
is expressed. We then compare the global and local viewpoints in
Section~\ref{S:coord}, explaining how one can derive the local
Jacobiator formula in \cite{MarsdenKoon} from the coordinate-free
formula in \cite{paula}.
Since the formula in \cite{paula} is coordinate free, it can be used
in examples without specific choices of coordinates. We illustrate
this fact studying the {\it snakeboard}, following
\cite{Ostrowski,MarsdenKoon}; here the natural coordinates in the
problem are not adapted to the constraints so, in principle, the
local formula from \cite{MarsdenKoon} cannot be directly applied.
\medskip
\noindent {\bf Acknowledgments}: I thank the organizers of the {\it Focus
Program on Geometry, Mechanics and Dynamics, the Legacy of Jerry
Marsden}, held at the Fields Institute in Canada, for their
hospitality during my stay. I also benefited from the financial
support given by Mitacs (Canada), and I am especially grateful to
Jair Koiller for his help. I also thank FAPERJ (Brazil) and the GMC
Network (projects MTM2012-34478, Spain) for their support.
\section{Nonholonomic systems} \label{S:NHsystem}
\subsection{The hamiltonian viewpoint} \label{Sub:ham}
A nonholonomic system is a mechanical system on a configuration
manifold $Q$ with constraints on the velocities which are not
derived from constraints in the positions. Mathematically, it is
defined by a lagrangian $L :TQ \to \R$ of mechanical type, i.e., $L=
\kappa - U$ where $\kappa$ is the kinetic energy metric and $U \in
C^\infty(Q)$ is the potential energy, and a nonintegrable
distribution $D$ on $Q$ determining the constraints, see
\cite{BlochBook,CushmannBook}.
If $D$ is an
integrable distribution then the system is called {\em holonomic}.
In order to have an intrinsic formulation of the dynamics of
nonholonomic systems, let us consider the Legendre transform $\L:TQ
\to T^*Q$ associated to the lagrangian $L$. The Legendre transform
is a diffeomorphism since $\L = \kappa^\flat$, where
$\kappa^\flat:TQ \to T^*Q$ is defined by
$\kappa^\flat(X)(Y)=\kappa(X,Y)$. We denote by
$\Ham:T^*Q \to \R$ the hamiltonian function associated to the lagrangian $L$.
We define the constraint submanifold $\M$ of $T^*Q$ by $\M =
\kappa^\flat(D)$. Note that $\M$ is a vector subbundle of $T^*Q$.
We denote by $\tau: \M \to Q$ the restriction to $\M$ of the
canonical projection $\tau_Q:T^*Q \to Q$.
On $\M$ we have a natural 2-form $\Omega_\subM$ given by
$\Omega_\subM := \iota^*\Omega_Q$ where $\iota :\M \to T^*Q$ is the
inclusion and $\Omega_Q$ is the canonical 2-form on $T^*Q$. The
constraints are encoded on a (regular) distribution $\C$ on $\M$
defined, at each $m \in \M$, by
\begin{equation} \label{Eq:C}
\C_m = \{ v \in T_m\M \ : \ T\tau(v) \in D_{\tau(m)} \}.
\end{equation}
It was proven in \cite{BS93} that the point-wise restriction of the
2-form $\Omega_\subM$ to $\C$, denoted by $\Omega_\subM|_\C$, is
nondegenerate. That is, if $X \in \Gamma(\C)$ is such that ${\bf
i}_X \Omega_\subM|_\C \equiv 0$, then $X =0$. Therefore, there is
a unique vector field $X_\nh$ on $\M$, called the {\it nonholonomic
vector field}, such that $X_\nh(m) \in \C_m$ and
\begin{equation} \label{Eq:NH-Dyn}
{\bf i}_{X_\nh} \Omega_\M |_\C = d\Ham_\subM|_\C,
\end{equation}
where $\Ham_\subM := \iota^*\Ham: \M \to \R$.
The integral curves of $X_\nh$ are solutions of the nonholonomic dynamics \cite{BS93}.
In order to write \eqref{Eq:NH-Dyn} in local coordinates, suppose
that the constraint distribution $D$ is described (locally) by the
annihilators of 1-forms $\epsilon^a$ for $a=1,...,k$, that is $D=
\{(q, \dot q) \ : \ \epsilon^a(q) (\dot q) = 0 \mbox{ for all }
a=1,...,k \}.$ If we consider canonical coordinates $(q^i, p_i)$ on
$T^*Q$ then the constraints are given by
$$
\epsilon^a_i (q) \frac{\partial \Ham}{\partial p_i} = 0, \qquad \mbox{for } a=1,...,k,
$$
and \eqref{Eq:NH-Dyn} becomes
$$
\dot q^i = \frac{\partial \Ham}{\partial p_i}, \qquad \dot p_i = - \frac{\partial \Ham}{\partial q^i} + \lambda_a \epsilon^a_i,
$$
where $\lambda_a$ are functions (called the Lagrange multipliers) which are uniquely determined by the fact that the constraints are satisfied.
\subsection{The nonholonomic bracket}\label{Sub:nhb}
Recall that an {\it almost Poisson bracket} on $\M$ is an
$\R$-bilinear bracket $\{\cdot, \cdot \}: C^\infty(\M) \times
C^\infty(\M) \to C^\infty(\M)$ that is skew-symmetric and satisfies
the Leibniz condition:
$$
\{fg,h\} = f\{g,h\} + \{f,h\}g, \qquad \mbox{for } f,g,h \in C^\infty(\M).
$$
If $\{\cdot, \cdot \}$ satisfies the Jacobi identity, then the
bracket is called {\it Poisson}. The {\it hamiltonian vector field}
$X_f$ on $\M$ associated to a function $f \in C^\infty(\M)$ is defined by
\begin{equation} \label{Eq:HamVF}
X_f = \{ \cdot , f\}
\end{equation}
and the {\it characteristic distribution} of $\{\cdot, \cdot \}$ is
a distribution on the manifold $\M$ whose fibers are spanned by the
hamiltonian vector fields. If the bracket is Poisson, then its
characteristic distribution is integrable. However, the converse is
not always true.
From the Leibniz identity it follows that there is a one-to-one
correspondence between almost Poisson brackets $\{\cdot, \cdot \}$
and bivector fields $\pi \in \bigwedge^2(T\M)$ given by
\begin{equation} \label{Eq:Pi-bracket}
\{f,g\} = \pi(df,dg), \qquad f,g \in C^\infty (\M).
\end{equation}
Let us denote by $\pi^\sharp:T^*\M \to T\M$ the map defined by
$\beta(\pi^\sharp(\alpha)) = \pi(\alpha, \beta).$ Then, using
\eqref{Eq:HamVF}, the hamiltonian vector field $X_f$ is also given
by $X_f = - \pi^\sharp(df)$ and the characteristic distribution of
$\pi$ is the image of $\pi^\sharp$. The Schouten bracket $[\pi,\pi]$
(see \cite{MarsdenRatiu}) measures the failure of the Jacobi
identity of $\{\cdot , \cdot \}$ through the relation
\begin{equation}
\frac{1}{2}[\pi, \pi](df,dg,dh) = \{f,\{g,h\}\} + \{g,\{h,f\}\} + \{h,\{f,g\}\} \label{E:Jacobi}
\end{equation}
for $f, g, h \in C^\infty(\M)$. So we refer to the trivector
$\frac{1}{2}[\pi,\pi]$ as the {\it Jacobiator} of $\pi$, which is
zero when $\pi$ is a Poisson bivector.
Coming back to our context, consider a nonholonomic system
on a manifold $Q$ defined by a lagrangian $L$ and a constraint distribution $D$. Due to the nondegeneracy of $\Omega_\M|_\C$,
there is an induced bivector field $\pi_\nh \in \bigwedge^2(T\M)$
defined at each $\alpha \in T^*\M$ by
\begin{equation} \label{Eq:Pinh}
\pi^\sharp_\nh(\a)=X \quad \mbox{if and only if} \quad {\bf i}_X \Omega_\subM |_\C = -\alpha|_\C.
\end{equation}
The characteristic distribution of $\pi_\nh$ is the distribution
$\C$ defined in \eqref{Eq:C}. Since $\C$ is not integrable,
$\pi_\nh$ is not Poisson.
The bivector field $\pi_\nh$ is called the {\it nonholonomic
bivector field} \cite{SchaftMaschke1994,Marle1998, IbLeMaMa1999} and
it describes the dynamics in the sense that
\begin{equation} \label{Eq:Xnh}
\pi_\nh^\sharp(d\Ham_\subM) = -X_\nh.
\end{equation}
By \eqref{Eq:Pi-bracket}, the nonholonomic bivector $\pi_\nh$
defines uniquely an almost Poisson bracket $\{\cdot, \cdot \}_\nh$
on $\M$, called the {\it nonholonomic bracket}. From \eqref{Eq:Pinh}
we observe that
$$
\{f,g \}_\nh = \Omega_\subM(X_f, X_g) \qquad \mbox{for } f,g \in C^\infty(\M),
$$
where $X_f=-\pi_\nh^\sharp(df)$ and $X_g=-\pi_\nh^\sharp(dg)$. The
nonholonomic vector field \eqref{Eq:Xnh} is equivalently defined
through the equation $X_\nh = \{ \cdot , \Ham_\subM\}_\nh$.
\subsection{The Jacobiator formula}\label{Sub:jac}
Recall that $\C$ is a smooth distribution on $\M$. Choose a
complement $\W$ of $\C$ on $T\M$ such that, for each $m \in \M$,
\begin{equation} \label{Eq:SplittingOfTM}
T_m\M = \C_m \oplus \mathcal{W}_m.
\end{equation}
Let $P_\subC :T\M \to \C$ and $P_\subW :T\M \to \W$ be the
projections associated to the decomposition
\eqref{Eq:SplittingOfTM}. Since $P_\subW :T\M \to \W$ can be seen as
a $\W$-valued 1-form, following \cite{paula}, we define the
$\W$-valued 2-form ${\bf K}_\subW$ given by
\begin{equation}
\label{Def:K} {\bf K}_\subW(X,Y) = - P_\subW( [P_\subC (X),
P_\subC(Y)] ) \qquad \mbox{for } X,Y \in \mathfrak{X}(\M).
\end{equation}
Once a complement $\W$ of $\C$ is chosen, we obtain a
coordinate-free formula for the Jacobiator of the nonholonomic
bracket.
\begin{theorem}\cite{paula} \label{T:MarsdenKoon}
The following holds:
\begin{equation} \label{Eq:Jacobiator}
\frac{1}{2}[\pi_{\emph \nh}, \pi_{\emph \nh}] (\alpha, \beta, \gamma)= \Omega_\M ({\bf K}_\subW (\pi^\sharp_{\emph \nh}(\alpha), \pi^\sharp_{\emph \nh}(\beta) ), \pi^\sharp_{\emph \nh}(\gamma)) - \gamma \left( {\bf K}_\subW(\pi^\sharp_{\emph \nh}(\alpha), \pi^\sharp_{\emph \nh}(\beta)) \right) + \textup{cyclic},
\end{equation}
for $\alpha, \beta, \gamma \in T^*\M$.
\end{theorem}
In fact, a more general formula appeared in \cite{paula}, valid for
any bivector field $\pi_\B$ gauge related to $\pi_\nh$. In that
context, this formula was used to understand under which
circumstances the reduction of $\pi_\B$ by symmetries had an
integrable characteristic distribution (even if it was not Poisson).
We will now show how this formula recovers the coordinate Jacobiator
formula obtained in \cite{MarsdenKoon}.
\section{The Koon-Marsden adapted coordinates}\label{S:adapted}
In this section we will recall the Koon-Marsden approach to writing
the Jacobiator of a nonholonomic bracket, based on a suitable choice
of coordinates of the manifold $Q$. After this, we will write the
objects presented in Section \ref{S:NHsystem} (such as the 2-forms
$\Omega_\M$ and ${\bf K}_\subW$, and the bivector $\pi_\nh$) in such
local coordinates in order to see the equivalence between the local
and global viewpoints.
We start by recalling the coordinates chosen in \cite{MarsdenKoon}.
Consider a nonholonomic system given by a lagrangian $L$ and a
nonintegrable distribution $D$. Let $\epsilon^a$ for $a=1,...,k$ be
1-forms that span the annihilator of $D$, i.e., $D^\circ =
\textup{span}\{\epsilon^a\}$. The authors in \cite{MarsdenKoon}
introduce local coordinates $(q^i) = (r^\a,s^a)$ on $Q$ for which
each 1-form $\epsilon^a$ has the form
\begin{equation}\label{Eq:CoordMK}
\epsilon^a = ds^a +A_\a^a(r,s) dr^\a,
\end{equation}
where $A_\a^a$ are functions on $Q$ for $\a=1,...,n-k$ and
$a=1,...,k$. During the present paper, we refer to the coordinates
$(r^\a, s^a)$ such that \eqref{Eq:CoordMK} is satisfied as {\it
coordinates adapted to the constraints}.
These coordinates induce a (local) basis of $D$ given by
$\left\{X_\a:= \frac{\partial}{\partial r^\a} - A_\a^a
\frac{\partial}{\partial s^a}\right\}$. We complete the basis
$\{X_\a\}$ and $\{\epsilon^a\}$ in order to obtain dual basis on
$TQ$ and $T^*Q$, that is
$$
TQ=\textup{span}\left\{X_\a, \frac{\partial}{\partial s^a} \right\}
\quad \mbox{and} \quad T^*Q=\textup{span}\{dr^\a, \epsilon^a\}.
$$
Let $(\tilde p_\a, \tilde p_a)$ be the coordinates on $T^*Q$
associated to the basis $\{dr^\a, \epsilon^a\}$. Since $\M =
\textup{span}\{\kappa^\flat(X_\a)\} \subset T^*Q$ then
\begin{equation} \label{Eq:M}
\M=\{(q^i, \tilde p_a,\tilde p_\a) \ : \ \tilde p_a =
[\kappa_{a\a}][\kappa_{\a\beta}]^{-1}\tilde p_\beta =J_a^\beta
\tilde p_\beta\},
\end{equation}
where $[\kappa_{a\a}]$ denotes the $(k\times(n-k))$-matrix with
entries given by $\kappa_{a\a} = \kappa( \frac{\partial}{\partial
s^a} ,X_\a)$, $[\kappa_{\a\beta}]^{-1}$ is the inverse matrix
associated to the invertible $((n-k)\times(n-k))$-matrix with
entries given by $\kappa_{\a\beta} = \kappa(X_\a,X_\beta)$ and
$J_a^\beta$ are the functions on $Q$ representing the entries of the
matrix $[\kappa_{a\a}][\kappa_{\a\beta}]^{-1}$. Therefore, each
element $(r^\a, s^a; \tilde p_\a)$ represents a point on the
manifold $\M$.
In \cite{MarsdenKoon} the Jacobiator formula is written in terms of
the curvature of an Ehresmann connection.
The local coordinates $(r^\a, s^a)$ induce a fiber bundle with projection given by $\upsilon(r^\a,s^a) = r^\a$.
Let us call $W$ the vertical distribution defined by this projection.
The Ehresmann connection $A$ on $\upsilon:Q=\{r^\a,s^a\}\to R=\{r^\a\}$ is chosen in such a way that its horizontal space agrees with the distribution $D$. The connection $A$ is represented by a
vector-valued differential form given, at each $X\in TQ$, by
\begin{equation} \label{Eq:A}
A(X)=\epsilon^a(X)\frac{\partial}{\partial s^a}.
\end{equation}
The {\it curvature} associated to this connection is
a vector-valued 2-form ${\bf K}_W$ defined on $X,Y \in
\mathfrak{X}(Q)$ by
\begin{equation}\label{Eq:Kw-MK}
{\bf K}_W(X,Y) = -A([P_D (X), P_D(Y)] ),
\end{equation}
where $P_D:TQ \to TQ$ is the projection to $D$ given by $P_D(X)=dr^\a(X)X_\a$.
In coordinates, the curvature ${\bf K}_W$ is given by the following
formula \cite[Sec. 2.1]{MarsdenKoon}:
$$
{\bf K}_W(X,Y)= d \epsilon^a (P_D (X), P_D(Y))\frac{\partial}{\partial s^a},
$$
hence, locally,
\begin{equation} \label{Eq:depsilon-coord}
d\epsilon^a |_D = C_{\a\beta}^a dr^\a \wedge dr^\beta |_D,
\end{equation}
where $\displaystyle{C_{\a\beta}^a (r,s)= \frac{\partial
A_\beta^a}{\partial r^\a} - A_\a^b \frac{\partial
A_\beta^a}{\partial s^b} }$. Let us define
\begin{equation} \label{Eq:Calphabeta}
K_{\a\beta}^a = C_{\a\beta}^a - C_{\beta\a}^a.
\end{equation}
For each $a=1,...,k$ the coefficients $K_{\a\beta}^a$ are
skew-symmetric and $d\epsilon^a |_D = K_{\a\beta}^a dr^\a \wedge
dr^\beta |_D,$ for $\a < \beta$. Therefore, if $X,\bar{X} \in D$
then $d\epsilon^a (X, \bar{X}) = K_{\a\beta}^a v^\a\bar{v}^\beta$
where $X= v^\a X_\a$ and $\bar{X}=\bar{v}^\beta X_\beta$.
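As a consistency check of \eqref{Eq:depsilon-coord} and \eqref{Eq:Calphabeta}, consider a single constraint 1-form $\epsilon = ds + A_1(r^1,r^2,s)\,dr^1 + A_2(r^1,r^2,s)\,dr^2$ (one index $a$, two indices $\a$). The identity $d\epsilon(X_1,X_2) = C_{12} - C_{21} = K_{12}$ can then be verified symbolically; the following \texttt{sympy} sketch (ours, not part of \cite{MarsdenKoon} or \cite{paula}) does so:
\begin{verbatim}
import sympy as sp

r1, r2, s = sp.symbols('r1 r2 s')
x  = [r1, r2, s]
A1 = sp.Function('A1')(r1, r2, s)
A2 = sp.Function('A2')(r1, r2, s)

e  = [A1, A2, 1]     # epsilon = A1 dr1 + A2 dr2 + ds
X1 = [1, 0, -A1]     # X_alpha = d/dr^alpha - A_alpha d/ds
X2 = [0, 1, -A2]

# d(epsilon)(X1, X2), computed from the components of d(epsilon)
d_eps = sum(sp.diff(e[j], x[i]) * (X1[i]*X2[j] - X1[j]*X2[i])
            for i in range(3) for j in range(3))

C12 = sp.diff(A2, r1) - A1*sp.diff(A2, s)   # C^a_{alpha beta}
C21 = sp.diff(A1, r2) - A2*sp.diff(A1, s)
print(sp.simplify(d_eps - (C12 - C21)))     # prints 0
\end{verbatim}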
\begin{remark}
Observe that in \cite{MarsdenKoon}, the 1-forms $\epsilon^a$ were denoted by $\omega^a$, while ${\bf K}_W$ was denoted by $B$ and the coefficients $K_{\a\beta}^a$ were $-B_{\a\beta}^a$. In this notation, for $\dot q \in D$ one has $d\omega^b(\dot q, \cdot )|_D = -B_{\a\beta}^b \dot r^\a dr^\beta |_D$ (observe the correction in the sign with respect to the equation in \cite[Sec.~2.1]{MarsdenKoon}).
\end{remark}
Finally, in \cite[Theorem 2.1]{MarsdenKoon} the almost Poisson
bracket $\{ \cdot, \cdot \}_\subM$ describing the dynamics of a
nonholonomic system was written following \cite{SchaftMaschke1994}
but in local coordinates on $Q$ adapted to the constraints
\eqref{Eq:CoordMK}. That is, $\{ \cdot, \cdot \}_\subM$ was computed
from the canonical Poisson bracket on $T^*Q$ but written in terms of
the adapted coordinates $(r^\a, s^a, \tilde p_\a, \tilde p_a)$. As a
result, the almost Poisson bracket $\{ \cdot, \cdot \}_\subM$ on
$\M$, written in local coordinates $(r^\a, s^a, \tilde p_\a)$, has
the following form \cite{MarsdenKoon}
\begin{equation}\label{Eq:NHbracket-coord} \{q^i,q^j\}_\subM = 0, \quad
\{r^\a,\tilde p_\beta \}_\subM = \delta_\a^\beta, \quad
\{s^a,\tilde p_\a\}_\subM = -A_\a^a,\quad
\{\tilde p_\a,\tilde p_\beta\}_\subM = K_{\a\beta}^b J_b^\gamma \tilde p_\gamma.
\end{equation}
\section{The coordinate version of the Jacobiator formula}
\label{S:coord}
\subsection{Interpretation of the adapted coordinates}
In this section, we will relate the choice of the coordinates
proposed in \cite{MarsdenKoon} with the choice of a complement $\W$
done in \cite{paula} (see \eqref{Eq:CoordMK} and
\eqref{Eq:SplittingOfTM}, respectively). We will also connect the
{\it curvature} \eqref{Eq:Kw-MK} with the 2-form \eqref{Def:K}, and
the nonholonomic bivector $\pi_\nh$ with the bracket $\{ \cdot ,
\cdot \}_\subM$ given in \eqref{Eq:Pinh} and
\eqref{Eq:NHbracket-coord}, respectively.
Consider a nonholonomic system on a manifold $Q$ given by a
lagrangian $L$ and a nonintegrable distribution $D$. Let us consider
local coordinates $(r^\a, s^a)$ adapted to the constraints as in
\eqref{Eq:CoordMK}.
\begin{lemma}\label{L:Equival-Coord}
The choice of coordinates $(r^\a,s^a)$ adapted to the constraints
\eqref{Eq:CoordMK}, induce a complement $W$ of $D$ on $TQ$ such that
\begin{equation} \label{Eq:Decomp-TQ}
TQ = D \oplus W, \qquad \mbox{where} \quad W =
\textup{span}\left\{\frac{\partial}{\partial s^a} \right\}.
\end{equation}
\end{lemma}
\medskip
The projection $P_W: TQ \to W$ associated to the decomposition
\eqref{Eq:Decomp-TQ} is interpreted in \cite{MarsdenKoon} as the
Ehresmann connection $A$ \eqref{Eq:A}. In this context we compare
the curvature ${\bf K}_W$ defined in \eqref{Eq:Kw-MK} (see
\cite{MarsdenKoon}) with the $\W$-valued 2-form ${\bf K}_\subW$
defined in \eqref{Def:K}.
Recall that the submanifold $\M = \kappa^\flat(D) \subset T^*Q$ is
described by local coordinates $(r^\a, s^a; \tilde p_\a)$ (see
\eqref{Eq:M}). Locally $T^*\M$ is generated by the basis
$\mathfrak{B}_{T^*\M} = \{dr^\a,\epsilon^a, d\tilde p_\a\}$. During
the rest of the paper, when there is no risk of confusion, we will
use the same notation for 1-forms on $Q$ and their pull back to $\M$
and $T^*Q$, (i.e., $\tau^* dr^\a= dr^\a$ and $\tau^*\epsilon^a=
\epsilon^a$ where $\tau:\M \to Q$ is the canonical projection).
Since $\tau$-projectable vector fields generate $T\M$ at each point, we can consider a complement $\W$ of $\C$ generated by $\tau$-projectable vector fields $Z_a$ such that $T\tau(Z_a) \in W$.
That is, \begin{equation}\label{Eq:C+W-coord}
\C=\textup{span}\left\{ X_\a, \frac{\partial}{\partial \tilde p_\a}
\right\} \quad \mbox{and} \quad \W =\textup{span}\left\{Z_a \ : \ T\tau(Z_a) = \frac{\partial}{\partial s^a} \right\}.
\end{equation}
\begin{lemma} \label{L:KGamma} Let $\W$ be a complement of $\C$ as in \eqref{Eq:C+W-coord} where $W$ is the complement of $D$ induced by the coordinates $(r^\a, s^a)$ as in Lemma \ref{L:Equival-Coord}.
\begin{enumerate}
\item[$(i)$]
The $\W$-valued 2-form ${\bf K}_{\subW}$ and the curvature ${\bf K}_W$, defined in \eqref{Def:K} and \eqref{Eq:Kw-MK} respectively, are related, at each $X,Y \in T\M$, by ${\bf K}_W (T\tau(X),T\tau(Y)) = T\tau({\bf K}_{\subW}(X,Y)).$
In local coordinates $(r^\a, s^a)$ adapted to the constraints
\eqref{Eq:CoordMK}, the following holds:
$$
{\bf K}_{\subW} |_\C = (C_{\a\beta}^a dr^\a\wedge dr^\beta |_\C )\otimes Z_a.
$$
\item[$(ii)$]
Let $\bar{\W}$ be a different complement of $\C$ such that $T\tau(\W) = T\tau(\bar{\W}) = W$. For $X,Y \in \Gamma(\C)$ we have
$$
{\bf K}_{\subW}(X,Y) - {\bf K}_{\bar{\subW}} (X,Y) \in \Gamma(\C).
$$
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ During this proof and to avoid confusion, we will work with the
basis $\{\tau^*dr^\a,\tau^*\epsilon^a, d\tilde p_\a\}$ of $T^*\M$,
keeping $dr^\a$ and $ds^a$ to denote 1-forms on $Q$. Let us consider
the basis $\mathfrak{B}=\{X_\a, \frac{\partial}{\partial \tilde
p_\a}, Z_a\}$ of $T\M$ adapted to $\C \oplus \W$ and its dual $\mathfrak{B}^*= \{\tau^* dr^\a, \Psi_\a , \tau^*\epsilon^a\}$, where $\Psi_\beta(X_\a) = \Psi_\beta(Z_a) = 0$ and
$\Psi_\beta( \frac{\partial}{\partial \tilde p_\a} ) =
\delta_{\a\beta}$.
Then, for $X,Y \in \Gamma(\C)$,
$$
{\bf K}_{\subW}(X,Y) = - P_{\subW}( [X,Y] ) = - \tau^*\epsilon^a([X,Y])Z_a = d\tau^*\epsilon^a(X,Y) Z_a = d\epsilon^a(T\tau(X), T\tau(Y)) Z_a.
$$
Therefore, $T\tau({\bf K}_{\subW}(X,Y)) = d\epsilon^a(T\tau(X), T\tau(Y)) \frac{\partial}{\partial s^a} = {\bf K}_W(T\tau(X), T\tau(Y)).$
Finally, since $T\tau(X), T\tau(Y) \in \Gamma(D)$ (see \eqref{Eq:C})
and using \eqref{Eq:depsilon-coord} we obtain
$$
{\bf K}_{\subW}|_\C = (C_{\a\beta}^a \tau^*dr^\a \wedge \tau^*dr^\beta |_\C ) \otimes Z_a.
$$
Using our simplified notation ($\tau^*dr^\a = dr^\a$) we obtain the desired formula.
$(ii)$ Let $\mathfrak{B}$ and $\mathfrak{B}^*$ be the basis as in item $(i)$. Consider also $\bar{\mathfrak{B}}=\{X_\a, \frac{\partial}{\partial \tilde
p_\a}, \bar{Z}_a\}$ a basis of $T\M$ adapted to $T\M=\C \oplus \bar{\W}$ such that $T\tau(\bar{Z}_a) = \frac{\partial}{\partial s^a}$ and its dual $\bar{\mathfrak{B}}^*=\{dr^\a,\bar{\Psi}_\a, \epsilon^a\}$,
such that $\bar{\Psi}_\beta(X_\a) = \bar{\Psi}_\beta(\bar{Z}_a) = 0$ and $\bar{\Psi}_\beta(\frac{\partial}{\partial \tilde p_\a} ) = \delta_{\a\beta}$. Then we have that, for $X,Y \in \C$,
$$
{\bf K}_{\bar{\subW}} (X,Y) = - P_{\bar{\subW}} ([X,Y]) = \epsilon^a([X,Y]) \bar{Z}_a = {\bf K}_{\subW}(X,Y) + \epsilon^a([X,Y]) \otimes(\bar{Z}_a - Z_a).
$$
Since $\bar{Z}_a - Z_a \in \textup{Ker}\, T\tau \subset \C$ then
${\bf K}_\subW(X,Y) - {\bf K}_{\bar{\subW}} (X,Y) \in \C$.
\end{proof}
\begin{remark} \label{Prop:Ksemi-basic} Note that the coordinates description of
${\bf K}_\subW$ shows that it is semi-basic with respect to the bundle projection $\tau:\M \to Q$, i.e., \ ${\bf i}_X {\bf K}_\subW =0$ \ if \ $T\tau(X)=0$. This is in agreement with \cite[Prop.~3.1]{paula}.
\end{remark}
In order to write the nonholonomic bivector $\pi_\nh$ using
\eqref{Eq:Pinh} but in local coordinates $(r^\a, s^a; \tilde p_\a)$
on $\M$ we study the local description of the 2-section
$\Omega_\subM|_\C$.
The canonical 1-form $\Theta_Q$ on $T^*Q$ is given, in local
coordinates $(r^\a, s^a; \tilde p_\a, \tilde p_a)$, by $\Theta_Q =
\tilde p_\a dr^\a + \tilde p_a \epsilon^a$. Then, it is
straightforward to see that the canonical 2-form $\Omega_Q$ is
written locally as
$$
\Omega_Q= dr^\a \wedge d\tilde p_\a + \epsilon^a \wedge d\tilde p_a - \tilde p_a d\epsilon^a.
$$
Recall that $\iota: \M \to T^*Q$ is the natural inclusion, so the pull back of $\Omega_Q$ to $\M$ is given by
\begin{equation} \label{Eq:OmM-coord}
\Omega_\subM = \iota^*\Omega_Q = dr^\a \wedge d\tilde p_\a + \iota^* \epsilon^a \wedge d\iota^*(\tilde p_a) - \iota^*(\tilde p_a) d(\iota^*\epsilon^a),
\end{equation}
where $dr^\a$ and $d\tilde p_\a$ are considered as 1-forms on $\M$.
Therefore,
\begin{equation}\label{Eq:OmC-coord}
\begin{split}
\Omega_\subM |_\C & = dr^\a \wedge d\tilde p_\a - \iota^*(\tilde p_a) \iota^*(d \epsilon^a) |_\C \\
& = dr^\a \wedge d\tilde p_\a - J_a^\delta \tilde p_\delta C_{\a\beta}^a dr^\a \wedge dr^\beta |_\C,
\end{split}
\end{equation}
where in the last equation we use \eqref{Eq:M} and the coordinate
version of $d \epsilon |_D$ given in \eqref{Eq:depsilon-coord}.
Applying \eqref{Eq:Pinh} to the 2-form $\Omega_\subM$ and $\C$,
given in \eqref{Eq:OmM-coord} and \eqref{Eq:C+W-coord} respectively, we
compute the nonholonomic bivector field $\pi_\nh$ on $\M$:
\begin{equation}\label{Eq:NH-bivector}
\pi_\nh^\sharp(dr^\a) =\frac{\partial}{\partial \tilde p_\a},\qquad
\pi_\nh^\sharp(ds^a) = -A_\a^a \frac{\partial}{\partial \tilde
p_\a}, \qquad \pi_\nh^\sharp(d\tilde p_\a) = - X_\a + J_a^\delta
\tilde p_\delta K_{\alpha\beta}^a\frac{\partial}{\partial \tilde
p_\beta}.
\end{equation}
\begin{lemma}\label{L:NHbracket}
The almost Poisson bracket $\{\cdot , \cdot\}_\subM$ given in
\eqref{Eq:NHbracket-coord} (see \cite[Theorem 2.1]{MarsdenKoon}) is
the coordinate version of the nonholonomic bracket $\{\cdot , \cdot
\}_{\emph\nh}$ associated to the bivector field $\pi_{\emph\nh}$
obtained from \eqref{Eq:Pinh}.
\end{lemma}
\subsection{The Jacobiator in adapted coordinates}
Consider a nonholonomic system on a manifold $Q$ given by a lagrangian $L$ and a constraint distribution $D$ such that
$\epsilon^a$, for $a=1,...,k$, are 1-forms generating $D^\circ$.
Consider local coordinates $(r^\a,s^a)$ on $Q$ adapted to the
constraints as in \eqref{Eq:CoordMK}. Let $(r^\a, s^a;\tilde p_\a)$
be the coordinates on the manifold $\M=\kappa^\flat(D)$. By Lemma
\ref{L:NHbracket}, the almost Poisson bracket $\{\cdot, \cdot\}_\M$
\eqref{Eq:NHbracket-coord} is the coordinate version of the bivector
field $\pi_\nh$ given in \eqref{Eq:Pinh}, and thus Koon-Marsden
formula for the Jacobiator can be written directly with respect to
$\{ \cdot , \cdot \}_\nh$.
\begin{theorem}\cite[Sec.~2.5]{MarsdenKoon}
The Jacobiator of the nonholonomic bracket $\{\cdot, \cdot
\}_{\emph\nh}$, in coordinates $(r^a, s^a;\tilde p_\a)$ on $\M$, is
given by the following formula
\begin{eqnarray}
\{\tilde p_\gamma, \{r^\a,\tilde p_\beta \}_{\emph\nh} \}_{\emph\nh} + \textup{cyclic} & = & J_b^\a K_{\beta\gamma}^b, \nonumber\\
\{\tilde p_\beta, \{s^a,\tilde p_\a\}_{\emph\nh} \}_{\emph\nh} + \textup{cyclic} & = & -K_{\a\beta}^a - A_\gamma^a J_b^\gamma K_{\a\beta}^b, \label{Eq:JacobMK}\\
\{\tilde p_\gamma, \{\tilde p_\a,\tilde p_\beta\}_{\emph\nh} \}_{\emph\nh} + \textup{cyclic} & = & \tilde p_\tau J_a^\tau \frac{\partial A_\gamma^a}{\partial s^b}K_{\a\beta}^b + \tilde p_\tau J_a^\tau K_{\delta\gamma}^a J_b^\delta K_{\a\beta}^b - \tilde p_\tau K_{\a\beta}^b \left( \frac{\partial J_b^\tau}{\partial
r^\gamma} - A_\gamma^a \frac{\partial J_b^\tau}{\partial s^a} \right) + \textup{cyclic}, \nonumber
\end{eqnarray}
with all other combinations equal to zero and where $J_b^\a$,
$K_{\a\beta}^a$ and $A_\a^a$ are the functions on $Q$ defined in
\eqref{Eq:M}, \eqref{Eq:Calphabeta} and \eqref{Eq:CoordMK}, respectively.
\end{theorem}
The next result relates the coordinate formula \eqref{Eq:JacobMK} of
the Jacobiator with the coordinate-free formula given in Theorem
\ref{T:MarsdenKoon}.
\begin{theorem} \label{T:Equivalence}
Let $(r^\a, s^a)$ be coordinates on $Q$ adapted to the constraints
as in \eqref{Eq:CoordMK} and let $W$ be the complement of $D$
induced by the coordinates (Lemma \ref{L:Equival-Coord}). The
Koon-Marsden Jacobiator formula \eqref{Eq:JacobMK} for the
nonholonomic bracket $\{\cdot, \cdot \}_{\emph\nh}$ is the
coordinate version of the Jacobiator formula given in Theorem
\ref{T:MarsdenKoon} for $\W$ any complement of $\C$ as in \eqref{Eq:C+W-coord}.
\end{theorem}
\begin{proof}
In order to prove the equivalence we write the Schouten bracket
$[\pi_\nh,\pi_\nh]$ using Theorem \ref{T:MarsdenKoon} evaluated on
the elements $\{dr^\a, ds^a, d\tilde p_\a\}$.
First, observe that by Remark \ref{Prop:Ksemi-basic}, the 2-form
${\bf K}_\subW$ defined in \eqref{Def:K} is annihilated by any of the elements $\pi_\nh^\sharp(dr^\a)$ or $\pi_\nh^\sharp(ds^\a)$ (see \eqref{Eq:NH-bivector}). On the other hand, by Lemma \ref{L:KGamma}$(ii)$, we have that ${\bf
K}_\subW(\pi_\nh^\sharp(d\tilde p_\a), \pi_\nh^\sharp(d\tilde
p_\beta)) = K_{\a\beta}^a Z_a$, where $Z_a \in T\M$ is such that $T\tau(Z_a)= \frac{\partial}{\partial s^a}$. Moreover, observe that $\epsilon^a(Z_b) =
\delta^a_b$ and $dr^\a(Z_a)=0$.
Therefore, using the coordinate version of $\Omega_\M$
\eqref{Eq:OmM-coord} in Theorem \ref{T:MarsdenKoon} we obtain
\begin{equation*} \begin{split}
\frac{1}{2}[\pi_\nh, \pi_\nh] (dr^\a,d\tilde p_\beta,d\tilde p_\gamma)
&= \Omega_\subM ({\bf K}_\subW(\pi_\nh^\sharp(d\tilde p_\beta), \pi_\nh^\sharp(d\tilde p_\gamma) ), \pi_\nh^\sharp(d r^\a) ) - dr^\a ( {\bf K}_\subW(\pi_\nh^\sharp(d\tilde p_\beta),\pi_\nh^\sharp(d\tilde p_\gamma) ) ) \\
& = \ J_a^\a K_{\beta\gamma}^a, \\
\frac{1}{2} [\pi_\nh, \pi_\nh] (d s^a, d\tilde p_\alpha,d\tilde p_\beta) & =
\Omega_\subM({\bf K}_\subW(\pi_\nh^\sharp(d\tilde p_\a),\pi_\nh^\sharp(d\tilde p_\beta) ), \pi_\nh^\sharp(d s^a) ) - ds^a ( {\bf K}_\subW(\pi_\nh^\sharp(d\tilde p_\a),\pi_\nh^\sharp(d\tilde p_\beta) ) ) \\ & = \ - K_{\a\beta}^b A_\gamma^a J_b^\gamma - K_{\a\beta}^a.
\end{split}
\end{equation*}
Finally, let $Y_a := Z_a-\frac{\partial}{\partial s^a} \in
\textup{Ker}\, T\tau \subset \C$. Then, we have that
\begin{equation*} \begin{split}
\frac{1}{2}[\pi_\nh, \pi_\nh] (d \tilde p_\a, d\tilde p_\beta, d \tilde
p_\gamma)
& = \Omega_\subM(K_{\a\beta}^a Z_a, \pi_\nh^\sharp(d \tilde p_\gamma) ) - d\tilde p_\gamma(K_{\a\beta}^a Z_a ) + \textup{cyclic} \\
& = \Omega_\subM(K_{\a\beta}^a \frac{\partial}{\partial s^a}, \pi_\nh^\sharp(d \tilde p_\gamma) ) + \Omega_\subM(K_{\a\beta}^a Y_a, \pi_\nh^\sharp(d \tilde p_\gamma) ) - d\tilde p_\gamma(K_{\a\beta}^a Y_a ) + \textup{cyclic} \\
& = \Omega_\subM(K_{\a\beta}^a \frac{\partial}{\partial s^a}, \pi_\nh^\sharp(d \tilde p_\gamma) ) + \textup{cyclic} \\
& = \ \tilde p_\tau J_a^\tau \frac{\partial A_\gamma^a}{\partial s^b}K_{\a\beta}^b + \tilde p_\tau J_a^\tau K_{\delta\gamma}^a J_b^\delta K_{\a\beta}^b - \tilde p_\tau K_{\a\beta}^b \left( \frac{\partial J_b^\tau}{\partial
r^\gamma} - A_\gamma^a \frac{\partial J_b^\tau}{\partial s^a} \right) + \textup{cyclic}.
\end{split} \end{equation*}
The Jacobiator on the other combinations of elements of the basis
$\{dr^\a, ds^a,d\tilde p_\a\}$ is zero. Thus, the relation
\eqref{E:Jacobi} implies that the Jacobiator formula in Theorem
\ref{T:MarsdenKoon} evaluated in coordinates \eqref{Eq:CoordMK}
gives the Koon-Marsden formula \eqref{Eq:JacobMK}.
Observe that in this proof we are implicitly using Lemma
\ref{L:KGamma} for $\W$ and $\W_0 = \textup{span}\left\{
\frac{\partial}{\partial s^a} \right\}$.
\end{proof}
\begin{remark}
From \eqref{Eq:Jacobiator} it is straightforward to see that if the
2-form ${\bf K}_\subW$ is zero then the bivector $\pi_\nh$ is
Poisson. On the other hand, it was observed in \cite{MarsdenKoon}
that if the curvature ${\bf K}_W$ is zero then the Jacobi identity
of $\{ \cdot , \cdot \}_\nh$ is satisfied. Using the equivalence
between ${\bf K}_\subW$ and ${\bf K}_W$ (Lemma
\ref{L:KGamma}$(i)$) we see that both 2-forms are zero when $D$
is involutive, i.e., the system is holonomic.
\end{remark}
\begin{remark}(Symmetries)
If the nonholonomic system admits a group of symmetries $G$
then $\pi_\nh$ is $G$-invariant with respect to the induced (lifted)
action on $\M$. As a consequence, the orbit projection $\M \to \M/G$
induces a reduced bivector field $\pi_\red^\nh$ on $\M/G$ describing
the reduced dynamics. Consider $(r^\a, s^a)$ adapted coordinates to
the constraints as in \eqref{Eq:CoordMK} and $W$ the induced
complement of $D$ in $TQ$ given by Lemma \ref{L:Equival-Coord}. Let
$V$ (respectively $\V$) be the tangent space to the orbit of the
$G$-action on $Q$ (respect. on $\M$). If $W \subset V$ then there is a unique choice of the complement $\W$ contained in $\V$:
$$
\W:= (T\tau |_\V)^{-1}(W).
$$
With this choice of $\W$, Theorem \ref{T:MarsdenKoon} induces a formula for the
Jacobiator of the reduced bivector $\pi_\red^\nh$ (see
\cite[Sec.4]{paula}).
There are a number of examples of systems verifying that the complement $W$
induced by the coordinates adapted to the constraints
\eqref{Eq:CoordMK} (as in Lemma \ref{L:Equival-Coord}) is vertical with respect to a $G$-symmetry, including the vertical rolling disk, the nonholonomic particle and the Chaplygin sphere, see \cite[Sec.~7]{paula}.
\end{remark}
On the other hand, it may happen that a given example is described
in coordinates that are not adapted to the constraints. Then, it is
better to use the coordinate free formula of Theorem
\ref{T:MarsdenKoon}.
\section{Example: the snakeboard}\label{S:ex}
The snakeboard describes the dynamics of a board with two sets of
actuated wheels, one on each end of the board. A human rider
generates forward motion by twisting his body back and forth, and
thus producing a movement on the wheels. This effect is modeled as a
momentum wheel which sits in the middle of the board and is allowed
to spin about the vertical axis. The configuration of the snakeboard
is given by the position and orientation of the board in the plane,
the angle of the momentum wheel and the angles of the back and front
wheels. Therefore, the configuration manifold $Q$ is given by
$Q=SE(2)\times (-\pi/2,\pi/2) \times S^1$ with local coordinates
$q=(x,y,\theta,\psi,\phi)$, where $(x,y,\theta)$ represents the
position and orientation of the center of the board, $\psi$ is the
angle of the momentum wheel relative to the board and $\phi$ is the
angle of the front and back wheel as in \cite{Ostrowski} (for
details see \cite{BKMM} and \cite{MarsdenKoon}).
The Lagrangian is given by
$$
L(q, \dot q) = \frac{m}{2} (\dot x^2+\dot y^2) + \frac{mr^2}{2} \dot
\theta^2 + \frac{J_0}{2}\dot \psi^2 + J_0\dot \psi \dot \theta + J_1
\dot \phi^2,$$ where $m$ the total mass of the board, $r$ is the
distance between the center of the board and the wheels, $J_0$ is
the inertia of the rotor and $J_1$ is the inertia of each wheel.
The (nonintegrable) constraint distribution $D$ is given by the
annihilator of the following 1-forms:
\begin{equation} \label{Ex:1forms-const}
\begin{split}
\epsilon^1&=-\sin(\theta + \phi)dx+\cos(\theta+\phi)dy-r\cos\phi d\theta\\
\epsilon^2&=-\sin(\theta - \phi)dx+\cos(\theta-\phi)dy+r\cos\phi d\theta.
\end{split}
\end{equation}
\begin{remark}
The coordinates $(x,y,\theta,\psi,\phi)$ on $Q$ are not adapted to
the 1-forms of constraints $\epsilon^1, \epsilon^2$. In \cite{MK2} a
simplified version is considered where, taking $\phi \neq 0$, it is
possible to write the 1-forms of constraints in such a way that
$(x,y,\theta,\psi,\phi)$ are adapted coordinates as in
\eqref{Eq:CoordMK}. In this paper, we will work with the 1-forms
given in \eqref{Ex:1forms-const}, so, our coordinates in $Q$ are not
adapted to the constraints, even though these are the coordinates
chosen in \cite{MarsdenKoon} to study the reduction by the group of
symmetries $SE(2)$.
\end{remark}
The distribution $D$ on $Q$ is given by
\begin{equation*}
D=\textup{span} \left\{X_\psi:= \frac{\partial}{\partial \psi}, \
X_\phi:= \frac{\partial}{\partial \phi}, \ X_\subS:= -2r \cos^2\phi
\cos\theta \frac{\partial}{\partial x} -2r \cos^2\phi\sin\theta
\frac{\partial}{\partial y} +\sin(2\phi) \frac{\partial}{\partial
\theta} \right\}.
\end{equation*}
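The claim that $D$ annihilates the constraint 1-forms can be verified symbolically. Since $\epsilon^1,\epsilon^2$ in \eqref{Ex:1forms-const} have no $d\psi$ or $d\phi$ components, $X_\psi$ and $X_\phi$ annihilate them trivially, so it suffices to check $X_\subS$; a short \texttt{sympy} sketch (ours):
\begin{verbatim}
import sympy as sp

th, ph, r = sp.symbols('theta phi r')

# components of epsilon^1, epsilon^2 in the (dx, dy, dtheta) basis
eps1 = sp.Matrix([-sp.sin(th + ph), sp.cos(th + ph), -r*sp.cos(ph)])
eps2 = sp.Matrix([-sp.sin(th - ph), sp.cos(th - ph),  r*sp.cos(ph)])

# X_S in the (d/dx, d/dy, d/dtheta) basis
XS = sp.Matrix([-2*r*sp.cos(ph)**2*sp.cos(th),
                -2*r*sp.cos(ph)**2*sp.sin(th),
                 sp.sin(2*ph)])

print(sp.simplify(eps1.dot(XS)), sp.simplify(eps2.dot(XS)))  # 0 0
\end{verbatim}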
We choose the complement $W$ of $D$ generated by $\{X_1,X_2\}$ so
that $\epsilon^a(X_b) = \delta^a_ b$ for $a,b=1,2$, that is
\begin{equation*}
\begin{split}
W=\textup{span} \left\{\begin{array}{c} \, \\ \, \end{array}\right. & X_1:= -\frac{1}{2}\sin\theta \sec\phi \frac{\partial}{\partial x} +\frac{1}{2} \cos\theta\sec\phi \frac{\partial}{\partial y}-\frac{1}{2r}\sec\phi \frac{\partial}{\partial \theta}, \\
& X_2:=\left. -\frac{1}{2}\sin\theta \sec\phi \frac{\partial}{\partial x} +\frac{1}{2} \cos\theta\sec\phi \frac{\partial}{\partial y}+\frac{1}{2r}\sec\phi \frac{\partial}{\partial \theta} \ \right\}.
\end{split}
\end{equation*}
Consider the dual basis $\mathfrak{B}_{TQ} = \{
X_\psi, X_\phi,X_\subS, X_1, X_2\}$ and $\mathfrak{B}_{T^*Q}= \{ d\psi, d\phi,
\alpha_\subS, \epsilon^1, \epsilon^2\}$ where
$$\a_\subS= -\frac{1}{2r} \cos\theta \sec^2\phi dx-\frac{1}{2r} \sin\theta \sec^2\phi dy.$$
Let us denote by $(q;v_\psi, v_\phi, v_\subS, v_1, v_2)$ the
coordinates on $TQ$ associated with the basis $\mathfrak{B}_{TQ}$
while $(q; \tilde p_\psi, \tilde p_\phi, \tilde p_\subS, \tilde p_1,
\tilde p_2)$ denote the coordinates on $T^*Q$ associated to
$\mathfrak{B}_{T^*Q}$.
The submanifold $\M = \kappa^\flat(D)= \textup{span}\{
\kappa^\flat(X_\psi),\kappa^\flat(X_\phi),\kappa^\flat(X_\subS) \}$
is defined in coordinates by
\begin{equation} \label{Ex:M}
\M = \{(q; \tilde p_\psi, \tilde p_\phi, \tilde p_\subS, \tilde p_1, \tilde p_2) \ : \ \tilde p_1= -\tilde p_2 = J_1(\phi)\tilde p_\subS +J_2(\phi)\tilde p_\psi \},
\end{equation}
where
$$J_1(\phi)= \frac{mr}{4(r^2m-J_0\sin^2\phi)} \sin\phi \sec^2\phi \qquad \mbox{and} \qquad J_2(\phi) =- J_1(\phi)\sin(2\phi) .
$$
In order to compute the nonholonomic bivector $\pi_\nh$ describing
the dynamics, we write the 2-form
$\Omega_\subM$ and the 2-section $\Omega_\M|_\C$ in our local coordinates. The canonical 1-form $\Theta_Q$ on $T^*Q$ is given by $\Theta_Q = \tilde p_\psi d\psi + \tilde p_\phi d\phi + \tilde
p_\subS \alpha_\subS + \tilde p_a \epsilon^a$. Then,
$$
\Omega_Q= d\psi \wedge d\tilde p_\psi + d\phi \wedge d\tilde p_\phi
+ \alpha_\subS \wedge d\tilde p_\subS - \tilde p_\subS d\alpha_\subS
+\epsilon^1 \wedge d\tilde p_1+ \epsilon^2 \wedge d\tilde p_2 -
\tilde p_1d\epsilon^1 -\tilde p_2 d\epsilon^2.
$$
Let us consider the basis
$\mathfrak{B}_{T^*\M}=\{d\phi,d\psi,\alpha_\subS,\epsilon^1,\epsilon^2,d\tilde
p_\phi,d\tilde p_\psi, d\tilde p_\subS\}$ of $T^*\M$ (here we are
using the same notation for the pullbacks of the forms to $\M$).
Recall that, on $\M$, $\tilde p_1$ and $\tilde p_2$ are given by
\eqref{Ex:M} and denoting $J_i = J_i(\phi)$ for $i=1,2$ we obtain
\begin{equation} \label{Ex:Om_M}
\begin{split}
\Omega_\subM= & d\psi \wedge d\tilde p_\psi + d\phi \wedge d\tilde p_\phi + \alpha_\subS \wedge d\tilde p_\subS - \tilde p_\subS d\alpha_\subS +
J_1\epsilon^1 \wedge d\tilde p_\subS + J_2 \epsilon^1 \wedge d \tilde p_\psi + \tilde p_\subS J'_1 \epsilon^1\wedge d\phi + \tilde p_\psi J'_2 \epsilon^1\wedge d\phi \\
& - J_1\epsilon^2 \wedge d\tilde p_\subS - J_2 \epsilon^2 \wedge d \tilde p_\psi - \tilde p_\subS J'_1 \epsilon^2\wedge d\phi - \tilde p_\psi J'_2 \epsilon^2\wedge d\phi - (J_1\tilde p_\subS +J_2\tilde p_\psi)(d\epsilon^1-d\epsilon^2).
\end{split}
\end{equation}
On $T\M$ consider the dual basis $\mathfrak{B}_{T\M} = \left\{
X_\psi \, , \, X_\phi \, , \, X_\subS \, , \, X_1\, , \, X_2\, , \,
\frac{\partial}{\partial \tilde p_\psi} \, , \,
\frac{\partial}{\partial \tilde p_\phi} \, , \,
\frac{\partial}{\partial \tilde p_\subS} \right\}$ associated to
$\mathfrak{B}_{T^*\M}$. Therefore, we can decompose $T\M = \C \oplus
\W$ such that
\begin{equation} \label{Ex:C+W}
\C= \textup{span}\left\{ X_\psi \, , \, X_\phi \, , \, X_\subS \, , \, \frac{\partial}{\partial \tilde p_\psi} \, , \, \frac{\partial}{\partial \tilde p_\phi} \, , \, \frac{\partial}{\partial \tilde p_\subS}\right\} \qquad \W =\textup{span} \left\{X_1 \, , \, X_2 \right\}.
\end{equation}
Therefore, using that $d\epsilon^a |_\C = (-1)^a 2r \cos\phi
\alpha_\subS \wedge d\phi |_\C$ for $a=1,2$ and that $d\alpha_\subS
|_\C = 2\tan\phi d\phi \wedge \alpha_\subS|_\C$, the 2-section
$\Omega_\subM|_\C$ is given by
$$
\Omega_\M|_\C = d\psi \wedge d\tilde p_\psi + d\phi \wedge d\tilde
p_\phi + \alpha_\subS \wedge d\tilde p_\subS - \tilde p_\subS
2\tan\phi d\phi \wedge \alpha_\subS + (J_1\tilde p_\subS +J_2\tilde
p_\psi) 4r \cos\phi \alpha_\subS \wedge d\phi \ |_\C.
$$
Now, we compute the nonholonomic bracket $\pi_\nh$ using
\eqref{Eq:Pinh}
\begin{equation} \label{Ex:NHbracket}
\pi_\nh=\frac{\partial}{\partial \psi} \wedge \frac{\partial}{\partial \tilde p_\psi} + \frac{\partial}{\partial \phi} \wedge \frac{\partial}{\partial \tilde p_\phi} + X_\subS \wedge \frac{\partial}{\partial \tilde p_\subS} - (\tilde p_\subS 2\tan\phi + 4r (J_1\tilde p_\subS +J_2\tilde p_\psi)\cos\phi ) \frac{\partial}{\partial \tilde p_\subS} \wedge \frac{\partial}{\partial \tilde p_\phi} .
\end{equation}
Therefore, the hamiltonian vector fields are
\begin{equation} \label{Ex:HamVectFields}
\begin{split}
\pi^\sharp_\nh(d\psi) & = \frac{\partial}{\partial \tilde p_\psi}, \qquad \pi_\nh^\sharp(d\phi)= \frac{\partial}{\partial \tilde p_\phi}, \\
\pi_\nh^\sharp(\alpha_\subS) & = \frac{\partial}{\partial \tilde p_\subS}, \qquad \pi_\nh^\sharp(\epsilon^i)=0, \qquad \pi_\nh^\sharp(d\tilde p_\psi) = - \frac{\partial}{\partial \psi},\\
\pi_\nh^\sharp(d\tilde p_\phi) &= - \frac{\partial}{\partial \phi} + (2\tan\phi \tilde p_\subS +4r\cos\phi(J_1\tilde p_\subS + J_2\tilde p_\psi) )\frac{\partial}{\partial \tilde p_\subS},\\
\pi_\nh^\sharp(d\tilde p_\subS) &= - X_\subS - (2\tan\phi \tilde p_\subS +4r\cos\phi(J_1\tilde p_\subS + J_2\tilde p_\psi) )\frac{\partial}{\partial \tilde p_\phi}.
\end{split}
\end{equation}
In order to apply Theorem \ref{T:MarsdenKoon} to compute the
Jacobiator of $\pi_\nh$ we study the $\W$-valued 2-form ${\bf
K}_\subW$ defined in \eqref{Def:K} for $\W$ in \eqref{Ex:C+W}. For
$X,Y \in \C$ and using the dual basis $\mathfrak{B}_{T\M}$ and
$\mathfrak{B}_{T^*\M}$ we have that
\begin{equation*}
\begin{split}
{\bf K}_\subW (X,Y) & = -P_\subW([X,Y]) = -\epsilon^1([X,Y]) X_1 - \epsilon^2([X,Y]) X_2 \\
& = d\epsilon^1(X,Y) X_1 + d\epsilon^2(X,Y) X_2.
\end{split}
\end{equation*}
Therefore,
\begin{equation} \label{Ex:Kw}
{\bf K}_\subW |_\C = -2r \cos(\phi) (\alpha_\subS \wedge d\phi \, |_\C) \otimes (X_1 - X_2).
\end{equation}
Finally, we consider the 2-forms $\Omega_\subM$ and ${\bf K}_\subW$, described in \eqref{Ex:Om_M} and \eqref{Ex:Kw} and the vector fields
\eqref{Ex:HamVectFields}, to obtain, by \eqref{Eq:Jacobiator}, that
\begin{equation} \label{Ex:JacobEx}
\begin{split}
& [\pi_\nh, \pi_\nh] (d\tilde p_\phi, d\tilde p_\subS,d\psi) = 4r\cos(\phi) J_2(\phi), \\
&[\pi_\nh, \pi_\nh] (d\tilde p_\phi, d\tilde p_\subS,\alpha_\subS) = 4r\cos(\phi) J_1(\phi),\\
& [\pi_\nh, \pi_\nh] (d\tilde p_\phi, d\tilde p_\subS,\epsilon^i) =
(-1)^i 2r\cos(\phi), \;\;\; i=1,2,
\end{split}
\end{equation}
while on all other combinations of elements the Jacobiator is zero.
This example admits a symmetry given by the Lie group $SE(2)$, see
\cite{MarsdenKoon}. The reduced manifold $\M/G$ is $S^1\times
S(-\pi/2,-\pi/2) \times \R^3$ and the nonholonomic bivector field
$\pi_\nh$ is invariant by the orbit projection $\rho:\M \to \M/G$.
Thus, on $\M/G$ we have the reduced nonholonomic bivector defined at
each $\alpha \in T^*(\M/G)$ by
$$
(\pi_\red^\nh)^\sharp(\alpha) = T\!\rho \, \pi_\nh^\sharp (\rho^*\alpha).
$$
The Jacobiator of the reduced nonholonomic bivector field
$\pi_\red^\nh$ satisfies
$$
[\pi_\red^\nh, \pi_\red^\nh] (\alpha, \beta,\gamma) = T\rho \left(
[\pi_\nh, \pi_\nh] (\rho^*\alpha, \rho^*\beta,\rho^*\gamma)
\right)
$$
for $\alpha, \beta, \gamma \in T^*(\M/G)$. So, in our example it is
simple to compute the Jacobiator of $\pi_\red^\nh$. Taking into
account that, in local coordinates, the orbit projection $\rho : \M
\to \M/G$ is given by $\rho(\psi,\phi,\theta,x,y, \tilde p_\psi,
\tilde p_\phi,\tilde p_\subS)= (\psi,\phi,\tilde p_\psi, \tilde
p_\phi,\tilde p_\subS)$, the Jacobiator of the reduced bivector
$\pi_\red^\nh$ describing the dynamics is given by
$$
[\pi_\red^\nh, \pi_\red^\nh] (d\tilde p_\phi, d\tilde p_\subS,d\psi) = 4r\cos(\phi) J_2(\phi)
$$
while it is zero on all other elements of $T^*(\M/G)$.
Just to complete the example we can write, in our coordinates, the
reduced bivector field $\pi_\red^\nh$ on $\M/G$:
$$
\pi_\red^\nh = \frac{\partial}{\partial \psi} \wedge
\frac{\partial}{\partial \tilde p_\psi} + \frac{\partial}{\partial
\phi} \wedge \frac{\partial}{\partial \tilde p_\phi} - (\tilde
p_\subS 2\tan\phi + 4r (J_1\tilde p_\subS + J_2\tilde
p_\psi)\cos(\phi) ) \frac{\partial}{\partial \tilde p_\subS}
\wedge \frac{\partial}{\partial \tilde p_\phi} .
$$
\section{Quantum optimization on Maximum Independent Set}
In this section, we provide details on the numerical analysis of the various quantum optimization algorithms presented in the main text.
As discussed in the main text, we focus on the maximum independent set problem on random unit disk (UD) graphs. We parametrize random UD graphs by two parameters: the number of vertices $N$ and the 2D vertex density $\rho$. The unit distance is taken to be $r = 1$, and the vertices are placed in an $L \times L$ box, where $L = \sqrt{N/\rho}$ (see Fig.~\ref{Fig:rUDGraph} for an example). For a random UD graph with density $\rho$, the average vertex degree is approximately $\pi \rho$. To minimize finite-size boundary effects, we use periodic boundary conditions for UD graphs in all our numerical simulations.
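For concreteness, the following Python sketch generates this ensemble (using \texttt{numpy} and \texttt{networkx}; the function name and interface are ours, not part of the simulation code):
\begin{verbatim}
import numpy as np
import networkx as nx

def random_ud_graph(N, rho, r=1.0, seed=None):
    """Random unit disk graph with N vertices and density rho in an
    L x L box, L = sqrt(N/rho), with periodic boundary conditions."""
    rng = np.random.default_rng(seed)
    L = np.sqrt(N / rho)
    pts = rng.uniform(0.0, L, size=(N, 2))
    G = nx.Graph()
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            d = np.abs(pts[i] - pts[j])
            d = np.minimum(d, L - d)   # wrap distances around the box
            if np.hypot(d[0], d[1]) <= r:
                G.add_edge(i, j)
    return G, pts
\end{verbatim}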
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.4\linewidth]{rUDGraph2.pdf}
\caption{An example of a random unit disk graph with $N = 40, \rho =1.5, |\text{MIS}| = 14$, and $93$ edges. The unit distance is set to be $r = 1$, and the box length is $L = \sqrt{N/\rho}$.}
\label{Fig:rUDGraph}
\end{center}
\end{figure}
\subsection{Quantum annealing for random UD-MIS}
As discussed in the main text, a quantum annealing algorithm (QAA) for MIS can be performed using the following Hamiltonian
\begin{align} \label{Eq:QA}
H_{\text{QA}} (t)=\sum_{v\in V} \bigg(-\Delta(t) \, n_v+\Omega(t) \sigma_v^x\bigg)+\sum_{(u,w)\in E} U n_u n_w.
\end{align}
The QAA can be designed by initializing all qubits in $\ket{0}$ at time $t=0$, which is the ground state of $ H_{\text{QA}}(t=0)$ when $\Delta(t=0)<0$ and $\Omega(t=0)=0$ (with $U>0$). We then change the parameters by first turning on $\Omega(t)$ to a nonzero value, then sweeping $\Delta(t)$ to a positive value, and finally turning off $\Omega(t)$ again. The annealing protocol we consider throughout this work is specified by
\begin{equation} \label{Eq:AdiabaticPath}
\Delta(t) = (2s-1)\Delta_0, \quad \Omega(t) = \Omega_{0} \sin^{2}(\pi s) \quad \text{with} \quad s = t/T.
\end{equation}
If the time evolution is sufficiently slow, then by the adiabatic theorem the system follows the instantaneous ground state, ending up in the solution to the MIS problem. We take $\Omega_{0} = 1$ as the unit of energy and fix $\Delta_{0}/\Omega_{0} = 6$, which empirically is a good ratio for minimizing nonadiabatic transitions.
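In code, the protocol \eqref{Eq:AdiabaticPath} reads (a minimal sketch in units where $\Omega_0 = 1$, with $\Delta_0/\Omega_0 = 6$):
\begin{verbatim}
import numpy as np

OMEGA0, DELTA0 = 1.0, 6.0   # Omega_0 = 1 (energy unit), Delta_0 = 6

def schedule(t, T):
    """Annealing protocol: s = t/T in [0, 1]."""
    s = t / T
    delta = (2.0*s - 1.0) * DELTA0        # sweeps -Delta_0 -> +Delta_0
    omega = OMEGA0 * np.sin(np.pi*s)**2   # turns on and off smoothly
    return delta, omega
\end{verbatim}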
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\linewidth]{LZfitting2.pdf}
\caption{The Landau-Zener fitting to $1-P_{\text{MIS}} = e^{a-T/T_{\text{LZ}}}$ to extract the adiabatic time scale $T_{\text{LZ}}$. Here, four random unit disk graphs with $N = 10,20,30,40$ are shown. For each instance, we find the first $T$ iteratively such that $P_\text{MIS}(T) > 0.9$, denoted as $T^{*}$. The fitting is then performed on four points $T^{*}, 1.5T^{*},2T^{*},2.5T^{*}$ to extract the time scale $T_{\text{LZ}}$.}
\label{Fig:LZfitting}
\end{center}
\end{figure}
We study quantum annealing on random unit disk graphs, with $N$ vertices and density $\rho$. We take the limit of $\Delta_{0}, \Omega_{0} \ll U$, where the non-independent sets are pushed away by large energy penalties and can be neglected. In the experiment, this corresponds to the limit where the Rydberg interaction energy is much stronger than other energy scales. In this limit, we restrict our wavefunction to the subspace of all independent sets, i.e.
\begin{equation}
\mc{H}_{\rm IS} = \{\ket{\psi}: n_v n_w\ket{\psi} = 0 \text{ for any } (v,w)\in E\},
\end{equation}
in our numerical simulations. This allows us to access much larger system sizes, up to $N \sim 50$, since $\dim(\mc{H}_{\rm IS})\ll 2^N$. First, the subspace of all independent sets is found with a classical algorithm, the Bron-Kerbosch algorithm \cite{Bron:1973dm}, and the Hamiltonian in Eq.~\eqref{Eq:QA} is then projected onto this subspace. The dynamics under the time-dependent Hamiltonian is simulated by dividing the total simulation time $t$ into sufficiently small discrete time steps $\tau$; at each step, a scaling-and-squaring method with a truncated Taylor-series approximation \cite{AlMohy:2011iw} is used to perform the time evolution without explicitly forming the full evolution operators.
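Schematically, the restriction to $\mc{H}_{\rm IS}$ amounts to enumerating all independent sets and building the projected Hamiltonian in that basis. The sketch below illustrates the construction with a brute-force enumerator (our illustration only; the actual simulations use the Bron-Kerbosch algorithm and sparse matrices throughout):
\begin{verbatim}
import numpy as np
from scipy.sparse import lil_matrix

def independent_sets(G):
    # Brute-force enumeration of all independent sets (small N only);
    # the production code uses the Bron-Kerbosch algorithm instead.
    sets = [frozenset()]
    for v in G.nodes:
        sets += [S | {v} for S in sets
                 if not any(G.has_edge(v, u) for u in S)]
    return sets

def projected_hamiltonians(G, Delta, Omega):
    # Project H_QA onto the independent-set subspace H_IS.
    sets = independent_sets(G)
    index = {S: i for i, S in enumerate(sets)}
    HP = np.array([-Delta * len(S) for S in sets])  # diagonal part
    HQ = lil_matrix((len(sets), len(sets)))
    for S, i in index.items():
        for v in G.nodes:                           # P sigma^x_v P
            if v not in S and not any(G.has_edge(v, u) for u in S):
                j = index[S | {v}]
                HQ[i, j] = HQ[j, i] = Omega
    return HP, HQ.tocsr(), sets
\end{verbatim}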
We first consider the time scale needed for adiabatic quantum annealing to work. Typically, this is governed by the minimum spectral gap, $\epsilon_\textrm{gap}$: the runtime required is $T = O(1/\epsilon_\textrm{gap}^2)$. However, the minimum spectral gap is ambiguous when the final ground state is highly degenerate, since it is perfectly legitimate for the state to couple to an instantaneous excited state as long as it comes down to the ground state in the end. For a generic graph, there can be many distinct maximum independent sets (the ground state of $H_{P}$ is highly degenerate). So instead of finding the minimum gap, we take a different approach to extract the adiabatic time scale.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{MIS_T_LZ_Scaling.pdf}
\caption{The adiabatic time scale $T_{\text{LZ}}$ at some fixed densities. For each system size up to $N = 46$, 200 random unit disk graphs are simulated. (a) Median $T_{\text{LZ}}$. (b) and (c): $T_{\text{LZ}}$ for individual instances for $\rho = 0.8$ and $\rho = 3$. }
\label{Fig:MIS_T_LZ_Scaling}
\end{center}
\end{figure}
In the adiabatic limit, the final ground state population (including degeneracy) takes the Landau-Zener form $P_{\text{MIS}} \approx 1 - e^{a-T/T_{\text{LZ}}}$, where $a$ is a constant and $T_{\text{LZ}}$ is the adiabatic time scale. In the nondegenerate case, one typically has $T_{\text{LZ}} = O(1/\epsilon_\textrm{gap}^{2})$. In the more general case, we extract the time scale $T_{\text{LZ}}$ by fitting to this expression. However, the simple exponential form holds only in the adiabatic limit, where $T \gtrsim T_{\text{LZ}}$. Hence, for each graph instance, we search for the minimum $T$ at which the expression holds: we adaptively double $T$ (starting from $T_{\min} = 5$) until we find the minimum $T^{*}$ such that $P_{\text{MIS}} > 0.9$, at which point we assume the time evolution lies in the Landau-Zener regime; we then simulate the dynamics for another three time points $1.5T^{*}$, $2T^{*}$, and $2.5T^{*}$, before finally fitting the expression from $T^{*}$ to $2.5T^{*}$ to extract the time scale $T_{\text{LZ}}$. The fit is remarkably good for most instances (see Fig.~\ref{Fig:LZfitting} for some examples), and we drop the few graphs with goodness-of-fit $R^{2} < 0.99$.
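The adaptive procedure can be summarized as follows (a sketch; \texttt{p\_mis} stands for a hypothetical wrapper that runs the annealing simulation for total time $T$ and returns $P_{\text{MIS}}$, and the goodness-of-fit check is omitted):
\begin{verbatim}
import numpy as np

def fit_t_lz(p_mis, T0=5.0, threshold=0.9):
    # Double T until P_MIS(T) exceeds the threshold ...
    T = T0
    while p_mis(T) <= threshold:
        T *= 2.0
    # ... then fit log(1 - P_MIS) = a - T/T_LZ on four points.
    Ts = np.array([T, 1.5*T, 2.0*T, 2.5*T])
    ys = np.log(1.0 - np.array([p_mis(t) for t in Ts]))
    slope, a = np.polyfit(Ts, ys, 1)
    return -1.0 / slope     # the adiabatic time scale T_LZ
\end{verbatim}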
We perform this procedure to extract $T_\text{LZ}$ for up to 30 graph instances at each $N$ and $\rho$, and take their median;
this produces the full phase diagram in terms of $T_{\text{LZ}}$ as plotted in Fig.~4(a) of the main text.
Here, in Fig.~\ref{Fig:MIS_T_LZ_Scaling}, we also look at the scaling of $T_{\text{LZ}}$ with $N$ at fixed densities $\rho = 0.8$ (below the percolation threshold) and $\rho = 3$ (above the percolation threshold). We simulated quantum annealing and extracted $T_{\text{LZ}}$ for 200 random UD graphs at each $N$ up to $N = 46$. As seen in Fig.~\ref{Fig:MIS_T_LZ_Scaling}(a), there is a clear separation between $\rho = 0.8$ and $\rho = 3$, but the scaling with $N$ is unclear due to finite-size effects: judging from the performance of the classical algorithm shown in the main text, one may need to go to $N \gtrsim 100$ to see the true scaling. Figs.~\ref{Fig:MIS_T_LZ_Scaling}(b) and (c) also show the spread of $T_{\text{LZ}}$ over individual instances. Note that some hard instances require significantly longer $T_{\text{LZ}}$ than the typical instance, and even on average we see $T_{\text{LZ}} > 10$ for $\rho =3, N \gtrsim 20$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\linewidth]{MIS_PhaseDiag_ratio_v2.pdf}
\caption{Phase diagram of the approximation ratio $r$ for non-adiabatic quantum annealing with $T=10/\Omega_0$, averaged over 30 graph instances per $N$ and $\rho$ (the same instances as in Fig.~4(b) in the main text). The red dashed line marks the percolation threshold $\rho=\rho_c\approx1.436$~\cite{Mertens:2012fr}; the white dashed line marks the optimal disk-packing density~\cite{Musin:2015dz}.}
\label{Fig:MIS_PhaseDiag_ratio}
\end{center}
\end{figure}
In the main text, we focused mainly on the capacity of the algorithms to solve MIS exactly. It is also interesting to ask whether the algorithms can solve MIS approximately, in the sense of finding an independent set as large as possible. For quantum algorithms, we use the approximation ratio $r$ to gauge performance in this sense. For a quantum algorithm (such as a quantum annealer) that outputs a state $\ket{\psi_f}$, we define $r = \sum_i\braket{\psi_f}{n_i|\psi_f}/ |\text{MIS}|$, where $|\text{MIS}|$ is the size of the MIS. In other words, $r$ quantifies the ratio of the average independent-set size found by measuring the output quantum state to the maximum independent-set size. Fig.~\ref{Fig:MIS_PhaseDiag_ratio} shows the analogous phase diagram in terms of the approximation ratio $r$, obtained by running quantum annealing at the fixed time $T=10/\Omega_0$. It displays qualitatively the same features as the ground state population in the main text, but the finite-size effect is stronger due to the small discrete $|\text{MIS}|$ values at large densities.
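In practice, once the final state is expressed in the independent-set basis, $r$ reduces to a weighted average of independent-set sizes; a minimal Python sketch (with illustrative variable names) reads:
\begin{verbatim}
# Approximation ratio r (illustrative Python). psi: amplitudes of the
# final state in the independent-set basis; sizes[i]: size of the i-th
# independent set; mis_size: |MIS|.
import numpy as np

def approximation_ratio(psi, sizes, mis_size):
    probs = np.abs(psi) ** 2
    mean_is_size = np.dot(probs, sizes)   # sum_i <psi_f|n_i|psi_f>
    return mean_is_size / mis_size
\end{verbatim}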
\subsection{QAOA for MIS}
In this section, we explain how we simulate the Quantum Approximate Optimization Algorithm to solve Maximum Independent Set Problems.
\subsubsection{Quantum approximate optimization algorithm}
Suppose we are to find an MIS of a given graph $G=(V,E)$.
The $p$-level QAOA for MIS, suggested first by \cite{Farhi:2014wk}, is a variational algorithm consisting of the following steps:
\begin{itemize}
\item[(i)] Initialization of the quantum state in $\ket{\psi_0}=\ket{0}^{\otimes N}$.
\item[(ii)] Preparation of variational wavefunction
\begin{equation}\label{QAOAv1}
\ket{\psi_p(\vec{\gamma},\vec{\beta})}=\exp(-i\beta_p H_Q) \prod_{k=1}^{p-1} \exp(-i\gamma_k H_P)\exp(-i\beta_k H_{Q})\ket{\psi_0},
\end{equation}
where $H_P = \sum_{v\in V} -\Delta n_v + \sum_{(v,w) \in E} U n_v n_w$, and $H_Q=\sum_{v\in V} \Omega \sigma_v^x + \sum_{(v,w) \in E} U n_v n_w$. The parameters $\vec{\gamma}\in\mathds{R}^{p-1}$ and $\vec{\beta}\in\mathds{R}^p$ specify the variational state.
\item[(iii)] Measurement of $H_P$.
\end{itemize}
The three steps (i)-(iii) are iterated and combined with a classical optimization of the variational parameters in order to minimize $\mean{\psi_p(\vec{\gamma},\vec{\beta})|H_P|\psi_p(\vec{\gamma},\vec{\beta})}$.
\subsubsection{Alternative formulation}
We are interested in the $U\gg |\Omega|,|\Delta|$ limit, where the variational search is restricted to the subspace $\mc{H}_{\rm IS}$ spanned by independent sets, such that the algorithm does not need to explore states that can be directly excluded as MIS candidates.
In this limit, we can write
\begin{align}
H_Q=\sum_v \mc{P}_{\rm IS}\Omega\sigma_v^x\mc{P}_{\rm IS}, \qquad H_P= \sum_{v\in V} -\Delta n_v,
\end{align}
where $\mc{P}_{\rm IS}$ is a projector onto the independent set subspace $\mc{H}_{\rm IS}$.
Evolution with $H_P$ thus reduces to a simple rotation of individual spins around the $z$ axis. Since
\begin{align}
\exp(-i\gamma H_P)\exp(-i\beta H_Q)=\exp\lr{-i\beta \Omega \sum_v \mc{P}_{\rm IS} \lr{\ket{0}_v\bra{1}e^{i\gamma}+\rm{h.c.}}\mc{P}_{\rm IS}}\exp(-i\gamma H_P),
\end{align}
we can commute all the unitaries generated by $H_P$ in \eqref{QAOAv1} to the rightmost side until they act (trivially) on the initial state. Thus, we can rewrite the state $\ket{\psi_p(\vec{\beta},\vec{\gamma})}$ as
\begin{equation}\label{QAOAv2}
\ket{\psi_p(\vec{\gamma},\vec{\beta})}=\prod_{k=1}^p\exp\lr{-it_k \Omega \sum_v \mc{P}_{\rm IS} \lr{\ket{0}_v\bra{1}e^{i\phi_k}+\rm{h.c.}}\mc{P}_{\rm IS}}\ket{\psi_0},
\end{equation}
where we identify
\begin{align}
\phi_k=\sum_{j\ge k}\gamma_j, \qquad t_k=\beta_k.
\end{align}
Thus we recover the formulation of QAOA given in the main text, which is equivalent to \eqref{QAOAv2} for $U\gg\Omega$.
\subsubsection{Numerical simulations: preliminaries}
In our numerical study, we work in the formulation of QAOA corresponding to \eqref{QAOAv1} and take $\Delta=\Omega=1$.
Again, we work in the limit where $U\gg 1$ so that we can restrict our Hilbert space to the independent set subspace $\mc{H}_{\rm IS}$;
this allows us to efficiently simulate system sizes up to $N \sim 50$.
We prepare the state as in Eq.~\eqref{QAOAv1}, and then measure the expectation value of $H_P$, which is the objective function that we seek to minimize:
\begin{equation}
F_p(\vec{\gamma},\vec{\beta}) = \bra{\psi_p(\vec{\gamma},\vec{\beta})}H_P\ket{\psi_p(\vec{\gamma},\vec{\beta})}.
\end{equation}
This sequence of state-preparation and measurement of objective function is then fed as a subroutine to a classical optimization algorithm, which is used to find good parameters $(\vec\gamma,\vec\beta)$ with the lowest possible $F_p$.
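Schematically, a single evaluation of $F_p$ proceeds as in the following Python sketch; \texttt{H\_Q} and \texttt{H\_P} are assumed to be matrices already projected onto $\mc{H}_{\rm IS}$, with $\ket{0}^{\otimes N}$ as the first basis state (in an actual simulation one diagonalizes the Hamiltonians once instead of repeatedly calling \texttt{expm}).
\begin{verbatim}
# One evaluation of F_p (illustrative Python). H_Q, H_P: matrices
# projected onto the independent-set subspace; the empty configuration
# |0...0> is assumed to be the first basis state.
import numpy as np
from scipy.linalg import expm

def qaoa_objective(gammas, betas, H_Q, H_P):
    dim = H_P.shape[0]
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                          # |psi_0> = |0...0>
    for gamma, beta in zip(np.append(gammas, 0.0), betas):
        psi = expm(-1j * beta * H_Q) @ psi
        psi = expm(-1j * gamma * H_P) @ psi   # last gamma is 0
    return np.real(np.vdot(psi, H_P @ psi))
\end{verbatim}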
\vspace{5pt}
\noindent
\textbf{Classical Optimization Algorithms}---
Generally, classical optimization algorithms work by starting from some initial point in the QAOA parameter space $(\vec\gamma,\vec\beta)$ and iteratively finding new points $(\vec{\gamma}', \vec\beta')$ using information from the current point $(\vec\gamma, \vec\beta)$, with the hope that each new point produces a lower value of the objective function, $F_p(\vec\gamma',\vec\beta') \le F_p(\vec\gamma,\vec\beta)$.
We first describe some free parameters and stopping criteria that apply to these classical optimization algorithms:
\begin{itemize}
\item $\delta$ -- step tolerance. If the optimization algorithm attempts to go to a new set of parameters $(\vec{\gamma}', \vec\beta')$ such that $|\vec\gamma'-\vec\gamma|^2 + |\vec\beta'-\vec\beta|^2 \le \delta^2$, then the algorithm terminates and outputs the smallest value of $F_p$ seen so far. When the algorithm calls for a numerical computation of the gradient using the finite-difference method, we also take $\delta$ as our increment size, e.g. $\partial F_p/\partial \gamma_i \simeq [F_p(\gamma_i+\delta)-F_p(\gamma_i)]/\delta$.
\item $\epsilon$ -- objective function tolerance. If the optimization algorithm finds that the change in the value of objective function is smaller than $\epsilon$, $|F_p(\vec\gamma',\vec\beta')-F_p(\vec\gamma,\vec\beta)|\le \epsilon$, then the algorithm terminates and outputs the smallest value of $F_p$ seen so far.
\item $\epsilon_M$ -- measurement precision. This is the target precision to which we compute the objective function $F_p$.
When simulating QAOA with measurement projection noise, this parameter determines the number of measurements necessary to obtain a good averaged value of $\mean{H_P}$.
\end{itemize}
In our numerical study, we consider two classical optimization algorithms: the BFGS quasi-Newton algorithm\cite{BFGS1,*BFGS2,*BFGS3,*BFGS4} and the Nelder-Mead simplex algorithm \cite{Nelder:1965in}.
Specifically, the BFGS algorithm computes the gradient of the objective function at the current point $(\vec\gamma, \vec\beta)$, and uses this information to build a quadratic model of objective function and determine the approximate location of a local minimum.
This algorithm terminates the optimization routine if either step tolerance or objective function tolerance is reached.
However, computing the gradient in high-dimensional parameter space can be inefficient, especially when the cost of measurement is being considered.
Hence, we also consider the Nelder-Mead simplex algorithm that does not involve gradients; this algorithm terminates the optimization routine when \emph{both} step tolerance and objective function tolerance are reached.
These algorithms are implemented in the standard library of MATLAB R2017b via \texttt{fminunc} and \texttt{fminsearch} functions, respectively.
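For readers without access to MATLAB, the same two optimizers are available in SciPy; the sketch below (with a dummy quadratic objective standing in for $F_p$) indicates how the tolerances above roughly map onto SciPy options. The mapping is only approximate, since the two libraries implement the stopping criteria differently.
\begin{verbatim}
# SciPy analogues of fminunc / fminsearch (illustrative Python).
# A dummy quadratic objective stands in for the QAOA objective F_p.
import numpy as np
from scipy.optimize import minimize

F_p = lambda x: np.sum((x - 1.0) ** 2)
x0 = np.zeros(5)  # concatenated (gamma_1..gamma_{p-1}, beta_1..beta_p)

# BFGS quasi-Newton; 'eps' is the finite-difference increment (delta).
res_bfgs = minimize(F_p, x0, method="BFGS",
                    options={"eps": 1e-6, "gtol": 1e-6})
# Nelder-Mead simplex; 'xatol'/'fatol' play the roles of delta/epsilon.
res_nm = minimize(F_p, x0, method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6})
\end{verbatim}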
\vspace{5pt}
\noindent
\textbf{Heuristic ansatz for optimizing QAOA at deep depths}---
For high levels of QAOA with deep circuit depths, optimization can be difficult because of the large dimension of the QAOA parameter space.
Due to the non-convex nature of the objective function $F_p$, one often needs to optimize starting from many initial points to have a better chance of finding the global minimum in the QAOA parameter space.
Typically, a brute-force approach starting from random points will require $2^{O(p)}$ initial points to find the global minimum.
Nevertheless, as discussed in more detail in an upcoming study \cite{MaxCutpaper}, we discover patterns in the optimal QAOA parameters for most instances of the MIS problem.
Based on these patterns, we develop a heuristic strategy for choosing good initial points when optimizing QAOA parameters at intermediate level $p$ \cite{MaxCutpaper}.
We find that this strategy allows us to find quasi-optimal parameters that are often nearly as good as the true global minimum, but in a time that scales only as $O(\poly(p))$.
We now describe our heuristic strategy for optimizing QAOA parameters as applied to MIS problems.
We start with level $p=3$ and optimize from an educated guess of initial point $(\gamma_1,\gamma_2,\beta_1,\beta_2,\beta_3) \approx (1.73,-1.77,0.19,1.02,0.39)$ based on the averaged optimal QAOA parameters from 20 instances.
When the optimization algorithm terminates with the optimized parameters $(\vec{\gamma}_{(p)}^L, \vec{\beta}_{(p)}^L)$ for level $p$, we move on to level $p+1$ with the initial point $(\vec{\gamma}_{(p+1)}^0, \vec{\beta}_{(p+1)}^0)$ obtained by linear interpolation:
\begin{align}
\left[\vec{\gamma}^{0}_{(p+1)}\right]_1 & = \left[\vec{\gamma}^{L}_{(p)}\right]_1, \quad \left[\vec{\gamma}^{0}_{(p+1)}\right]_{p} = \left[\vec{\gamma}^{L}_{(p)}\right]_{p-1} ,
\quad
\left[\vec{\gamma}^{0}_{(p+1)}\right]_i = \tfrac{i-1}{p-1} \left[\vec{\gamma}^{L}_{(p)}\right]_{i-1}+ \tfrac{p-i}{p-1}\left[\vec{\gamma}^{L}_{(p)}\right]_i, \\
\left[\vec{\beta}^{0}_{(p+1)}\right]_1 & = \left[\vec{\beta}^{L}_{(p)}\right]_1, \quad \left[\vec{\beta}^{0}_{(p+1)}\right]_{p+1} = \left[\vec{\beta}^{L}_{(p)}\right]_{p} ,
\quad
\left[\vec{\beta}^{0}_{(p+1)}\right]_j = \tfrac{j-1}{p} \left[\vec{\beta}^{L}_{(p)}\right]_{j-1}+ \tfrac{p-j+1}{p}\left[\vec{\beta}^{L}_{(p)}\right]_j, \\
& \text{when } \quad 2\le i \le p-1 \text{ and } 2\le j \le p.\nonumber
\end{align}
Here, we denote by $[\vec{\gamma}]_i\equiv \gamma_i$ the $i$-th element of the parameter vector $\vec{\gamma}$.
This strategy takes advantage of the observation that there is often a set of optimal QAOA parameters that change smoothly from level $p$ to $p+1$.
Using this strategy, we have been able to find good parameters for QAOA at levels as large as $p=50$.
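For reference, the interpolation equations above translate into the following (0-indexed) Python sketch, where \texttt{gamma\_L} and \texttt{beta\_L} denote the optimized level-$p$ parameters:
\begin{verbatim}
# Linear interpolation of optimal parameters from level p to p+1
# (0-indexed Python version of the equations above).
import numpy as np

def interpolate_params(gamma_L, beta_L):
    p = len(beta_L)                 # gamma_L: p-1 entries, beta_L: p
    gamma_0 = np.empty(p)
    beta_0 = np.empty(p + 1)
    gamma_0[0], gamma_0[p - 1] = gamma_L[0], gamma_L[p - 2]
    beta_0[0], beta_0[p] = beta_L[0], beta_L[p - 1]
    for i in range(2, p):           # 1-based indices 2 <= i <= p-1
        gamma_0[i - 1] = ((i - 1) * gamma_L[i - 2]
                          + (p - i) * gamma_L[i - 1]) / (p - 1)
    for j in range(2, p + 1):       # 1-based indices 2 <= j <= p
        beta_0[j - 1] = ((j - 1) * beta_L[j - 2]
                         + (p - j + 1) * beta_L[j - 1]) / p
    return gamma_0, beta_0
\end{verbatim}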
\subsubsection{Simulation neglecting measurement cost}
In order to understand the potential of QAOA by finding the best possible optimal parameters, our initial numerical study neglects the cost of measurements and lets the simulated quantum processor output the exact values of $F_p(\vec\gamma,\vec\beta)$.
In other words, we effectively choose $\epsilon_M=0$.
In addition, we calculate the gradient of the objective function $\nabla F_p = (\partial F_p/\partial\vec\gamma, \partial F_p/\partial\vec\beta)$ analytically instead of using the finite-difference method.
The BFGS quasi-Newton algorithm is used in these simulations to find local minima in the QAOA parameter space, where we set the tolerances at $\epsilon=\delta=10^{-6}$.
This approach of ignoring measurement cost allows us to find optimal QAOA parameters more efficiently.
Using our heuristic ansatz mentioned above, we find optimal QAOA parameters up to $p=50$.
Our results show that optimized QAOA can achieve better performance than simple quantum annealing schedules, as shown for the instance in Fig.~4(c) of the main text.
Nevertheless, this approach does not represent a realistic simulation of actual QAOA implementations, where measurements are not free and $\epsilon_M$ is necessarily finite.
\subsubsection{Simulated experiment with measurement projection noise}
In actual experiments, the value of the objective function $F_p$ can only be determined approximately by averaging over many measurements, each projecting the wavefunction onto a possible outcome.
In our numerical simulation, we account for this effect by performing full Monte Carlo simulation of actual measurements, where the quantum processor outputs only an approximate value of the objective function obtained by averaging over $M$ measurements:
\begin{equation}
\tilde{F}_p = \frac{1}{M}\sum_{i=1}^M f_{p,i}, \quad
f_{p,i} \text{ is a random variable where }
\Pr(f_{p,i} = -k) = \braket{\psi_p(\vec\gamma,\vec\beta)}{\Pi_k|\psi_p(\vec\gamma,\vec\beta)},
\end{equation}
and $\Pi_k$ is the projector onto subspace where $H_P = -k$, spanned by independent sets of size $k$.
Note that when $M\to \infty$, we obtain $\tilde{F}_p \to F_p=\braket{\psi_p(\vec\gamma,\vec\beta)}{H_P|\psi_p(\vec\gamma,\vec\beta)}$ with perfect precision.
In order to achieve finite precision $|\tilde{F}_p - F_p | \sim \epsilon_M$,
we accumulate measurements until the standard error of the mean falls below the precision level.
In other words, for each evaluation of $F_p(\vec{\gamma},\vec{\beta})$, the number of measurements $M$ we perform is set by the following criterion:
\begin{equation}
\sqrt{\frac{1}{M(M-1)}\sum_{i=1}^M(f_{p,i}-\bar{F}_{p,M})^2} \le \epsilon_M,
\quad\text{where}\quad
\bar{F}_{p,M} = \frac{1}{M}\sum_{i=1}^M f_{p,i}.
\end{equation}
Roughly speaking, $M\approx \Var(f_{p,i})/\epsilon_M^2$.
To mitigate finite-sample-size effects, we also require that at least 10 measurements ($M\ge 10$) be performed for each evaluation of $F_p$.
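A minimal Python sketch of this sampling loop, with \texttt{probs} and \texttt{sizes} encoding the outcome distribution $\Pr(f_{p,i}=-k)$, could read:
\begin{verbatim}
# Measurement loop with projection noise (illustrative Python).
# probs[k] = Pr(outcome is an independent set of size sizes[k]).
import numpy as np

rng = np.random.default_rng()

def measure_F_p(probs, sizes, eps_M, M_min=10):
    samples = []
    while True:
        k = rng.choice(sizes, p=probs)     # one projective measurement
        samples.append(-k)                 # f_{p,i} = -k
        M = len(samples)
        if M >= M_min:
            sem = np.std(samples, ddof=1) / np.sqrt(M)
            if sem <= eps_M:               # standard error below eps_M
                break
    return np.mean(samples), M
\end{verbatim}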
Using this approach, we simulate experiments of optimizing QAOA with measurement projection noise.
The above-mentioned heuristic ansatz is utilized, as we start with an educated guess of initial QAOA parameters at $p=3$ and optimize until tolerance levels are reached.
To illustrate the usefulness of our heuristic ansatz, we also simulate starting with random choices of initial parameters for optimization, and compare their performance.
In these simulations, the approximate value of $\tilde{F}_p(\vec{\gamma},\vec{\beta})$ is returned whenever the classical optimization algorithm requests an evaluation of $F_p(\vec{\gamma},\vec{\beta})$ from the simulated quantum processor.
This includes, for example, when the BFGS algorithm numerically computes gradients to find optimal parameters by the finite-difference method: $\partial F_p/\partial \gamma_i \approx [\tilde{F}_p(\gamma_i+\delta) - \tilde{F}_p(\gamma_i)]/\delta$.
The history of measurements is stored throughout the entire simulated experiment, which allows us to keep track of the largest independent set $\text{IS}(m)$ found after the $m$-th measurement.
We repeat this numerically simulated experiment many times with different pseudo-random number generation seeds, and average over their histories.
In Fig.~4(d) of the main text, we show an example instance where the simulated experiments are run with $\epsilon=\delta=0.2$, and $\epsilon_M = 0.05$, with and without our heuristic ansatz.
\section{Generalization to arbitrary graph structure}
\subsection{Stroboscopic evolution}
As mentioned in the main text, one can generalize our implementation to address MIS problems on graphs $G=(V,E)$ beyond the UD paradigm.
To do so, let us first note that all quantum algorithms we discussed in the main text require evolution with a Hamiltonian $H(t)=\sum_{v}\Omega_v(t)\sigma_v^x-\Delta_v(t)n_v+\sum_{(u,v)\in E} U n_u n_v$.
In particular, we are interested in the situation where $U\gg |\Omega|,|\Delta|$, such that the dynamics is effectively restricted to the independent set space $\mc{H}_{\rm IS}$. To generate such evolution with a Hamiltonian corresponding to a general graph structure, let us consider a Trotterized version of the time evolution operator
\begin{align}
\mc{T}\exp{ \lr{-i\int_0^T dt\,H(t)}}\simeq \prod_{j}\mc{U}(t_j)\equiv\prod_{j}\exp{ \lr{-i(t_{j+1}-t_{j})H(t_j)}},
\end{align}
where we have sliced the time interval $[0,T]$ by defining times $t_j$ such that $\sum_j (t_{j+1}-t_{j})=T$ and $t_{j+1}-t_j\ll 1/(\sqrt{D_{\rm max}}\Omega(t_j)), 1/|\Delta(t_j)|$. Here $D_{\rm max}$ denotes the maximum degree of the graph. We further Trotterize each $\mc{U}(t_j)$ as follows
\begin{align}
\mc{U}(t_j)\simeq \prod_{v=1}^N\mc{U}_v(t_j)\equiv\prod_{v=1}^N \exp\lr{-i(t_{j+1}-t_j)\lr{\Omega_v(t_j)\sigma_v^x-\Delta_v(t_j)n_v+\frac{1}{2}\sum_{u\in \mc{N}(v)} U n_u n_v}},
\end{align}
that is, we split it into a product of terms $\mc{U}_v$, each associated with the evolution of one spin $v$. Here $\mc{N}(v)$ denotes the set of neighbors of $v$ on the graph. Note that in the $U\rightarrow\infty$ limit we are interested in, this can be written as
\begin{align}\label{Trotterstep}
\mc{U}_v(t_j)=\exp\lr{-i(t_{j+1}-t_j)\lr{\Omega_v(t_j)\sigma_v^x-\Delta_v(t_j)n_v}}\prod_{u\in \mc{N}(v)}\ket{0}_u\bra{0}.
\end{align}
This is a simple single-qubit rotation of atom $v$, conditioned on the state of the atoms corresponding to the neighbors of $v$ being $\ket{0}$. If at least one of the neighbors is in state $\ket{1}$, atom $v$ does not evolve.
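As an illustration, the conditional rotation \eqref{Trotterstep} can be applied to a state vector as in the following Python sketch, where qubit $v$ corresponds to bit $v$ of the basis-state index and a bit value of 1 denotes $\ket{1}$:
\begin{verbatim}
# Conditional single-qubit rotation U_v (illustrative Python).
# psi: state vector of length 2^N.
import numpy as np
from scipy.linalg import expm

def apply_U_v(psi, v, neighbors, dt, Omega, Delta):
    # 2x2 generator on span{|0>,|1>}_v: Omega*sigma_x - Delta*n
    u = expm(-1j * dt * np.array([[0.0, Omega], [Omega, -Delta]]))
    out = psi.copy()
    for s in range(len(psi)):
        if (s >> v) & 1:                       # treat each pair once
            continue
        if any((s >> n) & 1 for n in neighbors):
            continue                           # blockaded: no evolution
        s1 = s | (1 << v)
        a0, a1 = psi[s], psi[s1]
        out[s]  = u[0, 0] * a0 + u[0, 1] * a1
        out[s1] = u[1, 0] * a0 + u[1, 1] * a1
    return out
\end{verbatim}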
\subsection{Implementation using qubit hyperfine encoding}
One approach to realizing the corresponding dynamics with individually controlled neutral atoms can be designed as follows. We consider an implementation where the qubit states $\ket{0}$ and $\ket{1}$ are encoded in two (non-interacting) hyperfine states in the internal atomic ground state manifold. We position all atoms on the points of a 2D square lattice with spacing $g$. To realize a single step, $\mc{U}_v(t_j)$, we first excite all atoms, $u$, that correspond to neighbors of $v$ on the graph ($u\in \mc{N}(v)$), selectively from the state $\ket{1}$ to a Rydberg $S$-state $\ket{1'}$. We choose a grid length $g\gg r_B$ such that none of the atoms $u\in \mc{N}(v)$ interact during this process. Then we drive atom $v$ to realize the single-qubit rotation in the hyperfine manifold, i.e. a unitary corresponding to an evolution with $\Omega_v(t_j)\sigma_v^x-\Delta_v(t_j)n_v$, where ${\sigma}_v^x$ couples the two hyperfine qubit states of atom $v$, and $n_v=\ket{1}_v\bra{1}$ counts if atom $v$ is in hyperfine state $\ket{1}$. To realize this rotation we use an individually addressed two-step excitation that couples the two hyperfine states $\ket{0}$ and $\ket{1}$ of atom $v$ via a transition through a Rydberg $P$-state. If all atoms $u$ are in the state $\ket{0}$, this process is not disturbed, but if at least one of the neighbors is in the Rydberg $S$-state, the strong $S$-$P$ interaction gives rise to a blockade mechanism that prevents the rotation of qubit $v$, thus realizing exactly \eqref{Trotterstep}. Note that this requires a separation of scales between the blockade radius for two atoms in Rydberg $S$-states on the one hand, and the $S$-$P$ blockade radius on the other hand. This can be readily realized by noting that these two interactions scale differently with the separation of the atoms: the $S$-$P$ interactions decay as $1/x^3$, i.e. much slower than the $S$-$S$ interactions that scale like $1/x^6$ [refs, thompson, nature], which should allow one to implement the collective gate with high fidelity.
\section{Introduction}
Over the last decade graphene has arguably been one of the most investigated materials among all the existing carbon allotropic forms \cite{doi:10.1002/anie.201600655,Geim2007}. Indeed, despite the difficulties in synthesising high-quality large-area graphene sheets \cite{taioli2014computational,tatti2016synthesis,taioli2016characterization}, its great promise and achievements in the fields of microelectronics, materials science and chemistry motivate the considerable scientific and technological effort being pursued.\\
\indent Further progress in the search for potential applications of graphene to various materials and devices is mainly related to its unique electronic and mechanical properties \cite{doi:10.1063/1.4716178,taioli2009electronic,haberer2011direct,Allen2010,Randviir2014,Signetti2017,2053-1583-4-3-031013,PEDRIELLI2018766,AzzoliniJPCC,AzzoliniCarbon,PEDRIELLI2017796,haberer2010tunable}, even though the quest for practical applications has shifted the focus over the years to other layered materials, such as transition-metal dichalcogenides (TMDs) \cite{C4CS00182F}, silicene \cite{Vogt2012}, germanene, the monolayer form of black phosphorus \cite{Chen2016, Matthes2013}, and boron nitride. Moreover, the power of modern supercomputer platforms and algorithms \cite{Marzari}, together with novel approaches based e.g. on artificial intelligence \cite{Nosengo}, has paved the way to the discovery of layered hybrid materials \cite{Geim2013} that aim to combine the most desirable characteristics of each layered structure \cite{Meyer2009,Signetti2017}, starting a new research field on 2D materials.\\
\indent Nevertheless, the possibility of introducing new and interesting features into bi-dimensional carbon-based materials without chemical functionalization,
while keeping the desirable properties of graphene, such as its planar periodic structure and the $sp^2$ bonding network, might be very valuable to existing technology. In this regard, one of the most striking properties of graphene is its Young's modulus to density ratio, probably the highest achieved so far. Unfortunately, investigations on this topic have rarely been pursued, with some notable exceptions \cite{Liu2012,Li-Chun2014,Wang2013,0953-8984-28-13-13LT01}. \\
\indent In this work we first propose a systematic approach for finding novel energetically stable structures of decreasing density characterized by $sp^2$-bonded carbon atoms, using graphene as a frame of reference. In particular, we aim to find planar structures with density lower than graphene, possibly decreasing it towards the least dense form of carbon allotrope that could ever be synthesized, while displaying almost unchanged specific mechanical characteristics with respect to graphene. Indeed, one possible route to increasing the specific modulus with respect to graphene is to reduce the surface density. Increasing the specific modulus by decreasing the mass density is a typical requirement whenever minimum structural weight must be achieved. This challenge has far-reaching consequences in a variety of applications, most notably in aerospace technologies where weight saving is a route to cost reduction.
To characterize the response of these novel planar structures to external forces and electromagnetic fields, we assess the stress--strain curves and the specific mechanical properties, and we calculate the electronic band structures of both the parent and derived daughter architectures from Density Functional Theory (DFT) simulations. Our analysis shows the existence of a threshold density below which the mechanical rigidity of graphene is strongly depleted, while other specific mechanical characteristics, such as the strength and toughness, can even exceed those of graphene. \\
\indent Finally, we envision that the systematic approach presented in this work can also be extended to the design of novel lightweight, strong three-dimensional carbon allotropes.
\section{Methods and computational details}\label{methods}
\subsection{Structure optimization.} The optimal structure search, the electronic structure simulations and the assessment of the mechanical properties of the previously introduced architectures were performed within the DFT framework using the {\sc Quantum ESPRESSO} (QE) suite \cite{qe}. QE is a plane-wave code based on the pseudopotential approach to deal with the interaction between valence electrons and lattice ions. Optimization of the atomic configurations was carried out by using the Broyden--Fletcher--Goldfarb--Shanno (BFGS) algorithm with the following DFT parameters. The simulation cell size in the direction orthogonal to the plane of the structures is set to 20 \AA, in order to avoid spurious interactions among periodic images. The optimized configurations of all the structures investigated in this work can be found in the electronic supplementary material.
\subsection{Band structure simulations and DOS.} In our DFT simulations we used a norm-conserving PBE pseudopotential and an energy cut-off for the wavefunctions equal to $100$ Ry. This large value of the plane-wave cut-off is required to obtain converged values of the stress tensor, an observable notoriously more difficult to converge than the total energy. The $k$-point grid used to calculate observables in momentum space depends on the simulation cell size and was chosen so as to achieve converged DFT values below chemical accuracy ($<$ 0.01 eV for the total energy and $< 10^{-3}$ Ry/\AA~ for the interatomic forces). Thus, depending on the simulation cell, we performed calculations on $6\times 6 \times 1$ up to $16\times 16 \times 1$ $k$-point grids for structural minimization, while increasing the $k$-point mesh to $48 \times 48\times 1$ for the calculation of the Density Of States (DOS) and of the band structures. Convergence of the integrals over the Brillouin zone was improved by smearing the occupancy with a $0.136$ eV width Gaussian function.\\
\subsection{Mechanical properties.} In order to properly sample the energy density function in the linear regime for the calculation of the mechanical properties, we used $0.001$-spaced points up to $0.01$ strain and further $0.005$-spaced points up to $0.05$ strain. To deal with the elastic deformations we used a supercell containing two unit cells for the materials having trigonal symmetry, i.e. the graphene and flakene families. The $C_{11}$ coefficient in these cases is associated to the strain along the zig-zag direction. Upon deformation, the atomic positions within the supercell were relaxed until interatomic forces were smaller than $10^{-3}$ Ry/\AA. We further notice that in our ab-initio simulations we calculate the true stress. The simulation supercell is thus free to relax in the direction orthogonal to the loading until the residual stress is below 0.5 kbar which, rescaled by the conventional thickness, is equivalent to $0.5\times 20/3.35\simeq 3$ kbar. In fact, the calculation of the stress in a bi-dimensional material relies on the choice of a conventional thickness, which was set to 3.35 \AA~ for the graphene monolayer, while 20 \AA~ is the dimension of our simulation cell. The final stress is calculated using the area resulting from relaxation, thus the plots of the stress--strain characteristics refer to the so-called ``true stress''.
\section{Results and discussion}
\subsection{Structure Search Method}
The structure of graphene-like materials is closely related to the packing of congruent discs touching each other exactly in three points. A two-dimensional packing can be achieved by a collection of congruent discs in the plane subject to the following constraints:
\begin{itemize}
\item No two discs overlap;
\item Each disc is in contact with at least another disc;
\item For any choice of two discs in the packing there is always a path connecting them through mutual contacts.
\end{itemize}
Angle strain in $sp^2$ carbon allotropes increases noticeably when bond angles deviate from the equilibrium value of $2 \pi / 3$ rad. For this reason, we limit our study to structures containing only angles smaller than $\pi$ rad.
This angle choice corresponds to a specific condition for packing, called {\it local stability} or {\it locally jammed packing}. For a locally stable disc packing, the contacts of each disc must not all lie on the same half-circle \cite{Torquato2010}. With such a constraint in place, one might wonder whether packings of arbitrarily low density exist or, in case they do not, what the least dense arrangement of discs in the plane would be. This is an interesting question in its own right, given that the opposite problem $-$ that of finding the densest arrangement of discs in the plane $-$ received much attention for a long time \cite{Lagrange1773, Thue1910} and found a formal answer only in the last century \cite{Toth1943}. \\
In this regard, it is worth noticing that if the packing is allowed to be non-periodic, then discs can indeed be packed into locally stable configurations with arbitrarily low density \cite{Boroczky1964}.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{fig1.pdf}
\caption{First row: in the upper panel parent (left, red color) and daughter (right, blue color) disc packing in the unit cells of a) graphene, c) tilene, and e) flakene. The lines internal to the discs connect the nearest neighbor discs. Bottom panel: $4 \times 4$ supercells of parent (left) and daughter (right) structures of b) graphene, d) tilene, and f) flakene. Second row:
g) from left to right: pentagraphene structure; augmentation of the Cairo pentagonal tiling: liskene; further augmentation of the liskene geometry: liskene daughter. h) from left to right: top and side view of a $3 \times 3$ pentagraphene super cell, where the $sp^3$-hybridized carbon atoms are reported in green color, while in grey scale we find the $sp^2$-hybridized carbon centers; $3 \times 3$ supercell of liskene and liskene daughter after performing DFT minimization. i) By relaxing the locally jammed packing constraint, the flakene structure can be made progressively less dense by elongating the hexagonal super-ring side highlighted in the picture. l) The low density structure obtained from flakene by doubling the hexagonal super-ring side length.}
\label{fig:structure}
\end{figure*}
\subsubsection{Graphene and graphene daughter.}
Consider the packing of discs associated with the structure of graphene. This arrangement has a density of $\pi/(3 \sqrt{3}) \sim 0.6046$, as shown in the middle-left panel of figure \ref{fig:structure}(a), and defines the structural net of graphene reproduced in the left hand side of figure \ref{fig:structure}(b). Now replacing each disc with three smaller discs (thus reducing the disc radius by a factor $\frac{1}{1+2/ \sqrt{3}}$ with respect to the radius of graphene discs) $-$ a process called augmentation when referred to nets \cite{Fischer2002} $-$ leads to a less dense packing with density $\pi(7\sqrt{3}-12) \sim 0.390675$, as shown in the middle-right panel of figure \ref{fig:structure}(a). The resulting structure, which we name ``{\it graphene daughter}'' (gr11 in \cite{Sun2016}), is reproduced in the right hand side of figure \ref{fig:structure}(b). Unfortunately, the substitution cannot be pursued any further, as contacts between circles on the same half-circle would occur and spoil the local stability condition. This applies to any packing whose associated net (or tiling) contains triangles.
\subsubsection{Tilene parent and tilene.}
By considering tilings with polygons having a number of sides larger than three, in the middle-left panel of figure \ref{fig:structure}(c) we show what is probably the simplest structure after graphene, that is the packing associated with the well-known square-octagon tiling. This defines a net of carbon atoms that we label ``{\it tilene parent}'' (octagraphene in \cite{Sun2016}), which is reported in the left hand side of figure \ref{fig:structure}(d). Its packing factor is $\pi (3-2 \sqrt{2}) \sim 0.539012$, which is lower than that of graphene. Its augmentation, carried out according to the previously described principles, leads to an even sparser packing, as shown in the middle-right panel of figure \ref{fig:structure}(c), where the packing factor is $3 \pi / (2+\sqrt{2}+\sqrt{3})^2 \sim 0.355866$. The resulting structure, reported in the right hand side of figure \ref{fig:structure}(d), is called throughout this paper ``{\it tilene}''.
\subsubsection{Flakene parent and flakene.}
The tiling of the plane by regular polygons with the largest rings is the truncated trihexagonal tiling, reported in the middle-left panel of figure \ref{fig:structure}(e), for which the packing factor before geometry optimization is $\pi(2/\sqrt{3}-1) \sim 0.486006$. The resulting geometry is called ``{\it flakene parent}'' (C64 graphenylene in \cite{Sun2016}) and is reported in the left hand side of figure \ref{fig:structure}(f). Its augmentation, shown in the middle-right panel of figure \ref{fig:structure}(e), produces a tiling containing 24-sided polygons whose initial density is $3 \sqrt{3} \pi / (20+3\sqrt{3}+6\sqrt{7}+2\sqrt{21}) \sim 0.324951$. Although topologically equivalent, this packing differs from a previously reported example \cite{Fischer2002} as its density is appreciably lower. The resulting structure, obtained by placing carbon atoms at the disc centers, is reported in the right hand side of figure \ref{fig:structure}(f) and we call it ``{\it flakene}''. It is one of the lowest-density $sp^2$ structures ever studied that complies with the locally jammed packing condition.
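As a quick numerical cross-check, the closed-form packing fractions quoted above can be evaluated directly; the short Python sketch below reproduces the numbers given in the text.
\begin{verbatim}
# Numerical check of the closed-form packing fractions (Python).
import numpy as np

phi = {
    "graphene":    np.pi / (3 * np.sqrt(3)),
    "graphene d.": np.pi * (7 * np.sqrt(3) - 12),
    "tilene p.":   np.pi * (3 - 2 * np.sqrt(2)),
    "tilene":      3 * np.pi / (2 + np.sqrt(2) + np.sqrt(3)) ** 2,
    "flakene p.":  np.pi * (2 / np.sqrt(3) - 1),
    "flakene":     3 * np.sqrt(3) * np.pi
                   / (20 + 3*np.sqrt(3) + 6*np.sqrt(7) + 2*np.sqrt(21)),
}
for name, f in phi.items():
    print(f"{name:12s} {f:.6f}")
# -> 0.604600, 0.390675, 0.539012, 0.355866, 0.486006, 0.324951
\end{verbatim}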
\subsubsection{Liskene.}
Other carbon structures can be designed by using a tiling conceptually different from those considered so far. In particular,
one may modify the carbon three-coordination and also allow four-coordinated vertices.
The Cairo pentagonal tiling is known to be the structure of pentagraphene \cite{Zhang2015} and, unlike the previously proposed structures, it cannot be realised by a packing of congruent discs. Indeed, in this tiling not all carbon atoms are three-coordinated, and the resulting structure is not planar.
The unit cell of pentagraphene, whose 3$\times$3 periodic arrangement is reported in figure \ref{fig:structure}(g), is made of $4$ three-coordinated and $2$ four-coordinated vertices, characterized by $sp^2$ and $sp^3$ hybridization, respectively. This diversity of coordination leads the system to develop into the third dimension.
This reflects the fact that one cannot tile the plane with regular pentagons. Top and side views of the calculation supercell used in our simulations are reported in the left hand side of figure \ref{fig:structure}(h). To find the daughter structure, we apply our augmentation method also to the Cairo pentagonal tiling characterizing the pentagraphene cell. In this way, we obtain a planar three-coordinated structure that we name ``{\it liskene}'', which is shown in the central panel of figure \ref{fig:structure}(g). While this is a different case study with respect to the other structures, as the augmentation procedure starts from a non-planar geometry, we find that the daughter architecture is still a three-coordinated system with a density lower than that of its parent. In the central panel of figure \ref{fig:structure}(h) we show the DFT-optimized geometry of this tiling. Furthermore, by further augmenting liskene we obtain its daughter architecture (see the right panel of figure \ref{fig:structure}(g)), which represents the maximal limit of the planar packing of pentagraphene. Finally, in the right panel of figure \ref{fig:structure}(h) we report the corresponding DFT-optimized structure.
\subsubsection{Relaxing the locally jammed packing constraint.}
By relaxing the condition of having all the angles between two carbon bonds strictly less than $\pi$, the flakene architecture can be used as a basis for building up structures of arbitrarily low density. This can be achieved by progressively elongating the sides of the hexagonal super-ring highlighted in figure \ref{fig:structure}(i), at fixed width. In particular, a sketch of the structure derived by doubling the hexagonal super-ring sides is reported in figure \ref{fig:structure}(l). Concerning stability, upon increasing the length of the hexagonal sides to achieve an arbitrarily low density, the energy per atom tends to a value $1.372$ eV/atom higher than that of graphene, which can be assumed to be the asymptotic value for area density going to zero.
\subsection{Structural optimization}
Prior to carrying out the mechanical and electronic characterization of these novel carbon nets, we performed the structural optimization (details on the DFT parameters were given in section \ref{methods}).
In the second and third columns of Table \ref{tab2} we report the energy per atom and the cohesive energies obtained upon optimization of the atomic positions within the cell. \\
\indent The cohesive energy of graphene (7.74 eV) agrees well with the experimental value of 7.6 eV \cite{Dappe2006} and with previous DFT simulations \cite{PASTI2018433} reporting a value of 7.828 eV. We notice that graphene is still the most energetically stable allotrope. In general, with the notable exception of pentagraphene, we observe that lowering the densities of the parent structures by the previously introduced augmentation method results in daughter architectures characterized by lower energetic stability and lower intra-molecular bond strengths. We rationalize the different finding in the case of pentagraphene, for which the cohesive energy increases from parent to daughter, by noticing that the augmentation starts from a non-planar $sp^2$-$sp^3$ net and ends up in a purely planar $sp^2$ net. This atomic arrangement thus represents a favourable solution from both the energetic and density points of view.\\
\indent For the flakene parent we find almost the same energy difference (0.6395 eV vs. 0.64 eV) with respect to graphene (see third column of table \ref{tab2}) as in \cite{Song2013}, where this structure is labelled ``graphenylene''. Also for the tilene parent we calculate an energy difference with respect to graphene equal to 0.5186 eV, which is very similar to the value of 0.53 eV reported in \cite{Liu2012}, where the structure was named T-graphene. Finally, in the case of pentagraphene we find an energy-per-atom difference of $0.904$ eV, which is very much comparable to the value of about $0.9$ eV reported in \cite{Zhang2015}.\\
\indent While the loss of stability is not significant, as the total energy difference per atom between the least stable material (flakene) and graphene is of the order of 1$\%$, the density is almost two times lower than that of graphene (see the second column of table \ref{tab2}).
\begin{table*}
\caption{\label{tab2}First column: structure type. Second column: surface density. Third and fourth columns: total energy per atom with respect to graphene and cohesive energy per atom obtained upon structural optimization, respectively. With the exception of pentagraphene, all structures are planar and each carbon atom is three-coordinated. In the table the following abbreviations were used: p.=parent, d.=daughter, dir.=direct gap, ind.=indirect gap.}
\lineup
\begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus
12pt}}l}}
\br
Structure & Density & Energy & Cohesive energy & Type & Bandgap\\
& (atoms/\AA$^2$) & ([eV]/atom) & ([eV]/atom) & & [eV]\\
\mr
\verb"Graphene" & 0.379 &0 & 7.7404 & \verb"Semi-met." & 0 \verb"(dir.)"\\
\verb"Graphene d." & 0.256 & 0.9882 & 6.7523 & \verb"Metal" & - \\
\verb"Tilene p." & 0.336 & 0.5186 & 7.2219 & \verb"Metal" & - \\
\verb"Tilene" & 0.233 & 1.0765 & 6.6640 & \verb"Metal" & - \\
\verb"Flakene p." & 0.301 & 0.6395 & 7.1009 & \verb"Semi-met." & 0.043 \verb"(dir.)"\\
\verb"Flakene" & 0.212 &1.1071 & 6.6334 & \verb"Metal" & -\\
\verb"Pentagraphene" & 0.452 & 0.9044 & 6.8361 & \verb"Semicond" & 2.23 \verb"(ind.)"\\
\verb"Liskene" & 0.297 & 0.7789 & 6.9615 & \verb"Semicond" & 0.36 \verb"(ind.)"\\
\verb"Liskene d." & 0.247 & 1.0506 & 6.6897 & \verb"Semicond" & 0.46 \verb"(ind.)"
\\
\br
\end{tabular*}
\end{table*}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.0\textwidth]{fig2.pdf}
\caption{Upper panels: band structure of the parent (left, red lines) and daughter (right, blue lines) architectures of a) graphene, b) tilene, c) flakene, and d) pentagraphene. Lower panels: relevant DOS of the parent (red filled area) and daughter (blue filled area) structures. The Fermi level is shifted to zero and reported as a horizontal (vertical) green line in the band structure (DOS).}
\label{fig:dos}
\end{figure*}
\subsection{Electronic properties}
In this section we report the band structures and relevant DOSs for the eight structural arrangements proposed in this work. \\
\indent We begin with the well known electronic band structure of graphene reported in the upper left panel of figure \ref{fig:dos}(a) alongside the DOS (red filled curve in the lower panel of figure \ref{fig:dos}(a)), which we reproduce to test our choice of the DFT parameters. The agreement with previous simulations \cite{RevModPhys.81.109} is excellent so we can move to the assessment of the electronic structure of the other systems. \\
\indent In the right panel of figure \ref{fig:dos}(a) we report the bands of the graphene daughter. We observe that graphene loses its semi-metallic characteristics and acquires a striking metallic behaviour, with a loss of the typical graphene features near the Fermi energy (valence and conduction bands no longer touch at a single Dirac point, nor is the dispersion linear). This is due to the appearance of a narrow band close to the Fermi level (reported as a horizontal (vertical) green line in the band structure (DOS)), a feature that appears also in tilene and flakene, as can be seen in the upper right panels of figures \ref{fig:dos}(b) and \ref{fig:dos}(c). This is of course reflected in the DOS, characterized by a narrow peak close to the Fermi energy, as shown by the blue filled curves reported in the lower panels of figures \ref{fig:dos}(a), \ref{fig:dos}(b), and \ref{fig:dos}(c). \\
\indent The electronic band structures of tilene parent and flakene parent (see upper left panels in figures \ref{fig:dos}(b) and \ref{fig:dos}(c), respectively) are not dramatically changed by augmentation, as the daughter structures stay metallic
(see the lower panel of figure \ref{fig:dos}(b) for tilene) or increase their metallic character (see the lower panel of figure \ref{fig:dos}(c) for flakene). \\
\indent At odds with the previous architectures, the pentagraphene band structure (see upper left panel of figure \ref{fig:dos}(d)), which has the typical characteristics of a semiconductor (in agreement with previous DFT calculations \cite{Zhang2015}), is heavily affected by augmentation, as the daughter structure (see upper right panel of figure \ref{fig:dos}(d)) presents an almost semi-metallic behaviour characterized by a very narrow band gap. The relevant DOSs of pentagraphene (filled red curve of figure \ref{fig:dos}(d)) and liskene (filled blue curve of figure \ref{fig:dos}(d)) are typical of a semiconductor and a semi-metal, respectively.
\subsection{Elastic properties}
\begin{table}
\caption{\label{tab:Elastic} Elastic constants ($C_{11}$, $C_{12}$, $C_{44}$), area Young's modulus ($E_A$), Young's modulus ($E$), Poisson's ratio ($\nu$) and area specific Young's modulus ($E_A/\rho_A$) of the parent and daughter carbon structures. To evaluate the accuracy of our simulations, we report a comparison with data in the literature where available. In the table the following abbreviations were used: p.=parent, d.=daughter.}
\lineup
\begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus
12pt}}l}}
\br
& $C_{11}$ & $C_{12}$ & $C_{44}$ & $E_A$ & $E$ & $\nu$ & $E_A/\rho_{A}$ \\
& (N/m) & (N/m)& (N/m)& (N/m)& (TPa) & & ($10^{-3}$ Nm~kg$^{-1}$) \\
\mr
\verb"Graphene" & 348 & 53.8 & - & 340 & 1.14 & 0.154 & 1.79 \\
\cite{Sun2016} & 358 & 60 & - & 349 & & 0.17& \\
\mr
\verb"Graphene d." & 149 & 94.0 & - & 89.6 & 0.30 & 0.631 & 0.70 \\
\cite{Sun2016} & 152 & 98 & - & 92.6 & & 0.64 & \\
\mr
\verb"Tilene p." & 294 & 44.1 & 48.3 & 288 & 0.96 & 0.150 & 1.70 \\
\cite{Lei2012} & 296 & 46 & 49 & 306 & & 0.13 & \\
\mr
\verb"Tilene" & 124 & 75.5 & 11.3 & 78.6 & 0.26 & 0.607 & 0.67 \\
\mr
\verb"Flakene p." & 220 & 57.7 & - & 205 & 0.69
& 0.263 & 1.36 \\
\cite{Sun2016} & 227 & 61 & - & 210 & & 0.27 & \\
\mr
\verb"Flakene" & 87.0 & 64.9 & - & 38.6 & 0.13 & 0.746 & 0.36 \\
\mr
\verb"Liskene" & 187 & 94.8 & 52.0 & 138 & 0.46
& 0.508 & 0.93 \\
\mr
\verb"Liskene d." & 127 & 65.6 & 19.4& 93.1
& 0.31 & 0.517
&0.75\\
\br
\end{tabular*}
\end{table}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.\textwidth,trim={0 0.2cm 0 0},clip=true]{fig3.pdf}
\caption{Specific biaxial elastic modulus versus area density. The black line shows the trend reported in \cite{Sun2016} for carbon allotropes.}
\label{fig:Density}
\end{figure*}
To characterize the mechanical properties of the daughter architectures in comparison with their parent structures, we first carried out ab-initio simulations of the elastic stiffness tensor ${\bf C}$. In linear approximation, the matrix ${\bf C}$ provides the proportionality or elastic constants relating stress and strain, ${\bf \sigma}={\bf C}{\bf \varepsilon}$, where ${\bf \varepsilon}$ is the strain vector in Voigt notation and ${\bf \sigma}$ is the stress vector.
The stiffness tensor is in principle characterized by six independent terms in bi-dimensional materials, being $C_{ij} = C_{ji}$ for symmetry reasons. The elastic behavior of orthotropic 2D materials can thus be described by four elastic constants $C_{11}$, $C_{22}$, $C_{12}$ and $C_{44}$ \cite{Huntington1958}. For the square lattice structures, such as tilene parent, tilene and liskene, the symmetry constraint sets $C_{11}=C_{22}$, so that one has only three independent elastic constants. Graphene, graphene daughter, flakene parent and flakene show instead isotropic hexagonal symmetry, reducing the independent elastic constants to only two according to the relations $C_{11}=C_{22}$ and $2C_{44}=C_{11}-C_{12}$.\\
\indent In harmonic approximation, the strain--energy density function $F$ at 0 K can be expressed as
\begin{equation}\label{enden}
F=F_0+\frac{1}{2}F^{(2)}\varepsilon^2+o(\varepsilon^3)
\end{equation}
where $F_{0}$ is the static energy of the unstrained system and $\frac{1}{2}F^{(2)}\varepsilon^2$ is the harmonic strain-energy contribution. In our simulations we neglect the thermal electronic contribution, which is expected to be small.\\
\indent The elastic constants $C_{ij}$ can be then expressed as follows:
\begin{equation}
C_{ij}= \frac{\partial^2 F }{\partial \varepsilon_{i} \partial \varepsilon_{j}}
\end{equation}
The $C_{ij}$'s can be derived by fitting the energy density of equation (\ref{enden}) with a second-order polynomial in the imposed strain. In particular, on the one hand, for isotropic materials the fitting parameter $F^{(2)}$ can be identified with $C_{11}$ for uniaxial deformation and with $2(C_{11}+C_{12})$ for hydrostatic deformation. On the other hand, in the case of orthotropic materials further calculations are needed in order to fully characterize the stiffness matrix. In this case, the elastic constants $C_{11}$ and $C_{22}$ can be identified as the fitting parameters of the total energy under uniaxial strain, while $F^{(2)}$ corresponds to $C_{11}+C_{22}+2C_{12}$ or $4C_{44}$ in the case of hydrostatic or shear deformation, respectively.
From the knowledge of the elastic constants, the Young's modulus $E$, which measures material's stiffness, and the Poisson's ratio $\nu$, which measures the material's tendency to expand in directions perpendicular to the direction of compression, can be computed as $E=(C_{11}^2-C_{12}^2)/C_{11}$ and $\nu=C_{12}/C_{11}$, respectively.\\
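As a simple numerical illustration of these relations, the following Python sketch evaluates $E_A$ and $\nu$ from the graphene elastic constants of table \ref{tab:Elastic}; the residual discrepancy in $\nu$ with respect to the tabulated value is due to rounding of $C_{11}$ and $C_{12}$.
\begin{verbatim}
# Young's modulus and Poisson's ratio from the elastic constants
# (illustrative Python; graphene values from the table).
C11, C12 = 348.0, 53.8           # N/m
E_A = (C11**2 - C12**2) / C11    # area Young's modulus
nu = C12 / C11                   # Poisson's ratio
print(E_A, nu)  # -> ~340 N/m and ~0.155 (cf. 0.154, within rounding)
\end{verbatim}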
\indent In table \ref{tab:Elastic} we report the elastic constants, Young's modulus and Poisson's ratio of all the 2D carbon allotropes studied in this work in comparison with the DFT values reported in the literature \cite{Sun2016, Lei2012}, finding a remarkable agreement with previous calculations and experiments. We recall that, at variance with a stable, isotropic, linear elastic 3D material, where the bounds on the Poisson's ratio are $-1<\nu<1/2$, for 2D materials one has $-1<\nu<1$ \cite{Thorpe531}. Therefore, it is not surprising to obtain values of the Poisson's ratio higher than 1/2 for our 2D architectures. \\
\indent The Young's modulus of graphene obtained from our DFT simulations is in good agreement with the experimental value of 1$\pm$0.1 TPa (assuming a graphene thickness equal to 0.335 nm) obtained by nanoindentation measurements on single-layer graphene \cite{Lee385}. The analysis of the Poisson's ratio of tilene, flakene and liskene shows that these materials are almost incompressible. More precisely, the Poisson's ratio of tilene (as well as of graphene daughter, flakene, and liskene) is higher than the limit of isotropic incompressible 3D materials (which is $0.5$) while lower than the corresponding upper bound for 2D materials (which is $1$ \cite{Thorpe531}): these materials present a hyper-restriction corresponding to a decrease of the area under tension.
\begin{table}
\caption{\label{tab:stresstrain}Loading direction (first column), fracture strain (second column), strength (third column), strength$\times t$ (fourth column) and toughness$\times t$ (fifth column) of the parent and daughter planar structures, alongside the specific strength and specific toughness (sixth and seventh columns). The conventional thickness of the graphenic materials is considered to be $t=3.35$~\AA. In the table the following abbreviations were used: p.=parent, d.=daughter.}
\lineup
\begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus
12pt}}l}}
\br
& {\small Loading} & {\small Fracture} & {\small Strength} & {\small Strength } & {\small Toughness} & {\small Specific} & {\small Specific} \\
& {\small direction} & {\small strain} & & {\small $ \times t$} & {\small $ \times t$} & {\small strength} & {\small toughness} \\
& & {\small (\%)} & {\small (GPa)}& {\small (N/m)} & {\small (J m$^{-2}$)}& {\small (MNm kg$^{-1}$)}& {\small (MJ~kg$^{-1}$)}\\
\mr
\verb"Graphene" & x & $>$ 35 & 112 & 37.5 & $>$ 9.83 & 49.7 & $>$ 13.0 \\
& y & 26-28 & 102 & 34.2 & 6.51 & 45.2 & 8.61 \\
\mr
\verb"Graphene d." & x & 18-20 & 29.3 & 9.81 & 0.83 & 19.2 & 1.62 \\
& y & $>$ 30 & 67.7 & 22.6 & $>$ 3.63 & 44.3 & $>$ 7.11 \\
\mr
\verb"Tilene p." & x,y & 24-26 & 99.6 & 33.4 & 5.55 & 49.7 & 8.27 \\
& $45 ^{\circ}$ & 32-34 & 79.1 & 26.5 & 5.68 & 39.5 & 8.47 \\
\mr
\verb"Tilene" & x,y & 20-22 & 44.8 & 15.0 & 1.66 & 32.3 & 3.57 \\
& $45 ^{\circ}$ & 18-20 & 30.3 & 10.2 & 0.92 & 21.9 & 1.97 \\
\mr
\verb"Flakene p." & x & 22-24 & 66.7 & 22.3 & 3.37 & 37.2 & 5.61 \\
& y & 22-24 & 57.9 & 19.4 & 3.01 & 32.3 & 5.02 \\
\mr
\verb"Flakene" & x & 12-14 & 23.6 & 7.92 & 0.49 & 18.7 & 1.16 \\
& y & 14-16 & 25.8 & 8.63 & 0.63 & 20.4 & 1.49 \\
\mr
\verb"Liskene" & x,y & 18-20 & 63.2 & 21.2 & 2.19 & 35.8 & 3.70 \\
& $45 ^{\circ}$ & 14-16 & 43.8 & 14.7 & 1.27 & 24.7 & 2.14 \\
\mr
\verb"Liskene d." & x,y & 12-14 & 27.8
& 9.32 & 0.59 & 18.5 & 1.20\\
& $45 ^{\circ}$ & 14-16 & 28.0
& 9.37 & 0.68 & 19.0 & 1.37 \\
\br
\end{tabular*}
\end{table}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.2\linewidth,trim={0 5cm 0 5cm},clip=true]{fig4.pdf}
\caption{Stress--strain curves of graphene and graphene daughter along the $x$-direction (or zig-zag, represented by red and blue empty squares for the two architectures, respectively) and the $y$-direction (or armchair, represented by green and violet empty circles for the two architectures, respectively). The differently colored lines represent the best fits to the ab-initio data. On the left and right sides of the image we report the simulation cells of graphene daughter for different strain values and directions.}
\label{fig:GrapheneFamily}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.\linewidth,trim={0 5cm 0 5cm},clip=true]{fig5.pdf}
\caption{Stress--strain curves of tilene parent and tilene along the $x$-direction (red and blue empty squares for the two architectures, respectively) and the $45 ^{\circ}$-direction (green and violet empty triangles for the two architectures, respectively). The differently colored lines represent the best fits to the ab-initio data. On the left and right sides of the image we report the simulation cells of tilene for different strain values and directions.}
\label{fig:TileneFamily}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.\linewidth,trim={0 6cm 0 5cm},clip=true]{fig6.pdf}
\caption{Stress--strain curves of flakene parent and flakene along the $x$-direction (red and blue empty squares for the two architectures, respectively) and the $y$-direction (green and violet empty circles for the two architectures, respectively). The differently colored lines represent the best fits to the ab-initio data. On the left and right sides of the image we report the simulation cells of flakene for different strain values and directions.}
\label{fig:FlakeneFamily}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.\linewidth,trim={0 5cm 0 5cm},clip=true]{fig7.pdf}
\caption{Stress--strain curves of liskene and liskene daughter along the $x$-direction (red and blue empty squares for the two architectures, respectively) and the $45 ^{\circ}$-direction (green and violet empty circles for the two architectures, respectively). The differently colored lines represent the best fits to the ab-initio data.}
\label{fig:LiskeneFamily}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=1.\textwidth,trim={0 5cm 0 5cm},clip=true]{fig8.pdf}
\caption{Comparison between the stress--strain curves of liskene, tilene, and flakene along the $x$-direction (blue, red and cyan empty squares for the three architectures, respectively), along the $45 ^{\circ}$-direction (violet and green triangles for liskene and tilene, respectively) and along the $y$-direction (black empty circles for the flakene architecture). The differently colored lines represent the best fits to the ab-initio data. On the left and right sides of the image we report the simulation cells of liskene for different strain values and directions.}
\label{fig:NewStructures}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[width=1.\linewidth]{fig9.pdf}
\caption{Comparison between the stress--strain curves of liskene (blue empty squares) and tilene (green empty triangles) parent along the $x$ and $45 ^{\circ}$ directions, respectively. The differently colored lines represent the best fits to the ab-initio data.}
\label{fig:LiskenevsTilParent}
\end{figure}
Tilene presents an area Young's modulus $E_A=78.6$ N/m and a Poisson's ratio $\nu=0.607$, which are similar to those of the graphene daughter. Flakene has an area Young's modulus $E_A=38.6$ N/m and a Poisson's ratio $\nu=0.746$. Generally, we notice that graphene has the highest Young's modulus, and that moving from parent to daughter structures the Young's modulus decreases and the Poisson's ratio consequently increases. \\
\indent Moreover, one of the most significant figures of merit for low-density materials is of course the specific modulus, namely the Young's modulus divided by the mass density. Thus, we computed the Young's modulus per mass density $E_A/\rho_{A}=E/\rho$, where $\rho_{A}$ is the area density in units of kg/m$^2$ and $\rho$ the mass density in kg/m$^3$. The outcome of our simulations concerning this quantity is reported in the last column of table \ref{tab:Elastic}.
We notice that graphene presents the biggest specific modulus among the materials studied here. By contrast, flakene, while displaying the lowest density among the investigated structures, shows a major drop in both the absolute and specific elastic moduli, which are about 8 and 5 times lower than those of graphene, respectively.
Nevertheless, while we do not find a material outperforming the specific properties of graphene in this respect, so that augmentation is only partially an advantageous route towards increasing the specific modulus of graphene-like materials, the differences in the specific Young's modulus are less remarkable than those in the absolute values, with the exception of flakene. \\
\indent The drop of the flakene Young's modulus suggests that there is a threshold to the decrease of the density of these carbon-based planar materials, below which this mechanical characteristic is significantly depleted.
In order to get further insight into this issue, we report in figure \ref{fig:Density} the specific modulus of our carbon allotropes versus the area density. In particular, we plot the specific biaxial modulus ($E_{bi}=C_{11}+C_{12}$) versus the area density, fitting the data reported in \cite{Sun2016} by the formula $E_{bi}=1184.3 \times \rho_A-56.88$, with $E_{bi}$ in N/m and $\rho_A$ in atoms/\AA$^2$ (black curve in figure \ref{fig:Density}). We notice that the specific biaxial moduli of the structures studied in this work can be found in close proximity to the model fit.
These findings lead us to the conclusion that the idea of decreasing the density while retaining the specific mechanical characteristics can be pursued only to some extent, at least as far as the Young's modulus is concerned. \\
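For illustration, the short Python sketch below compares the biaxial moduli $C_{11}+C_{12}$ obtained from table \ref{tab:Elastic} with the linear fit quoted above; the selection of structures is arbitrary.
\begin{verbatim}
# Biaxial moduli vs. the linear fit E_bi = 1184.3*rho_A - 56.88
# (illustrative Python; rho_A, C11, C12 taken from the tables).
fit = lambda rho_A: 1184.3 * rho_A - 56.88        # N/m
data = {"graphene":  (0.379, 348.0, 53.8),
        "tilene p.": (0.336, 294.0, 44.1),
        "flakene":   (0.212,  87.0, 64.9)}
for name, (rho, C11, C12) in data.items():
    print(name, C11 + C12, fit(rho))
# graphene 402 vs 392, tilene p. 338 vs 341: close to the fit;
# flakene 152 vs 194: below the fit, as discussed in the text.
\end{verbatim}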
\subsection{Stress--strain curves}
To gain further insight into the dependence of the mechanical properties of our structures on the density, we carried out first-principles simulations of the stress--strain curves, from which several observables can be obtained, such as the fracture strain, the tensile strength and the toughness. DFT calculations of the true stress tensor in response to strain, from which one can develop constitutive equations to fit the ab-initio data, were performed on the unit cells of the materials. We recall that all structures were relaxed below 3 kbar in the direction orthogonal to the loading, and we plot the stress obtained by using the relaxed surface (true stress as opposed to engineering stress). \\
\indent In figure \ref{fig:GrapheneFamily} we begin by analyzing the stress--strain curves of graphene and the graphene daughter, in order to benchmark our results against the extensive number of computational and theoretical studies carried out in this respect, along the Cartesian directions $x,y$, which represent the zig-zag and armchair directions of graphene (or, better, of the zig-zag and armchair ribbons that can be obtained by cleaving along the $x,y$ directions), respectively.
The stress--strain characteristics under uniaxial tensile loading along the zigzag ($x$, empty red squares) and armchair ($y$, empty green circles) directions, reported in figure \ref{fig:GrapheneFamily}, show the known anisotropic response of graphene that results in nonlinear constitutive equations \cite{XU20122582,doi:10.1021/jp307469u}.
The mechanical response of graphene to uniaxial tension is almost linear until about 10\% strain for both the armchair and zigzag directions, with the curve slope progressively decreasing with increasing strain. Beyond that value the stress--strain curves deviate significantly from linearity, keeping an isotropic behaviour up to $15$\% strain, where the mechanical characteristics along the two loading directions fork. The anisotropy develops at rather moderate strain, with the zig-zag stiffness dramatically decreasing with respect to the armchair direction. In figure \ref{fig:GrapheneFamily} we also sketch the mechanical response to loading along the $x$ (empty blue squares) and $y$ (empty violet circles) directions of the graphene daughter. We notice that the absolute mechanical properties deplete significantly from graphene to its daughter in all respects, with a strong decrement in toughness and strength (see also table \ref{tab:stresstrain}, where we report the absolute mechanical characteristics of these structures along with those of the other novel 2D architectures proposed in this work, that is tilene, flakene and liskene).
In this respect, we notice that the architecture of the graphene daughter is largely dominated by the presence of triangular shapes, at variance with graphene (see figure \ref{fig:structure}(b)). This feature is shared also by tilene (see figure \ref{fig:structure}(d)), and it appears to be the major reason for the similar mechanical response to uniaxial strain along the $x$-direction between the graphene daughter (see blue empty squares in figure \ref{fig:GrapheneFamily}) and tilene (see blue empty squares in figure \ref{fig:TileneFamily} for a comparison). Along the $y$ direction the mechanical response of the graphene daughter is similar to that of graphene (see violet empty circles in figure \ref{fig:GrapheneFamily}), showing a high fracture strain at lower stress than graphene.\\
\indent Tilene parent and tilene (see figure~\ref{fig:structure}(d)) belong to the dihedral symmetry group (D4) and, thus, in figure \ref{fig:TileneFamily} we report the stress--strain characteristics along the $x$ (empty squares; a strain along the $y$ direction would provide the same results) and the diagonal ($45^{\circ}$, empty triangles) directions.
Tilene parent displays a behavior under mechanical loading similar to graphene along the $x$-direction (empty red squares in figure \ref{fig:TileneFamily}), with comparable strength and proportional limit stress (see table \ref{tab:stresstrain}). However, in the diagonal direction (see empty green triangles in figure \ref{fig:TileneFamily}) the presence of $sp^2$-carbon squares reduces the absolute mechanical performance of the tilene parent, but with a significantly higher fracture point (see table \ref{tab:stresstrain}). Nevertheless, the stress--strain characteristics do not overlap along the two different directions. Tilene shows a mechanical response to uniaxial strain comparable to that of the graphene daughter in both the $x$ (blue empty squares of figure \ref{fig:TileneFamily}) and diagonal directions (violet empty triangles of figure \ref{fig:TileneFamily}), its structure being characterized by a similar occurrence of $sp^2$ triangles. \\
\indent Furthermore, in figure \ref{fig:FlakeneFamily} we report the stress--strain curves of flakene parent and daughter along the orthogonal directions $x$ (empty squares) and $y$ (empty circles) as their lattices display hexagonal symmetry.
The mechanical characteristics of flakene parent are similar to those of graphene, showing a split between $x$ (empty red squares) and $y$ (empty green circles) curves at about $13$\% strain, and an almost linear regime up to $10$\% strain.
However, the values of the strength are $60$\% lower than in the case of graphene (see table \ref{tab:stresstrain}). Flakene daughter shows a behaviour comparable to the graphene daughter, being characterized by a similarly large presence of $sp^2$-carbon triangular lattices, with lower absolute values of the strength (see table \ref{tab:stresstrain}).\\
\indent We notice that the augmentation procedure to obtain the liskene daughter architecture from liskene concerns only the carbon atoms that belong to the square shapes. In figure \ref{fig:LiskeneFamily} we report the stress--strain curves of liskene and liskene daughter along the orthogonal $x$-(empty squares) and $45^{\circ}$- directions (empty circles). Even in this case we find the general trend previously observed of a decrease of the fracture strain and tensile strength from the parent to the daughter structure.\\
\indent In figure \ref{fig:NewStructures} we present the stress--strain characteristics along the $x$ and $y$ directions of our novel planar architecture named liskene, compared to tilene and flakene.
As previously noticed, within the initial linear regime the uniaxial $x$ (blue empty squares) and $y$ (violet empty triangles) loading curves overlap; at about 6\% strain they fork and deviate progressively from linearity up to the fracture strain, at about 14\% and 18\% strain, respectively. The values of the liskene mechanical characteristics are slightly higher than those of the other proposed architectures (see table \ref{tab:stresstrain}). We notice that after the disruption of the first set of bonds at 14\% and 17\% strain along the $x$ and $y$ directions, respectively (see the structures reported in the left and right hand sides of figure \ref{fig:NewStructures}), the liskene stress--strain curves in both directions settle into a second linear regime with a different slope. \\
\indent Finally, in figure \ref{fig:LiskenevsTilParent} we notice that tilene parent (green line) and liskene (blue line) present basically comparable stress--strain curves up to $10\%$ strain. We rationalize this by noting that for both structures the stress--strain characteristics are initially dominated by the deformation of the $sp^2$-carbon atoms arranged in squares. However, as the strain increases, the liskene stress--strain curve departs from that of the tilene parent. This is due to the fact that the former architecture undergoes the fracture of the bonds within the squares, so that the stronger bonds of the carbon triangles come into play. \\
\indent As a final remark, we point out that the picture described so far for the absolute values changes slightly when we look at the specific properties reported in the last two columns of table \ref{tab:stresstrain}. Indeed, the specific strength of our novel 2D structures is comparable to that of graphene, or even higher in the case of the tilene parent. Nevertheless, the trend of the specific toughness, which measures the ability of a material to absorb energy before fracture, generally favours graphene over the other structures.
\section{Conclusion}
To conclude, in this work we present a systematic approach to the discovery of all-$sp^2$ carbon allotropes with the aim of decreasing the density of graphene without depleting its unique mechanical properties. This method proceeds by lowering the packing factors, which means augmenting the number of congruent discs under the constraint of local stability. \\
\indent While all the daughter structures that we generate display lower stability and smaller cohesive energy than graphene, their density is considerably lower than that of graphene, by up to $45\%$.
In particular, we argue that flakene represents the least dense possible structure among the families of all-$sp^2$ carbon allotropic forms generated from planar parent architectures under the local stability constraint. Nevertheless, we propose that novel geometries could be obtained by initiating the augmentation procedure from non-planar architectures, e.g. from pentagraphene. The relevant atomic arrangement derived from pentagraphene, which we name liskene, displays a high cohesive energy at a density $22\%$ lower than that of graphene. \\
\indent Nevertheless, by comparing the specific Young's modulus of these structures with that of graphene, we notice that there is a threshold below which it is not possible to further reduce the density without a considerable depletion of this elastic property. In particular, lowering the density below that of liskene results in a reduction of about $40\%$ of the specific Young's modulus. This can be clearly seen in the case of flakene, which displays the lowest density among the proposed planar structures as well as the smallest absolute and specific Young's modulus. Thus, graphene presents one of the highest specific moduli ever found, and the quest for finding a better replacement in mechanical engineering applications is still open. Based on these findings, we argue that the search for materials with a high specific Young's modulus should proceed among the high-density carbon allotropes. However, the area density of graphene is close to the limit of maximal planar packing. Furthermore, the specific Young's modulus has an asymptotic limit for high-density packing. We further note that the structures with atomic density in the range of $0.25-0.3$ atoms/\AA$^2$ have performances similar to that of graphene, and a search focused on this range of densities could be profitable for finding high-specific-modulus materials. Thus, a hypothetical improvement of this quantity could be devised only by changing the paradigm of interaction, for example by enhancing electrostatic and/or Van der Waals interactions. \\
\indent Our analysis of the mechanical properties also concerned the stress--strain curves of these low-density materials. We find that, while the absolute values of the mechanical characteristics, such as fracture strain, strength, and toughness, are generally lower than those of graphene (with the exception of the tilene parent architecture), their specific counterparts can approach those of graphene and even surpass its specific strength in the case of the tilene parent. In general, we notice that the mechanical properties degrade when moving from parent to daughter architectures by lowering the packing factors. Thus, depending on the application, our structures could be used to replace graphene when weight reduction is of paramount importance. \\
\indent Finally, we also assessed the electronic properties of the novel structures generated by our augmentation algorithm and compared them with those of their parent networks. We find that a change in the packing factor results in the appearance of a narrow band close to the Fermi level, a common feature shared by all the parent-to-daughter transformations. This is particularly evident in the case of the liskene architecture, which is a semi-metal, even though its parent structure, pentagraphene, is a semiconductor with a $2.3$~eV band gap according to our DFT simulations.\\
\indent We conclude by noticing that the systematic approach presented in this work could be extended also to design novel lightweight strong three-dimensional carbon allotropes \cite{Qine1601536}.
\ack
N.P. is supported by the European Commission under the Graphene Flagship Core 2 grant No.
785219 (WP14, "Composites") and FET Proactive ("Neurofibres") grant No. 732344 as well as by the Italian Ministry of Education, University and Research (MIUR) under the ``Departments of Excellence'' grant L.232/2016. The authors gratefully acknowledge the Gauss Centre for Supercomputing for funding this project by providing computing time on the GCS Supercomputer JUQUEEN at J{\"u}lich Supercomputing Centre (JSC) \cite{juqueen}. Furthermore, the authors acknowledge Bruno Kessler Foundation (FBK) for providing unlimited access to the KORE computing facility.
\section*{References}
\bibliographystyle{unsrt}
|
1,108,101,566,325 | arxiv | \section{Introduction}
The well-developed experimental technique of correlation femtoscopy measurements in the field of
high-energy heavy-ion collision physics makes it possible to investigate the spatio-temporal
structure of the systems produced in such collisions, as well as the peculiarities of the process
of their evolution (see, e.g. review~\cite{Lisa}). The femtoscopy, or interferometry radii, extracted from the Gaussian fits to the
measured two-particle correlation functions at given pair momentum~$k_T$, are generally associated with
the homogeneity lengths of a rapidly expanding system, or the three-dimensional sizes of the fragment of the system
from which the particles with momentum~$k_T$ are mainly emitted~\cite{hlength1,hlength2}.
The detailed structure of homogeneity lengths contains also the information on, e.g., the strength of the
developed collective flow and the lifetime of the created fireball~\cite{MakSin} and on
the space-time correlations between emitted particles~\cite{hbt-puzzle1,hbt-puzzle2}.
The femtoscopy analysis can even provide such detailed information about the dynamics of the system's expansion as
the effect of resonance decays and hadron-hadron scatterings at the
afterburner stage of the collision on the formation of bulk observables, or the times at which the particles of
different species are mostly emitted from the system~\cite{lifetime,sourcefunc}. The study of the correlation
functions for pairs of non-identical particles can help to determine which sort of particles is emitted
earlier~\cite{lednicky,kiesel}.
In the paper~\cite{lifetime} we proposed a method for the estimation of the times of maximal emission
for pions and kaons in the LHC Pb+Pb collisions at $2.76 A$~TeV, based on simple analytical formulas allowing the simultaneous fitting
of transverse momentum spectra for both considered particle sorts followed by fitting of the corresponding
$long$ interferometry radii dependencies on pair $m_T$. The formulas were obtained
as a result of analytical approximation for single-particle and two-particle momentum spectra in A+A collisions \cite{hlength2,lifetime}.
This method gave us the estimates for the effective pion and kaon emission times within the hydrokinetic model
(HKM)~\cite{HKM,HKM1}, which was used to calculate the particle spectra in our study.
The obtained estimates were also in agreement with the corresponding emission function plots obtained in HKM.
In addition, the first application of the method to the RHIC BES energies~\cite{Oslo}, where
UrQMD was used as the evolutionary model, was quite satisfactory.
The method was
successfully used by the ALICE Collaboration~\cite{alice-mt} for the estimation of pion and kaon maximal
emission times within their experimental analysis. Both studies showed that the effective emission time
for kaons is larger than for pions. This fact, together with observed essentially non-Gaussian shape of
the considered correlation functions and the absence of scaling between $R_{\mathrm{long}}(m_T)$ dependencies
for kaons and pions, suggested that the secondary kaons, coming from the $K^{*}(892)$ resonance decays and involved
into intensive interaction with the hadronic medium at the final stage of the collision, give an important
contribution to the total kaon yield. Similar results were obtained later within the more advanced
integrated hydrokinetic model (iHKM)~\cite{ihkm1,ihkm2} for the case of Au+Au collisions at the top RHIC energy
in our recent paper~\cite{rhic-ihkm}.
In~\cite{lhc502-ihkm} we presented the results of our systematic study of Pb+Pb collisions at the LHC energy
$5.02 A$~TeV within the iHKM model concerning the most of the bulk observables, including the predictions
for femtoscopy radii in the three centrality classes ($c=0-5\%$, $c=20-30\%$, and $c=40-50\%$).
However, our previous study did not include
the analysis of the maximal emission times for kaons and pions. That is why in the present work we are going
to close this gap and to apply the developed technique to the case of Pb+Pb collisions at $\sqrt{s_{NN}}=5.02$~TeV
in order to get more details about the character of the matter evolution at this LHC energy and to compare it
with the case of $\sqrt{s_{NN}}=2.76$~TeV collisions.
\section{Analytical model}
In this section we briefly explain the idea of the method utilized (see~\cite{lifetime} for details)
and the origination of the analytical formulas for spectra and radii fitting.
The formation of particle spectra measured in heavy-ion collision experiments can be described
using a modified Cooper-Frye prescription (CFp) for particlization hypersurface~$\sigma$, which consists
of all space-time
points $(t_{\sigma}(\textbf{r},p),\textbf{r})$, where the maximal emission
of quanta with momentum $p$ takes place~\cite{spec-form1,spec-form2}. In this approach, by contrast
with the standard Cooper-Frye prescription, where the same particlization hypersurface (typically an isotherm)
is used for all the momenta $p$, one does not have any problems with the negative contributions
from non-spacelike elements of the switching hypersurface, since for each specified momentum $p$ the
corresponding fragment of the hypersurface is spacelike by its construction.
Here, similarly to~\cite{lifetime}, we base our consideration on the approximation~\cite{tau-const1,tau-const2}
of such generalized CFp, where we suppose the hypersurface $\sigma$ to be of constant proper
time $\tau$ (equal to the time of maximal emission for soft quanta, $\tau=\tau_{m.e.}=const$) and
to be limited in the direction $\textbf{r}_T$, transverse to the beam axis.
Such an assumption corresponds to the emission of soft particles with momenta $p_T \approx 0.2-0.4$~GeV/$c$,
whereas for particles with $p_T>0.8$~GeV/$c$ strong space-time correlations between particle radiation events
take place, and the related hypersurface parts essentially differ from $\tau=const$~\cite{tau-const1,tau-const2}.
Thus, we utilize the following analytical representation for the bosonic Wigner function for soft enough quanta:
\begin{equation}
f_{l.eq.}(x,p)=\frac{1}{(2\pi)^3}\left[\exp(\beta p\cdot u(\tau_{m.e.},{\bf r}_T) -\beta\mu)-1\right]^{-1}\rho({\bf r}_T),
\label{wigner}
\end{equation}
where $\beta$ denotes the inverse temperature, $\eta_L= \text{arctanh}\,v_L$
and $\eta_T = \text{arctanh}\,v_T(r_T)$ are longitudinal and transverse rapidities,
$u^{\mu}(x)=(\cosh\eta_L\cosh\eta_T,\frac{{\bf r}_T}{r_T}\sinh\eta_T,\sinh\eta_L\cosh\eta_T)$ is
hydrodynamic velocity and $\rho({\bf r}_T)$ is the Gaussian cutoff factor
\begin{equation}
\rho({\bf r}_T)=\exp[-\alpha (\cosh\eta_T(r_T)-1)],
\label{rho}
\end{equation}
where the parameter $\alpha$ is defined as $\alpha = R_v^2/R_T^2$,
and in the latter ratio $R_T$ denotes the homogeneity length
in transverse direction $\textbf{r}_T$ (for $r_T$ close to zero and at small momenta~$k_T$),
while $R_v=(v^{\prime}(r_T))^{-1}$ is the hydrodynamic length near $r_T=0$.
The $\alpha$ parameter characterizes the strength of the collective flow: strong flow corresponds to
small values of $\alpha$, since in this case one has $R_v \ll R_T$, and for the case of absent flow
$R_v \rightarrow \infty$, so that $\alpha \rightarrow \infty$ as well.
The factor $\rho({\bf r}_T)$ effectively limits the particlization hypersurface in transverse direction
and removes the contributions from hard quanta, for which $\cosh\eta_T(r_T) \gg 1$.
According to the improved Cooper-Frye prescription utilized in our analysis one can
calculate the single-particle spectra $p_{0}(d^{3}N/d^{3}p)$
and two-particle correlation functions $C(p,q)$ as follows:
\begin{equation}
p_{0}\frac{d^{3}N}{d^{3}p}=\int_{\sigma_{m.e.}(p)}d\sigma_{\mu} p^{\mu} f_{l.eq.}(x,p),
\label{sp-def}
\end{equation}
\begin{equation}
C(p,q)\approx 1+\frac{\left|\int_{\sigma_{m.e.}(k)} d\sigma_{\mu}k^{\mu} f_{l.eq.}(x,k)\exp(iqx)\right|^{2}}{\left(\int_{\sigma_{m.e.}(k)}
d\sigma_{\mu}k^{\mu} f_{l.eq.}(x,k)\right)^{2}}.
\label{corr-approx}
\end{equation}
Here we use the conventional denotations $q=p_{1}-p_{2}$ and $k^{\mu}=\left(\sqrt{m^2+\left(\frac{\mathbf{p_{1}}+\mathbf{p_{2}}}{2}\right)^2},\frac{\mathbf{p_{1}}+\mathbf{p_{2}}}{2}\right)$.
When one works in both smoothness and mass shell approximations, one has $k\approx p=(p_1+p_2)/2$
and 4-vector $q$ having only three independent components. In femtoscopy analysis one usually chooses
them to be $q_{\mathrm{long}}$ --- along the beam axis direction, $q_{\mathrm{out}}$ --- along
the pair transverse momentum $\textbf{k}_T$ direction, and $q_{\mathrm{side}}$ --- orthogonal
to both {\it long} and {\it out} directions.
To obtain some simple analytical expressions for spectrum and correlation function,
which would be convenient for fitting and interpretation of the experimental or the realistic simulation
results, one can approximately calculate Eq.~(\ref{sp-def}) and Eq.~(\ref{corr-approx}),
substituting the Wigner function $f_{l.eq.}(x,p)$ by (\ref{wigner}) and
using the saddle point method, as it was done in~\cite{Tolstykh} within the Boltzmann approximation
for the case of longitudinally boost-invariant expansion.
It is interesting that, within the approximation obtained in~\cite{Tolstykh}, the behavior of the correlation function
$C(p,q)$ in the {\it long} direction is defined only by the value of the $\alpha$ parameter and does not depend on the profile
of the transverse velocity $v_T$ at the particlization hypersurface (unfortunately, this is not the case for the two
transverse directions). This allows us to simplify the further analysis significantly, if in what follows we restrict
ourselves to the analysis of the longitudinal direction only.
Introducing the longitudinal homogeneity length at non-zero transverse flow, $\lambda_l=\tau\sqrt{\frac{T}{m_T}(1-\bar{v}^2_T)^{1/2}}$~\cite{tau-const1,tau-const2} and denoting its ratio to $\tau$ as $\lambda$, one has
\begin{equation}
\lambda^2 =\frac{\lambda_l^2}{\tau^2}=\frac{T}{m_T}(1-\bar{v}^2_T)^{1/2}.
\label{lambda}
\end{equation}
Here $\bar{v}_T=k_T/(m_T+\alpha T)$ is the transverse flow velocity in the saddle point, and $T=T_{m.e.}$
is the temperature at the hypersurface of maximal emission, $\tau=\tau_{m.e.}$.
Then in LCMS frame one can write for the correlation function in $l\equiv long$ direction (at $q_{\mathrm{out}}=q_{\mathrm{side}}=0$)~\cite{Tolstykh}:
\begin{equation}
C(k,q_l)=1+\frac{\exp\left[\frac{2}{\lambda^2}\left(1-\sqrt{1+\tau^2\lambda^4q_l^2}\right)\right]}{\left[1+\tau^2\lambda^4q_l^2\right]^{3/2}}
\stackrel{k_T\rightarrow \infty}{\longrightarrow} 1+\exp(-\lambda_l^2 q_l^2).
\label{correlator}
\end{equation}
The obtained expression implies that in the general case the correlation function is not Gaussian,
which means that one can obtain different analytical approximations for the Gaussian interferometry radii
based on this function, depending on the considered physical situation (see~\cite{Tolstykh} for more details).
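As an illustration, the non-Gaussian correlator~(\ref{correlator}) is easy to evaluate numerically; in the minimal Python sketch below all parameter values are illustrative assumptions, not fit results:
\begin{verbatim}
import numpy as np

hbarc = 0.19733  # GeV*fm; makes tau [fm/c] * q_l [GeV/c] dimensionless

def correlator(q_l, tau, lam):
    """Non-Gaussian C(k, q_l) at q_out = q_side = 0;
    q_l in GeV/c, tau in fm/c, lam = lambda_l/tau (dimensionless)."""
    x = (tau * lam**2 * q_l / hbarc)**2
    return 1.0 + np.exp((2.0/lam**2)*(1.0 - np.sqrt(1.0 + x))) / (1.0 + x)**1.5

# Assumed values: T = 0.138 GeV, m_T = 0.5 GeV, v_T = 0.5, tau = 9 fm/c
lam = np.sqrt(0.138/0.5 * np.sqrt(1.0 - 0.5**2))
for q in (0.0, 0.02, 0.05, 0.10):
    print(f"q_long = {q:.2f} GeV/c  ->  C = {correlator(q, 9.0, lam):.4f}")
\end{verbatim}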
In particular, when one considers the case of longitudinally boost-invariant matter expansion
with a transverse flow having arbitrary velocity profile $v_T(\textbf{r}_T)$, then for
small $q_{\mathrm{long}}$ (the peak of the correlation function) one has for $R_{\mathrm{long}}(m_T)$:
\begin{equation}
R^2_{\mathrm{long}}(m_T)=\tau^2\lambda^2\left(1+\frac{3}{2}\lambda^2\right),
\label{radfit}
\end{equation}
where $m_T=\sqrt{m^2+k_T^2}$.
The formula~(\ref{radfit}) can be applied in case of transverse flow of any
intensity, which is especially important for the LHC energies.
Having obtained the formula~(\ref{radfit}) for the femtoscopy radii fitting from the
approximation for the correlation function (\ref{corr-approx}), one can further
use a similar approach to get the formula for the momentum spectrum starting from
Eq.~(\ref{sp-def})~\cite{tau-const2}. As a result one will have
\begin{equation}
p_0 \frac{d^3N}{d^3p} \propto \exp{[-(m_T/T + \alpha)(1-\bar{v}^2_T)^{1/2}]}.
\label{specfit}
\end{equation}
This formula allows one to approximate the slope of the transverse momentum spectrum at not very high $p_T$
in the presence of transverse collective flow under the assumption that the shape
of the spectrum is close to the exponential one.
The procedure proposed in~\cite{lifetime} for the estimation of the time of maximal emission
for pions and kaons in the LHC Pb+Pb collisions suggests first determining the parameters $T$ and $\alpha$
based on the results of a combined fitting of the pion and kaon $p_T$ spectra with Eq.~(\ref{specfit}),
and then using the found parameter values
to fit the corresponding $R_{\mathrm{long}}(m_T)$ dependencies with Eq.~(\ref{radfit}).
This latter fitting gives one the desired time of maximal emission values $\tau_{\pi}$ and $\tau_{K}$.
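A schematic Python implementation of this two-step procedure, in which the helper functions simply transcribe Eqs.~(\ref{specfit}) and (\ref{radfit}) and the ``data'' arrays are synthetic placeholders (not actual model or experimental output), could read as follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

m_pi, m_K = 0.139, 0.494   # GeV

def spectrum(pT, T, alpha, norm, m):
    # Spectrum slope formula with transverse flow (see text)
    mT = np.sqrt(m**2 + pT**2)
    vT = pT / (mT + alpha * T)          # saddle-point flow velocity
    return norm * np.exp(-(mT/T + alpha) * np.sqrt(1.0 - vT**2))

def R_long(mT, tau, T, alpha):
    # R_long(m_T) in fm for tau in fm/c and T in GeV (see text)
    kT = np.sqrt(np.maximum(mT**2 - m_pi**2, 0.0))
    vT = kT / (mT + alpha * T)
    lam2 = (T/mT) * np.sqrt(1.0 - vT**2)
    return tau * np.sqrt(lam2 * (1.0 + 1.5*lam2))

# Step 1: combined pion+kaon spectra fit with a common temperature T.
pT   = np.linspace(0.45, 1.0, 12)
y_pi = spectrum(pT, 0.138, 4.8, 1.0e3, m_pi)   # placeholder "data"
y_K  = spectrum(pT, 0.138, 2.4, 2.0e2, m_K)

def combined(x, T, a_pi, a_K, n_pi, n_K):
    p = x[: len(x)//2]
    return np.concatenate([spectrum(p, T, a_pi, n_pi, m_pi),
                           spectrum(p, T, a_K,  n_K,  m_K)])

(T0, a_pi, a_K, n_pi, n_K), _ = curve_fit(
    combined, np.concatenate([pT, pT]), np.concatenate([y_pi, y_K]),
    p0=[0.14, 4.0, 2.0, 1.0e3, 2.0e2])

# Step 2: fit R_long(m_T) with T and alpha fixed, tau free.
mT     = np.sqrt(m_pi**2 + pT**2)
R_data = R_long(mT, 9.1, T0, a_pi)             # placeholder "data"
(tau_pi,), _ = curve_fit(lambda m, tau: R_long(m, tau, T0, a_pi),
                         mT, R_data, p0=[8.0])
print(f"T = {T0:.3f} GeV, tau_pi = {tau_pi:.2f} fm/c")
\end{verbatim}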
\section{Results and discussion}
In the current study we follow the same algorithm as described in~\cite{lifetime} and perform fitting of
the $p_T$ spectra
and \textit{long} femtoscopy radii obtained from the iHKM realistic simulations of the relativistic
heavy-ion collisions. The model consists of several modules, each describing one of the collision stages
(initial state of the system right after the nuclei have passed through each other, pre-thermal expansion
of far-from-equilibrium system, hydrodynamical expansion of nearly thermalized matter,
particlization and hadron cascade stage --- see~\cite{ihkm1,ihkm2} for details).
In~\cite{lhc502-ihkm} the model was tuned for the simulation of Pb+Pb collisions at the LHC energy $5.02 A$~TeV
and showed good agreement with the experimental results on different particle production observables for this energy.
The predictions for interferometry radii in the three centrality classes ($c=0-5\%$, $c=20-30\%$, and $c=40-50\%$)
were also made (see Figs.~\ref{rad05}, \ref{rad2030} and \ref{rad4050}). Thus, we have all the necessary data
to apply our method and try to extract the maximal emission times.
In Fig.~\ref{specf} one can see the plot demonstrating pion and kaon $p_T$ spectra for the most central
collisions ($c=0-5\%$) calculated in iHKM together with the ALICE Collaboration experimental data~\cite{alice-spec}
and fitting curves according to Eq.~(\ref{specfit}). The fitting is carried out in the momentum
range $0.45<p_T<1.0$~GeV/$c$. The temperature $T$ is fixed to be a common parameter for both the pion and kaon spectra,
while the $\alpha$ and normalizing constant parameters are supposed to be different for pions and kaons.
It is worth noting that, as compared to the case of $2.76 A$~TeV collisions, the fitting results are not
so stable and fluctuate depending on the $p_T$ range used and the initial constraints put on the parameters.
This can mean that the shape of the spectra for $5.02 A$~TeV Pb+Pb collisions is not so close to the
exponential one as for the lower LHC energy. In such a situation we had to compare different fitting results
and choose one of them. We finally settled on the result with the common temperature $T=138$~MeV,
$\alpha_{\pi}=4.8 \pm 1.1$ and $\alpha_{K}=2.4 \pm 0.6$, based on several considerations, such as
that the effective temperature should not differ much from the value $T=144$~MeV,
obtained in the case of $\sqrt{s_{NN}}=2.76$~TeV, where the fit was more stable, and also that the parameter
errors and the fit's $\chi^2$ should be as small as possible.
Fixing then the $T$ and $\alpha$ parameters at the values found during the spectra fitting, we performed
the fitting of \textit{long} radii dependency on the pair $m_T$ (see Fig.~\ref{rlfit05}).
Similarly to \cite{lifetime} we readily obtained a good fit for pion radii and extracted the corresponding
pion maximal emission time $\tau_{\pi}=9.14$~fm/$c$, while for kaons we had to make $\alpha$ parameter
free again to get satisfactory description of the iHKM points. As a result, we obtained $\alpha_{K}=0.062$
and the maximal emission time $\tau_{K}=12.73$~fm/$c$. One can also see that the $m_T$ scaling between
pion and kaon radii is violated, as well as in $\sqrt{s_{NN}}=2.76$~TeV collisions, apparently
due to the presence of strong transverse flow and intensive hadron-hadron interactions
at the afterburner stage of the collision, as it was advocated in~\cite{lifetime}. At the same time, $k_T$ scaling
takes place starting from $k_T \approx 0.5$~GeV/$c$ for all femtoscopy scales. Such a scaling for pion and kaon radii
at the LHC was predicted in Ref.~\cite{pbm} and confirmed by the ALICE Collaboration~\cite{alice-mt}.
A slight overestimation of the pion radius at the highest considered $m_T$ value, $m_T=1.12$~GeV,
by the fitting curve can possibly be connected with the fact that we use the approximation
of $\tau=\tau_{m.e.}=const$ at the hadronization hypersurface, applicable for soft particles with not very
high $p_T$. However, quanta with $m_T$ close to 1 GeV are emitted from the side part
of the overall hadronization hypersurface, where the $\tau$ values are smaller than $\tau_{m.e.}$ for soft particles.
That is why, since the fitted radii values are proportional to $\tau$ (see (\ref{radfit})),
the fitting curve goes somewhat higher than the iHKM point for $m_T=1.12$~GeV.
The reason for redefining the $\alpha$ parameter for kaons in the radii fitting is the non-Gaussian shape
of the correlation function. The formula (\ref{radfit}) was derived under the assumption
of small $q_{\mathrm{long}}$, and thus it describes well only the radii corresponding to
the peak part of the non-Gaussian correlation function. The $R_{\mathrm{long}}(m_T)$ fit
with $\alpha_K$ fixed to the value extracted from the combined spectra fitting would describe the iHKM
points well if the latter were obtained from the Gaussian fits to the model correlation functions
in a narrow range for $q$, as it was demonstrated in~\cite{lifetime} (see Figs.~5, 6 from \cite{lifetime}
and the related text) for the radii extracted using the $q$ range $|q|<0.04$~GeV/$c$.
And to describe the femtoscopy radii obtained from the correlation function fits in a wider $q$
interval (we used the range $|q|<0.2$~GeV/$c$ to extract the radii presented here) one should
use a separate $\alpha_K$ value, different from that ensuring the kaon spectrum description.
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.85\textwidth]{lhc_502_05-4.eps}
\caption{The iHKM results for pion and kaon interferometry radii in the LHC Pb+Pb collisions
at $\sqrt{s_{NN}}=5.02$~TeV, $c=0-5\%$.
\label{rad05}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.85\textwidth]{lhc_502_2030-4.eps}
\caption{The same as in Fig.~\ref{rad05} for $c=20-30\%$.
\label{rad2030}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.85\textwidth]{lhc_502_4050-4.eps}
\caption{The same as in Fig.~\ref{rad05} for $c=40-50\%$.
\label{rad4050}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.8\textwidth]{comb_specs_502-1.eps}
\caption{The iHKM results for pion and kaon $p_T$ spectra compared to the ALICE data~\cite{alice-spec}
for the LHC Pb+Pb collisions at $\sqrt{s_{NN}}=5.02$~TeV ($c=0-5\%$)
together with the lines, representing a combined fit to the iHKM spectra using (\ref{specfit})
with the same effective temperature $T$ for pions and kaons.
\label{specf}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.8\textwidth]{rl_mt_502_05.eps}
\caption{The fitting of pion (blue squares) and kaon (red squares) femtoscopy radii,
calculated in iHKM for $c=0-5\%$ with the lines corresponding to formula (\ref{radfit}).
The common effective temperature $T=138$~MeV and the value $\alpha_{\pi}=3.7$ are taken
from the combined pion and kaon $p_T$ spectra fit with Eq. (\ref{specfit}).
The $\alpha$ value for kaons as well as the maximal emission times $\tau_\pi$ and $\tau_K$ are free parameters.
Their values extracted from the best fit are: $\alpha_{K}=0.062$, $\tau_{\pi}=9.14$~fm/$c$
and $\tau_{K}=12.73$~fm/$c$.
\label{rlfit05}}
\end{figure}
We also present the results on pion and kaon maximal emission times for the two other centrality
classes, $c=20-30\%$ and $c=40-50\%$, for which the iHKM predictions on femtoscopy radii were
made in~\cite{lhc502-ihkm}. The corresponding plots are shown in Figs.~\ref{rlfit2030} and \ref{rlfit4050}.
For the $c=20-30\%$ case the temperature parameter extracted from the combined pion and kaon
$p_T$ spectra fit is $T=125$~MeV and the corresponding $\alpha_{\pi}=3.94$.
The $\alpha_{K}=0.35$ value is again extracted from the radii fit, and the maximal emission times
are $\tau_{\pi}=7.59$~fm/$c$ and $\tau_{K}=9.87$~fm/$c$.
Finally, in the $c=40-50\%$ case the values $T=127$~MeV and $\alpha_{\pi}=4.20$ were found from the spectra fit,
and the \textit{long} radii fit gave $\alpha_{K}=0.53$, together with the times $\tau_{\pi}=5.88$~fm/$c$
and $\tau_{K}=7.35$~fm/$c$.
One can note that the effective temperature obtained in the case of non-central events is lower than
that for the most central collisions, as are the corresponding maximal emission times.
The values of $\alpha$, on the contrary, are higher for the non-central collisions.
Such an interrelation between the fit parameter values reflects the actual physical difference
between the systems formed in central and non-central collisions, namely that those created in
non-central collisions live shorter, cool down faster and develop less intensive collective flows
during their evolution.
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.8\textwidth]{rl_mt_502_2030.eps}
\caption{The fitting of pion (blue squares) and kaon (red squares) femtoscopy radii,
calculated in iHKM for $c=20-30\%$ with the lines corresponding to formula (\ref{radfit}).
The common effective temperature $T=125$~MeV and the value $\alpha_{\pi}=3.94$ are taken
from the combined pion and kaon $p_T$ spectra fit with Eq. (\ref{specfit}).
The $\alpha$ value for kaons as well as the maximal emission times $\tau_\pi$ and $\tau_K$ are free parameters.
Their values extracted from the best fit are: $\alpha_{K}=0.35$, $\tau_{\pi}=7.59$~fm/$c$
and $\tau_{K}=9.87$~fm/$c$.
\label{rlfit2030}}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=0 0 567 411, width=0.8\textwidth]{rl_mt_502_4050.eps}
\caption{The fitting of pion (blue squares) and kaon (red squares) femtoscopy radii,
calculated in iHKM for $c=40-50\%$ with the lines corresponding to formula (\ref{radfit}).
The common effective temperature $T=127$~MeV and the value $\alpha_{\pi}=4.20$ are taken
from the combined pion and kaon $p_T$ spectra fit with Eq. (\ref{specfit}).
The $\alpha$ value for kaons as well as the maximal emission times $\tau_\pi$ and $\tau_K$ are free parameters.
Their values extracted from the best fit are: $\alpha_{K}=0.53$, $\tau_{\pi}=5.88$~fm/$c$
and $\tau_{K}=7.35$~fm/$c$.
\label{rlfit4050}}
\end{figure}
Additionally, in Figs.~\ref{emiss1}-\ref{emiss3} we demonstrate the plots for the averaged emission functions
of pions and kaons for the three considered centrality classes, which allows one to qualitatively identify the regions
of the maximal emission for particles of each species (with $0.2<p_T<0.3$~GeV/$c$) and in such a way
to approximately estimate the corresponding effective maximal emission times~$\tau$.
One can see that the maximal emission time values, previously obtained from the fits, e.g. for $c=0-5\%$ events,
$\tau_{\pi}=9.14$~fm/$c$ and $\tau_{K}=12.73$~fm/$c$, are in agreement
with the presented emission pictures, since according to the plot, the maximum of pion emission
should be close to the particlization time in the center of the system, $\tau \approx 10$~fm/$c$,
and for kaons the effective $\tau_K$ value should be between the two emission maxima, seen on the plot
(more clearly in numerical representation),
namely between $\tau \approx 10$~fm/$c$ and $\tau \approx 15$~fm/$c$. The second in time maximum is conditioned by
$K^{*}(892)\rightarrow \pi + K$ decays, as it was earlier explained in Ref.~\cite{lifetime}
and in fact was confirmed by the results of the ALICE Collaboration~\cite{alice-mt}.
A quite similar situation also takes place for non-central collisions (see Figs.~\ref{emiss2}, \ref{emiss3}): the
times of maximal emission for pions, $\tau_{\pi}=7.59$~fm/$c$ and $\tau_{\pi}=5.88$~fm/$c$, extracted from the fits
are close to the central parts' particlization times, $\tau \approx 7.5$~fm/$c$ and $\tau \approx 5$~fm/$c$,
following from the emission pictures. For kaons the obtained times of maximal emission, $\tau_{K}=9.87$~fm/$c$
and $\tau_{K}=7.35$~fm/$c$, are, similarly to the central collision case, higher than those of pions by
about $2-3$~fm/$c$, i.e. are between the particlization time and the time of $K^{*}$ resonance decay, whose
lifetime is about $4-5$~fm/$c$ and which forms a second kaon emission maximum on the radiation plots (however,
for non-central collisions, especially for the $c=40-50$\% case, this second maximum is less pronounced
than in central events, maybe due to smaller number of produced particles in non-central collisions).
\begin{figure}
\includegraphics[width=0.9\textwidth]{emiss_lhc502_0005.eps}
\caption{The emission functions per units of
space-time and momentum rapidities averaged over momentum angles $ g(\tau,
r_T,p_T)$ [fm$^{-3}$] for pions~(a) and kaons~(b) obtained from
the iHKM simulations of the LHC Pb+Pb collisions at the energy
$\sqrt{s_{NN}}=5.02$~TeV, $0.2<p_T<0.3$~GeV/$c$, $|y|<0.5$,
$c=0-5$\%. }
\label{emiss1}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\textwidth]{emiss_lhc502_2030.eps}
\caption{The same as in Fig.~\ref{emiss1} for the events from $c=20-30$\% centrality class. }
\label{emiss2}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\textwidth]{emiss_lhc502_4050.eps}
\caption{The same as in Fig.~\ref{emiss1} for the events from $c=40-50$\% centrality class. }
\label{emiss3}
\end{figure}
\section{Conclusions}
The times of the maximal emission of kaons and pions in the Pb+Pb collisions at the LHC energy $5.02 A$~TeV
were estimated based on the transverse momentum spectra and \textit{long} femtoscopy radii fitting
with the analytical formulas accounting for the presence of transverse collective flow.
Comparing the fitting results for different centrality classes, one can observe that the maximal emission times,
effective temperatures, and flow intensities are smaller in non-central events, than in central ones.
The fitting results are in a qualitative agreement with the regions of the most intensive particle
emission seen on the averaged emission function plots.
The stability of the spectra fits is worse as compared to the case of the lower LHC energy $2.76 A$~TeV,
which may be due to noticeable deviations of the spectrum shape from the exponential.
In this note, we found that, similarly to LHC Pb+Pb collisions at the energy $2.76 A$~TeV, kaons radiate later
than pions at all centralities also at the energy $5.02 A$~TeV. The reason, again, is mostly the decays of the $K^{*}$
resonance. The intensive hadron-hadron scatterings at the afterburner stage of the collision, along with the very intensive
transverse flow, result in the breaking of $m_T$ scaling between pion and kaon interferometry radii.
At the same time, we again, as for the smaller LHC energy, predict $k_T$ scaling at not very small $k_T$ for pion and
kaon femto-scales also for Pb+Pb collisions at the currently highest energy.
We are looking forward to the corresponding
results of femtoscopic analysis from the LHC Collaborations for the energy $5.02 A$~TeV.
\begin{acknowledgments}
The authors express their sincere gratitude to L.~Malinina and G.~Romanenko for their interest in this work and useful discussions. The research was carried out within the NAS of Ukraine Targeted research program ``Collaboration in advanced
international projects on high-energy physics and nuclear physics'', 2021,
Agreement \textnumero 7-2021 between the NAS of Ukraine and BITP of NASU.
\end{acknowledgments}
|
1,108,101,566,326 | arxiv | \section*{Results}
To mold the elastic energy landscape near a curved boundary (Fig. 1a), we fabricate long, epoxy resin strips using standard lithographic techniques to form wavy structures (Fig. 1b) that are placed between two parallel glass slides with planar anchoring oriented perpendicular to the strip (see Methods for details, Fig. 1c). This cell is filled by capillarity with a suspension of colloids in the NLC 4-cyano-4'-pentylbiphenyl (5CB) in the isotropic phase, and subsequently quenched into the nematic phase ($T_{NI} = 34.9 \,^o C$). Colloid migration within this assembly is observed with an optical microscope. For the larger beads, as expected, strong confinement between the glass slides stabilizes the Saturn ring configuration \cite{gu2000observation}. Smaller beads, which experience weaker confinement, adopt the dipolar structure. Particles are equally repelled by elastic interactions with the top and bottom glass slides, whose strength dominates over the particles' weight, so gravity plays a negligible role in our system. When observed through the microscope, this configuration forms a quasi-2D system in the ($x,y$) plane, where $y$ is measured from the base of a well in the direction perpendicular to the wall. The wavy wall forms a series of hills and valleys, with a distance 2$A$ from the base of the well to the highest point on a hill. Because of strong homeotropic anchoring at the wavy wall, these features impose zones of splay and bend in this domain. In particular, the valleys are sites of converging splay, the hills are sites of diverging splay, and the inflection points are sites of maximum bend (see Fig.\ S1 for a detailed discussion on the geometry of the well and the director field). In terms of the parameters $R$ and $A$, the structure has period $\lambda=4R\sqrt{\frac{A}{R}(2-\frac{A}{R})}$. Throughout this study, unless specified otherwise, 2$A$ = 10 $\mu$m. The gentle undulations of this wall deform the surrounding director field but do not seed defect structures into the NLC. We characterize the control we achieve over colloidal motion by analyzing particle behavior within the energy landscape near this wall. In addition, we use Landau-de Gennes (LdG) simulations of the liquid crystal orientation to guide our thinking. Details of the simulation can be found in the Methods section.
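As a consistency check on the geometry, the quoted period follows from the chord of a circular arc of radius $R$ with sagitta $A$; a minimal numerical sketch (in Python, with illustrative parameter values) is:
\begin{verbatim}
import numpy as np

def wavelength(R, A):
    # Period of a wall built from circular arcs of radius R with
    # sagitta A: lambda = 4 R sqrt((A/R)(2 - A/R))
    return 4.0 * R * np.sqrt((A / R) * (2.0 - A / R))

A = 5.0  # half of 2A = 10 um, the value used throughout this study
for R in (5.0, 10.0, 20.0, 40.0):
    print(f"R = {R:5.1f} um  ->  lambda = {wavelength(R, A):6.1f} um")
\end{verbatim}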
{\bf Attraction to the wall.}
To determine the range of interaction of a colloid with undulated walls of differing $\lambda$, a magnetic field is used to move a ferromagnetic colloid ($a$ = 4.5 $\mu$m) to a position $y$ far from the wall and $x$ corresponding to the center of the well. The magnet is rapidly withdrawn, and the colloid is observed for a period of 2 min. If the colloid fails to approach the wall by distances comparable to the particle radius within this time, the colloid is moved closer to the wall in increments of roughly a particle radius until it begins to approach the wall. We define the range of interaction $H^*$ as the maximum distance from the wall at which the colloid starts moving under the influence of the wall (Fig. 2). In these experiments, the Saturn ring defect was sometimes pinned to the rough surface of the ferromagnetic particles. To eliminate this effect, these experiments were repeated with homeotropic magnetic droplets with a smooth interface, whose fabrication is described in the Methods section. The results with the two systems are similar. A typical trajectory is shown in Fig. 2a in equal time step images ($\Delta t$ = 125 s). For small $\lambda$ (i.e. $\lambda$ $\lesssim 40 \, \mu m$), $H^*$ increases roughly linearly with $\lambda$. However, at larger $\lambda$, the range of interaction increases only weakly. A simple calculation for the director field near a wavy wall in an unbounded medium, in the one-constant approximation and assuming small slopes, predicts that the distortions from the wall decay over distances comparable to $\lambda$ \cite{luo2016experimental}. However, when $\lambda \gg T$, the thickness of the cell, confinement by the top and bottom slides truncates this range (see SI and Fig. S3), giving rise to the two regimes reported in Fig. 2b: one that complies with the linear trend and one that deviates from it. A similar shielding effect of confinement in a thin cell was reported in the measurements of interparticle potential for colloids in a sandwich cell \cite{vilfan2008confinement}.
The colloid moves toward the wall along a deterministic trajectory. Furthermore, it moves faster as it nears the wall (Fig. 2c), indicating steep local changes in the elastic energy landscape. This motion occurs in creeping flow (Reynolds number $\mathrm{Re} = \rho v a/\eta \approx 1.15 \times 10^{-8}$, where $\rho$ and $\eta$ are the density and viscosity of 5CB, respectively). The energy dissipated to viscous drag along a trajectory $U$ can be used to infer the total elastic energy change; we perform this integration and find $U \sim 5000\, k_{B}T$. In this calculation, we correct the drag coefficient for proximity to the wavy wall according to \cite{brenner1961slow} and for confinement between parallel plates according to \cite{ganatos1980strong} (see \cite{luo2016experimental} for more details). The dissipation calculation shows that gradients are weak far from the wall and steeper in the vicinity of the wall. The elastic energy profile found from the LdG simulation as a function of particle distance from the base of the well is consistent with these observations (Fig. S4). The particle finds an equilibrium position in the well. At larger distances from the wall, the energy increases first steeply, and then tapers off (Fig. S4). For wide wells ($\lambda>$ 15$a$), the energy gradient in $x$ near the wall is weak, and the drag is large. In this setting, the colloid can become trapped at various positions, which introduces error into the energy calculation. Therefore, the trajectory is truncated at around $y$ = 15 $\mu$m from contact with the wall.
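A schematic version of this estimate is sketched below in Python; the trajectory and the drag-correction factor are crude placeholders for the tracked data and for the tabulated corrections of refs. \cite{brenner1961slow,ganatos1980strong}, so the sketch illustrates only the bookkeeping, not our actual numbers:
\begin{verbatim}
import numpy as np

kBT = 4.11e-21   # J at room temperature
eta = 0.064      # Pa*s, an assumed effective viscosity for 5CB
a   = 4.5e-6     # m, colloid radius

def drag_correction(y):
    # Placeholder for the combined near-wall (Brenner) and slit
    # (Ganatos) corrections to the Stokes drag coefficient.
    return 1.0 + a / np.maximum(y, a)

# Placeholder y(t) track from microscopy (metres vs seconds).
t = np.linspace(0.0, 500.0, 200)
y = 60e-6 * np.exp(-t / 180.0) + 5e-6

v = np.gradient(y, t)                                # approach velocity
power = 6.0*np.pi*eta*a * drag_correction(y) * v**2  # dissipated power
U = np.trapz(power, t)                               # total dissipation
print(f"U ~ {U / kBT:.0f} kBT")
\end{verbatim}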
{ \centering
\includegraphics[width=\columnwidth]{Fig2.pdf}
\captionof{figure}{{\bf Range of colloid-wall interaction increases with $\lambda$.} A ferromagnetic homeotropic colloid with a Saturn ring defect is used to establish the range of interaction $H^*$ of the colloid with the wall. (a) An equal time step ($\Delta t$ = 125 s) image is shown for the case $\lambda=80\ \mu$m, $H^* = 60\ \mu$m. (b) Range of interaction $H^*$ versus the wavelength of the feature $\lambda$ for homeotropic droplets (open circles) and homeotropic colloids (crosses). (c) The position of the particle $y$ with respect to time $t$. Inset: Energy dissipated to viscosity along a particle trajectory $U$ with respect to the particle position $y$. The cross shows where we truncate the trajectory for integration along the path to infer the dissipation. The scale bar is 10 $\mu$m.}
}
{\bf Equilibrium position.} The wall shape also determines the equilibrium position $H_e$ near a well. In fact, we show that the particles do not always dock very close to the wall; rather, they find stable equilibrium positions at predictable distances from contact with the hills and valleys. We probe this phenomenon by varying colloid radius $a$ and wall radius of curvature $R$ (Fig. 3a). At equilibrium, the location of the center of mass of the colloid $y$= $H_{\mathrm{COM}}$ is equal to $R$. That is, the colloid locates at the center of curvature of the well (Fig. 3b). In this location, the splay of the NLC director field from the colloid matches smoothly to the splay sourced by the circular arc that defines the well. As $R$ increases, this splay matching requirement moves the equilibrium position of the colloid progressively away from the wall.
However, for wide wells with $R \gg 2a$, the elastic energy from the wall distorts the Saturn ring, displacing it away from the wall (Fig. 3c).
When this occurs, colloids equilibrate at loci closer to the wall. For all such colloids, the height of Saturn rings (Fig. 3a crosses: $y$ = $H_{\mathrm{defect}}/a$) and that of the center of mass of the particles (Fig. 3a open circles: $y$ = $H_{\mathrm{COM}}/a$) do not coincide. Specifically, the particle moves closer to the wall and the particle-defect pairs become more dipole-like. For comparison, we plot the center of mass of particles with dipolar defects sitting near the wall (Fig. 3e). We observe that, when the colloid radius is similar to the radius of the wall ($R/a < 2$), there is a similar ``splay-matching" zone for the dipoles; however, as we increase $R/a$, the behavior changes. In this regime, the dipole remains suspended with its hedgehog at a distance of roughly $y$ = $H/a = 3$ from the base of the well for wells of all sizes. The equilibrium distance of particles with distorted Saturn rings (Fig. 3a open circles) is intermediate to equilibria for particles with undistorted Saturn rings and colloids in dipolar configurations with point defects. LdG simulation corroborates the finding that dipoles and quadrupoles equilibrate at different distances from the wall, and that the particles with dipolar defects sit deeper in the well than those with Saturn ring (Fig. S5a,b).
\begin{figure*}[htb]
\centering
\includegraphics[width=2\columnwidth]{Fig3.pdf}\\
\caption{{\bf Particle-wavy wall interactions: Splay matching and defect displacement regimes.} (a) Filled grey circles denote splay matching cases, where the Saturn ring sits at the equatorial position ($H_{\mathrm{defect}}/a$ = $H_{\mathrm{COM}}/a$). Crosses denote location of distorted Saturn rings, $H_{\mathrm{defect}}/a > H_{\mathrm{COM}}/a$. Open circles indicate the height of the center of mass (COM) of the colloid. The dotted line denotes flat wall limit. Inset: Schematics of splay matched and displaced defect cases. Experimental bright field microscopy images of (b) splay matched and (c) defect displacement cases. (d) LdG simulation of the geometry in (c) shows the displacement of the ring. (e) Heights of the center of mass (COM, open circles) and hedgehog defects (crosses) of the colloid with dipole defects. Inset: $a =7.5 \,\mu$m, $R = 17.5 \,\mu$m, $H_{\mathrm{defect}}/a = 3$. The scale bars are 10 $\mu$m.}
\end{figure*}
The heat map in Fig. 1a summarizes the results of LdG simulations for the energy landscape around a microparticle with a Saturn ring defect at various locations in the domain. A colloid positioned directly above a well moves down the steepest energy gradient, which corresponds to a straight path toward the wall. The energy minimum is found when the particle is at a height determined by $R/a$, consistent with our experiments (Fig. 3b). For large $R$, the LdG simulations reveal a distorted Saturn ring around the colloid (Fig. 3d). We also note that at $R/a = 7$, we find $H_{\mathrm{COM}}/a = 3.5$, which corresponds to the equilibrium distance of colloids repelled from a flat wall. However, even at these wide radii, the elastic energy landscape above the undulated wall differs significantly from the repulsive potential above a planar boundary, which decays monotonically with distance from the wall \cite{chernyshuk2011theory}. For colloids above the wide wells, energy gradients in the $y$-direction are small, but gradients in the $x$-direction are not. As a result, particles migrate laterally and position themselves above the center of the wells. These dynamics differ from particles above a planar wall, where the particles diffuse freely parallel to the wall.
{\bf Quadrupole to dipole transition.} For micron-sized colloids in an unbounded medium, the dipole is always the lowest energy state \cite{lubensky1998topological}. Electrical fields \cite{loudet2001application}, magnetic fields \cite{stark1999director} or spatial confinement \cite{gu2000observation} stabilize the Saturn ring configuration. In prior research, a colloid with a Saturn ring defect, stabilized by confinement far from the wavy wall, became unstable and transformed into a dipolar structure near the wavy wall \cite{luo2016experimental}. In that work, wells with radii $R$ similar to the colloid radius $a$ were used, and a set of narrow well geometries that prompted this transformation was categorized. However, in those settings, the transformation occurred very near the wall, where the dynamics of the colloid and the surrounding liquid crystal were strongly influenced by the details of wall-particle hydrodynamic interactions and near-wall artifacts in the director field. Here, to avoid these artifacts, we use wells with a smooth boundary where $R > a$ and amplitude $A \sim a$ (specifically, 2$A$ = 15 $\mu$m and $\lambda$ = 60 $\mu$m, or 2$A$ = 25 $\mu$m and $\lambda$ = 100 $\mu$m). These wells are deeper and are best described as semicircular arcs with rounded corners.
We exploit these wider wells to position a colloid with a companion Saturn ring several radii above the wall. The elastic energy field distorts the Saturn ring, and drives a gentle transition to a dipolar defect configuration, as shown in Fig. 1c in time-lapsed images. The location of the colloid's center of mass (COM) and the evolution of the polar angle of maximum deflection $\theta$ are tracked and reported in Fig. 4a. This transition is not driven by hydrodynamics; the Ericksen number in this system is $\mathrm{Er}= 8 \times 10^{-4}$, a value two orders of magnitude lower than the critical $\mathrm{Er}= 0.25$ for a transition from quadrupole to dipole driven by flow \cite{khullar2007dynamic}.
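Both dimensionless groups can be checked at the order-of-magnitude level; in the sketch below the material parameters are assumed literature values for 5CB, and the precise numbers quoted in the text follow from the measured velocities and the specific viscosity coefficients of our experiments:
\begin{verbatim}
# Order-of-magnitude check of Re = rho*v*a/eta and Er = eta*v*a/K.
rho = 1.02e3    # kg/m^3, density of 5CB (assumed)
eta = 0.064     # Pa*s, effective viscosity (assumed)
K   = 6.0e-12   # N, single elastic constant of 5CB (assumed)
a   = 4.5e-6    # m, colloid radius
v   = 1.6e-7    # m/s, a typical near-wall approach speed

Re = rho * v * a / eta
Er = eta * v * a / K
print(f"Re ~ {Re:.2e}, Er ~ {Er:.2e}")
\end{verbatim}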
Far from the wavy wall, the effect of the two parallel walls that confine the colloid is similar to that of an external electromagnetic field or to a weakening of the anchoring on the surface of the colloid \cite{stark}, all of which make the Saturn ring configuration either stable or metastable \cite{stark1999director}. The wavy wall, however, exerts an asymmetrical elastic stress on the Saturn ring, displaces it away from that wall, and ultimately destabilizes this configuration. We have performed an experiment in which we allowed the Saturn ring to transition to a dipole near the wall, and then rapidly removed the elastic stress by driving the particle away using a magnetic field (Supplemental Video V1). The dipole remained stable, which indicates that, under our experimental condition, the dipole is the stable state and the Saturn ring is metastable. We can consider the polar angle $\theta$ and the director field as our ``reaction coordinate" to characterize the transition between the Saturn ring state ($\theta=\pi/2$) and the dipolar state ($\theta=\pi$). We assume that the maximum of the energy barrier between these two states far from the wall will be found at an intermediate angle $\theta_B$ (Fig. S5c). The elastic energy from the wall lowers the energy barrier to the transformation, allowing it to occur.
Previously, Loudet and collaborators \cite{loudet2002line} studied the transition of a colloid with a Saturn ring defect to a dipolar configuration in an unbounded medium. In that study, the Saturn ring configuration was stabilized by an electric field, and the transition occurred after the electric field was removed. The process took place over the course of 10-60 seconds and was found to be independent of the colloid size. Although these two sets of experiments take place in very different physical systems (confined vs.\ unconfined, withdrawal of an electric field versus an applied stress field via boundary curvature), the slow initial dynamics and the total time of transition are common features shared by both. Our results bear remarkable similarities with the dynamics shown in Ref. \cite{loudet2002line} (Fig. 4b-c). In our system, the Saturn ring is metastable: the stabilization is provided by confinement from the parallel glass slides, and the destabilization by the elastic stress from the wavy boundary.
{ \centering
\includegraphics[width=\columnwidth]{Fig4.pdf}\\
\captionof{figure}{{\bf Dynamics of the quadrupole to dipole transition.} (a) Tracking center of mass (COM) and polar angle $\theta$ evolution during the quadrupole to dipole transition. Initially, the colloids assume the $\theta = 90^\circ$ (quadrupolar) configuration, which gradually evolves to $\theta = 180^\circ$ as the COM continuously moves towards the wall. After the transition has taken place, the COM continues to approach the wall. (b, c) Reduced ring size and velocity from our system reveal similar dynamics of transition as shown in Fig. 2 in \cite{loudet2002line}. (d) $\theta$ vs. $t_c-t$ plot shows three experimental runs of transition in similar geometry. In (b-d), $t_c$ is the time at which $\theta=90^\circ$.}
}
Further experiments reveal that we are able to exert control over the transition by tailoring the shape of the wells. In this case, we made deep wells of either $2A = 15 \, \mu$m or $2A = 25 \, \mu$m. We then plot the angle versus $t_c - t$, where $t_c$ is the time at which $\theta$ reaches $\pi/2$. The $\theta$ variations for three runs in wells of $2A = 15 \, \mu$m (Fig. 4d) superpose. We show additional trials in Fig. S6; the dynamics are reproducible across samples of different sizes, even in the case where debris is collected by the topological defects along the way. While Loudet et al. observed a propulsive motion opposite to the defect motion, attributed to back flow from reorientation of the director field, in our system the motion is smooth and continuous as the colloid passes through the spatially varying director field. However, the velocity of the droplet decreases right after the transition; we attribute this to the change in the drag environment (Fig. 4a and Fig. S6b).
In shallow wells ($A < a$) with small radii of curvature ($R \sim a$), the particle docks via the familiar lock-and-key mechanism. However, if the radius is large ($R > a$), the well exerts an elastic stress on the director field around the colloid and the Saturn ring remains in a distorted state. The polar angle (Fig. S5c) then ranges from $\theta = 103 ^\circ$ to $130 ^\circ$ (maximum deflection). The energy barrier between the Saturn ring configuration and the dipolar configuration persists. However, at a critical angle $\theta_c$, the favorable energy from bend and splay matching eliminates the energy barrier between dipole and quadrupole, allowing completion of the quadrupole to dipole transition. This critical angle of transformation is relatively independent of the colloid size and mode of confinement, and is found to be around $\theta_c = 150 ^\circ$ in our experiments, which differs from the value found in \cite{vskarabot2008interactions}. The initial dynamics of the quadrupole to dipole transition is slow, but as $\theta$ increases, so does $d\theta/dt$ (Fig. S5c).
\begin{figure*}[htb]
\centering
\includegraphics[width=1.75\columnwidth]{Fig5.png}\\
\caption{{\bf LdG simulation of the energy density for dipole and quadrupole near a wavy boundary.} The numerical energy minimization is performed for various positions of the dipole and Saturn ring, with the colloid size and wavy wall geometry held fixed, to find the location of minimum energy. (a) A Saturn ring located at the reference state (State 1, $y=5a$) with E = 7289.70 $k_B T$. (b) A Saturn ring located near the wall (State 2, $y = 1.8a$) with E = 7086.20 $k_B T$, a decrease of 203.5 $k_B T$ from State 1. (c) A dipole located at the reference state (State 1, $y=5a$) with E = 9041.13 $k_B T$. (d) A dipole located near the wall (State 2, $y=1.5a$) with E = 8456.12 $k_B T$, a decrease of 585.01 $k_B T$ from State 1. (e) The energies of the dipole and quadrupole are calculated for systems of different size ($a =90, 135, 180, 225, 270\,$ nm; the simulation box and the wall are scaled accordingly). The energy difference between quadrupole and dipole ($\Delta E = E_{dipole} - E_{Saturn \, ring}$) is shown for systems of different sizes at $y = 1.5a$ (blue curve), $y = 1.8a$ (orange curve), $y = 5a$ (yellow curve), and with no wavy wall (purple curve).}
\end{figure*}
In deeper wells ($A > a$), the polar angle increases as the colloid migrates into the well. LdG simulation reveals that, in the dipolar configuration, there is less distortion in the director field near the colloid owing to bend and splay matching, and that it is indeed more favorable for a colloid with a dipolar defect to locate deep within the well (Fig. 5). We compute the energy of a colloid both far from (State 1: $y=5a$, the reference state) and near the wavy wall (State 2: $y = 1.8a$ and $y = 1.5a$ for the Saturn ring and dipolar configurations, respectively) for both defect configurations (Fig. 5a-d). Using identical parameters for the LdG numerics, we can stabilize a dipolar configuration by initializing the director field with the dipolar far-field ansatz \cite{stark2001physics}. While colloids in both configurations decrease their energy upon moving toward the wall from State 1 to State 2, the decrease in energy is $2.9$ times greater for the dipolar case (Fig. 5a-d). This change is determined by differences in the gradient free energy, corresponding to reduced distortion in the nematic director field.
Stark \cite{stark} argues that the stabilization of a Saturn ring under confinement occurs when the region of distortion becomes comparable to or smaller than that of a dipole, assuming the same defect energy and energy density. Yet this argument does not apply here because the presence of the wavy wall strongly alters the energy density at various regions (Fig. 5). Due to limitations in computational power, we cannot model colloids of our experimental scale. This limits our simulations to settings in which the dipole is more energetically costly than the Saturn ring configuration. However, as we increase the size of the simulation ($a$ = 90, 135, 180, 225, 270 nm), the energy difference between dipole and quadrupole decreases for colloids located at $y = 1.5a$ (Fig. 5e, blue curve), suggesting that at larger system sizes, the dipole may become the stable configuration, in agreement with experiment. The energy difference $\Delta E\, ( = E_{dipole} - E_{Saturn \, ring}$) decreases going from $y = 5a$ (Fig. 5e, yellow curve) to $y = 1.8a$ (Fig. 5e, orange curve), as expected. Note that $\Delta E$ at $y = 5a$ agrees to within 1.15\% with a simulation of colloids in a sandwich cell with no wavy wall (Fig. 5e, purple curve), serving as a valid reference state. Furthermore, we note that the energy difference between a dipole and quadrupole configuration decreases as colloids move closer to the bottom of the well. These results show that the distortion field exerted by the wavy boundary can be considered as a gentle external field, in analogy to electrical, magnetic or flow fields.
\section*{Discussion, multistable states}
{\bf Multiple paths diverging from unstable points.}
In the preceding discussions, we have focused on attractive particle-wall interactions and associated stable or metastable equilibria. However, the location directly above a hill is an unstable point. When colloids are placed nearby using an external magnetic field, they can follow multiple diverging paths upon removal of the magnetic field. The particular paths followed by the colloid depend on small perturbations from the unstable point.
For example, amongst 28 such trials using an isolated homeotropic colloid with a Saturn ring, the colloid moved along a curvilinear path to the well on its left 11 times, to the well on its right 10 times, and was repelled away from the peak until it was approximately one wavelength away from the wall 7 times. Three sample trajectories are shown in Supplemental Video V2a-c. These trajectories are also consistent with the heat map in Fig. 1a, computed by taking a fixed step size in the direction of the local force as defined by the local energy gradient. The numerically calculated trajectories, and their extreme sensitivity to initial position, are in qualitative agreement with our experimental results. Thus, small perturbations in colloid location can be used to select among the multiple paths, as illustrated by the sketch below.
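A minimal version of this trajectory computation is sketched here; the callable energy landscape stands in for an interpolation of the heat map in Fig. 1a, and the step size and stopping tolerance are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

def trajectory(energy, x0, step=0.05, n_steps=500, eps=1e-4):
    # Follow the local force -grad(E) with a fixed step size,
    # as in the numerically calculated trajectories.
    path = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x, y = path[-1]
        gx = (energy(x + eps, y) - energy(x - eps, y)) / (2 * eps)
        gy = (energy(x, y + eps) - energy(x, y - eps)) / (2 * eps)
        g = np.array([gx, gy])
        if np.linalg.norm(g) < 1e-12:   # stationary point reached
            break
        path.append(path[-1] - step * g / np.linalg.norm(g))
    return np.array(path)
\end{verbatim}

Near an unstable point the gradient nearly vanishes, so a small displacement of the starting position $x_0$ selects which of the diverging branches the computed path follows, mirroring the experimental sensitivity.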
These features can be used to launch the colloid from one location to another, propelled by the elastic energy field. To demonstrate this concept, we arranged two wavy walls parallel to each other with the periodic structures in phase, i.e. the hills on one wall faced valleys on the other (Fig. 1f). For wall-to-wall separations of more than $2 \lambda$, colloids docked, as expected (Fig. 1f). For wall-to-wall separations of less than $2 \lambda$, a colloid placed with a magnetic field above the peak on one wall was guided by the NLC elastic energy to dock in the valley on the opposite wall (Fig. 1g), thus effectively extending its range of interaction with the second wall (Supplemental Video V3). In the context of micro-robotics, such embedded force fields could be exploited to plan paths for particles to move from one configuration to another, guided by a combination of external magnetic fields and NLC-director field gradients.
{\bf Path-planning for colloids with different defect structures.}
We can tailor unstable points and attractors for these particles, and find important differences between the behavior of colloids attracted to wells and those attracted to hills. For example, a dipole pointing away from the wall (Fig. 6a) behaves like a colloid with a companion Saturn ring in several ways. Both are attracted over a long range to equilibrate in wells, and both have unstable points above hills. Also, when released from this unstable point, both defect structures can travel in three distinct directions (left, right and away from the wall, Fig. 6a).
On the other hand, dipoles pointing toward the wall (Fig. 6b) behave differently. They are attracted to stable equilibria near hills, and are unstable near wells. Interestingly, when released from a point near a well, these colloids can travel only toward one of the adjacent hills. That is, there is no trajectory above the well that drives them in straight paths away from the wall. Colloids with planar anchoring form boojum structures which behave similarly (Fig. 6c); they equilibrate near the hills, and follow only two sets of possible paths when released from unstable points above a well. The ability to drive particle motion with a gently undulating wall is thus not limited to colloids with companion Saturn rings; the wall also directs the paths of dipolar colloids with homeotropic anchoring and colloids with planar anchoring, decorated with boojums. The interaction ranges for various colloid-defect configurations are summarized in Fig. 6d; while colloids with each defect structure have distinct equilibrium distances from a flat wall (Fig. S7), the ranges of interaction follow similar trends as functions of $\lambda$ (Fig. 6d).
{ \centering
\includegraphics[width=\columnwidth]{Fig6.pdf}\\
\captionof{figure}{{\bf Multiple states and reconfigurable docking.} Particle paths are illustrated by points that indicate the particle COM position over time; the time step between neighboring points is $\Delta t$=5 seconds. The colored dots denote (a) 4 representative trajectories (out of 12) of an upward-oriented dipole, (b) 2 representative trajectories (out of 11) of a downward-oriented dipole, and (c) 2 representative trajectories (out of 14) of a planar-anchoring colloid with two boojums released between two neighboring wells. Inset: a sketch of the director field around (a) an upward-oriented dipole docked inside the well, (b) a downward-oriented dipole and (c) a planar-anchoring colloid perched on top of a hill. The scale bars are 10 $\mu$m. (d) The range of interaction $H^*$ as a function of $\lambda$ is similar for homeotropic (H) and planar (P) anchoring, for hedgehog (DP) and Saturn ring (QP) defects, and for solid colloids and droplets. }
}
These results indicate that the range of repulsion differs for hills and wells. This is likely related to the differences in the nematic director field near these boundaries. While converging splay field lines are sourced from the well, divergent splay field lines emanate from the hill. Both fields must merge with the oriented planar anchoring far from the wall. As a result, hills screen wells better than wells screen hills.
We can exploit these wall-dipole interactions to shuttle the colloid between parallel walls. For walls positioned with their wavy patterns out of phase (Fig. 7, Supplemental Video V4), dipoles with their point defect oriented upwards are repelled from initial positions above hills on the lower wall and dock on a hill on the opposite wall. However, for walls with their patterns in phase, dipoles with defects oriented downwards released from an initial position above a well dock either at an adjacent hill on the same wall (Fig. 7c, Supplemental Video V5a), or in an attractive well on the opposite wall (Fig. 7d, Supplemental Video V5b).
{ \centering
\includegraphics[width=0.9\columnwidth]{Fig7.pdf}\\
\captionof{figure}{{\bf Repulsion and bistable docking of dipoles.} (a) Schematics of two parallel walls with a gap comparable to $\lambda$ between them. The wall undulations are either out of phase, with hills facing hills as in (b), or in phase, with hills facing valleys as in (c-d). The scale bars are 10 $\mu$m.}
}
Finally, we demonstrate that particles moving in weak flow can select preferred docking sites along the wavy wall. Wells of different wavelengths create energy gradients that decay at distinct rates. Placing wells of different wavelength adjacent to each other offers additional opportunities for path planning. In one setting, a colloid can sample multiple wells of varying sizes under a background flow in the $x$ direction (Fig.\ 8). Whether the colloid docks or continues to be advected is determined by a balance between viscous forces that drive $x$-directed motion and attractive and repulsive interactions with the wall. In a separate experiment, we place tracer particles in the background while a sampling/docking event takes place (Fig. S8). The tracer particle travels along a straight path while the colloid near the wall follows a more complex trajectory, eventually docking in a well that, as for Goldilocks, protagonist of a beloved children's story, is ``just right". The complexity of the colloid's path confirms that the elastic energy field plays an important role in guiding the motion of the colloid to its preferred well. Such interactions open interesting avenues for future studies, in which the rates of motion owing to elastic forces and those owing to applied flows are tuned, and the trapping energies of the docking sites are tailored, e.g. for colloidal capture and release.
{ \centering
\includegraphics[width=\columnwidth]{Fig8.pdf}\\
\captionof{figure}{{\bf ``Goldilocks'': Colloid docks in a preferred well.} A particle finds the lowest energy locations under a biasing flow (Supplemental Video V6), ending in the well that best matches its curvature. The length of the arrow is proportional to the instantaneous velocity. The scale bar is 10 $\mu$m.}
}
\section*{Conclusions}
The development of robust methods to drive microscopic objects along well-defined trajectories will pave new routes for materials assembly, path planning in microrobotics and other reconfigurable micro-systems. Strategies developed within NLCs are one means to address these needs. Since these strategies depend on topology, confinement, and surface anchoring, which can be manipulated by changing surface chemistry or texture on colloids with very different material properties, they are broadly applicable across materials platforms. We have developed controllable elastic energy fields in NLCs near wavy walls as a tool to manipulate the ranges of attraction and to define stable equilibria. We have also exploited elastic energy fields to drive transitions in topological defect configurations. The near-field interaction between the colloid and the wall rearranges the defect structure, driving a transition from the metastable Saturn ring configuration to the globally stable dipolar configuration for homeotropic colloids.
We account for this transformation by means of an analogy between confinement and an external magnetic field. As these defect sites are of interest for molecular and nanomaterials assembly, the ability to control their size and displacement will provide an important tool to improve understanding of their physico-chemical behavior, and potentially to harvest hierarchical structures formed within them.
Furthermore, we have developed the concept of repulsion from unstable points as a means to dictate paths for colloids immersed within the NLCs. We have identified unstable sites from which multiple trajectories can emerge, and have used these trajectories to propel particles, demonstrating the multistability made possible by the wavy wall geometry.
\section*{Methods}
{\bf Assembly of the cell.} We have developed a wavy wall confined between two parallel plates as a tool to direct colloid assembly. The wavy wall is configured as a bounding edge to the planar cell. The NLC cell and the walls were fabricated following the procedure in \cite{luo2016experimental}, briefly outlined here. The wavy walls are made with standard lithographic methods of SU-8 epoxy resin (MicroChem Corp.). The walls have period $\lambda$ ranging from $27-80$ $\mu$m and consist of smoothly connected circular arcs of radius $R$ between $7-40$ $\mu$m. These strips, of thickness between $20-28$ $\mu$m, are coated with silica using silicon tetrachloride via chemical vapor deposition, then treated with DMOAP (dimethyloctadecyl[3-(trimethoxysilyl)propyl]ammonium chloride). The wavy wall is sandwiched between two antiparallel glass cover slips, treated with 1\% PVA (poly(vinyl alcohol)), annealed at 80 $^{\circ}$C for one hour and rubbed to have uniform planar anchoring. Once assembled, the long axis of the wall is perpendicular to the oriented planar anchoring on the bounding surfaces. We observed that in some LC cells the actual thickness was larger than expected, which we attribute to a gap above the strip. In those cases we noticed that some small colloids could remain trapped between the wavy strip and the top glass surface, so the effective thickness could be as large as $35-40$ $\mu$m.
{\bf Particle treatment and solution preparation.} We use the nematic liquid crystal 5CB (4-cyano-4'-pentylbiphenyl, Kingston Chemicals) as purchased. We disperse three types of colloids in the 5CB. The size and polydispersity of the colloids are characterized by measuring a number of colloids using the program FIJI. (1) $a = 7.6 \pm 0.8$ $\mu$m silica particles (Corpuscular Inc.), treated with DMOAP to have homeotropic anchoring. (2) $a = 4.3 \pm 0.4$ $\mu$m ferromagnetic particles with a polystyrene core and coated with chrome dioxide (Spherotech, Inc.), treated with DMOAP, an amphiphile that imposes homeotropic anchoring, or with PVA for planar anchoring. (3) $a = 4.3-8$ $\mu$m custom-made emulsion droplets whose water phase was loaded with magnetic nanoparticles and crosslinked. The oil phase consisted of 5CB mixed with 2 wt\% Span 80. The water phase consisted of a 50:50 mixture of water loaded with iron oxide nanoparticles and a pre-mixed crosslinking solution. The magnetic nanopowder iron (II, III) oxide (50-100 nm) was first treated with citric acid to make it hydrophilic. The crosslinking solution was pre-mixed with HEMA (2-hydroxyl ethyl methacrylate): PEG-DA (poly(ethylene glycol) diacrylate): HMP (2-hydroxyl-2-methylpropiophenone) in a 5:4:1 ratio. Water and oil phases were emulsified with a Vortex mixer to reach the desired colloid size range. The two were combined in a vial treated with OTS (trichloro(octadecyl)silane) to minimize wetting of the wall by the water phase during the crosslinking process. All chemicals were purchased from Sigma Aldrich unless otherwise specified. The emulsion was crosslinked by a handheld UV lamp (UVP, LLC) at a wavelength of 270 nm and an intensity of roughly 1 mW/cm$^2$ for 3 hours. The emulsion was stored in a refrigerator for stability. Span 80 ensured that the liquid crystal-water interface would have homeotropic anchoring. The magnetic droplets are highly polydisperse due to the emulsification process; however, when we compare their behavior with that of the silica and ferromagnetic colloids, we only compare colloids and droplets of similar sizes.
{\bf Imaging.} The cells form a quasi-2D system that is viewed from above. In this view, the wavy wall is in the plane of observation. The homeotropic colloids dispersed in the NLC are located between the top and bottom coverslips. These colloids are levitated away from both top and bottom surfaces by elastic repulsion \cite{pishnyak2007levitation}. The cell was imaged with an upright microscope (Zeiss AxioImager M1m) under magnification ranging from 20x to 50x. The dynamics of the colloid near the wavy wall are recorded in real time using optical microscopy. Additional information regarding the director field configuration is also gleaned using polarized optical microscopy (POM).
{\bf Application of a magnetic field.} The magnetic field was applied by using a series of 8 NdFeB magnets (K\&J Magnetics, Inc.) attached to the end of a stick. The magnets were placed roughly 0.5 cm from the sample; the applied field is estimated to be roughly 40-60 mT, far below the strength required to reorient the NLC molecules, but sufficiently strong to overcome the drag and move magnetic droplets and particles in arbitrary directions.
{\bf Numerical modeling by Landau-de Gennes (LdG) simulation.} Numerical modeling provides insight into the NLC director field in our confining geometries. We use the standard numerical Landau-de Gennes (Q-tensor) approach \cite{ravnik2009landau} with a finite difference scheme on a regular cubic mesh. This approach is widely used to compute regions of order and disorder in bounded geometries through a global free energy minimization. The Q-tensor is a second-rank, traceless, symmetric tensor whose largest eigenvalue is the order parameter $S$ in the NLC. Using the Landau-de Gennes approach, at equilibrium, the 3-D director field and the locations of defect structures for a given geometry are predicted. The nematic director field, a headless vector field (i.e. $-\bf{n} \equiv \bf{n}$), represents the average direction of an ensemble of molecules of size comparable to the correlation length at any point in the system. The geometry of the system, the boundary conditions, and elastic constants for the NLC are inputs to the numerical relaxation procedure. Specifically, the particle is bounded by walls with oriented planar anchoring separated by thickness $T = 4a$. The anchoring at the boundary opposite the wavy wall is set to zero, and that of the flat plates sandwiching the colloid and the wavy wall is set to oriented planar. The Nobili-Durand anchoring potential is used \cite{nobili1992disorientation}. Defects are defined as the regions where the order parameter $S$ is significantly less than the bulk value. The mesh size in our simulation is related to the correlation length in the NLC, and corresponds to 4.5 nm. Owing to this difference in scale, the exact final configurations of numerics and experiment must be compared with caution; nevertheless, the simulation is an invaluable tool to corroborate and elucidate experimental findings. Because the simulated system is much smaller than the experimental one, much stronger anchoring is applied; for most of our results, infinite anchoring strength is applied unless otherwise specified. To simulate dipoles, we vary the material constants $B$ and $C$ so that the core energy of the defect is 2.6 times higher, to compensate for the small system (details can be found in Supplemental Materials). In addition, we also use an initial condition with a dipolar configuration about the colloid: ${\bf n}({\bf r}) = \hat i + PR_c^2\frac{{\bf r}-{\bf r_c}}{|{\bf r}-{\bf r_c}|^3}$, where $R_c$ is the colloid radius, ${\bf r_c}$ is the location of the colloid center, $P=3.08$ is the dipole moment, and $\hat i$ is the far-field director \cite{stark2001physics}. This expression is applied only in a sphere of radius 2$R_c$ around ${\bf r_c}$.
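As a concrete illustration of this initialization step (and not of our production code), the sketch below evaluates the dipolar ansatz on an array of mesh points; the function signature and the final renormalization to unit length are our own choices.

\begin{verbatim}
import numpy as np

def dipolar_ansatz(points, r_c, R_c, P=3.08, far_axis=0):
    # n(r) = i_hat + P * R_c**2 * (r - r_c)/|r - r_c|**3,
    # applied only inside the sphere |r - r_c| < 2 * R_c.
    n = np.zeros((len(points), 3))
    n[:, far_axis] = 1.0                     # uniform far-field director
    d = points - np.asarray(r_c, dtype=float)
    r = np.linalg.norm(d, axis=1)
    near = (r > 0) & (r < 2.0 * R_c)
    n[near] += P * R_c**2 * d[near] / r[near, None]**3
    return n / np.linalg.norm(n, axis=1, keepdims=True)
\end{verbatim}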
{\bf Numerical modeling by COMSOL.} To describe some aspects of the director field in the domain (Fig. S1b), we employ the common simplification in nematic liquid crystal modeling known as the one-constant approximation: $K_1=K_2=K_3\equiv K$. If there is no bulk topological defect, then the director field is a solution to Laplace's equation $\nabla^2 {\bf n} = 0$, which can be solved by COMSOL separately for the two components $n_x$ and $n_z$, from which $n_y$ is obtained by the unit length restriction on $\mathbf{n}$. In COMSOL, this is most easily implemented with the Electrostatics Module: the model solves the equivalent electrostatic problem $\nabla^2 V = 0$, which gives us $n_x$ and $n_z$. Customized geometry, such as the wavy wall, can be built with the geometry builder. We mesh the space with a triangular mesh and calculate the director field components; the results are then exported in grid form and post-processed in MATLAB.
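For readers without access to COMSOL, the same harmonic relaxation can be sketched with a simple finite-difference scheme. The minimal Jacobi iteration below, for a single director component, is our own illustration: the wavy wall enters only through assumed boundary values, and a fixed iteration count stands in for a proper convergence test. The unit-length constraint then fixes the remaining component pointwise.

\begin{verbatim}
import numpy as np

def relax_component(bc, n_iter=5000):
    # Jacobi relaxation of Laplace's equation for one director
    # component on a rectangular grid. `bc` holds boundary values
    # on its edges (e.g., sampled along the wavy wall) and an
    # initial guess in the interior.
    u = bc.copy()
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2])
    return u
\end{verbatim}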
\section*{Acknowledgement}
This work is supported by the Army Research Office, Grant W911NF1610288. We thank Dr. Sarah Hann for treatment of iron oxide NPs, Dr. Laura Bradley for useful discussion on synthesizing magnetic droplets, Prof. Ani Hsieh, Dr. Shibabrat Naik, Dr. Denise Wong, and Dr. Edward Steager for useful discussion on magnetic control and path-planning.
\section*{Author contributions}
K.J.S, F.S., and Y.L. designed the project. Y.L. performed research. Y.L. and D.A.B. performed numerical modeling and theoretical analysis. K.J.S, F.S., Y.L. and D.A.B. wrote the manuscript.
{\bf Competing interests:} The authors declare no competing financial interests.
Events containing a high $P_{T}$ isolated electron or muon and associated
with missing transverse momentum have been observed at
HERA \cite{isoleph1origwpaper,isoleph1newwpaper,isolepzeusorigwpaper,zeustop}.
An excess of HERA~I data events (1994--2000, mostly in $e^{+}p$ collisions)
compared to the SM prediction at large hadronic transverse
momentum $P_{T}^{X}$ was reported by the H1 Collaboration
\cite{isoleph1newwpaper}, which was not confirmed by the ZEUS Collaboration,
although the analysis was performed in a more restricted phase
space \cite{zeustop}.
The main SM contribution to the signal topology is the production
of real $W$ bosons via photoproduction with subsequent leptonic decay
$ep\rightarrow eW^{\pm}$($\rightarrow l\nu$)$X$, where the hadronic
system $X$ is typically of low $P_{T}$.
The equivalent charged current process
$ep \rightarrow \nu$$W^{\pm}$($\rightarrow l\nu$)$X$ also contributes to the
total signal rate, although only at a level of around 7\%.
The production of $Z^{0}$ bosons with subsequent decay to neutrinos
$ep \rightarrow eZ^{0}$($\rightarrow \nu\bar{\nu}$)$X$ results in a
further minor contribution\footnote{This process is not included in
the present ZEUS analysis.} to the total signal rate in the electron
channel at a level of 3\%.
The event selections employed by the H1 \cite{h1isolepnew} and ZEUS
\cite{zeusisolepnew} analyses are very similar and may be summarised
as follows:
The identified lepton should have high transverse momentum $P_{T}^{l} >$~10~GeV,
be observed in the central region of the detector and be isolated with
respect to jets and other tracks in the event.
The event should also contain a large transverse momentum imbalance,
$P_{T}^{miss} >$~12~GeV. Further cuts are then applied, which are designed
to reduce SM background, whilst preserving a high level of signal purity.
Event quantities sensitive to the presence of high-energy undetected
particles in the event are employed, such as the azimuthal balance of the
event, the difference in azimuthal angle between the lepton and the
hadronic system, and the longitudinal momentum imbalance.
To ensure that the two lepton channels are exclusive and may therefore
be combined, electron events must contain no isolated muons.
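A schematic encoding of these headline cuts is sketched below; the event-field names are hypothetical, the polar-angle window is the common H1+ZEUS range introduced in section \ref{sec:comb}, and the additional background-rejection cuts are omitted.

\begin{verbatim}
def passes_selection(event):
    # Headline lepton + missing-P_T cuts only; the azimuthal-balance,
    # acoplanarity and longitudinal-imbalance cuts are omitted.
    return (event["pt_lepton"] > 10.0                  # GeV
            and 15.0 < event["theta_lepton"] < 120.0   # degrees
            and event["lepton_isolated"]               # w.r.t. jets/tracks
            and event["pt_miss"] > 12.0)               # GeV
\end{verbatim}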
\section{Results from the H1 and ZEUS Analyses}
\label{sec:sep}
Both H1 and ZEUS have recently performed the analysis of the electron and
muon channels\footnote{The H1 Collaboration have also performed the
analysis of the tau decay channel using the full HERA~I+II data and the
hadronic 1--prong tau decay mode \cite{h1isotaunew}. In this search, where the
signal purity is much lower at around 14\%, 20 events are observed in the
data compared to a SM prediction of 19.5~$\pm$~3.2.} on their respective
complete HERA I+II data sets, which correspond to
approximately 0.5~fb$^{-1}$ per experiment \cite{h1isolepnew,zeusisolepnew}.
A total of 59 events are observed in the H1 data, compared to a
SM prediction of 58.9~$\pm$~8.2.
For $P_{T}^{X} >$ 25~GeV, a total of 24 events are observed compared
to a SM prediction of 15.8~$\pm$~2.5, where 21 events are observed in
the $e^{+}p$ data compared to a SM prediction of 8.9~$\pm$~1.5.
The observed data excess in the HERA~I $e^{+}p$ data thus remains at
the 3.0$\sigma$ level for the complete H1 $e^{+}p$ dataset.
In the ZEUS analysis of the complete HERA~I+II data, 41 data events are
observed in agreement with the SM prediction of 48.3~$\pm$~6.8.
Unlike in the H1 analysis, agreement between data and SM is also observed
in the high $P_{T}^{X}$ region, where 11 events are seen in
the $e^{\pm}p$ data compared to a SM prediction of 13.1~$\pm$~1.8.
\section{A Combined H1 and ZEUS Analysis}
\label{sec:comb}
A study of the selection efficiency for the signal process using the event
generator EPVEC \cite{epvec} found the H1 and ZEUS analyses
to be compatible in the kinematic region where they are directly
comparable \cite{h1ichep06,zeusichep06}.
The majority of the data events observed by H1 at $P_{T}^{X} >$ 25~GeV
are also found to fall into the region of overlap of the two analyses.
Nevertheless, in order to coherently combine the results from the two
experiments, a common phase space has been established.
The common selection is based
on the H1 event selection \cite{isoleph1newwpaper,h1isolepnew}, but over a
more restricted lepton polar angle range of
15~$^\circ < \theta_l <$~120~$^\circ$, as employed in the ZEUS analysis
\cite{zeusisolepnew}.
The signal expectation rates of the H1 and ZEUS analyses using the common
selection are found to be comparable, taking into account the
respective luminosities of the data sets and the signal processes included.
More details on the combination of the H1 and ZEUS analyses can be found
in \cite{h1andzeusisolepnew}.
The results of the combined H1+ZEUS analysis are summarised in table 1.
The signal contribution, dominated by real $W$ production, accounts for
the bulk of the total SM expectation in all data samples.
At large hadronic transverse momentum $P_{T}^{X} >$ 25~GeV a total of
29 events are observed in the H1+ZEUS $e^{\pm}p$ data compared to a
SM prediction of 25.3~$\pm$~3.2.
In the $e^{+}p$ data alone, 23 events are observed with $P_{T}^{X} >$ 25~GeV
compared to a SM prediction of 14.6~$\pm$~1.9, equivalent to an excess
of data over the SM prediction of 1.8$\sigma$.
Seventeen of the 23 data events are observed in the H1 data compared to
a SM expectation of 7.1~$\pm$~0.9, equivalent to an excess of data over
the SM prediction of 2.9$\sigma$.
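For orientation, the significance of such a counting excess can be estimated with a simple toy Monte Carlo, as sketched below; this is a generic recipe with our own choices of truncation and sampling, not necessarily the statistical convention used by the collaborations.

\begin{verbatim}
import numpy as np
from scipy import stats

def excess_significance(n_obs, b, sigma_b, n_mc=200000, seed=0):
    # p-value for observing >= n_obs events given a background
    # expectation b +- sigma_b, converted to a one-sided Z-value.
    rng = np.random.default_rng(seed)
    means = np.clip(rng.normal(b, sigma_b, n_mc), 0.0, None)
    counts = rng.poisson(means)
    p = np.mean(counts >= n_obs)
    return stats.norm.isf(p)

print(excess_significance(17, 7.1, 0.9))  # roughly 3, cf. the quoted 2.9
\end{verbatim}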
Figure~\ref{fig:isolep1} shows the transverse mass, $M_{T}^{l\nu}$
and $P_{T}^{X}$ distributions of the H1+ZEUS $e^{\pm}p$ data for the
combined electron and muon channels.
\begin{figure}
\includegraphics[height=.42\textwidth]{southEps07.fig1a.eps}
\hfill
\includegraphics[height=.42\textwidth]{southEps07.fig1b.eps}
\caption{The transverse mass $M_{T}^{l\nu}$ (left) and hadronic
transverse momentum $P_{T}^{X}$ (right)
distributions of the combined H1+ZEUS $e^{\pm}p$ HERA I+II data.
The data (points) are compared to the
SM expectation (open histogram). The signal component of the SM
expectation is given by the
hatched histogram. $\rm N_{Data}$ is the total number of data events
observed, $\rm N_{SM}$ is the total SM expectation. The total error
on the SM expectation is given by the shaded band.}
\label{fig:isolep1}
\end{figure}
\section{Cross Sections and $W$ Polarisation Fractions}
The H1 selection results described in section \ref{sec:sep} are used to
calculate production cross sections for events with an energetic isolated
lepton and missing transverse momentum ($\sigma_{\ensuremath{\ell+{P}_{T}^{miss}}}$)
and for single $W$ boson production ($\sigma_{W})$, for which the branching
ratio for leptonic $W$ decay is taken into account \cite{h1wpol}.
The results are shown below with statistical (stat) and systematic (sys)
uncertainties compared to the SM, quoted with a theoretical systematic error
(th.sys) of 15\%.
\vspace{-0.2cm}
\begin{table}[h]
\begin{center}
\begin{tabular}{ | l | c @{$\,\pm\,$} c @{$\,(\textrm{stat})\,\pm\,$} c @{\,(sys)\,}| c @{$\,\pm\,$} c @{\,(th.sys)\,}| }
\hline
\multicolumn{1}{|c|}{{ \bf H1}} & \multicolumn{3}{c|}{HERA I+II Data} & \multicolumn{2}{c|}{SM} \\
\hline\hline
{\small $\sigma_{\ensuremath{\ell+{P}_{T}^{miss}}}$} &
{\small 0.24} &
{\small 0.05} &
{\small 0.05} &
{\small 0.26} &
{\small 0.04} \\
{\small $\sigma_{W}$} &
{\small 1.23} &
{\small 0.25} &
{\small 0.22} &
{\small 1.31} &
{\small 0.20} \\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{-0.6cm}
A measurement of the $W$ polarisation fractions is also performed
by H1, as described in \cite{h1wpol}.
Using a 2D fit, optimal values of the left-handed ($F_{-}$) and longitudinal
($F_{0}$) fractions are extracted, as shown in
figure \ref{fig:isolep2} (left) compared to the SM and a FCNC single top model.
\section{Search for Single Top Quark Production}
A search for single top quark production at HERA is performed by H1
as an extension of the search for isolated lepton events,
using the full HERA~I+II $e^{\pm}p$ data \cite{h1top,h1topnew}.
The investigated model considers anomalous production of top quarks in a
Flavour Changing Neutral Current process involving the coupling
$\kappa_{tu\gamma}$.
A multivariate analysis is performed to discriminate top from SM
background (dominated by real $W$~production).
No evidence for single top production is observed.
An upper limit on the anomalous top production cross section of
$\sigma_{ep\rightarrow etX}~<~0.16$~pb is established at 95\% CL.
The corresponding H1 limit on the coupling $\kappa_{tu\gamma}~<~0.14$ is
shown in figure \ref{fig:isolep2} (right) and is currently the best limit compared to
those from other colliders\footnote{An improved limit on $v_{tuZ}$ by the CDF Collaboration
was presented at the HEP-EPS 2007 conference; see \cite{cdftopnew}.}.
\section*{References}
In recent years, fractional calculus has received a great deal of attention. Equations involving fractional
derivatives and fractional Laplacians have been studied by various authors (see, e.g. Podlubny
\cite{Po} and references
therein). In probability theory, fractional calculus has been extensively used in the study of fractional
Brownian motions. In this work we consider a stochastic partial differential equation where the standard Laplacian
operator is replaced by a fractional one. \\
\noindent
Let $\lambda > 0$. We consider the fractional Laplacian
$\Delta_\lambda=- (-\Delta)^{\lambda/2}= - (-\partial^2/\partial x^2)^{\lambda/2}$, the symmetric fractional derivative
of order $\lambda$ on ${\rm I~\hspace{-1.15ex}R}$. This is a non-local operator defined via the Fourier transform
${\cal F}$:
$$ {\cal F}(\Delta_\lambda v)(\xi)=-\vert \xi\vert^\lambda {\cal F}(v)(\xi).$$
It also has another representation, for $0<\lambda <2$,
\begin{equation}
\label{delta}
\Delta_\lambda v(x)=K \int_{{\rm I~\hspace{-1.15ex}R}} \left\{ v(x+y)-v(x)-
\nabla v(x)\cdot \frac{y}{1+\vert y\vert^2}
\right\}\frac{dy}{\vert y\vert^{1+\lambda}} ,
\end{equation}
for some positive constant $K=K_\lambda$, which identifies it as the infinitesimal generator for
the symmetric
$\lambda$-stable L\'evy process (see, e.g., It\^o \cite{It}, Stroock \cite{St}, Komatsu \cite{Ko},
Dawson and Gorostiza \cite{Da}).\\
\noindent
Let $W=\left\{W(t,x),(t,x)\in [0,T]\times {\rm I~\hspace{-1.15ex}R} \right\}$ be a Brownian sheet on a complete probability
space $(\Omega ,{\cal{G}},P)$. That is, $W$ is a zero-mean Gaussian random field with covariance
function
$$
E(W(t,x)W(s,y))=\frac{1}{2} (
s\wedge t) \left( \vert x\vert +\vert y\vert -\vert x-y\vert\right),
$$
$x, y \in {\rm I~\hspace{-1.15ex}R}$, $s, t \in [0,T]$.
Then, for each $t\in [0,T]$, we define a filtration
$$
{\cal{G}}_t^0=\sigma \left( W(s,x), s\in [0,t], x\in {\rm I~\hspace{-1.15ex}R}\right),\,\,\,\,\,
{\cal{G}}_t= {\cal{G}}_t^0 \vee {\cal{N}},
$$
where ${\cal{N}}$ is the $\sigma$-field generated by sets with $P$\--outer measure zero. \\
The family of $\sigma$-fields $\{{\cal{G}}_t, 0\le t\le T\}$ constitutes a stochastic basis on the probability
space $(\Omega ,{\cal{G}},P)$. Let ${\cal P}$ be the corresponding predictable $\sigma$-field on
$\Omega \times [0,T] \times {\rm I~\hspace{-1.15ex}R}$. The stochastic integral with respect to the Brownian sheet
is explained in Cairoli et al. \cite{Ca} or Walsh \cite{Wa}.\\
We focus on the following parabolic stochastic partial
differential equation, driven by space--time white noise in one space
dimension on $[0, T]\times {\rm I~\hspace{-1.15ex}R} $%
$${\bf (E)}\hspace{4mm}\frac{\partial u}{\partial t}(t,x)=\Delta_\lambda u(t,x)
+b\left( t,x,u\left( t,x\right) \right)+\sigma \left( t,x,u\left( t,x\right) \right)\dot W(t,x),$$
with initial condition $u(0,x)=u_0(x)$ ${\cal{G}}_0$-measurable and satisfying some conditions that will be specified
later. The process $\dot W(t,x) = \frac{\partial^2 W}{\partial t \partial x}$ is
the generalized (distribution) derivative of the Brownian sheet.
The properties of $\dot W$ are described in Walsh \cite{Wa}.\\
In principle one can think of a wide variety of random forcing terms. White noise in time and space
is very often a candidate. The main motivations behind this choice are
central-limit-type theorems and insufficient knowledge of the
neglected effects or external disturbances.\\
Evolution problems involving the fractional Laplace operator have long been extensively studied in the mathematical and physical
literature. In the latter, this type of models has been motivated by fractal (anomalous) diffusion related to the L\'evy flights
(see, e.g., Stroock \cite{St}, Bardos et al. \cite{Ba}, Dawson and Gorostiza
\cite{Da}, Metzler and Klafter \cite{Me}, Mann and
Woyczynski \cite{Man}). In fact, in various physical
phenomena in statistical mechanics,
the anomalous diffusive terms can be nonlocal and fractal, i.e.
represented by a fractional power of the Laplacian.\\
Equation {\bf (E)} is a generalization of the classical stochastic heat equation where $\lambda=2$ (see, e.g.,
Walsh \cite{Wa}, Pardoux \cite{Pa2} and the references quoted therein).
In those papers, the authors
prove existence and uniqueness of the mild solution in the space interval $[0,1]$.
The proof relies strongly on properties of the explicit Green kernel associated
to the operator $\frac{\partial^2}{\partial x^2}$ in bounded space interval
with Dirichlet boundary conditions. In the present paper, we consider the above
class of equations in the whole line, instead of a bounded interval,
for the space variable. The main properties of the semigroup generated by
the fractional Laplacian can be derived by Fourier transform techniques.\\
Consider the fundamental solution $G_\lambda(t,x)$, associated to the equation
{\bf (E)} on $[0,T] \times {\rm I~\hspace{-1.15ex}R} $ i.e.
the convolution kernel of the L\'evy semigroup $\exp(t \Delta_\lambda)$ in ${\rm I~\hspace{-1.15ex}R}$.\\
Using Fourier transform, we easily see that $G_\lambda(t,x)$ is given by :
$$G_\lambda(t,x)= {{\cal F}}^{-1}(e^{-t \vert\,\cdot\,\vert^\lambda})(x)
={\int_{{\rm I~\hspace{-1.15ex}R}}} e^{2 i\pi x\xi} e^{-t\vert\xi\vert^\lambda}d\xi={\cal{F}}(e^{-t
\vert\,\cdot\,\vert^\lambda})(x).$$
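Since no closed form is available for general $\lambda$, the kernel can be evaluated numerically from this representation; because $e^{-t\vert\xi\vert^\lambda}$ is even, the integral reduces to a cosine transform. The quadrature below is a rough sketch whose truncation and grid size are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

def G_lambda(t, x, lam, xi_max=50.0, n=200001):
    # G_lambda(t,x) = 2 * int_0^inf cos(2 pi x xi) exp(-t xi^lam) d xi,
    # evaluated by trapezoidal quadrature.
    xi = np.linspace(0.0, xi_max, n)
    integrand = np.cos(2 * np.pi * x * xi) * np.exp(-t * xi**lam)
    return 2.0 * np.trapz(integrand, xi)

# Sanity check against the Gaussian case lam = 2, where
# G_2(t, x) = sqrt(pi/t) * exp(-pi**2 * x**2 / t).
print(G_lambda(1.0, 0.3, 2.0),
      np.sqrt(np.pi) * np.exp(-np.pi**2 * 0.09))
\end{verbatim}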
For $\lambda \in ]0,2]$, the most important property of $G_\lambda$ is its \emph{nonnegativity} (see L\'evy \cite{Le}
or Droniou et al. \cite{Dro1} for a quick proof). \\
Throughout this work we consider solutions to the spde {\bf (E)} in the mild sense,
following Walsh \cite{Wa}, given by the following definition
(which is formally equivalent to Duhamel's principle or the variation of parameters formula): \\
\begin{definition}
A stochastic process $u: \Omega \times [0,T] \times {\rm I~\hspace{-1.15ex}R} \rightarrow {\rm I~\hspace{-1.15ex}R}$, which is jointly measurable
and ${\cal{G}}_t$-adapted, is said to be a (stochastically) mild solution to the stochastic
equation {\bf (E)} with initial condition $u_0$ if there exists a martingale measure $W$, defined on
$\Omega$, such that
{\it a.s.} for almost all $t\in [0,T], x\in {\rm I~\hspace{-1.15ex}R}$,
\begin{eqnarray}
\label{walsh}
u(t,x)&=& G_\lambda(t,\cdot) \ast u_0(x)+\int_0^t\hspace{-2mm}\int_{\rm I~\hspace{-1.15ex}R}
G_\lambda(t-s,x-y)b(s,y,u(s,y))dyds\nonumber\\
& & \nonumber\\
& & \quad \quad +\int_0^t
\hspace{-2mm}\int_{\rm I~\hspace{-1.15ex}R} G_\lambda(t-s,x-y)\sigma (s,y,u(s,y))W(dy,ds) ,
\end{eqnarray}
where the last integral is an It\^o stochastic integral.
\end{definition}
\noindent
We assume that the reaction term $b$ and the white-noise amplitude $\sigma$ are continuous functions on
$[0,T] \times {\rm I~\hspace{-1.15ex}R} \times {\rm I~\hspace{-1.15ex}R}$
and satisfy the following growth and Lipschitz conditions:\\
\noindent
${\bf (H_0)}$ \\
For all $T>0$, there exists a constant $C=C(T)$ such that for
all $0\le s,t \le T$, $x,y\in {\rm I~\hspace{-1.15ex}R}$
and $u,v \in {\rm I~\hspace{-1.15ex}R}$,
\begin{eqnarray*}
\vert b(t,x,u)\vert + \vert \sigma (t,x,u)\vert & \le & C (1+ \vert u\vert),\\
\vert \sigma (t,x,u)- \sigma (t,x,v)\vert & \le &
C \,\vert\, u - v\,\vert,\\
\vert b (s,x,u)- b(t,y,v)\vert & \le &
C \left( \vert\, t - s\,\vert+\vert\, x - y\,\vert+\vert\, u - v\,\vert \right).
\end{eqnarray*}
\medskip
We shall also need some hypotheses on the initial condition $u_0$:
\noindent
${\bf (H_1.1)}\quad \sup_{x\in {\rm I~\hspace{-1.15ex}R}} E(\displaystyle|u_0(x)|^p)<\infty,\,\forall p\in[1,+\infty[$.\\
\medskip
\noindent
${\bf (H_1.2)}\quad \exists\rho \in (0,1),\,\forall z \in {\rm I~\hspace{-1.15ex}R},\,\forall p\in[1,+\infty[,\, \exists C_p>0 $
$$
\sup_{y\in {\rm I~\hspace{-1.15ex}R}}E|u_0(y+z)- u_0(y)|^p \leq C_p \vert z \vert^{\rho p}.
$$
\bigskip\\
Let us recall some well-known properties
(see, e.g. Komatsu \cite{Ko}, Biler et Woyczynski \cite{Bi},
Droniou et Imbert \cite{Dro})
of the Green kernel $G_\lambda(t,x)$ which will be used later on.
\begin{Lemma}
\label{tech} Let $\lambda \in ]0,2]$.
The convolution kernel $G_\lambda$ satisfies the following properties:
\mbox{ }\\
(\textbf{a}) For any $t\in \left] 0,+\infty \right[ $ and $x\in {\rm I~\hspace{-1.15ex}R} $,
$$
G_\lambda(t,x) \geq 0 \quad\mathrm{and}\quad \displaystyle\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda(t,x)dx = 1.
$$
(\textbf{b})\ (self similarity) For any $t\in {\rm I~\hspace{-1.15ex}R}_{+}$\ and\ $x\in {\rm I~\hspace{-1.15ex}R} $
$$
G_\lambda(t,x)=t^{-\frac{1}{\lambda}} G_\lambda(1,t^{-\frac{1}{\lambda}} x),
$$
(\textbf{c}) $G_\lambda$ is $C^\infty$ on $\left] 0,+\infty \right[ \times {\rm I~\hspace{-1.15ex}R}$ and, for
$m\geq 0$, there exists \\ $C_m>0$ such that for any $t\in {\rm I~\hspace{-1.15ex}R}_{+}$ and $x\in {\rm I~\hspace{-1.15ex}R} $
$$
\mid \partial_x^m G_\lambda(t,x)\mid \leq \frac{1}{t^{(1+m)/\lambda}} \frac{C_m}{(1+t^{-2/\lambda}\vert x\vert^2)}.
$$
(\textbf{d}) For any $(s,t) \in \,]0,\infty[\times ]0,\infty[$
$$G_\lambda(s,\cdot)\ast G_\lambda(t,\cdot)=G_\lambda(s+t,\cdot).$$
(\textbf{e}) $\int_0^T\,dt\int_{\rm I~\hspace{-1.15ex}R}\,dx \, G_\lambda(t,x)^\alpha <\infty$ iff $1/2 < \alpha < 1+\lambda$.
\end{Lemma}
\noindent
In this paper, in order to define the stochastic integral, we restrict ourselves to the case
$\lambda \in ]1,2]$ : we must take $\lambda \leq 2$ to have $G_\lambda$
positive and we have to take $\lambda >1$ in order that
$\int_0^T\int_{\rm I~\hspace{-1.15ex}R} G_\lambda(t,x)^2\,dtdx <\infty$, by lemma \ref{tech} (\textbf{e}).\\
\noindent
Inessential constants will be denoted
generically by $C$; they may vary from line to line.\\
\noindent
The paper is organized as follows. In section 2, we prove
existence and uniqueness of the solution.
In section 3 we prove H\"older continuity of the solution in space and time.
An improved Gronwall-type inequality and
an H\"older inequality frequently used in the paper are
collected in the appendix.
\section{ Existence and Uniqueness of the solution}
\setcounter{equation}{0}
The main result of this section is the following:
\medskip
\begin{Theorem}
\label{prop1} Let $\lambda \in ]1,2]$.
Suppose that the hypothesis ${\bf (H_0)}$ and ${\bf (H_1.1)}$ hold. Then
there exists a unique solution $u(t,x)$ to {\bf (E)} such that: for any $T>0$ and $p\geq 1$,
\begin{equation}
\label{borne0}
\sup_{0\le t\le T}\sup_{x \in {\rm I~\hspace{-1.15ex}R}}E(|u(t,x)|^p)\leq
C_p<\infty.
\end{equation}
\end{Theorem}
\medskip
\noindent
{\bf Proof.} The proof of the existence can be done by the usual Picard
iteration procedure. That is, we define recursively
$$u^0\left( t,x\right) = \displaystyle \int_{{\rm I~\hspace{-1.15ex}R}} G_\lambda(t, x-y) u_0\left( y\right) dy,$$
\begin{equation} \label{picard}
\begin{array}{lll}
u^{n+1}\left( t,x\right) & = & u^0\left( t,x\right) +
\displaystyle\int_0^t\hspace{-2mm}\displaystyle\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda(t-s,x-y)\sigma(s,y,u^n(s,y))W(dy,ds) \\
& & \\
& & +\displaystyle\int_0^t\hspace{-2mm}\displaystyle\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda(t-s,x-y)b(s,y,u^n(s,y))dyds,
\end{array}
\end{equation}
for all $n\geq 0$. We start by proving that given $t>0$, $2\le p< \infty,$
\begin{equation}
\label{borne}
\sup_{n\geq 0}\sup_{0\le s\le t}\sup_{x \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,x)\vert^p)\le C <+\infty,
\end{equation}
where $C$ is a constant depending on $p$, $t$, the moments of $u_0$ and the growth and Lipschitz constants of
$\sigma$ and $b$. Indeed,
\begin{equation}
\label{u0AB}
E(\left| u^{n+1}\left( t,x\right) \right| ^p) \le C
\left\{ E(\vert u^0(t,x)\vert^p) + E(\vert A_n(t,x)\vert^p)
+ E(\vert B_n(t,x)\vert^p)\right\},
\end{equation}
where $ A_n(t,x)$ is the second term in (\ref{picard}) and $ B_n(t,x)$
is the third term on the right-hand side of the same equation.\\
\noindent
Then Jensen's inequality with respect to the probability measure $G_\lambda(s, x-y) dy$ yields
$$
\vert u^0(s,x)\vert^p \le
\int_{{\rm I~\hspace{-1.15ex}R}} G_\lambda(s, x-y) \vert u_0\left( y\right)\vert^p dy .
$$
Taking expectation and applying Fubini's theorem we obtain:
$$E(\vert u^0(s,x)\vert^p) \le \sup_{y\in {\rm I~\hspace{-1.15ex}R}} E(\displaystyle|u_0(y)|^p) \int_{{\rm I~\hspace{-1.15ex}R}} dy\, G_\lambda(s, x-y) \le \sup_{y\in {\rm I~\hspace{-1.15ex}R}} E(\displaystyle|u_0(y)|^p) .
$$
Now as ${\bf (H_1.1)}$ holds, we get :
\begin{equation}
\label{u0}
\sup_{0\le s\le t}\sup_{x \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^0(s,x)\vert^p) \le C < \infty,
\end{equation}
for some positive constant $C$.
\medskip\noindent\\
Burkholder's inequality yields, for any $p\geq 2$
$$E(\left| A_n(t,x) \right|^p)\le C E\left(\displaystyle\int_0^t%
\displaystyle\hspace{-2mm}\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(t-s,x-y)\sigma^2(s,y,u^n(s,y))\;dyds\right)^{p/2}.$$
Set
$$\nu_t=\int_0^t\hspace{-2mm}\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(t-s,x-y)dyds.$$
Since $\lambda > 1$,
$\nu_t = \int_0^t\hspace{-2mm}\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(s,y)dyds \le \int_0^T\hspace{-2mm}\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(s,y)dyds < \infty$
by lemma \ref{tech} (\textbf{e}).\\
Consider
\begin{equation}
\label{jts}
J(t-s)=\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(t-s,y)dy.
\end{equation}
Due to the scaling property (see lemma \ref{tech} (\textbf{b})), one easily checks that
\begin{equation}
\label{J}
J(t-s) = C (t-s)^{-1/\lambda}.
\end{equation}
Indeed,
$J(t-s) = (t-s)^{-1/\lambda}\int_{\rm I~\hspace{-1.15ex}R} G_\lambda^2(1,x)dx = (t-s)^{-1/\lambda}\int_{\rm I~\hspace{-1.15ex}R} \exp( -2\vert \xi \vert^{\lambda}) d\xi,$
the last equality resulting from the Plancherel identity.\smallskip\par\noindent
Because of the hypotheses on the coefficients $\sigma$ and $b$, the H\"older inequality (\ref{holder}) applied with
$f=\sigma^2(s,y,u^n(s,y))$, $h=G_\lambda^2(t-s,x-y)$ and $q=p/2$ implies
\begin{eqnarray*}
E(\left| A_n(t,x) \right|^p) &\le& C \,\nu_t^{\frac{p}{2}-1} E\left( \displaystyle\int_0^t%
\hspace{-2mm}\displaystyle\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(t-s,x-y)\sigma^p(s,y,u^n(s,y))\;dyds\right) \\
&\le& C \left( \displaystyle\int_0^t\hspace{-2mm}\displaystyle%
\int_{{\rm I~\hspace{-1.15ex}R}}\left(1+\sup_{y \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,y)\vert^p)\right) G_\lambda^2(t-s,x-y)dyds\right) \\
&\le& C \left( \displaystyle\int_0^t\left(1+\sup_{y \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,y)\vert^p)\right)
\left(\displaystyle\int_{{\rm I~\hspace{-1.15ex}R}}G_\lambda^2(t-s,x-y)dy\right)ds\right).
\end{eqnarray*}
Hence
\begin{equation}
E(\left| A_n(t,x) \right|^p) \le C \displaystyle\int_0^t \left(1+\sup_{0\le s\le t}\sup_{y \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,y)\vert^p)\right)
J(t-s)ds.
\label{A}
\end{equation}
The linear growth assumption on $b$ and H\"older's inequality applied to integrals with respect to the measure
$G_\lambda(t-s,x-y)dsdy$ imply
\begin{equation}
\label{B}
E(\left| B_n(t,x) \right|^p )\le C\, \int_0^t\,
\left(1+\sup_{0\le s\le t}\sup_{y \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,y)\vert^p)\right)\,ds.
\end{equation}
\noindent
Collecting (\ref{u0AB}),(\ref{u0}),(\ref{A}),(\ref{B}) and (\ref{J}) we conclude that
\begin{eqnarray*}
&&E(\left| u^{n+1}\left( t,x\right) \right|^p )\\
&& \, \le C\,\left(E(\vert u^0(t,x)\vert^p)
+ \int_0^t\left(1+\sup_{y \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,y)\vert^p)\right)
(J(t-s)+1)ds\right)\\
&& \, \le C\,\left(1
+ \int_0^t \,(t-s)^{-\frac{1}{\lambda}}\sup_{0\le s\le t}
\sup_{y \in {\rm I~\hspace{-1.15ex}R}}E(\vert u^n(s,y)\vert^p) \,ds
\,\right).\\
\end{eqnarray*}
Thus by lemma \ref{gronwall} (see appendix) we obtain (\ref{borne}). \\
\noindent
In order to prove that $(u^n(t,x),\, n\geq 0)$ converges in $L^p$, let
$n\geq 0$, $0\leq t\leq T$ and set
$$
M_n(t) = \sup_{0\le s\le t}\sup_{x\in {\rm I~\hspace{-1.15ex}R}} E(\left| u^{n+1}\left( s,x\right)
-u^{n}\left( s,x\right) \right|^p).
$$
Using the Lipschitz property of $\sigma$ and $b$, a similar computation implies
$$M_n(t) \le C\, \int_0^t ds \,M_{n-1}(s)
(J(t-s)+1) .$$
\noindent
Moreover, owing to (\ref{borne}) we have
$$\sup_{0\le t\le T} M_0(t)\le
\sup_{0\le t\le T}\sup_{x \in {\rm I~\hspace{-1.15ex}R}}E(|u^1(t,x)|^p) +
\sup_{0\le t\le T}\sup_{x \in {\rm I~\hspace{-1.15ex}R}}E(|u^0(t,x)|^p) <\infty.
$$ Therefore, by lemma \ref{gronwall}
the sequence $(u^n(t,x),\, n\geq 0)$ converges in $L^p(\Omega,{\cal{G}},P)$, uniformly in $x\in {\rm I~\hspace{-1.15ex}R}$ and
$0\le t \le T$, to a limit
$u(t,x)$. It is easy to see that $u(t,x)$ satisfies (\ref{walsh}), (\ref{borne0})
which proves the existence of a solution. Following the same approach as in Walsh \cite{Wa}, we can
prove that the process $(u(t,x),\, t\geq 0, x\in {\rm I~\hspace{-1.15ex}R})$
has a jointly measurable version which is continuous in $L^p$ and fulfills (\ref{walsh}).
Uniqueness of the solution
is checked by standard arguments. \hfill $\Box$\\
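Although the Picard scheme above is an analytical device, it also suggests a direct simulation of the mild formulation (\ref{walsh}). The sketch below uses an exponential Euler step on a periodic grid, applying the semigroup through its Fourier multiplier; the domain truncation, the grid parameters and the particular coefficients $b$ and $\sigma$ are illustrative assumptions only.

\begin{verbatim}
import numpy as np

lam, L, N, T, M = 1.5, 20.0, 256, 1.0, 200
dx, dt = L / N, T / M
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # Fourier modes
mult = np.exp(-dt * np.abs(xi)**lam)           # multiplier of exp(dt*Delta_lam)

def heat(v):                                   # v -> G_lambda(dt, .) * v
    return np.real(np.fft.ifft(mult * np.fft.fft(v)))

b = lambda u: -u                               # Lipschitz coefficients (H_0)
sigma = lambda u: 0.5 * np.ones_like(u)

rng = np.random.default_rng(1)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
u = np.exp(-x**2)                              # deterministic initial datum
for _ in range(M):
    dW = rng.normal(0.0, np.sqrt(dt / dx), N)  # white-noise increments / dx
    u = heat(u) + dt * heat(b(u)) + heat(sigma(u) * dW)
\end{verbatim}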
\section{H\"older continuity of the solution}
\setcounter{equation}{0}
In this section we analyze the path regularity of $u(t,x)$. The next result extends and improves
similar estimates known for the stochastic heat equation (corresponding to the case $\lambda=2$).
\begin{Theorem}
\label{holdercont} Let $\lambda \in ]1,2]$. Suppose that ${\bf (H_0)}$, ${\bf (H_1.1)}$ and
${\bf (H_1.2)}$ are satisfied.
Then, $\omega$-almost surely, the function $\left( t,x\right)
\longmapsto u\left( t,x\right) \left( \omega \right) $ belongs to H\"older space
${\cal{C}} ^{\alpha ,\beta }\left( \left[ 0,T\right] \times {\rm I~\hspace{-1.15ex}R} \right) $
for $0<\alpha < (\frac{\rho}{\lambda} \wedge \frac{ \lambda -1}{2 \lambda })$ and
$0<\beta< (\rho \wedge \frac{\lambda-1}{2})$.
\end{Theorem}
\noindent \textbf{Proof.}
\noindent
Fix $T>0, h>0$ and $p\in ]1,1/\rho[.$ We show first that
\begin{equation}
\label{temps}
\sup_{0\le t\le T}\sup_{x\in {\rm I~\hspace{-1.15ex}R}} E(\mid u(t+h,x)-u(t,x)\mid ^p)\le C\,h^{\alpha p},
\end{equation}
for any $0<\alpha < (\frac{\rho}{\lambda} \wedge \frac{ \lambda -1}{2 \lambda })$.\\
\par\noindent
Indeed, we have
\begin{equation}
\label{sum}
E(\mid u(t+h,x)-u(t,x)\mid ^p)\le C \sum_{i=1}^4 I_i(t,h,x),
\end{equation}
where
\begin{eqnarray*}
I_1(t,h,x)&=& E\left\vert \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} (G_\lambda(t+h,x-y)-
G_\lambda(t,x-y))u_0(y)dy \right\vert^p,\\
I_2(t,h,x)&=& E\left(\left\vert\int_0^t \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}
[G_\lambda(t+h-s,x-y)-G_\lambda(t-s,x-y)]\right.\right.\\
&&\hspace{3cm}\times \left.\sigma(s,y,u(s,y))W(dy,ds) \Bigg\vert^p \right),\\
I_3(t,h,x)&=&E\left(\left\vert\int_t^{t+h}
\hspace{-2mm} \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}
G_\lambda(t+h-s,x-y)\sigma(s,y,u(s,y))W(dy,ds) \right\vert^p \right),\\
I_4(t,h,x)&=& E\left(\left\vert\int_0^{t+h} ds\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} dy\,
G_\lambda(t+h-s,x-y)b(s,y,u(s,y))\right.\right.\\
&&\quad \left.\left. - \int_0^t ds \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} dy\,
G_\lambda(t-s,x-y)b(s,y,u(s,y)) \right\vert^p \right).\\
\end{eqnarray*}
Using the semigroup property of the convolution kernel $G_\lambda$,
$$ G_\lambda(t+h ,x-y) = \int_{{\rm I~\hspace{-1.15ex}R}} G_\lambda(t,x-y-z)\, G_\lambda(h,z)\, dz.$$
Hence
$$
I_1(t,h,x) =E\left(\left\vert \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}G_\lambda(h,z)\left(
\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} G_\lambda(t,x-y)\left( u_0(y-z)-u_0(y)\right) dy \right)dz\right\vert^p\right).
$$
With H\"older's inequality (\ref{holder}), the assumption
${\bf (H_1.2)}$ and Fubini's theorem we obtain
\begin{eqnarray}
I_1(t,h,x)& & \le \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\, G_\lambda(h,z) \sup_{ y\in {\rm I~\hspace{-1.15ex}R}} E\vert
u_0(y-z)-u_0(y)\vert^p\, dz\nonumber\\
& & \quad \le C \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\, G_\lambda(h,z) \, \vert z\vert^{\rho\,p} \,dz.
\end{eqnarray}
Now, due to the self-similarity property (see lemma \ref{tech} {\bf b })
$$
\int_{{\rm I~\hspace{-1.15ex}R}}\, G_\lambda(h,z) \, \vert z\vert^{\rho\,p} \,dz =
\int_{{\rm I~\hspace{-1.15ex}R}}\,h^{-1/\lambda} G_\lambda(1,h^{-1/\lambda}\,z)\,\vert z\vert^{\rho\,p} \,dz
$$
$$
= h^\frac{\rho\, p}{\lambda} \, \int_{{\rm I~\hspace{-1.15ex}R}} G_\lambda(1,y)\vert y \vert^{\rho\,p}\,dy.
$$
Using the fact that $G_\lambda(1,y) \leq \frac{C}{1+y^2}$ (see lemma \ref{tech}
{\bf c}), and that $\rho p < 1$ we obtain that
$$
\int_{{\rm I~\hspace{-1.15ex}R}} G_\lambda(1,y)\vert y \vert^{\rho\,p}\,dy < \infty.
$$
Therefore we have proved that
\begin{equation}
I_1(t,h,x) \, \le \, C\, h^\frac{\rho\, p}{\lambda}.
\end{equation}
\noindent
Burkholder's inequality, H\"older's inequality (\ref{holder}) applied to integrals with respect to the measure
$[G_\lambda(t+h-s,x-y)-G_\lambda(t-s,x-y)]^2 ds dy$,
the growth assumption on $\sigma$ and (\ref{borne0}) yield the following bound on $I_2$.
\begin{eqnarray*}
&& I_2(t,h,x)\le C \left(1+\sup_{0\le s\le t}\sup_{x\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s,x)\vert^p )\right)\\
&& \quad \times \left(\left\vert\int_0^t \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}
\, [G_\lambda(t+h-s,x-y)-G_\lambda(t-s,x-y)]^2 ds dy \right\vert^{p/2}\right)\\
&&\quad \le C \left(\int_0^t\hspace{-2mm}
\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\left({\cal F}(e^{-(t+h-s)\mid \,\cdot\,\mid^\lambda })(y)-
{\cal F}(e^{-(t-s)\mid \,\cdot\,\mid^\lambda })(y) \right)^2 ds dy\right)^{p/2}.
\end{eqnarray*}
Therefore, using Plancherel identity one easily checks that
\begin{eqnarray*}
&&\int_0^t\hspace{-2mm}
\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\left({\cal F}(e^{-(t+h-s)\mid \,\cdot\,\mid^\lambda })-
{\cal F}(e^{-(t-s)\mid \,\cdot\,\mid^\lambda }) \right)^2 (y)\, ds dy \\
&&\hspace{3cm} =\,\,
\int_0^t
\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\left(e^{-(t+h-s)\mid y\mid^\lambda }-
e^{-(t-s)\mid y\mid^\lambda } \right)^2 \,ds dy\\
&& \hspace{3cm} =\,\,
\int_0^t \hspace{-2mm}
\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} \, e^{-2(t-s)\mid y\mid^\lambda}\left(e^{-h\mid y\mid^\lambda }- 1\right)^2 ds \,dy.
\end{eqnarray*}
\noindent
Decomposing the integral on ${\rm I~\hspace{-1.15ex}R}$ into integrals
on $\{\vert y\vert>1\}$ and its complementary set,
we have
$$I_2(t,h,x)\le C\,(I_{2,1}(t,h,x) + I_{2,2}(t,h,x))$$
\noindent
where
\begin{eqnarray*}
I_{2,1}(t,h,x)&=&\left(\int_0^t \hspace{-2mm}
\displaystyle{\int_{\mid y\mid \le 1}} \,
e^{-2(t-s)\mid y\mid^\lambda}\left(e^{-h\mid y\mid^\lambda }- 1\right)^2 ds dy\right)^{p/2},\\
I_{2,2}(t,h,x)&=&\left(\int_0^t\hspace{-2mm}\displaystyle{\int_{\mid y\mid >1}}\,
e^{-2(t-s)\mid y\mid^\lambda}\left(
e^{-h\mid y\mid^\lambda }- 1\right)^2 ds dy\right)^{p/2}.
\end{eqnarray*}
\noindent
Then by the mean value theorem,
\begin{eqnarray*}
\int_0^t \hspace{-2mm}
\displaystyle{\int_{\mid y\mid \le 1}}\,e^{-2(t-s)
\mid y\mid^\lambda}\left(e^{-h\mid y\mid^\lambda }- 1\right)^2 ds dy&\le&
\int_0^T \hspace{-2mm}
\displaystyle{\int_{\mid y\mid \le 1}}\,e^{-2(t-s)
\mid y\mid^\lambda} h^2 ds dy\\
&\le& C h^2.
\end{eqnarray*}
\noindent
On the set $\{\vert y\vert>1\}$, let $0<\alpha<\frac{\lambda - 1}{2 \lambda}$,
then the same argument as above implies
\begin{eqnarray*}
&&\int_0^t \hspace{-2mm}
\displaystyle{\int_{\mid y\mid >1}}e^{-2(t-s)
\mid y\mid^\lambda}\left(e^{-h\mid y\mid^\lambda }-1\right)^2dsdy\\
&&\hspace{1cm} =\,\,
\int_0^t \hspace{-2mm}
\displaystyle{\int_{\mid y\mid >1}} e^{-2(t-s)
\mid y\mid^\lambda}\left(1 - e^{-h\mid y\mid^\lambda}\right)^{2\alpha}
\left(1 - e^{-h\mid y\mid^\lambda }\right)^{2-2\alpha}ds dy\\
&& \hspace{1cm} \le \,\,C \int_0^\infty \hspace{-2mm}
\displaystyle{\int_{\mid y\mid >1}} \,e^{-2s
\mid y\mid^\lambda}\vert h \vert^{2\alpha} \vert y\vert^{2\lambda\alpha} ds dy
\\
&& \hspace{1cm} \le \,\,C
\displaystyle{\int_{\mid y\mid >1}}\,\vert h \vert^{2\alpha} \vert y \vert^{2\lambda\alpha}
\vert y\vert^{-\lambda}dy \\
&& \hspace{1cm} \le \,\,C \,h^{2\alpha} \displaystyle{\int_{\mid y\mid >1}}\, \vert y \vert^{\lambda(2\alpha -1)}dy
\le C h^{2\alpha}.
\end{eqnarray*}
\noindent
Consequently, for $0<\alpha < \frac{\lambda - 1}{2 \lambda}$ (which ensures $\lambda(2\alpha-1)<-1$, so that the last integral above is finite), we have proved that
$$I_{2,1}(t,h,x)\le C \, h^{ p},$$
$$I_{2,2}(t,h,x)\le C\, h^{\alpha p}.$$
Since $0 < \alpha <\frac{\lambda - 1}{2 \lambda} <1,\,\forall \lambda \in ]1,2] $,
we obtain
\begin{equation}
\label{i2}
I_2(t,h,x)\le C\, h^{\alpha p}.
\end{equation}
\noindent
As before, Burkholder's inequality, H\"older's inequality (\ref{holder}) applied to integrals with respect to the measure
$G^2_\lambda(t+h-s,x-y) ds dy$,
the growth assumption on $\sigma$ and (\ref{borne0}) yield
\begin{eqnarray*}
I_3(t,h,x)&\le& C\left(1+\sup_{0\le s\le t}\sup_{x\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s,x)\vert^p )\right) \\
&&\quad\quad\times \left( \int_t^{t+h}\int_{{\rm I~\hspace{-1.15ex}R}}
G^2_\lambda(t+h-s,x-y)\,dsdy\right)^{p/2}.
\end{eqnarray*}
Recalling from (\ref{J}) that
$$\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} G^2_\lambda(t+h-s,x-y)\,dy = J(t+h-s)
= C(t+h-s)^{-1/\lambda}$$
we compute $\displaystyle\int_t^{t+h} (t+h-s)^{-1/\lambda} ds =
C\,h^{\frac{\lambda-1} {\lambda}}$.
\par\noindent Thus
\begin{equation}
\label{eq:I3}
I_3(t,h,x)\le C\,h^{\frac{p(\lambda-1)} {2\lambda}}.
\end{equation}
A change of variable yields
$$I_4(t,h,x)\le C\,(I_{4,1}(t,h,x) + I_{4,2}(t,h,x))$$
\noindent
with
\begin{eqnarray*}
I_{4,1}(t,h,x)&=& E\left(\left\vert\int_0^{h}ds\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}dy
G_\lambda(t+h-s,x-y)b(s,y,u(s,y)) \right\vert^p \right),\\
I_{4,2}(t,h,x)&=&
E\left(\left\vert\int_0^{t} ds\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} dy\,
G_\lambda(t-s,x-y)\right.\right.\\
&& \quad\quad\times \left.\left(b\left(s+h,y,u(s+h,y)\right)-
b\left(s,y,u(s,y)\right)\right)\Bigg\vert^p \right).
\end{eqnarray*}
Applying H\"older inequality (\ref{holder}) to integrals with respect to the measure $G_\lambda(t+h-s,x-y)\,ds dy$,
the growth assumption on $b$ and (\ref{borne0}) we get
\begin{eqnarray*}
I_{4,1}(t,h,x)&\le& C \left(1+\sup_{0\le s\le t}\sup_{x\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s,x)\vert^p )\right) \\
&& \quad\times \left(\displaystyle\int_0^{h} ds \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} dy\,
G_\lambda(t+h-s,x-y)\right)^{p}.
\end{eqnarray*}
Since $\int_{{\rm I~\hspace{-1.15ex}R}} G_\lambda(t+h-s,x-y)\,dy = 1$, we obtain
\begin{equation}
\label{i41}
I_{4,1}(t,h,x)\le C h^p.
\end{equation}
Again H\" older inequality applied to integral w.r.t. the measure
$G_\lambda(t-s, x-y)\, dsdy$, Fubini's theorem and the Lipschitz property of $b$ imply
\begin{eqnarray*}
I_{4,2}(t,h,x)& \le & C \left(\int_0^t \left(h^p + \sup_{y\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s+h,y)-u(s,y)\vert^p )\right)\, ds\right) \\
& & \times
\left( \int_0^{T}\int_{{\rm I~\hspace{-1.15ex}R}}
G_\lambda(t-s,x-y) \,dsdy\right).
\end{eqnarray*}
Hence
\begin{equation}
\label{i42}
I_{4,2}(t,h,x) \le C
\left( h^p +\int_0^t \sup_{y\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s+h,y)-u(s,y)\vert^p )\,ds\right).
\end{equation}
Then, putting together (\ref{sum})-(\ref{i42}) we obtain for $0<\alpha<\frac{\lambda - 1}{2 \lambda}$
\begin{eqnarray*}
\sup_{x\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(t+h,x)-u(t,x)\vert^p )&\le& C\,h^{p \min(\frac{\rho}{\lambda}, \alpha)}\\
&+&
C \int_0^t \sup_{x\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s+h,x)-u(s,x)\vert^p )ds.
\end{eqnarray*}
Finally, the estimate (\ref{temps}) follows from the standard Gronwall lemma.
\newpage
\noindent
Consider now the increments in the space variable. We want to check that for any $T>0$, $p \in [2, \infty),
x \in {\rm I~\hspace{-1.15ex}R}$, $z$ in a compact set $K$ of ${\rm I~\hspace{-1.15ex}R}$ and $\beta \in(0, \rho \wedge (\frac{\lambda-1}{2}))$,
\begin{equation}
\label{espace}
\sup_{0\le t\le T}\sup_{x\in {\rm I~\hspace{-1.15ex}R}} E(\mid u(t,x+z)-u(t,x)\mid ^p)\le C\,\vert z\vert^{\beta p}.
\end{equation}
\noindent
We write
\begin{equation}
\label{sum2}
E(\mid u(t,x+z)-u(t,x)\mid ^p)\le C \sum_{i=1}^3 J_i(t,z,x),
\end{equation}
with
\begin{eqnarray*}
J_1(t,z,x)&=& E \left\vert \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} (G_\lambda(t,x+z-y)-G_\lambda(t,x-y))u_0(y)dy \right\vert ^p,\\
J_2(t,z,x)&=& E\left(\left\vert\int_0^t \hspace{-2mm} \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}
[G_\lambda(t-s,x+z-y)-G_\lambda(t-s,x-y)]\right.\right.\\
&& \quad \quad \quad \quad \hspace{2cm}
\left.\times \sigma(s,y,u(s,y))W(dy,ds) \Big \vert^p \right),\\
J_3(t,z,x)&=& E\left(\left\vert\int_0^{t} ds \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} dy\,
[G_\lambda(t-s,x+z-y)-G_\lambda(t-s,x-y)]\right.\right.\\
&& \quad \quad \quad \quad\hspace{3,5cm}\left.\times
\,b(s,y,u(s,y))\Big\vert^p \right).
\end{eqnarray*}
In the remainder of the proof we are going to establish separate upper bounds for $J_1, J_2$ and $J_3$.\\
\noindent
A change of variable gives immediately
$$
J_1(t,z,x) = E \left\vert \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} G_\lambda(t,x-y)\left(u_0(y+z)
-u_0(y)\right)\,dy \right\vert ^p.
$$
Applying again H\"older's inequality (\ref{holder}) to integral w.r.t. the measure
$G_\lambda(t,x-y)\,dy$, the assumption
${\bf (H_1.2)}$ and Fubini's theorem we obtain
\begin{eqnarray*}
J_1(t,z,x)&\le & \, C \left(\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\, G_\lambda(t,x-y) \sup_{ y\in {\rm I~\hspace{-1.15ex}R}} E(\vert
u_0(y+z)-u_0(y)\vert^p)\, dy\right)\\
&\le & \, C \left(\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}\, G_\lambda(t,x-y) \vert z\vert^{\rho\,p}\, dy\right)\le \, C
\,\vert z\vert^{\rho\,p}.
\end{eqnarray*}
\noindent
Burkholder's inequality and H\"older's inequality (\ref{holder}) applied to integrals with respect to the
measure $[G_\lambda(t-s,x+z-y)-G_\lambda(t-s,x-y)]^2 ds dy$, the linear growth assumption on $\sigma$
and (\ref{borne0}) imply
\begin{eqnarray*}
J_2(t,z,x)&\le & C \,\left(1+\sup_{0\le t\le T}\sup_{x\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(t,x)\vert^p )\right)\\
&& \times \left(\left\vert\int_0^t ds \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}
dy\; [G_\lambda(t-s,x+z-y)-G_\lambda(t-s,x-y)]^2 \right\vert^{p/2} \right)\\
J_2(t,z,x)&\le & C\left(\int_0^t ds
\int_{{\rm I~\hspace{-1.15ex}R}} dy\,\left\vert{\cal F}(e^{-2\pi i z \cdot}e^{-(t-s)\mid\cdot\mid^\lambda })
- {\cal F}(e^{-(t-s)\mid \cdot\mid^\lambda }) \right\vert^2 (x-y)\right)^{p/2}\\
&\le & C \,\left(\int_0^t ds
\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}}dy \,\left\vert e^{-2\pi i z y}e^{-(t-s)\mid \,y\,\mid^\lambda }-
e^{-(t-s)\mid \,y\,\mid^\lambda } \right\vert^2 \right)^{p/2}\\
&\le & C\,( J_{2,1}(t,z,x)+ J_{2,2}(t,z,x)),
\end{eqnarray*}
\noindent
where we have used the property that
${\cal F}\left(f(x)\right)(\xi + a) =
{\cal F}\left( e^{-2i\pi ax} f(x)\right)(\xi)$
and the Plancherel identity, and where we denote
\begin{eqnarray*}
J_{2,1}(t,z,x)& = & \left(\int_0^t ds
\displaystyle{\int_{\vert y \vert \le 1}}dy \left\vert e^{-2\pi i z y}e^{-(t-s)\mid \,y\,\mid^\lambda }-
e^{-(t-s)\mid \,y\,\mid^\lambda } \right\vert^2 \right)^{p/2},\\
J_{2,2}(t,z,x)& = & \left(\int_0^t ds
\displaystyle{\int_{\vert y \vert >1}}dy \left\vert e^{-2\pi i z y}e^{-(t-s)\mid \,y\,\mid^\lambda }-
e^{-(t-s)\mid \,y\,\mid^\lambda } \right\vert^2 \right)^{p/2}.
\end{eqnarray*}
\noindent
We therefore have, by the mean value theorem
\begin{equation}
\label{j21}
J_{2,1}(t,z,x) \le C \vert z\vert^p.
\end{equation}
\noindent
On the other hand, for any $0<\beta <\frac{\lambda - 1}{2}$
\begin{eqnarray}
\label{j22}
&&J_{2,2}(t,z,x)\nonumber\\
&& \quad =\left(\int_0^t\hspace{-2mm}
\displaystyle{\int_{\vert y \vert >1}} e^{-2(t-s)\mid y\mid^\lambda }
\left\vert e^{-2\pi i z y}-
1 \right\vert^{2 \beta} \left\vert e^{-2\pi i z y}-
1 \right\vert^{2 - 2 \beta}ds dy\right)^{p/2}\nonumber\\
& & \quad \le C \left(\int_0^t ds
\displaystyle{\int_{\vert y \vert >1}}dy\, e^{-(t-s)\mid \,y\,\mid^\lambda }
\vert y\vert^{2 \beta} \vert z\vert^{2 \beta}\right)^{p/2}\nonumber\\
& & \quad \le C \vert z\vert^{\beta p} \left(\displaystyle{\int_{\vert y \vert >1}}dy\, \vert y\vert^{2 \beta}
\int_0^t ds\, \, e^{-(t-s)\mid \,y\,\mid^\lambda }
\right)^{p/2} \nonumber\\
&& \quad \le C \vert z\vert^{ \beta p} \left(\displaystyle{\int_{\vert y \vert >1}}
\frac{dy}{\vert y\vert^{\lambda - 2 \beta}}\right)^{p/2} \le C \vert z\vert^{\beta p},
\end{eqnarray}
the last integral being finite since $\lambda - 2\beta>1$.
Finally, by a change of variable, the Lipschitz property of $b$, and H\"older's inequality,
\begin{eqnarray}
\label{j3}
J_3(t,z,x)&\le & E\left(\left\vert\int_0^{t} ds \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} dy\,
G_\lambda(t-s,x-y)\right.\right.\nonumber\\
&& \quad\quad \times\Bigg.\left.
[b(s,y+z,u(s,y+z))- b(s,y,u(s,y))]\right\vert^p \Bigg)\nonumber\\
&\le &
C\left( z^p + \int_0^t \sup_{y\in {\rm I~\hspace{-1.15ex}R}}E(\vert u(s,y+z)-u(s,y)\vert^p )\,ds\right).
\end{eqnarray}
\noindent
Then (\ref{espace}) follows from (\ref{sum2})-(\ref{j3}) and Gronwall's lemma.
The H\"older continuity in the time and space variables results from Kolmogorov criterion.\hfill $\Box$
\begin{remark}
Hypothesis ${\bf (H_1.2)}$ is useful to have H\"older continuity up to time 0.
If we discard ${\bf (H_1.2)}$ and assume instead that
$$
{\bf (H_1.3)}\quad E\left(\int_{{\rm I~\hspace{-1.15ex}R}}\displaystyle|u_0(y)|\,dy\right)^p < \infty
$$ then $\omega$-almost surely, the function $\left( t,x\right)
\longmapsto u\left( t,x\right) \left( \omega \right) $ belongs
to
${\cal{C}} ^{\alpha ,\beta }\left( \left[ \epsilon,T\right] \times {\rm I~\hspace{-1.15ex}R} \right) $ for $0<\alpha < \frac{ \lambda -1}{2 \lambda }$ and
$0<\beta< \frac{\lambda-1}{2}$, for any $\epsilon >0$.
\end{remark}
Indeed, we slightly modify the preceding proof to bound
$$I_1(t,h,x)= E\left\vert \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} (G_\lambda(t+h,x-y)-
G_\lambda(t,x-y))u_0(y)dy \right\vert^p,
$$
and
$$
J_1(t,z,x) = E \left\vert \displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} (G_\lambda(t,x+z-y)-G_\lambda(t,x-y))u_0(y)dy \right\vert ^p.
$$
First we bound
$$I_1(t,h,x) \leq
\sup_{z\in {\rm I~\hspace{-1.15ex}R}}\vert G_\lambda(t+h, z)-G_\lambda(t, z) \vert^p\,\cdot
E\left(\int_{{\rm I~\hspace{-1.15ex}R}}\displaystyle|u_0(y)|\,dy\right)^p.
$$
The following estimates are elementary:
$$G_\lambda(t+h, z)-G_\lambda(t, z) =
\int_{{\rm I~\hspace{-1.15ex}R}} e^{2 i\pi z\xi}
(e^{-(t+h)\vert\xi\vert^\lambda} - e^{-t\vert\xi\vert^\lambda})\, d\xi,
$$
$$|\,G_\lambda(t+h, z)-G_\lambda(t, z)| \leq
\int_{{\rm I~\hspace{-1.15ex}R}} e^{-t\vert\xi\vert^\lambda} |\,e^{-h\vert\xi\vert^\lambda}-1|\, d\xi,
$$
and hence, for $t\geq\epsilon$,
$$
|\,G_\lambda(t+h, z)-G_\lambda(t, z)| \leq
h \,\int_{{\rm I~\hspace{-1.15ex}R}} e^{-\epsilon\vert\xi\vert^\lambda}\vert\xi\vert^\lambda\, d\xi = C\,h.
$$
Hence
$$I_1(t,h,x) \leq C\, h^p.$$
As for the space increments, we bound
$$G_\lambda(t, x+z)-G_\lambda(t, x) =
\int_{{\rm I~\hspace{-1.15ex}R}} e^{2 i\pi x \xi} e^{-t\vert\xi\vert^\lambda}
(e^{2 i\pi z \xi} - 1)\, d\xi,
$$
$$
|\,G_\lambda(t, x+z)-G_\lambda(t, x)| \leq
\vert z\vert\,
\int_{{\rm I~\hspace{-1.15ex}R}} e^{-\epsilon\vert\xi\vert^\lambda}2 \pi \vert\xi\vert\, d\xi =
C\,\vert z\vert
$$
for $t\geq\epsilon$. Hence
$$J_1(t,z,x) \leq C\, \vert z\vert^p.$$
The rest of the proof is the same as for theorem \ref{holdercont}. \hfill $\Box$
\section{Appendix}
\setcounter{equation}{0}
\begin{Lemma}
Let $f, h$ be two functions defined on ${\rm I~\hspace{-1.15ex}R}$ and $\mu$ a positive measure such that $f \cdot h \in L^1(\mu)$. Then, for all
$q>1$, we have:
\begin{equation}
\label{holder}
\left\vert\int f\cdot \vert h \vert d\mu \right\vert^q\le \left(\int \vert f\vert^q\cdot \vert h\vert d\mu\right)
\left(\int \vert h\vert d\mu\right)^{q-1}.
\end{equation}
\end{Lemma}
\noindent \textbf{Proof.} Set $d\nu= \vert h\vert\, d\mu$; then the result follows from H\"older's inequality applied to
$\int f\,d\nu$. \hfill $\Box$
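For instance, this is how the lemma enters the bounds on $I_2$, $I_3$ and $J_2$ above: taking $q=p/2$, $f=\sigma^2(s,y,u(s,y))$ and $d\nu=\vert h\vert\, d\mu=[G_\lambda(t+h-s,x-y)-G_\lambda(t-s,x-y)]^2\,ds\,dy$ gives
$$
\left(\int_0^t\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} \sigma^2\,d\nu\right)^{p/2}\le
\left(\int_0^t\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} \vert \sigma\vert^{p}\,d\nu\right)
\left(\int_0^t\displaystyle{\int_{{\rm I~\hspace{-1.15ex}R}}} d\nu\right)^{p/2-1},
$$
which is precisely the step performed after Burkholder's inequality.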
\smallskip\par\noindent
The following elementary lemma is an extension of Gronwall's lemma, akin to Lemma 3.3 established in Walsh \cite{Wa}.
\begin{Lemma}
\label{gronwall}Let $\theta > 0$.
Let $(f_n, n\in \N)$
be a sequence of non-negative functions on $[0,T]$ and $\alpha, \beta$ be non-negative real numbers such that for $0\le t\le T,\, n\geq 1$
\begin{equation}
f_n(t)\le \alpha+ \int_0^t \beta\,f_{n-1}(s) (t-s)^{\theta -1}ds.
\label{gronvol}
\end{equation}
If $\sup_{0\le t \le T}f_0(t)=M$, then for $n\geq 1$,
\begin{equation}
f_n(t)\le \frac{1}{2} \left(\alpha + \alpha
\exp \left(\frac{2 \beta t^{\theta}}{\theta}\right) +
\frac{M}{n!} \left( \frac{2 \beta t^\theta}{\theta}\right)^n\right).
\end{equation}
In particular, $\sup_{n \geq 0}\sup_{0\le t \le T}f_n(t)<\infty $, and if $\alpha=0$, then
$\sum_{n \geq 0} f_n(t)$ converges uniformly on $[0,T]$.
\end{Lemma}
\par\noindent \textbf{Proof.}
Let us prove by induction that, for $n\geq 1$,
\begin{equation}
\label{induc}
f_n(t)\le \alpha \Bigg(1 + \sum_{1\leq k \leq n-1} \frac{2^{k-1}}{k!}
\left(\frac{\beta t^\theta}{\theta}\right)^k \Bigg) + \;M\,
\frac{2^{n-1}}{n!} \left( \frac{\beta t^\theta}{\theta}\right)^n.
\end{equation}
The initial step is readily checked:
$$
f_1(t) \leq \alpha + \int_0^t \beta\,M (t-s)^{\theta -1}ds
=\alpha + M\,\frac{\beta t^\theta}{\theta}.
$$
Now, by (\ref{gronvol}) and the induction hypothesis, we have
$$
f_n(t)\le \alpha +
\int_0^t \beta \left( \alpha + \alpha
\sum_{1\leq k \leq n-2} \frac{2^{k-1}}{k!}
\left(\frac{\beta s^\theta}{\theta}\right)^k \right.
$$
\begin{equation}
+ \left. M\,\frac{ 2^{n-2}}{(n-1)!}
\left( \frac{\beta s^\theta}{\theta}\right)^{n-1}\right)
(t-s)^{\theta -1}\,ds.
\end{equation}
Consider
\begin{equation}
\int_0^t s^{k\theta} (t-s)^{\theta-1} ds \leq
\int_0^{t/2} (t-s)^{(k+1)\theta-1} ds +
\int_{t/2}^t s^{(k+1)\theta -1} ds.
\end{equation}
Hence we may bound
\begin{equation}
\label{eq:beta}
\int_0^t s^{k\theta} (t-s)^{\theta-1} ds
\leq 2 \frac{t^{(k+1)\theta}}{(k+1)\theta}.
\end{equation}
For instance, (\ref{eq:beta}) with $k=n-1$ turns the highest-order term into
$$
\int_0^t \beta\, M\,\frac{2^{n-2}}{(n-1)!}\left(\frac{\beta s^\theta}{\theta}\right)^{n-1}(t-s)^{\theta -1}\,ds
\;\le\; M\,\frac{2^{n-1}}{n!}\left(\frac{\beta t^\theta}{\theta}\right)^{n},
$$
and the lower-order terms are handled in the same way. Summation over $k$ brings (\ref{induc}). \hfill $\Box$
\noindent
The study of warped discs dates back to Laplace's (1805) study of the
motions of the satellites of Jupiter, in which he showed that each
satellite precessed around an axis on which the orbit-averaged torques from the
quadrupole moment of the planet and the tidal field from the Sun
cancelled. The locus of the circular rings defined by these axes, now
called the Laplace surface, is the expected shape of a dissipative
low-viscosity disc in this potential \citep[for a review see][]{tre09}.
More recent studies of warped accretion discs began with \cite{bp75},
who pointed out that an accretion disc orbiting a spinning black hole
(BH) would be subject to Lense--Thirring torque if its
orbital axis were not aligned with the spin axis of the BH; this
torque leads to precession of the axis of a test particle on a
circular orbit of radius $r$ at an angular speed
$\bomega=2G\bfL_\bullet/(r^3c^2)$, where $\bfL_\bullet$ is the
angular momentum of the BH\footnote{The quadrupole moment of the BH
also leads to precession, but this is usually less important as its
effects fall off faster with radius by a factor $r^{-1/2}$.}.
We call discs `quadrupole' or `Lense--Thirring' discs depending on
which determines the torque from the central body. There are
fundamental differences in the behavior of warped quadrupole and
Lense--Thirring discs. The first is that if the spin axis of the
central body is reversed, the Lense--Thirring torque is also reversed
(eq.\ \ref{eq:lt}) but the quadrupole torque is not (eq.\
\ref{eq:quad}). A second and more fundamental difference is the sign
of the torque: for small inclinations the quadrupole torque induces
retrograde precession of the angular momentum of the disc around the
spin axis of the central body, whereas the Lense--Thirring torque
induces prograde precession. The shape of a steady-state warped disc
is determined by the requirement that the sum of the torques from all
external sources equals the divergence of the angular-momentum
currents from transport within the disc (eqs.\
\ref{eq:ogone}--\ref{eq:ogfour}); thus the difference in sign of the
quadrupole and Lense--Thirring torque leads to fundamental differences
in the geometry of the corresponding discs (\S\ref{sec:invisc}).
Warps are also categorized as `small-amplitude' or
`large-amplitude' depending on whether the amplitude of the warp is
smaller or larger than the disc thickness. The first self-consistent
equations governing warps in viscous fluid discs were derived by
\cite{pp83} in the small-amplitude approximation; their treatment
assumed (as we do in this paper) that the equation of state is barotropic, that
the disc material at radius $r$ is azimuthally symmetric about some
symmetry axis $\bfn(r)$ parallel to the local angular-momentum vector,
that the disc is thin ($H/r\ll1$), and that the time evolution of the
disc is slow ($\p/\p t\ll\Omega$ where $\Omega^2=GM/r^3$ is the
squared angular speed of a Keplerian ring). Among other results
\cite{pp83} found that the behavior of near-Keplerian discs is
complicated by a global resonance between the azimuthal and
radial frequencies $\Omega$ and $\kappa$ of test particles in a
Keplerian potential. Non-resonant behavior requires that
\begin{equation}
\alpha\quad\mbox{or}\quad \left|1-\kappa^2/\Omega^2\right|\ga H/r
\label{eq:nonres}
\end{equation}
where $\alpha$ is the dimensionless Shakura--Sunyaev (1973) viscosity
parameter (eq.\ \ref{eq:alpha}). Most astrophysical discs are
non-resonant in this sense, and we shall assume that this is so in our
analysis. An additional complication, which we shall ignore, is that
the strong, oscillating, shearing flows generated by this
near-resonance are likely to be unstable to the development of
turbulence \citep[see][and references therein]{ol13b}, especially for the low viscosities and large warps that
occupy much of our discussion.
The equations governing the viscous evolution of thin discs with
large-amplitude warps were derived by \cite{pri92} and
\cite{o99}; a simplified local derivation of the equations is given by
\cite{ol13a}. These authors point out that the evolution of a twisted
disc depends on three conceptually distinct transport coefficients: $\nu_1$ is
the usual viscosity associated with flat accretion discs, which
produces a torque parallel to the local disc normal\footnote{\label{footnote1}Note that
$\bfn(r)$ is the normal to the orbital plane of the ring at radius
$r$ but not the normal to the disc surface at radius $r$, which in
general depends on azimuth.} $\bfn(r)$ that
tends to bring adjacent rings to the same angular speed; $\nu_2$ is
associated with the shear normal to the disc and produces a torque
proportional to $\upartial\bfn/\upartial r$ that tends to bring
adjacent rings to the same orientation; and $\nu_3$ produces a torque
that is proportional to $\bfn\cross\p\bfn/\p r$ and advects angular
momentum in a warped disc. In general these
three transport coefficients are not equal, and a specific model for the stress
tensor in the disc fluid is required to determine their values.
\cite{o99} carries out this determination for Shakura--Sunyaev discs,
in which the shear and bulk viscosity are given by equation
(\ref{eq:alpha}); see for example Fig.\ \ref{fig:two}. However, it is
unclear how directly this treatment applies to real discs,
where the stress tensor is thought to be determined by
magnetohydrodynamic (MHD) turbulence (see \S\ref{sec:visc}).
The evolution and steady-state shape of warped accretion discs can be
determined by a variety of competing effects: the quadrupole or
Lense--Thirring torque from the central body; mass and angular-momentum
transport through the disc due to viscosity; the tidal field from a
companion object (the Sun for planetary satellites or a stellar
companion for X-ray binary stars); the self-gravity of the disc;
radiation pressure from the central object; magnetic fields; etc.
We shall not consider radiation pressure \citep{pri96} or
magnetic fields \citep{lai99} in this paper, although some of the
phenomena that we describe have analogs when these effects are
important. We distinguish `high-viscosity' from `low-viscosity'
discs depending on whether the torque associated with viscous
angular-momentum transport plays a dominant role in determining the
shape of the warped disc (see \S\ref{sec:approx}).
What we mean by the self-gravity of the disc needs to be
amplified. There are different ways in which discs can be
`self-gravitating'. (i) The radial gravitational force from the disc
can be comparable to the gravity from the host BH, which requires that
the surface density $\Sigma \ga M/r^2$; this case is not relevant for most accretion
discs and we shall not discuss it further. (ii) Within the disc, the
vertical gravitational force from the disc can be comparable to the
vertical gravity from the BH; this requires that the density in the
disc is of order $M/r^3$ or that Toomre's (1964) $Q$ parameter (eq.\
\ref{eq:toomredef}) is of order unity. Models of accretion discs with
$Q\simeq 1$ were first described by \cite{pac78}; in accretion discs
surrounding supermassive BHs in active galactic nuclei (AGN) this
condition is likely to be satisfied at distances exceeding $\sim
0.01\,\mbox{pc}$, and such discs may fragment into stars
\citep{2003MNRAS.339..937G}. (iii) The apsidal and/or nodal
precession rate of the disc may be dominated by self-gravity; for AGN
accretion discs this requires far less mass than cases (i) or (ii) and
this is the case that we focus on here.
Remarkably, almost all previous studies of warped Lense--Thirring
discs follow Bardeen and Petterson in considering only torques from
the central body and viscous torques in their analyses. We shall show
that the other two effects listed in the preceding
paragraph -- gravitational torques from the companion and the
self-gravity of the disc -- can introduce qualitatively new phenomena
in the behavior of warped discs surrounding stellar-mass and
supermassive BHs, respectively. In particular, (i) warped
low-viscosity discs exhibit a sharp depression in their surface
density near the radius where the warp is strongest; (ii)
steady-state Lense--Thirring discs do not exist, at least within the
standard thin disc description, for viscosities below a critical value
that depends on the obliquity (the angle between the BH spin angular
momentum and the companion orbital angular momentum); (iii) warped low-viscosity discs in
which self-gravity is important can develop strong short-wavelength
bending waves.
As a preliminary step, \S\S\,\ref{sec:ext} and \ref{sec:invisc}
derive the steady-state properties of warped discs in which viscosity
is negligible. Then \S\,\ref{sec:approx} provides a broad-brush
overview of the competing effects that determine the behavior of
warped discs. Section \ref{sec:visc} derives the equations of motion
for a thin, viscous disc subjected to external torques, following
\cite{pri92} and \cite{o99}, and \S\,\ref{sec:results} describes our
numerical methods and the results for both quadrupole and
Lense--Thirring discs in systems with a binary companion. Section
\ref{sec:sg} describes the behavior of self-gravitating warped
discs. Section \ref{sec:other} relates our findings to earlier work on
warped accretion discs. Sections \ref{sec:xrb} and \ref{sec:agn}
apply our results to accretion discs around stellar-mass BHs in binary
systems, around supermassive black holes in AGN. Finally,
\S\ref{sec:summary} contains a brief summary of our conclusions.
\subsection{External torques}
\label{sec:ext}
\noindent
In this paper we consider three types of external torque that can warp
an accretion disc. In each case we shall assume that the torque is
weak -- the fractional change per orbit in the angular momentum of an
orbiting fluid element is small -- so we can work with the
orbit-averaged torque. In particular we define $\bfT(r,\bfn,t)$ to be
the torque per unit mass averaged over a circular orbit at radius $r$
with orbit normal $\bfn$.
\paragraph*{Quadrupole torque:} In the system examined
by Laplace, the central body is a planet of mass $M$, radius $R_p$,
and quadrupole gravitational harmonic $J_2$. If the planet's spin
axis is along $\bfn_p$, the torque per unit mass on an orbiting test particle is
\begin{equation}
\bfT_p=\frac{\epsilon_p}{r^3}(\bfn\cdot\bfn_p)\,\bfn\cross\bfn_p
\quad\mbox{where}\quad
\epsilon_p=\frac{3}{2}GMJ_2R_p^2\,.
\label{eq:quad}
\end{equation}
The quadrupole torque is also relevant to circumbinary accretion
discs; in the case of a binary with masses $M_1$ and $M_2$ on a
circular orbit with separation $a\ll r$, we replace $M$ by $M_1+M_2$
and $J_2R_p^2$ by $\frac{1}{2}M_1M_2\,a^2/(M_1+M_2)^2$.
\paragraph*{Lense--Thirring torque:} The central body can also be a BH
of mass $M$ and angular momentum $\bfL_\bullet=GM^2a_\bullet\,\bfn_\bullet/c$ where
$c$ is the speed of light, $\bfn_\bullet$ is the spin axis of the BH
and $0\le a_\bullet<1$ is the dimensionless spin parameter of the BH.
The angular momentum of a test particle orbiting the BH
precesses as if it were subject to a classical torque (the
Lense--Thirring torque; see \citealt{llfields})
\begin{equation}
\bfT_{\rm LT}=-\frac{\epsilon_{\rm LT}}{r^{5/2}}\,\bfn\cross\bfn_\bullet
\quad\mbox{where}\quad \epsilon_{\rm LT}={2(GM)^{5/2}a_\bullet\over
c^3}=2R_g^{5/2}c^2a_\bullet\,,
\label{eq:lt}
\end{equation}
where $R_g\equiv GM/c^2\ll r$ is the gravitational radius of the
BH.
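As a consistency check, this expression follows directly from the precession rate quoted above: the specific angular momentum of a circular orbit is $(GMr)^{1/2}\,\bfn$, so
$$
\bfT_{\rm LT}=\bomega\cross\big[(GMr)^{1/2}\,\bfn\big]
=\frac{2G\bfL_\bullet}{r^3c^2}\cross(GMr)^{1/2}\,\bfn
=-\frac{2(GM)^{5/2}a_\bullet}{c^3r^{5/2}}\,\bfn\cross\bfn_\bullet,
$$
which reproduces equation (\ref{eq:lt}).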
\paragraph*{Companion torque:} The central body, whether a planet or a
BH, may be accompanied by a companion star of mass $\mstar$, on a
circular orbit with radius $\rstar\gg r$. Then the gravitational
potential of the companion can be approximated by its quadrupole
component, which after averaging over the companion orbit yields a
torque
\begin{equation}
\bfT_\star=\epsilon_\star r^2
(\bfn\cdot\bfn_\star)\,\bfn\cross\bfn_\star \quad\mbox{where}\quad
\epsilon_\star=\frac{3G\mstar}{4\rstar^3}.
\label{eq:comp}
\end{equation}
\subsection{Inviscid discs}
\label{sec:invisc}
\noindent
Following Laplace, we first consider a thin disc of material orbiting
a planet with non-zero obliquity (the obliquity is
$\cos^{-1}\bfn_p\cdot\bfn_\star$). The disc is subject to torques
from the quadrupole moment of the planet, $\bfT_p$ (eq.\
\ref{eq:quad}), and from the companion star around which the planet
orbits, $\bfT_\star$ (eq.\ \ref{eq:comp}). In the absence of pressure,
viscosity, self-gravity, or other collective effects in the disc, the
fluid rings at different radii precess independently, so the disc
cannot retain its coherence unless the total torque $\bfT_\star +
\bfT_p=0$ at each radius. This requires
\begin{equation}
r^5 (\bfn\cdot\bfn_\star)\, \bfn\cross\bfn_\star +
\frac{\epsilon_p}{\epsilon_\star}(\bfn\cdot\bfn_p)\,
\bfn\cross\bfn_p=0,
\end{equation}
which can be rewritten as
\begin{equation}
\left(\frac{r}{r_w}\right)^5 (\bfn\cdot\bfn_\star)\,
\bfn\cross\bfn_\star + (\bfn\cdot\bfn_p)\,\bfn\cross\bfn_p=0 \quad\mbox{where} \quad r_w^5\equiv\frac{\epsilon_p}{\epsilon_\star}=2
J_2\frac{M}{\mstar}R_p^2\rstar^3
\label{eq:lap}
\end{equation}
defines the characteristic radius $r_w$ at which the warp is most prominent \citep{pmg66}.
We restrict ourselves to the usual case in which the disc normal $\bfn(r)$ is coplanar
with $\bfn_p$ and $\bfn_\star$ (for a more general discussion see \citealt{tre09}).
Then the unit vectors $\bfn(r)$, $\bfn_p$, $\bfn_\star$ can be
specified by their azimuthal angles in this plane, $\phi(r)$,
$\phi_p$, $\phi_\star$. Without loss of generality we may assume
$\phi_\star=\half\upi$, so the obliquity is $\phi_p-\phi_\star=\phi_p-\half\upi$.
Then equation (\ref{eq:lap}) can be rewritten as
\begin{equation}
\left(\frac{r}{r_w}\right)^5 = \frac{\sin 2(\phi-\phi_p)}{\sin
2\phi}.
\label{eq:lapa}
\end{equation}
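To spell out the reduction: for unit vectors with azimuths $\phi_1$ and $\phi_2$ in the common plane,
$$
(\bfn_1\cdot\bfn_2)\,\bfn_1\cross\bfn_2=\cos(\phi_1-\phi_2)\sin(\phi_2-\phi_1)\,\hat{\bf e}
=-\tfrac{1}{2}\sin2(\phi_1-\phi_2)\,\hat{\bf e},
$$
where $\hat{\bf e}$ is the unit normal to that plane. Substituting this identity into equation (\ref{eq:lap}) and using $\sin2(\phi-\phi_\star)=-\sin2\phi$ for $\phi_\star=\half\upi$ gives equation (\ref{eq:lapa}); the same steps, together with $\bfn\cross\bfn_\bullet=\sin(\phi_\bullet-\phi)\,\hat{\bf e}$, yield equation (\ref{eq:lapk}) below.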
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig1a.ps}
\includegraphics[width=0.5\textwidth]{fig1b.ps}
\caption{(left) The orientation of a stationary, inviscid disc
orbiting a planet that has an obliquity of $60^\circ$. An orbit with
angular momentum aligned with the planetary orbit has azimuthal
angle $\phi=90^\circ$ and an orbit aligned with the planetary
equator has $\phi=90^\circ+60^\circ=150^\circ$. The black solid
circles denote the classical Laplace surface, the blue circles
denote the same spatial surface as traced by retrograde orbits,
and the red open circles denote dynamically unstable
surfaces. (right) The same as the left panel, but for an inviscid
disc orbiting a spinning BH; like the planet, the BH
orbits a companion star with an obliquity of $60^\circ$.}
\label{fig:one}
\end{figure}
The solutions to equation (\ref{eq:lapa}) are shown in the left panel
of Fig.\ \ref{fig:one} for obliquity $\phi_p-\phi_\star=60^\circ$.
The `classical' Laplace surface, shown as solid black circles, is
aligned with the planet's orbit around the star at large radii
($\phi\to\half\upi$ as $r\to\infty$). The surface shown by solid blue
circles is similar, but composed of retrograde orbits (the disc
angular-momentum vector is anti-aligned with the planetary orbital
angular momentum at large radii, and anti-aligned with the planetary
spin at small radii). The surfaces shown by open red circles are also
solutions of equation (\ref{eq:lapa}) but they are unstable to small
perturbations in $\bfn$ \citep{tre09}, and we will not consider them
further. On the classical Laplace surface, the azimuth of the disc
normal $\phi$ increases smoothly and continuously from $\phi_\star$ to
$\phi_p$, so that the disc plane gradually twists from the orbital
plane of the planet to the equatorial plane of the planet as its
radius shrinks.
We next carry out the analogous derivation for an inviscid thin disc
orbiting a spinning BH with a companion star. The disc is subject to
Lense--Thirring torque, $\bfT_{\rm LT}$ (eq.\ \ref{eq:lt}), and torque
from the companion star, $\bfT_\star$ (eq.\ \ref{eq:comp}). The
equilibrium shape defined by $\bfT_\star+\bfT_{\rm LT}=0$ is given by
\begin{equation}
r^{9/2} (\bfn\cdot\bfn_\star)\, \bfn\cross\bfn_\star -
\frac{\epsilon_{\rm LT}}{\epsilon_\star}\bfn\cross\bfn_\bullet=0
\end{equation}
which can be rewritten as
\begin{equation}
\left(\frac{r}{r_w}\right)^{9/2} (\bfn\cdot\bfn_\star)\,
\bfn\cross\bfn_\star - \bfn\cross\bfn_\bullet=0 \quad\mbox{where}
\quad r_w^{9/2}=\frac{\epsilon_{\rm LT}}{\epsilon_\star}=\frac{8a_\bullet}{3}\frac{M}{\mstar}R_g^{3/2}\rstar^3.
\label{eq:kerr}
\end{equation}
The analog to equation (\ref{eq:lapa}) is
\begin{equation}
\left(\frac{r}{r_w}\right)^{9/2}=-\frac{2\sin(\phi-\phi_\bullet)}{\sin2\phi},
\label{eq:lapk}
\end{equation}
where $\phi_\bullet$ is the azimuthal angle of the BH spin
axis. The obliquity is $\phi_\bullet-\phi_\star=\phi_\bullet-\half\upi$.
The solutions to equation (\ref{eq:lapk}) are shown in the right panel
of Fig.\ \ref{fig:one} for obliquity
$\phi_\bullet-\phi_\star=60^\circ$. In contrast to the quadrupole
case, the solution that is aligned with the companion-star orbit at
large radii ($\phi\to\half\upi$ as $r\to\infty$, shown as black filled
circles) terminates just outside the characteristic radius $r_w$ (this
solution is mirrored by an unstable solution, shown by open red
circles, that has no relevance to our discussion). The solution that
is aligned with the equator of the BH at small radii, shown as the
upper set of filled blue circles, approaches $\phi=\upi$ at large
radii; in other words the disc is perpendicular to the companion-star
orbital plane, which is inconsistent with the expectation that the
disc is fed by material lost from the companion. Material spiraling
in from the companion star along the black sequence of points in the
right panel of Fig.\ \ref{fig:one} must therefore jump to one of the
two blue sequences before proceeding inwards to the
BH\footnote{J. Touma (private communication) points out that the time
evolution of the orbit normals in the Lense--Thirring disc is the
same as that of Colombo's top, which describes the behavior of the
spin axis of the Moon due to the torque from the Earth on the lunar
figure and precession of the lunar orbit due to the Sun
\citep{col66,hen87}. The solutions shown in the right panel of
Fig.\ \ref{fig:one} correspond to the Cassini states of the Moon,
of which there are two or four depending on whether the lunar
semimajor axis is less than or greater than 34 Earth radii
\citep{ward}.}.
The lower blue sequence represents a solution in which
the disc angular momentum is anti-aligned with the BH spin at small
radii ($\phi=\phi_\bullet-\upi$) and anti-aligned with the orbital
angular momentum of the companion at large radii. This is equivalent
to a solution in which the obliquity is $120^\circ$ and the disc
angular momentum is aligned with the BH spin at small radii and the
companion's orbital angular momentum at large radii. Thus a smooth
surface similar to the classical Laplace surface seen in the left
panel of Fig.\ \ref{fig:one} exists around a spinning BH if and only if the obliquity
exceeds $90^\circ$.
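The curves in Fig.\ \ref{fig:one} follow from one-dimensional root finding in $\phi$ at each radius. The short Python fragment below is an illustrative sketch only (not the code used to make the figure; the bracketing intervals are our own assumptions), tracing the classical quadrupole branch of equation (\ref{eq:lapa}) and the black Lense--Thirring branch of equation (\ref{eq:lapk}):
\begin{verbatim}
# Sketch: trace the inviscid equilibrium surfaces by root finding.
# Obliquity 60 deg, phi_star = 90 deg (conventions as in the text).
import numpy as np
from scipy.optimize import brentq

phi_ax = np.radians(150.0)      # azimuth of the planet/BH axis

def quad_branch(x):
    # classical Laplace surface: sin 2(phi - phi_ax) = x^5 sin 2 phi,
    # with phi between phi_star and phi_ax (bracket is an assumption)
    f = lambda phi: np.sin(2*(phi - phi_ax)) - x**5*np.sin(2*phi)
    return brentq(f, np.pi/2 + 1e-9, phi_ax - 1e-9)

def lt_branch(x):
    # black branch: x^{9/2} sin 2 phi = -2 sin(phi - phi_ax);
    # brentq raises ValueError once the branch has terminated
    f = lambda phi: x**4.5*np.sin(2*phi) + 2*np.sin(phi - phi_ax)
    return brentq(f, np.pi/4, np.pi/2 - 1e-9)

for x in (0.3, 1.0, 3.0):
    print('quadrupole   ', x, np.degrees(quad_branch(x)))
for x in (1.0, 2.0, 5.0):
    try:
        print('Lense-Thirring', x, np.degrees(lt_branch(x)))
    except ValueError:
        print('Lense-Thirring', x, 'branch terminated')
\end{verbatim}
The failure of the bracketed root search at small $x$ in the Lense--Thirring case mirrors the termination of the black sequence just outside $r_w$ in the right panel of Fig.\ \ref{fig:one}.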
These conclusions raise two obvious questions: how is this unusual behavior
related to the standard Bardeen--Petterson analysis of a warped
accretion disc orbiting a spinning BH? And how do warped accretion discs
actually behave in real astrophysical systems?
\subsection{An approximate analysis of viscous warped discs}
\label{sec:approx}
\noindent
To show the relation between the findings of the preceding subsection
and the Bardeen--Petterson treatment of viscous warped discs, we
examine the approximate strength of the torques from various sources.
Suppose that the disc is strongly warped near some radius
$r$. The torque per unit mass due to a companion is
(eq.\ \ref{eq:comp})
\begin{equation}
T_\star\simeq \frac{G\mstar r^2}{\rstar^3},
\end{equation}
where we have neglected all factors of order unity. Similarly, the torque from
the quadrupole moment of the central body is (eq.\ \ref{eq:quad})
\begin{equation}
T_p\simeq \frac{GMJ_2R_p^2}{r^3};
\end{equation}
and the Lense--Thirring torque is (eq.\ \ref{eq:lt})
\begin{equation}
T_{\rm LT}\simeq \frac{R_g^{5/2}c^2a_\bullet}{r^{5/2}}.
\label{eq:fffggg}
\end{equation}
The torque per unit mass due to viscous stress is $T_v\simeq
\eta\Omega/\rho$ where $\eta$ is the viscosity and $\rho$ is the
density in the disc. In the Shakura--Sunyaev $\alpha$-model of
viscosity (eq.\ \ref{eq:alpha}) $\eta=\alpha \rho c_s^2$ where $c_s$
is the sound speed, and $\alpha$ is a constant, typically assumed to
be $\sim 0.1$. However, the Shakura--Sunyaev model was developed to
model viscous forces in the disc arising from Keplerian shear, whereas
the warp shape is determined by viscous forces due to much smaller
shears normal to the disc plane. To represent the second kind of force
we use an $\alpha$-model with a different parameter $\alpha_\perp$
(for small-amplitude warps $\alpha_\perp=\frac{1}{2}\alpha^{-1}$; see
eq.\ \ref{eq:qalpha}). Thus
\begin{equation}
T_v\simeq \alpha_\perp c_s^2.
\label{eq:visct}
\end{equation}
For simplicity we shall usually assume that the disc is isothermal, in
which case the viscous torque is independent of radius. Finally, the torque
per unit mass due to the self-gravity of the disc is roughly
\begin{equation}
T_{sg} \simeq \upi G\Sigma r.
\end{equation}
where $\Sigma$ is the surface density near radius $r$.
\paragraph*{Viscous quadrupole discs with a companion}
The quadrupole torque $T_p$ decreases with radius, while
the torque from the companion $T_\star$ increases with radius. The two
are equal at
\begin{equation}
r_w\simeq \left(J_2\frac{M}{\mstar}R_p^2\,\rstar^3\right)^{1/5}.
\end{equation}
which agrees with the precise definition of the warp radius in equation
(\ref{eq:lap}) to within a factor of order unity. Since the viscous
torque $T_v$ is independent of radius in an isothermal disc, and one of
$T_\star$, $T_p$ is always larger than $T_\star(r_w)$, the viscous
torque is always smaller than the torque due to the central body or
the companion if $\beta\alpha_\perp <1$, where
\begin{equation}
\beta\equiv
\frac{T_v/\alpha_\perp}{T_\star(r_w)}=\frac{c_s^2R_p}{GMJ_2^{2/5}}
\left(\frac{\rstar}{R_p}\right)^{9/5}\left(\frac{M}{\mstar}\right)^{3/5}.
\label{eq:betaone}
\end{equation}
This agrees with the precise definition of $\beta$ that we give later
in the paper (eq.\ \ref{eq:qdef}) to within 1 per cent. In
the terminology introduced at the start of the paper, a disc with
$\beta\alpha_\perp\la 1$ is a `low-viscosity' disc.
\paragraph*{Viscous Lense--Thirring discs with a companion} The
Lense--Thirring torque $T_{\rm LT}$ and the companion torque $T_\star$
are equal at
\begin{equation}
r_w\simeq \left(a_\bullet\frac{M}{\mstar} R_g^{3/2}\rstar^3\right)^{2/9},
\end{equation}
and the ratio of the viscous torque to the Lense--Thirring or companion torque at
$r_w$ is then $\beta\alpha_\perp$ where
\begin{equation}
\beta\equiv \frac{T_v/\alpha_\perp}{T_\star(r_w)}=
\frac{c_s^2}{c^2a_\bullet^{4/9}}\left(\frac{\rstar}{R_g}\right)^{5/3}\left(\frac{M}{\mstar}\right)^{5/9},
\label{eq:betatwo}
\end{equation}
consistent with the precise definition in equation (\ref{eq:qdef}) to
within 15 per cent.
We expect that the shape of a low-viscosity disc
($\beta\alpha_\perp\la 1$) is determined by the competition
between the torque from the central body (quadrupole or
Lense--Thirring torque) and the torque from the companion, rather than
by viscous torques. On the other hand the surface-density distribution
in a warped disc is always determined by the viscous torque, no matter
how small, since the other two torques both scale linearly with the
surface density and hence do not establish the surface-density
distribution.
The usual Bardeen--Petterson description implicitly assumes that
$\beta\alpha_\perp\gg1$ and neglects the companion torque. In this case the warp
will be strongest at a smaller radius $r_w'$ given by
\begin{equation}
r_w'\simeq\left\{\begin{array}{ll} r_w/(\alpha_\perp\beta)^{1/3}\simeq
(J_2R_p^2GM/\alpha_\perp c_s^2)^{1/3};
& \qquad\mbox{quadrupole disc} \\[10pt]
r_w/(\alpha_\perp\beta)^{2/5}\simeq (a_\bullet/\alpha_\perp)^{2/5}\left(c/c_s\right)^{4/5}R_g
& \qquad \mbox{Lense--Thirring disc}.
\end{array}\right.
\label{eq:rwprime}
\end{equation}
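These scalings follow from balancing the torque of the central body against the viscous torque (eq.\ \ref{eq:visct}). For example, in the Lense--Thirring case setting $T_{\rm LT}\simeq T_v$ at $r_w'$ gives
$$
\frac{R_g^{5/2}c^2a_\bullet}{(r_w')^{5/2}}\simeq\alpha_\perp c_s^2
\quad\Longrightarrow\quad
r_w'\simeq\left(\frac{a_\bullet}{\alpha_\perp}\right)^{2/5}\left(\frac{c}{c_s}\right)^{4/5}R_g,
$$
and the quadrupole expression follows in the same way from $T_p\simeq T_v$.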
\paragraph*{Viscous Lense--Thirring discs with self-gravity}
In accretion discs surrounding supermassive BHs at the centres of
galaxies, there is no companion body (except in the case of a binary
BH; see \S\ref{sec:bbh}). Thus the torque $T_\star$ can be neglected. However, the disc can
be massive enough that its self-gravity plays a role in determining
its shape. In plausible disc models the surface density falls off
slowly enough that this torque increases outward (see
\S\ref{sec:agn}), and equals the Lense--Thirring torque at
\begin{equation}
r_w\simeq \left[\frac{a_\bullet R_g^{5/2}c^2}{\upi G\Sigma(r_w)}\right]^{2/7};
\label{eq:rwself}
\end{equation}
note that this is an implicit equation for the warp radius $r_w$ since
the surface density depends on radius. The ratio of the viscous torque, equation
(\ref{eq:visct}), to the Lense--Thirring and self-gravity torques at
$r_w$ is then $\gamma\alpha_\perp$, where
\begin{equation}
\gamma\equiv
\frac{T_v/\alpha_\perp}{T_{sg}(r_w)}=\frac{c_s^2}{\upi G\Sigma r}\bigg|_{r_w}.
\label{eq:betaself}
\end{equation}
Note that $\gamma\simeq Q (H/r)$ where $Q$ is Toomre's parameter (eq.\
\ref{eq:toomredef}) and $H=c_s/\Omega$ is the disc thickness. Thus the
viscosity becomes low (in the sense that $\gamma\ll1$) in thin discs ($H/r\ll1$)
long before they become gravitationally unstable ($Q<1$).
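To verify the first relation: with Toomre's parameter $Q=c_s\kappa/(\upi G\Sigma)$ (eq.\ \ref{eq:toomredef}) and $\kappa=\Omega$, $H=c_s/\Omega$ for a Keplerian disc,
$$
\gamma=\frac{c_s^2}{\upi G\Sigma r}=\frac{c_s\Omega}{\upi G\Sigma}\cdot\frac{c_s}{\Omega r}\simeq Q\,\frac{H}{r}.
$$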
\section{Evolution of viscous discs with companions}
\subsection{Evolution equations}
\label{sec:visc}
\noindent
The equations that describe the evolution of a warped, thin accretion
disc are derived by \cite{pri92}, \cite{o99}, and \cite{ol13a}. Our starting point
is \cite{o99}'s equations (121) and (122). The first of these is the
equation of continuity
\begin{equation}
2\upi r{\p\Sigma\over\p t}+{\p C_M\over\p r}=0, \qquad
C_M\equiv 2\upi r\Sigma v_r,
\label{eq:ogone}
\end{equation}
where $\Sigma(r,t)$ is the surface density, $v_r(r,t)$ is the radial
drift velocity, and $C_M(r,t)$ is the mass current (rate of outward
flow of disc mass through radius $r$). The second is an equation for
angular momentum conservation,
\begin{equation}
2\upi r{\p\bfL\over \p t} +\frac{\p\bfC_L}{\p r}=2\upi r\Sigma\bfT,
\label{eq:ogtwo}
\end{equation}
where $\Omega(r)\equiv (GM/r^3)^{1/2}$ is the Keplerian angular
speed, $\bfL=\Sigma r^2\Omega\,\bfn$ is the angular momentum per unit
area, $\bfT$ is the torque per unit mass from sources external to the
disc, and $\bfC_L$ is the angular-momentum current, given by the sum
of advective and viscous currents,
\begin{align}
\bfC_L\equiv &\bfC_{\rm adv}+\bfC_{\rm visc}, \nonumber \\[10pt]
\bfC_{\rm adv}(r,t)= & 2\upi r^3\Omega\Sigma v_r\,\bfn = r^2\Omega\,\bfn\,C_M,\nonumber \\
\bfC_{\rm visc}(r,t)= & -2\upi r^2\Sigma c_s^2 \Big(Q_1\bfn +
Q_2r{\p\bfn\over\p r} + Q_3r\, \bfn\cross{\p\bfn\over\p
r}\Big).
\label{eq:ogfour}
\end{align}
Here $c_s$ is the sound speed, which is constant in an isothermal disc
(as we shall assume from now on), and as usual\footnotemark[2] $\bfn(r,t)$ is the unit vector normal to
the disc at radius $r$. The dimensionless coefficients $Q_1$, $Q_2$,
$Q_3$ depend on the equation of state, the viscosity, and the
warp $\psi\equiv r|\p\bfn/\p r|$. For a flat Keplerian disc, $Q_1$ is related to the kinematic
viscosity by $\nu=-\frac{2}{3}Q_1c_s^2/\Omega$ and the mean-square
height of the disc above the midplane is $H^2=c_s^2/\Omega^2$.
These equations are based on the assumptions \citep{o99} that (i) the
disc is thin, $H/r\ll1$; (ii) the fluid obeys the compressible
Navier--Stokes equation; (iii) the fluid equation of state is
barotropic, i.e., the viscosity is dynamically important but not
thermodynamically important; (iv) the disc is non-resonant in the
sense of equation (\ref{eq:nonres}). In the calculations below we shall also
assume that (v) the viscosity is described by the Shakura--Sunyaev
$\alpha$-model, that is, the shear and bulk viscosities $\eta$ and
$\zeta$ are related to the pressure $p$ by
\begin{equation}
\eta=\alpha\,p/\Omega, \quad \zeta=\alpha_b\, p/\Omega,
\label{eq:alpha}
\end{equation}
where $\alpha$ and $\alpha_b$ are constants. For a flat, isothermal
disc the kinematic viscosity is $\nu=\eta/\rho=\alpha c_s^2/\Omega$,
so $\alpha=-\frac{2}{3}Q_1$.
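(For the fiducial value $\alpha=0.2$ adopted in \S\ref{sec:methods} below, this gives $Q_1=-\frac{3}{2}\times0.2=-0.3$.)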
Now take the scalar product of (\ref{eq:ogtwo}) with $\bfn$. Since
$\bfn\cdot\bfn=1$, $\bfn\cdot \p\bfn/\p t=\bfn\cdot \p\bfn/\p r=0$.
Moreover $\bfn\cdot\bfT=0$ for the Lense--Thirring torque and for any
torque arising from a gravitational potential, so we
shall assume that this condition holds in general. We also use
equation (\ref{eq:ogone}) to eliminate $\p\Sigma/\p t$. The result is
an expression for the mass current,
\begin{equation}
C_M = 2\upi r\Sigma
v_r=-\frac{2}{r\Omega}\bfn\cdot\frac{\p\bfC_{\rm visc}}{\p
r}=\frac{4\upi c_s^2}{r\Omega}\frac{\p}{\p
r}\left(\Sigma r^2 Q_1\right)-\frac{4\upi\Sigma c_s^2r^2}{\Omega}
Q_2\bigg|\frac{\p\bfn}{\p r}\bigg|^2.
\label{eq:masscurr}
\end{equation}
We now introduce several new variables: the dimensionless radius
$x\equiv r/r_w$ with the warp radius $r_w$ given by (\ref{eq:lap}) or (\ref{eq:kerr});
the dimensionless time $\tau\equiv t\, c_s^2/(GM r_w)^{1/2}$
(roughly, for a Shakura--Sunyaev disc with $\alpha\sim 1$ this is time
measured in units of the viscous diffusion time at the warp radius);
and $y(r,t)\equiv \Sigma(r,t)(GM r_w)^{1/2}$ (with dimensions
of angular momentum per unit area). Equation (\ref{eq:ogtwo}) becomes
\begin{equation}
\frac{\p\bfL}{\p\tau}+\frac{1}{x}\frac{\p}{\p x}(\bfc_{\rm
visc}+x^{1/2}c_M\bfn)=\frac{y}{\beta}\left\{\begin{array}{ll}x^2(\bfn\cdot\bfn_\star)\bfn\cross\bfn_\star+x^{-3}(\bfn\cdot\bfn_p)\bfn\cross\bfn_p
& \quad \mbox{quadrupole} \\[5pt]
x^2(\bfn\cdot\bfn_\star)\bfn\cross\bfn_\star-x^{-5/2}\bfn\cross\bfn_\bullet
& \quad \mbox{Lense--Thirring}
\end{array}\right.
\label{eq:ogthree}
\end{equation}
where $\bfn=\bfL/|\bfL|=\bfL/(\Sigma r^2\Omega)=\bfL/(yx^{1/2})$, $y=|\bfL|/x^{1/2}$,
\begin{align}
\bfc_{\rm visc}\equiv &\;
\frac{1}{2\upi c_s^2}\bigg(\frac{GM}{r_w^3}\bigg)^{1/2} \bfC_{\rm visc}=-yx^2\bigg(Q_1\bfn+Q_2 x\frac{\p\bfn}{\p x}
+Q_3x\bfn\cross\frac{\p \bfn}{\p x}\bigg), \nonumber \\[10pt]
c_M\equiv & \;\frac{GM}{2\upi r_wc_s^2}C_M=-2x^{1/2}\bfn\cdot\frac{\p\bfc_{\rm visc}}{\p x}=2x^{1/2}\bigg[\frac{\p}{\p
x}\left(y x^2Q_1\right)-y x^3Q_2\Big|\frac{\p\bfn}{\p x}\Big|^2\bigg].
\label{eq:cldef}
\end{align}
The dimensionless parameter $\beta$ is given by
\begin{equation}
\beta\equiv
\frac{4M}{3\mstar}\frac{c_s^2r_w}{GM}\left(\frac{\rstar}{r_w}\right)^3=\left\{\begin{array}{ll}
\displaystyle
\frac{2^{8/5}}{3J_2^{2/5}}\frac{c_s^2R_p}{GM}\left(\frac{\rstar}{R_p}\right)^{9/5}\left(\frac{M}{\mstar}\right)^{3/5}
&\qquad\mbox{quadrupole} \\[15pt]\displaystyle
\frac{2^{2/3}}{3^{5/9}a_\bullet^{4/9}}\frac{c_s^2}{c^2}\left(\frac{\rstar}{R_g}\right)^{5/3}\left(\frac{M}{\mstar}\right)^{5/9}
&\qquad\mbox{Lense--Thirring}
\end{array}\right.
\label{eq:qdef}
\end{equation}
and represents the ratio of the strength of the viscous torque to the
external torque at the characteristic warp radius $r_w$ (cf.\ eqs.\
\ref{eq:betaone} and \ref{eq:betatwo}).
Equation (\ref{eq:ogthree}) is a parabolic partial differential
equation for the three components of $\bfL$.
The dimensionless viscosity coefficients $Q_i$
are functions of the equation of state and of the warp $\psi\equiv x|\upartial\bfn/\upartial x|$
\citep{o99}. Ogilvie shows that for an isothermal $\alpha$-disc and
small warps ($\psi\ll1$),
\begin{equation}
Q_1=-\frac{3\alpha}{2} + \mbox{O}(\psi^2),\qquad
Q_2=\frac{1+7\alpha^2}{\alpha(4+\alpha^2)}+\mbox{O}(\psi^2)=
\frac{1}{4\alpha} +\mbox{O}(\alpha,\psi^2).
\label{eq:qsmall}
\end{equation}
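For example, at $\alpha=0.2$ the small-warp expression gives $Q_2=1.28/0.808=1.58416$, precisely the flat-disc value used in the numerical calculations of \S\ref{sec:methods}.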
We shall also examine a simplified set of equations that appear to
contain most of the important physics of equations
(\ref{eq:ogthree})--(\ref{eq:qdef}). In these equations (i) we examine
only the steady-state disc, that is, we set $\p\bfL/\p t=0$ in
equation (\ref{eq:ogthree}); (ii) we set $Q_3=0$, since it appears to
play no important role in the dynamics; and (iii) we neglect the
dependence of $Q_1$ and $Q_2$ on the warp $\psi$, that is, we treat
them as constants. The steady-state assumption implies that the mass
current $c_M$ is a constant of the problem, independent of radius. We
have
\begin{align}
\frac{dy}{dx}&+y\left(\frac{2}{x}-\frac{Q_2x}{Q_1}|d\bfn/dx|^2\right)=\frac{c_M}{2Q_1x^{5/2}},
\nonumber \\[10pt]
\frac{d^2\bfn}{dx^2} &+\frac{d\bfn}{dx}\left[\frac{Q_1/Q_2+3}{x}
-\frac{c_M}{Q_2x^{5/2}y}+\frac{d\log y}{dx}\right]+ |d\bfn/dx|^2\bfn
\nonumber \\
&= \qquad -\frac{1}{\beta Q_2}\left\{\begin{array}{ll}(\bfn\cdot\bfn_\star)\bfn\cross\bfn_\star+x^{-5}(\bfn\cdot\bfn_p)\bfn\cross\bfn_p
& \qquad \mbox{quadrupole} \\[5pt]
(\bfn\cdot\bfn_\star)\bfn\cross\bfn_\star-x^{-9/2}\bfn\cross\bfn_\bullet
& \qquad \mbox{Lense--Thirring}
\end{array}\right.
\label{eq:simple}
\end{align}
The three components of the unit vector $\bfn$ are related by
the constraint $|\bfn|=1$.
This simplified model is similar to Pringle's (1992) equations of
motion, in which there are two viscosities $\eta$ and $\eta_\perp$ (in
Pringle's notation, these are $\rho\nu_1$ and $\rho\nu_2$), the first
of which is associated with the Keplerian shear and the second with
shear perpendicular to the disc caused by a warp. In an $\alpha$-disc
model $\eta=\alpha \rho c_s^2$ and $\eta_\perp=\alpha_\perp\rho c_s^2$
and the two models are equivalent if
\begin{equation}
Q_1=-\frac{3\alpha}{2}, \qquad Q_2=\frac{\alpha_\perp}{2}.
\label{eq:qalpha}
\end{equation}
If $\alpha\ll1$ and the warp is small, equation (\ref{eq:qsmall})
implies that $\alpha_\perp=\frac{1}{2}\alpha^{-1}$ \citep{pp83,o99}.
Although we adopt this formalism, one should keep in mind that
angular-momentum transport in real accretion discs is thought to be
driven by MHD turbulence, which may not be well approximated by an
isotropic viscosity -- or if it is, the viscosity may not be well
approximated by the Shakura--Sunyaev $\alpha$-model. Some support for
this formalism is provided by local, non-relativistic MHD simulations
that examine the decay of an imposed epicyclic oscillation
\citep{tor00}. Global, general-relativistic MHD simulations have
tended to show solid-body precession rather than Bardeen--Petterson
alignment, although most of these correspond to the resonant regime
$\alpha < H/r$ (cf.\ eq.\ \ref{eq:nonres}), which we exclude
\citep[e.g.][]{fra07}. More recently, global but non-relativistic MHD
calculations with an approximate treatment of Lense--Thirring
precession have been performed by Sorathia et al.\ (submitted to ApJ;
see also \citealt{2013ApJ...768..133S}). They find that diffusive
damping of vertical shear is much less important than the derivation
of the Pringle--Ogilvie equations implies. This in turn implies that the
Pringle--Ogilvie + Shakura--Sunyaev formalism overestimates the
strength of viscous torques when $\alpha\ll1$ and so the importance of
tidal torques and self-gravity in accretion discs is even greater than
we find below.
\subsection{Numerical methods}
\label{sec:methods}
\paragraph*{Steady-state discs} We have solved the simplified ordinary
differential equations (\ref{eq:simple}) for steady-state discs with
constant viscosity coefficients and $Q_3=0$. We find the numerical
solution over a range of dimensionless radii $[x_a,x_b]$; typically we
choose $x_b=1/x_a=30$, although in some cases where the viscosity is
large we cover a larger range to ensure that the disc is not still
warped at either end of the integration range. The viscosity
coefficients $Q_1$ and $Q_2$ are usually fixed at their values for an
unwarped disc with $\alpha=0.2$, $\alpha_b=0$, in which case
$Q_1=-0.3$, $Q_2=1.58416$. The equations are unchanged under the
rescaling $y(x)\to\lambda y(x)$, $c_M\to \lambda c_M$, so the
normalization of the mass current $c_M$ can be chosen arbitrarily
apart from the sign. We are interested in the case in which mass flows
into the BH, so we set $c_M=-1$.
Seven boundary conditions are required for the one first-order and
three second-order equations. In the region $x\ll1$ where external
torques are negligible, the disc is assumed to be flat,
$d\bfn/dx=0$. Then the first of equations (\ref{eq:simple}) has the
solution
\begin{equation}
y(x)=\frac{c_M}{Q_1x^{3/2}} + \frac{k}{x^2},
\label{eq:qqwwrr}
\end{equation}
where $k$ is an integration constant. We assume a no-torque boundary
condition at the radius $x_{\rm ISCO}$ of the innermost stable circular
orbit, which is close to the BH; this requires that the
viscous angular-momentum current $\bfc_{\rm visc}=0$ at $x_{\rm ISCO}$
and from the first of equations (\ref{eq:cldef}) this in turn requires
$y=0$ at $x_{\rm ISCO}$. Thus
\begin{equation}
y(x)=\frac{c_M}{Q_1x^2}(x^{1/2}-x_{\rm ISCO}^{1/2}).
\label{eq:isco}
\end{equation}
We assume that the inner boundary of our integration region $x_a$ is
much larger than $x_{\rm ISCO}$ so in the region of interest
\begin{equation}
y(x)=\frac{c_M}{Q_1x^{3/2}},
\label{eq:ggg}
\end{equation}
which provides one boundary condition at $x=x_a$.
At the outer radius $x_b$ the disc should lie in the plane of the
companion-star orbit, as we would expect if the disc is fed by mass
loss from the companion. Thus $\bfn=\bfn_\star$ at $x=x_b$, which
provides three additional boundary conditions. Moreover since
$|\bfn|=1$ at all radii, we must have $\bfn\cdot\p\bfn/\p x=0$ at
$x=x_b$, which provides another boundary condition (it is
straightforward to show from the second of eqs.\ \ref{eq:simple}
that these conditions are sufficient to ensure that $|\bfn|=1$ at all
radii). Note that we do not require that the disc lies in the equator
of the central body for $x\ll1$, although it turns out to do so in all of our
numerical solutions.
Let us assume for simplicity that (i) inside the inner integration
boundary $x_a$ the external torques on the right side of the second of
equations (\ref{eq:simple}) vanish; (ii) the disc normal $\bfn$ is
nearly constant, $\bfn(x)=\bfn_0+\epsilon\bfn_1(x)$ where
$\epsilon\ll1$. Then to first order in $\epsilon$ the first of
equations (\ref{eq:simple}) is the same as for a flat disc, yielding
the solution (\ref{eq:ggg}). Substituting this result into the second
of equations (\ref{eq:simple}) and working to first order in
$\epsilon$ we find
\begin{equation}
\frac{d^2\bfn_1}{dx^2}
+\frac{3}{2x}\frac{d\bfn_1}{dx}=0\quad\mbox{with solution} \quad
\bfn_1=\bfa + \bfb x^{-1/2}
\label{eq:ibc}
\end{equation}
where $\bfa$ and $\bfb$ are constants. To avoid an unphysical
solution that grows as $x\to 0$ we must have $\bfb=0$. The component
of $\bfb$ along $\bfn$ is already guaranteed to be zero because our
earlier boundary conditions ensure that $\bfn\cdot d\bfn/dx=0$. Thus
the two components of $d\bfn/dx$ perpendicular to $\bfn$ must vanish
at the inner boundary $x_a$, which provides the final two boundary
conditions. Note that there is no similar requirement at the outer boundary, since
the parasitic solution $\bfb x^{-1/2}$ decays as $x\to\infty$.
The resulting boundary-value problem is solved using a collocation
method with an adaptive mesh (routine \textsc{d02tvf} from Numerical Algorithms
Group). To improve convergence we start with zero obliquity and
increase the obliquity in steps of $1^\circ$, using the converged
solution from each value of the obliquity as the initial guess for the
solution for the next.
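For readers without access to the NAG library, the same boundary-value problem can be posed with open-source tools. The fragment below is an illustrative sketch only (not the code used for the figures in this paper): it integrates equations (\ref{eq:simple}) for the quadrupole disc with $\beta=1$ and obliquity $60^\circ$ using the collocation routine \texttt{solve\_bvp} from \textsc{scipy}; the grid, initial guess and tolerances are our own assumptions.
\begin{verbatim}
# Sketch only: steady-state quadrupole disc solved with SciPy's
# collocation routine solve_bvp in place of NAG d02tvf.
import numpy as np
from scipy.integrate import solve_bvp

Q1, Q2 = -0.3, 1.58416     # flat-disc values for alpha=0.2, alpha_b=0
beta, cM = 1.0, -1.0       # torque ratio; inflowing mass current
xa, xb = 1.0/30.0, 30.0    # radial range in x = r/r_w
obl = np.radians(60.0)     # obliquity
n_star = np.array([0.0, 0.0, 1.0])               # companion orbit normal
n_p = np.array([0.0, np.sin(obl), np.cos(obl)])  # planetary spin axis
ex = np.array([1.0, 0.0, 0.0])    # normal to the (n_p, n_star) plane

def rhs(x, u):
    # u = (y, n, dn/dx): the quadrupole version of the steady equations
    y, n, p = u[0], u[1:4], u[4:7]
    psq = np.sum(p*p, axis=0)
    dy = cM/(2*Q1*x**2.5) - y*(2.0/x - (Q2/Q1)*x*psq)
    torque = (np.sum(n*n_star[:, None], axis=0)*np.cross(n.T, n_star).T
              + x**-5*np.sum(n*n_p[:, None], axis=0)*np.cross(n.T, n_p).T)
    dp = (-p*((Q1/Q2 + 3.0)/x - cM/(Q2*x**2.5*y) + dy/y)
          - psq*n - torque/(beta*Q2))
    return np.vstack((dy, p, dp))

def bc(ua, ub):
    na, pa = ua[1:4], ua[4:7]
    nb, pb = ub[1:4], ub[4:7]
    return np.hstack((ua[0] - cM/(Q1*xa**1.5),  # flat-disc density at x_a
                      nb - n_star,         # disc in companion plane at x_b
                      np.dot(nb, pb),      # preserves |n| = 1 at x_b
                      np.dot(pa, ex),      # the two components of dn/dx
                      np.dot(pa, np.cross(na, ex))))  # perp. to n at x_a

x = np.geomspace(xa, xb, 400)
u0 = np.zeros((7, x.size))
u0[0] = cM/(Q1*x**1.5)                 # flat-disc surface density guess
w = np.log(x/xa)/np.log(xb/xa)         # crude guess: spin axis -> orbit axis
n0 = (1.0 - w)*n_p[:, None] + w*n_star[:, None]
u0[1:4] = n0/np.linalg.norm(n0, axis=0)
sol = solve_bvp(rhs, bc, x, u0, tol=1e-6, max_nodes=100000)
print(sol.status, sol.message)
\end{verbatim}
The seven residuals returned by \texttt{bc} implement the seven boundary conditions derived above; as in our calculations, stepping the obliquity up gradually from zero (reusing each converged solution as the next initial guess) greatly improves convergence.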
\paragraph*{Time-dependent discs} We have solved the partial
differential equations (\ref{eq:ogthree}), typically over the interval
$[x_a,x_b]$ with $x_b=1/x_a=30$. Usually the viscosity coefficients
$Q_i$ are chosen to be appropriate for a disc with $\alpha=0.2$, $\alpha_b=0$. The
coefficients are determined as functions of the warp $\psi\equiv
x|\upartial\bfn/\upartial x|$ using a code generously provided by G.\
Ogilvie (see Fig.\ \ref{fig:two}); the coefficients are tabulated on a grid $0\le\psi\le10$ and
interpolated using cubic splines. Mass, and the corresponding angular
momentum for circular orbits, are added
at a constant rate with a Gaussian distribution in radius centred at
$x=10$ (i.e., well outside the warp) and the disc is followed until it
reaches a steady state. The integration is carried out using the
routine \textsc{d03pcf} from Numerical Algorithms Group. A complication is
that the dependence of the coefficient $Q_1$ on $\psi$ means that
equation (\ref{eq:ogthree}) is third-order in the spatial derivative;
to reduce this to a second-order equation we treat the mass current
$c_M$ as a fourth dependent variable in addition to the three
components of the angular momentum $\bfL$ and integrate the second of
equations (\ref{eq:cldef}) along with equations
(\ref{eq:ogthree}).
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth,bb=1 150 600 700]{fig2.ps}}
\caption{The viscosity coefficients $-Q_1$, $Q_2$, $Q_3$ for an
isothermal disc with viscosity described by a Shakura--Sunyaev
$\alpha$-model (eq.\ \ref{eq:alpha}) having $\alpha=0.2$,
$\alpha_b=0$ (solid lines) or $\alpha=0.1$, $\alpha_b=0.1$ (dashed
lines). The horizontal coordinate is the dimensionless warp
$\psi\equiv r|d\bfn/dr|$. We plot $-Q_1$ because $Q_1$ is normally
negative for small warps; for $\alpha=0.2$, $\alpha_b=0$, $Q_1$ is
negative for all $\psi$, while for $\alpha=0.1$, $\alpha_b=0.1$, $Q_1$
is positive for $\psi>1.106$. The calculations follow the precepts
of Ogilvie (1999) and employ a code provided by G.\ Ogilvie. }
\label{fig:two}
\end{figure}
As in the steady-state case we assume that the disc is aligned with
the companion-star orbit at large radii, so $\bfn=\bfn_\star$ at the
outer boundary $x=x_b$. We also assume that the steady-state relation
(\ref{eq:ggg}) between the surface density and the mass current in a
flat disc applies at the inner boundary $x_a$; this is plausible since
we expect the disc to achieve an approximate steady-state most rapidly
at small radii. We assume that there is an outer disc boundary $x_o\gg
x_b$ at which a no-torque boundary condition applies. In the
steady-state disc, arguments analogous to those leading to equations
(\ref{eq:qqwwrr})--(\ref{eq:ggg}) imply
\begin{equation}
y(x)=-\frac{c_M}{Q_1x^2}(x_o^{1/2}-x^{1/2}).
\end{equation}
This implies in turn that at the outer boundary
\begin{equation}
y(x_b)=-\frac{c_M}{Q_1x_b^2}(x_o^{1/2}-x_b^{1/2})\quad \mbox{and}\quad \bfc_L=c_Mx_o^{1/2}\bfn_\star .
\end{equation}
Typically we use $x_o=10x_b$.
Finally, the angular-momentum current at $x_{\rm ISCO}$ is $\bfc_L=x^{1/2}_{\rm
ISCO}c_M\bfn$ which can be taken to be zero since $x_{\rm ISCO}$ is
very small. Since the disc is flat
inside the warp radius and the inner integration boundary $x_a$ is
much less than the warp radius, we may assume that $\bfc_L$ is
constant between $x_{\rm ISCO}$ and $x_a$ so we set $\bfc_L(x_a)=0$.
We usually start with a low-density disc and zero obliquity, and add
mass and angular momentum outside the warp radius at a constant
rate until the disc reaches a steady state; then we slowly increase
the obliquity to the desired value.
\subsection{Results}
\label{sec:results}
\paragraph*{Quadrupole discs}
The left panel of Fig.\ \ref{fig:three} shows the solutions of equation
(\ref{eq:simple}) for a planet obliquity of $60^\circ$ and a range of
viscosity parameters $\beta$ from 1000 to 0.001. As one might expect, very viscous discs
($\beta\gg1$) exhibit a smooth, gradual warp while low-viscosity discs
($\beta\ll1$) are close to the inviscid disc (eq.\ \ref{eq:lap}),
shown as the solid circles.
The right panel shows the surface density $y(x)$. Here the behavior is
more interesting. While the surface density in very viscous discs is
close to that of a flat disc (dashed line, from eq.\ \ref{eq:ggg}), as
the viscosity is lowered the disc develops a sharp valley -- almost two
orders of magnitude -- in the surface density near the warp radius
$r_w$. The valley presumably occurs because the viscous stresses are larger when
the warp $\psi=x|d\bfn/d x|$ is large, so the mass and
angular-momentum current can be carried by a smaller surface
density. The asymptotic behavior of the surface density as the
viscosity becomes small is obtained from the first of equations
(\ref{eq:simple}) by substituting for $|d\bfn/dx|$ the value from the
inviscid solution (\ref{eq:lap}); this is shown as the solid circles
in the right panel of Fig.\ \ref{fig:three}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig3a.ps}
\includegraphics[width=0.5\textwidth]{fig3b.ps}
\caption{(left) The orientation of a stationary disc orbiting a planet
that has an obliquity of $60^\circ$ (from eqs. \ref{eq:simple}). The
viscosity coefficients are $Q_1=-0.3$, $Q_2=1.58416$, appropriate
for a flat disc with $\alpha=0.2$, $\alpha_b=0$, and the mass current is
$c_M=-1$. The solutions shown have the parameter $\beta$ (eq.\
\ref{eq:qdef}) representing the ratio of viscous torques to external
torques equal to 1000 (cyan), 100 (green), 10 (magenta), 1 (blue),
0.1 (yellow), 0.01 (red), 0.001 (black). The solid black circles
represent the inviscid solution (the Laplace surface), given by
equation (\ref{eq:lap}) and shown in the left panel of Fig.\
\ref{fig:one}. (right) The surface density $y(x)$ for the discs
shown in the left panel. The solid circles show the solution given
by the first of equations (\ref{eq:simple}) and the orientation
$\bfn(x)$ of the inviscid disc. The dashed line shows the surface
density for a flat disc (eq.\ \ref{eq:ggg}).}
\label{fig:three}
\end{figure}
The nature of the surface-density valley associated with the warp is
illustrated further in Fig.\ \ref{fig:four}, which shows the
surface-density profile for low-viscosity discs ($\beta\to0$) for
obliquities $10^\circ,20^\circ,\ldots,80^\circ$. As the obliquity
grows the valley becomes deeper: at an obliquity of $80^\circ$ the
surface density is only 0.2 per cent of the surface density in an unwarped
disc at the bottom of the valley, near radius $1.00r_w$.
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth,bb= 0 140 575 700]{fig4.ps}}
\caption{The surface density $y(x)$ for quadrupole discs with
negligible viscosity ($\beta\to0$) and obliquity
$10^\circ, 20^\circ,\ldots,80^\circ$. The other parameters of the discs
are the same as in Fig.\ \ref{fig:three}. The dashed line shows the
surface density for a flat disc (eq.\ \ref{eq:ggg}).}
\label{fig:four}
\end{figure}
The steady-state warped discs also exhibit some spirality or twisting;
this is shown in Fig.\ \ref{fig:foura} by plotting the horizontal
components $(n_x,n_y)$ of the unit vector normal to the disc.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig5a.ps}
\includegraphics[width=0.5\textwidth]{fig5b.ps}
\caption{The horizontal components $(n_x,n_y)$ of the unit normal
vector for quadrupole discs (left panel) and Lense--Thirring discs
(right panel). The obliquity is $60^\circ$ and the other parameters
are as described in Fig.\ \ref{fig:three} (left panel) or
\ref{fig:five} (right panel). In both panels the parameter $\beta$ (eq.\
\ref{eq:qdef}), representing the ratio of viscous torques to external
torques, is equal to 1000 (cyan), 100 (green), 10 (magenta), 1 (blue);
in the left panel there are additional curves for $\beta= 0.1$
(yellow), 0.01 (red), 0.001 (black) and in the right panel there is
an additional curve for the critical value $\beta=0.333$ (black).}
\label{fig:foura}
\end{figure}
\paragraph*{Lense--Thirring discs}
Fig.\ \ref{fig:five} is analogous to Fig.\ \ref{fig:three}: it shows
the solutions of equation (\ref{eq:simple}) for a Lense--Thirring disc when the
BH obliquity is $60^\circ$. The viscosity parameter $\beta$
ranges from 1000 to 0.333; for $\beta<0.333$ no steady-state solution
exists. Similarly, the right panel of Fig.\ \ref{fig:five} shows the
horizontal components of the unit normal in Lense--Thirring discs with
$60^\circ$, to be compared with the left panel of the same figure for
quadrupole discs.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig6a.ps}
\includegraphics[width=0.5\textwidth]{fig6b.ps}
\caption{(left) The orientation of a stationary disc orbiting a BH
that has an obliquity of $60^\circ$ (from eqs. \ref{eq:simple}). The
parameters are the same as in Fig.\ \ref{fig:three}, except that
the parameter $\beta$ (eq.\ \ref{eq:qdef}) representing the ratio of
viscous torques to external torques equals 1000 (cyan), 100 (green),
10 (magenta), 1 (blue), and 0.333 (black). For $\beta<0.333$ no
solution exists. The solid black circles represent the inviscid
solution, given by equation (\ref{eq:kerr}) and shown in the right
panel of Fig.\ \ref{fig:one}. (right) The surface density $y(x)$
for the discs shown in the left panel. The dashed line shows the
surface density for a flat disc (eq.\ \ref{eq:ggg}).}
\label{fig:five}
\end{figure}
The absence of steady-state solutions for Lense--Thirring discs for viscosity
less than some critical value at fixed obliquity -- or obliquity larger
than a critical value at fixed viscosity -- is a novel feature not seen in the
quadrupole discs, and presumably related to the jump seen in the
orientation of inviscid Lense--Thirring discs (\S\ref{sec:invisc}).
Fig.\ \ref{fig:six} illustrates how the critical obliquity and
viscosity parameter are related. The black curve shows the
critical values for the simplified steady-state equations
(\ref{eq:simple}), with $Q_1=-0.3$, $Q_2 = 1.58416$, $Q_3=0$. The
critical values are defined here by the point where the maximum warp
$\psi=10$; this is generally close to the curve with $\psi\to\infty$
and for $\psi\ga 10$ it is unlikely that our model is accurate in
any case.
\begin{figure}
\centerline{\includegraphics[width=0.8\textwidth,bb=0 150 580 700]{fig7.ps}}
\caption{Above the critical obliquity shown here, steady-state
Lense--Thirring disc solutions do not exist. The parameter $\beta$
measures the strength of the viscous forces (eq.\
\ref{eq:qdef}). The solid lines are for Shakura--Sunyaev discs with
$\alpha=0.2$, $\alpha_b=0$ and the dashed line is for
$\alpha=\alpha_b=0.1$. The black and red curves are derived from
steady-state and time-dependent disc models (eqs.\ \ref{eq:ogthree}
and \ref{eq:simple}) with the viscosity parameters $Q_1$ and $Q_2$
set to their unwarped values and $Q_3=0$. The green curves are for
$Q_i$ depending on the local warp, as in Fig.\ \ref{fig:two}.
}
\label{fig:six}
\end{figure}
The red curve in Fig.\ \ref{fig:six} shows the critical values
obtained by solving the time-dependent equations (\ref{eq:ogthree})
for the same constant values of $Q_i$; in this case the critical
values are defined by the obliquity at which the maximum warp of the
time-dependent solution exceeds $\psi=10$. The agreement of the red
and black curves is partly a successful check of our steady-state
and time-dependent numerical codes, but more importantly it implies
that time-dependent discs with obliquity above the critical value will
develop singular warps -- that is, for example, there is no oscillating
solution of the time-dependent Pringle--Ogilvie equations that remains non-singular.
The green curve shows the critical values obtained from equations
(\ref{eq:ogthree}) with viscosity parameters $Q_i$ that depend on the
warp as shown in Fig.\ \ref{fig:two}. This exhibits the same
qualitative behavior as the black and red curves, demonstrating that
the critical values are not strongly dependent on the variation of
viscosity parameters with the strength of the warp.
Finally, the green dashed curve is the same as the green solid curve,
but for parameters $Q_i$ appropriate for Shakura--Sunyaev parameters
$\alpha=0.1$, $\alpha_b=0.1$.
What happens to a Lense--Thirring accretion disc when the obliquity
exceeds the critical value is not understood. Finite-time
singularities (`blow-up') are a common feature of non-linear parabolic
partial differential equations such as the Pringle--Ogilvie equations
and it is likely that the absence of a solution reflects the
approximation of the correct, hyperbolic, fluid equations with
diffusion equations. The limitations of the diffusion approximation in
warped discs are well-known: \cite{pp83} argue that a transition from
diffusive to wavelike behavior occurs when $\alpha$ decreases below
$H/r$ (see also \citealt{pl95} and
\citealt{og06}). In this regime, bending waves governed by the
pressure in the disc could transport angular momentum and smoothly
connect the inner and outer discs. The behavior of such waves in
Lense--Thirring discs is described by \cite{lop02} but only to linear
order in the warp amplitude, where the singular behavior is not
present. For finite-amplitude warps, it is far from clear how
to incorporate the required extra physics into the Pringle--Ogilvie
equations or what behavior we might expect.
The sharp changes in disc orientation seen in Fig.\ \ref{fig:five}
are reminiscent of the phenomenon of `breaking' in which the
orientation of the accretion disc changes almost discontinuously
\citep{nk12,nix12}, although there are substantial differences in the
phenomenology and interpretation (see \S\ref{sec:other} for further
discussion).
\subsection{The behavior of the disc at the critical obliquity}
\label{sec:critical}
\noindent
At the critical obliquity or viscosity there is a radius (the
`critical radius') at which the surface density approaches zero and
the disc warp $\psi=r|d\bfn/dr|$ changes from near zero to a very
large value (black curves in Fig.\ \ref{fig:five}). We can offer some analytic
insight into this behavior.
Since the behavior of the disc changes sharply in a small radial
distance, this change is unlikely to be due to the external torques, which
vary smoothly with radius. Thus we examine the governing differential
equations (\ref{eq:ogthree}) with the right-hand side and
$\p/\p\tau$ set to zero. Then this equation states that the total angular-momentum current
$\bfc_{\rm visc}+x^{1/2}c_M\bfn$ must be independent of radius $x$. We
erect a coordinate system specified by the triple of unit vectors
$\bfe_1,\bfe_2,\bfe_3$ with $\bfe_3$ parallel to the angular-momentum
current, so $\bfc_{\rm visc}+x^{1/2}c_M\bfn=c_L\bfe_3$ with the mass
and angular-momentum currents $c_M$ and $c_L$ constants. For
simplicity we assume that the viscosity coefficients $Q_1$,
$Q_2$ are constants, and $Q_3=0$. Then
\begin{equation}
x^{1/2}c_M\bfn -Q_1x^2y(x)\bfn
-Q_2x^3y(x)\frac{d\bfn}{dx}=c_L\bfe_3.
\label{eq:one}
\end{equation}
Since $\bfn$ is a unit vector, $\bfn\cdot d\bfn/dx=0$, so
taking the dot product of equation (\ref{eq:one}) with $\bfn$ yields
\begin{equation}
x^{1/2}c_M -Q_1x^2y(x)=c_Lf(x) \quad\mbox{where}\quad
f(x)\equiv\bfn\cdot\bfe_3=n_3.
\label{eq:two}
\end{equation}
The components of (\ref{eq:one}) along $\bfe_1$ and $\bfe_2$ are
\begin{equation}
\left[x^{1/2}c_M -Q_1x^2y(x)\right]n_{1,2} - Q_2x^3y(x)\frac{dn_{1,2}}{dx}=0.
\label{eq:three}
\end{equation}
Combining equations (\ref{eq:two}) and (\ref{eq:three}) with the
conditions $\sum_{i=1}^3 n_i^2=1$, $\sum_{i=1}^3 n_i dn_i/dx=0$, we
find
\begin{equation}
\frac{df}{dx}=\frac{Q_1}{Q_2x}\frac{1-f^2}{f-x^{1/2}c_M/c_L}.
\label{eq:ode}
\end{equation}
The interesting behavior occurs if the mass and angular-momentum
current have the same sign. In this case the non-linear differential
equation (\ref{eq:ode}) has a critical point at $f=1$,
$x=(c_L/c_M)^2\equiv x_c$. If we restrict ourselves to the usual case
in which $Q_1<0$, $Q_2>0$, then near the critical point solutions must
take one of the following two forms:
\begin{enumerate}
\item $f=1$; this implies an unwarped disc with normal parallel to the
angular-momentum current. The surface density is given by equation
(\ref{eq:two}) as
\begin{equation}
y(x)=\frac{c_M}{2Q_1x_c^{5/2}}(x-x_c) + \mbox{O}(x-x_c)^2.
\label{eq:kkpp}
\end{equation}
In the usual case where the mass current $c_M<0$ this
solution is physical (positive surface density) for $x>x_c$, i.e.,
outside the critical point.
\item In this case
\begin{equation}
f(x)=1+\frac{Q_2-4Q_1}{2Q_2x_c}(x-x_c) + \mbox{O}(x-x_c)^2, \qquad
y(x)=\frac{2c_M}{Q_2x_c^{5/2}}(x-x_c) + \mbox{O}(x-x_c)^2.
\label{eq:hhyy}
\end{equation}
Since $f<1$ and $y>0$ this solution is only physical when the mass
current $c_M<0$ and then only for $x<x_c$, i.e., inside the critical
point. The angle between the angular momentum current and the disc
normal is $\theta$ where $\cos\theta=f$ so $\theta\sim (x_c-x)^{1/2}$ and the warp
$\psi=x|d\bfn/dx|\sim (x_c-x)^{-1/2}$. Thus the warp angle $\psi$ is singular at
the critical point.
\end{enumerate}
The behavior of these solutions is consistent with the behavior seen
in Fig.\ \ref{fig:five} at the critical obliquity: outside the
critical radius, the disc is flat and the surface density decreases
linearly to zero as the radius decreases to the critical radius (eq.\ \ref{eq:kkpp}), while
inside the critical radius the azimuthal angle $\phi-\half\upi$ of the warp
normal varies as $(x_c-x)^{1/2}$, and the surface density decreases
linearly to zero as the radius increases to the critical radius (eq.\
\ref{eq:hhyy}). Since the surface density is zero at the critical
point, there is no viscous angular-momentum transport across it, only
advective transport.
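These local expansions are easy to verify numerically. A minimal
sketch, in a normalization of our own choosing ($c_M=c_L=-1$, so that
$x_c=1$), integrates equation (\ref{eq:ode}) starting just inside the
critical point on the warped branch (\ref{eq:hhyy}) and confirms that
the tilt angle $\theta=\arccos f$ scales as $(x_c-x)^{1/2}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Q1, Q2 = -0.3, 1.58416     # unwarped coefficients used in Fig. 3
cM, cL = -1.0, -1.0        # same sign, so x_c = (cL/cM)**2 = 1
xc = (cL / cM) ** 2

def dfdx(x, f):
    return (Q1 / (Q2 * x)) * (1.0 - f**2) / (f - np.sqrt(x) * cM / cL)

eps = 1e-4
A = (Q2 - 4.0 * Q1) / (2.0 * Q2 * xc)    # slope from eq. (hhyy)
sol = solve_ivp(dfdx, (xc - eps, 0.5), [1.0 - A * eps],
                rtol=1e-10, atol=1e-12, dense_output=True)

for x in (0.999, 0.99, 0.9):
    theta = np.arccos(sol.sol(x)[0])
    print(x, theta / np.sqrt(xc - x))    # tends to sqrt(2A) as x -> x_c
\end{verbatim}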
\section{Evolution of viscous discs with self-gravity}
\label{sec:sg}
\noindent
Our treatment of accretion discs with self-gravity will be briefer and
more approximate than the treatment of discs with a companion in the
preceding section, for three main reasons: (i) AGN accretion discs are
the only ones in which self-gravity is likely to be important, and
these are less well-understood than accretion discs around
stellar-mass BHs; (ii) the theory of bending waves in
gas discs is remarkably sensitive to small deviations from Keplerian
motion (cf.\ eq.\ \ref{eq:nonres}); (iii) we found that warped steady-state accretion
discs around a spinning BH with a companion do not exist for some
values of the obliquity and viscosity, and this finding is only
credible if it rests on the best available disc models. In contrast we shall find
that warped discs with self-gravity exhibit interesting but physically
plausible behavior even in relatively simple disc models, and there is
no reason to believe that this behavior will change qualitatively in
more sophisticated treatments.
We shall assume that the warp is small so that linearized theory can
be used, and that the disc surface-density distribution is the same as
in a flat disc. We shall also assume a simple model for the viscous
damping of the warp.
We also ignore the effects of pressure in the disc. This assumption is
problematic because \cite{pl95} showed that in gravitationally stable
Keplerian discs ($Q>1$ in eq.\ \ref{eq:toomredef}) the dispersion
relation for bending waves is dominated by pressure rather than
self-gravity. However, (i) this result depends sensitively on whether the
disc is precisely Keplerian, and small additional effects such as
centrifugal pressure support or relativistic apsidal precession can
dramatically reduce the influence of pressure on the dispersion
relation; (ii) modifying the Pringle--Ogilvie equations to include
pressure is a difficult and unsolved problem.
The normal to the disc at radius $r$ is $\bfn=(n_x,n_y,n_z)$. We
choose the axes so that the BH spin is along the positive $z$-axis;
then since the warp is small $|n_x|,|n_y|\ll 1$. Write
$\zeta(r,t)\equiv n_x+{\rm i}n_y$; then neglecting all terms quadratic in
$\zeta$ the Lense--Thirring torque (\ref{eq:lt}) causes precession of
the angular momentum at a rate
\begin{equation}
\frac{d\zeta}{dt}(r,t)\bigg|_{\rm
LT}=\frac{2(GM_\bullet)^2a_\bullet}{c^3r^3}\,{\rm i}\zeta(r,t).
\end{equation}
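For orientation, this precession rate is simple to evaluate; a minimal
sketch (parameter choices ours, with a radius of order the AGN warp
radius estimated in \S\ref{sec:agn}):
\begin{verbatim}
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33    # cgs

def lt_rate(r, M=1e8 * Msun, a=0.5):
    # linearized Lense--Thirring precession rate, 2 (G M)^2 a / (c^3 r^3)
    return 2.0 * (G * M) ** 2 * a / (c**3 * r**3)

r = 4.3e15                                   # cm, of order r_w
print(2 * np.pi / lt_rate(r) / 3.156e7)      # precession period ~ 2e3 yr
\end{verbatim}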
The equations of motion due to the self-gravity of the warped disc are
given by classical Laplace--Lagrange theory \citep{md99},
\begin{equation}
\frac{d\zeta}{dt}(r,t)\bigg|_{sg}=-\frac{{\rm i}\upi G}{2(GM_\bullet
r)^{1/2}}\int \frac{r'\,\Sigma(r')\,dr'}{\mbox{max\,}(r,r')}\,\chi
b_{3/2}^{(1)}(\chi)[\zeta(r,t)-\zeta(r',t)]
\end{equation}
where $\Sigma(r)$ is the surface density,
$\chi=\mbox{min}\,(r,r')/\mbox{max}\,(r,r')$ and the Laplace
coefficient
\begin{equation}
b_{3/2}^{(1)}(\chi)=\frac{2}{\upi}\int_0^\upi\frac{\cos
x\,dx}{(1-2\chi\cos x
+\chi^2)^{3/2}}=\frac{4}{\upi\chi(1-\chi^2)^2}[(1+\chi^2)E(\chi)-(1-\chi^2)K(\chi)]
\end{equation}
with $K(\chi)$ and $E(\chi)$ complete elliptic integrals.
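A quick numerical cross-check of this identity is given below (a
sketch; note that SciPy's elliptic-integral routines take the
parameter $m=\chi^2$ rather than the modulus $\chi$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe, ellipk

def b32_integral(chi):
    # defining integral of the Laplace coefficient b_{3/2}^{(1)}
    f = lambda t: np.cos(t) / (1 - 2 * chi * np.cos(t) + chi**2) ** 1.5
    return 2.0 * quad(f, 0.0, np.pi)[0] / np.pi

def b32_elliptic(chi):
    # closed form; scipy's ellipk/ellipe expect m = chi**2
    m = chi**2
    return 4.0 / (np.pi * chi * (1 - chi**2) ** 2) * (
        (1 + chi**2) * ellipe(m) - (1 - chi**2) * ellipk(m))

for chi in (0.1, 0.5, 0.9):
    print(chi, b32_integral(chi), b32_elliptic(chi))
\end{verbatim}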
The equations of motion due to viscosity are derived by simplifying
equations (\ref{eq:ogtwo}) and (\ref{eq:ogfour}). The angular-momentum
current proportional to $Q_1\bfn$ and the mass current $c_M$ determine
the steady-state surface density in a flat disc, which we assume to be
given, so we drop these terms. The current proportional to $Q_3$
appears to play no essential role, so we drop this term as
well. Furthermore we assume that the sound speed $c_s$ is independent
of radius (isothermal disc), and we replace $Q_2$ by
$\frac{1}{2}\alpha_\perp$ (eq.\ \ref{eq:qalpha}). Thus we find
\begin{equation}
\frac{d\zeta}{dt}(r,t)\bigg|_v=\frac{c_s^2\alpha_\perp}{2(GM_\bullet
r^3)^{1/2}\Sigma(r,t)}\frac{\p}{\p r}r^3\Sigma(r,t)\frac{\p \zeta}{\p r}.
\end{equation}
We now look for a steady-state solution in which
$d\zeta/dt|_{LT+sg+v}=0$. We replace the radius by the
dimensionless variable $x=r/r_w$ where $r_w$ is defined for a
self-gravitating disc by equation (\ref{eq:rwself}), and we assume that the surface density is a power
law, $\Sigma(r)=\Sigma_0 /x^s$. The equations above simplify to
\begin{equation}
\frac{4}{x^{5/2}}\zeta- \int \frac{{x'}^{1-s}\,dx'}{\mbox{max\,}(x,x')}\chi
b_{3/2}^{(1)}(\chi)[\zeta(x)-\zeta(x')] - {\rm i}\gamma \alpha_\perp x^{s-1}\frac{d}{dx}x^{3-s}\frac{d\zeta}{dx}=0
\label{eq:xxcc}
\end{equation}
where $\gamma$ is the viscosity parameter defined in equation
(\ref{eq:betaself}). We impose the boundary conditions $d\zeta/dx=0$ as
$x\to0$ and $x\to\infty$ (the disc is flat near the BH, and flat far outside
the warp radius) and $\zeta\to \zeta_0$ at $x\to\infty$ (at large distances
the normal to the disc is inclined to the spin axis of the BH by an
angle $\theta=|\zeta_0| \ll1$). Since equation (\ref{eq:xxcc}) is linear, there
is no loss of generality if we set $\zeta_0=1$.
In these dimensionless units, the shape of the warp is determined by
only two parameters, the logarithmic slope of the surface-density
distribution $s$, and the viscosity parameter $\gamma
\alpha_\perp$. The relation between $\alpha$ and $\alpha_\perp$ is
discussed after equation (\ref{eq:qalpha}).
\begin{figure}
\includegraphics[width=0.95\textwidth,bb=0 150 580 700]{fig8.ps}
\caption{The steady-state shape of warped discs including
Lense--Thirring torque, self-gravity, and viscosity (eq.\
\ref{eq:xxcc}). The four panels show four different values of the
viscosity parameter $\gamma \alpha_\perp$ (eq.\
\ref{eq:betaself}). The figures plot the real and imaginary parts of
the complex inclination $\zeta$ (solid black and dashed green lines)
as a function of the radius in units of the warp radius $r_w$ (eq.\
\ref{eq:rwself}). At large radii the disc is assumed to be flat
with $\zeta=1$; since eq.\ (\ref{eq:xxcc}) is linear the results can
be scaled to any (small) inclination. At small radii the disc is
found to lie in the BH equator, $\zeta=0$. Note the different
vertical scales in the four panels. The disappearance of the
oscillations at $x<0.18$ in the lower right panel is a numerical
artifact due to limited resolution.}
\label{fig:sg}
\end{figure}
Fig.\ \ref{fig:sg} shows the solutions of equation (\ref{eq:xxcc})
for the surface-density slope $s=\frac{3}{5}$ appropriate for a
gas-pressure dominated disc (eq.\ \ref{eq:siggas}). The solid and
dashed lines show the real and imaginary parts of $\zeta(x)$. For
low-viscosity discs ($\gamma \alpha_\perp\ll1$) we find that the disc
develops bending waves inside the warp radius, and if the viscosity
is sufficiently small the bending waves can grow in amplitude by
orders of magnitude as the radius shrinks (the disappearance of the
bending waves at $x<0.18$ in the lower right panel is a numerical
artifact, which arises because the wavelength of the bending waves
becomes shorter than the resolution of the numerical grid,
$\Delta\log_{10}x=0.002$).
Many of the properties of the bending waves can be understood using a WKB
analysis \citep[][hereafter SCL83]{shu83}. We shall quote the results from this paper
without derivations. If we assume that the waves have the form
$\zeta=A_\zeta(r)\exp[{\rm i}\Phi(r)]$ with radial wavenumber $k\equiv d\Phi/dr$, then
the dispersion relation is (SCL83 eq.\ 22, with $\omega=0$ and $m=1$)
\begin{equation}
|k|=\frac{2G^{3/2}M_\bullet^{5/2}a_\bullet}{\upi c^3\Sigma(r) r^{9/2}}.
\label{eq:wkb}
\end{equation}
The WKB approximation is valid if the waves have short wavelengths,
$|k|r\ga 1$, which in turn requires that the radius is less than
the warp radius $r_w$ defined in equation (\ref{eq:rwself}); and this
in turn requires that the dimensionless variable $x$ in Fig.\
\ref{fig:sg} is small compared to unity. For plausible variations of
the surface density $\Sigma(r)$, the wavelength $2\upi/|k|$ gets
shorter and shorter as the radius shrinks.
In the absence of viscosity, the maximum inclination of the bending wave varies as
$A_\zeta(r)\propto [r^{3/2}\Sigma(r)]^{-1}$ (SCL83, eq.\ 34, with the
inclination amplitude $A_\zeta=A(r)/r$) so if the surface density falls as
$r^{-s}$ then the amplitude of the warp grows as the radius shrinks
whenever $s<\frac{3}{2}$, which is true for most disc models.
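These scalings are conveniently tabulated in dimensionless form; the
sketch below adopts our own normalization $|k|r=1$ at the warp radius:
\begin{verbatim}
import numpy as np

s = 0.6                       # Sigma ~ r^{-s}, gas-pressure slope
x = np.logspace(-2, 0, 5)     # radius in units of the warp radius

kr = x ** (s - 3.5)           # |k| r ~ r^{s-7/2}, set to 1 at r_w
wavelength = 2 * np.pi / kr   # local wavelength in units of r
amp = x ** (s - 1.5)          # inviscid amplitude ~ [r^{3/2} Sigma]^{-1}

for xi, wl, a in zip(x, wavelength, amp):
    print(f"x={xi:7.4f}  lambda/r={wl:9.3g}  amplitude={a:8.3g}")
\end{verbatim}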
The waves are spiral, as may be deduced from the offset between the
solid (real) and dashed (imaginary) curves in Fig.\ \ref{fig:sg}
(except in the lower right panel, where the viscosity is zero). The
dispersion relation (\ref{eq:wkb}) does not distinguish leading and
trailing waves but causality arguments do: trailing waves propagate
inward (i.e. negative group velocity, see SCL83 eq.\ 23) while leading
waves propagate outward. Waves excited by the warp in the outer part
of the disc and damped at small radii by viscosity must propagate
inward and hence are trailing.
In the case of low-viscosity Lense--Thirring discs that are warped
because of a companion, we found that no solutions of the
Pringle--Ogilvie equations existed above a critical obliquity. These
calculations suggest that self-gravitating discs are more
well-behaved -- that the long-range nature of the gravitational force
allows a smooth transition from the outer to the inner orientation for
any viscosity and obliquity, through the excitation of bending waves
that are eventually damped by viscosity as they propagate
inward. However, we caution that the analysis of this section is
linear in the warp amplitude and it is possible that non-linear effects
will prohibit a continuously varying warp shape once the obliquity is
large enough.
This physical picture needs to be modified for AGN discs dominated by
radiation pressure, where the surface density varies as
$\Sigma(r)\propto r^{3/2}$ (eq.\ \ref{eq:sigrad}) out to a radius
$r_{pr}$ (eq.\ \ref{eq:rpr}) where gas pressure begins to dominate,
after which the surface density declines as $r^{-3/5}$. If $r_{pr}\la
r_w$, the bending waves are launched as usual at the warp radius $r_w$
and propagate smoothly into the region $r<r_{pr}$, although their
dispersion relation will change once they enter the
radiation-dominated region. If $r_{pr}$ is larger than $r_w$, the
gravitational torque will include a significant contribution from
material in the accretion disc near $r_{pr}$ (the torque from material
between $R\gg r$ and $2R$ varies as $G\Sigma(R)r^2/R\sim R^{1/2}$) in
addition to the gravitational torque from local material. This extra
torque will tend to counter-act the Lense--Thirring torque, and if it
is large enough will prevent the excitation of bending waves.
In summary, for low-viscosity discs in which self-gravity is
important, misalignment of the disc axis at large radii with the BH spin
axis can excite bending waves inside the warp radius
(\ref{eq:rwself}). For discs dominated by gas pressure, where the
surface density $\Sigma(r)\propto r^{-0.6}$, Fig.\ \ref{fig:sg} shows
that the condition for exciting oscillatory waves is $\gamma
\alpha_\perp\la 0.05$. For warps of sufficiently small amplitude,
$\alpha_\perp=\frac{1}{2}\alpha^{-1}$ (eq.\ \ref{eq:qalpha}) so the
condition for exciting bending waves is $\gamma\la
0.01(\alpha/0.1)$.
\section{Related work}
\label{sec:other}
\noindent
Most treatments of warped Lense--Thirring discs neglect torques from
the companion in determining the shape and evolution of the disc; we
may call this the Bardeen--Petterson approximation since it
first appears in \cite{bp75}. The approximation is only valid if
the torque associated with viscous angular-momentum transport exceeds
the Lense--Thirring and companion torques at the point where the
latter two are equal, the warp radius $r_w$ (eq.\ \ref{eq:lapk}),
which in turn requires $\beta\ga 1$ (eq.\ \ref{eq:qdef}).
One of the few treatments of warped AGN accretion discs to include
both Lense--Thirring and tidal torques is \cite{mpt09}. In fact the
warp radius $r_{\rm warp}$ defined in their equation (15) is almost
the same as the radius $r_w$ defined in our equation (\ref{eq:kerr}),
$r_{\rm warp}=r_w/2^{2/9}$. Martin et al.\ also define a tidal radius
$r_{\rm tid}$ and a Lense--Thirring radius $r_{\rm LT}$ where viscous
torques balance tidal and Lense--Thirring torques, respectively. Our
parameter $\beta$, defined in equation (\ref{eq:qdef}), is just
$2^{1/9}(r_{\rm tid}/r_{\rm LT})^{10/9}$. Martin et al.\ find
numerical solutions for steady-state discs with obliquities up to
$80^\circ$ but all their models have $r_{\rm tid}/r_{\rm LT}\ge 1$ and
their models with obliquities $>20^\circ$ have $r_{\rm tid}/r_{\rm
LT}=10$. Therefore they do not explore the regime with
$\beta\la 1$ where the critical obliquity becomes apparent.
\cite{sf96} give a simple analytic description of warped accretion
discs, derived from the Pringle--Ogilvie equations by linearizing in
the warp angle. The main focus of their analysis is on estimating the
rate at which the BH aligns its angular momentum with that of
the accreting material. Unfortunately, the linearization drops the
term proportional to $|\p\bfn/\p x|^2$ in equation (\ref{eq:cldef}),
and without this term low-viscosity Lense--Thirring discs develop a
thin boundary layer in which the warp angle jumps sharply, so the
linearization is not self-consistent when $\beta$ is sufficiently
small.
\cite{nk12} and \cite{nix12} have argued that warped discs described
by the Pringle--Ogilvie equations can `break' or `tear' -- divide
into inner and outer parts with discontinuous orientations -- if the
obliquity $\ga 45^\circ$. As described in their papers, this phenomenon
does not appear to be directly related to our critical obliquity, for several reasons:
(i) Nixon \& King do not include torques from a companion in their
analysis, i.e., the parameter $\beta$ in equation (\ref{eq:qdef}) is
very large, whereas we find that the critical obliquity is
important only for $\beta\la 1$ (Fig.\ \ref{fig:six}). (ii) Nixon \&
King argue that the breaking phenomenon arises through the dependence
of the viscosity parameters $Q_i$ on the warp $\psi$, whereas we have
found that the critical obliquity is almost the same whether or not
this dependence is included in the differential equations. (iii) We do not
see breaks in our high-viscosity ($\beta=1000$) solutions, even for obliquities
exceeding $88^\circ$, probably because our expression
for $Q_2(\psi)$ is relatively flat (Fig.\ \ref{fig:two}) whereas Nixon
\& King's falls sharply toward zero for $\psi\ga 1$ (their Fig.\
1)\footnote{The reason for this difference has been pointed out to us by G.~Ogilvie
(private communication). In a flat isothermal disc the sound speed
and rms thickness are related by $c_s=H\Omega$; however, this
relation no longer holds in a warped disc because a vertical
oscillation is present, so hydrostatic equilibrium does not apply. Nixon \& King's
`isothermal' disc has $H$ independent of the warp angle $\psi$
whereas ours has $c_s$ independent of $\psi$.}.
\section{Application to observed accretion discs}
\label{sec:observations}
\noindent
The accreting BHs found in astrophysical systems span a wide range of
inferred mass, from $\mbh \sim 5\msun$ up to $\sim 10^{10} \msun$.
Within this range they mostly fall -- so far -- into one of two distinct
classes. At the low-mass end, $\mbh\sim 10 \msun$, the BHs all belong
to close binary systems. The BH accretes mass from its companion star,
either by Roche-lobe overflow or by capturing a fraction of the mass
lost in a wind. Roche-lobe overflow tends to occur in low mass X-ray
binaries (LMXBs), in which the companion is an evolved star with
$\mstar \la 1.5\msun$. Wind-driven accretion is found in high
mass X-ray binaries (HMXBs), where the companion is an O or B star
with $\mstar\ga 10\msun$. The secondary star provides the tidal
torque in equation (\ref{eq:comp}), which is also thought to set the
outer radius of the accretion disc. The dynamics and geometry of
accretion in these systems is relatively well-understood and useful
summaries are found in \citet{2002apa..book.....F} and
\cite{2006ARA&A..44...49R}.
The second class consists of supermassive BHs, with $\mbh\sim
10^5$--$10^{10} \msun$, which are found -- so far -- at the centres of
galaxies and primarily accrete gas from the interstellar medium of
their galaxy. When mass is supplied at sufficiently high rates, these
are observed as AGN \citep{1999agnc.book.....K}. The properties of
these systems and how they are fed from the interstellar medium are less well
understood than binary systems and there are fewer empirical
constraints on the properties of the disc\footnote{We do not consider
the ultraluminous X-ray sources with $L \ga 10^{40}\mbox{\,erg
s}^{-1}$. If these radiate isotropically and do not exceed the
Eddington limit, they require BHs with $\mbh\ga 100 \msun$. Whether
or not these are, in fact, intermediate-mass BHs or normal HMXBs,
the implied accretion rates suggest that ultraluminous X-ray sources
arise from a short-lived phase of rapid mass transfer in a close
binary \citep{2001ApJ...552L.109K}.}.
We discuss these two classes of Lense--Thirring discs in the
next two subsections.
\subsection{Stellar-mass black holes in binary star systems}
\label{sec:xrb}
\noindent
In these binaries the X-ray emission comes from the vicinity of a
neutron star or BH (the `primary'), while the accreted mass
and the tidal torque (\ref{eq:comp}) comes from the companion star
(the `secondary'). The masses of the primary and secondary, $M$ and
$\mstar$, and their orbital separation $r_\star$ are inferred from
the orbital period, the spectral type and velocity semi-amplitude of
the secondary, periodic variations in the flux from the secondary due
to its tidal distortion by the primary, eclipses, etc. In most cases
the main evidence that the primary is a BH rather than a neutron
star is that its mass exceeds the upper limit to the mass of a neutron
star, $\sim 3\msun$ \citep{lat05}.
Compilations of BH X-ray binary system parameters can be found in
Tables 4.1 and 4.2 of \citet{2006csxs.book..157M} and Table 1 of
\citet{2006ARA&A..44...49R}. The inferred BH masses have a
relatively narrow distribution -- the best estimates in $\sim 20$
systems range from 4.5 to $14\msun$ -- with a mean near $\mbh \sim 7
\msun$. The BH spin $a_\bullet$ is more difficult to measure. The
two most commonly used methods are continuum fitting
\citep[e.g.][]{2011CQGra..28k4009M} and Fe line modeling
\citep{1995Natur.375..659T}. Only a range of plausible spins can be
inferred, even for the best systems, and both methods are subject to
systematic uncertainties. For our purposes, the most important result
is that the majority of systems are not consistent with $a_\bullet=0$,
implying that Lense--Thirring precession can be significant. Since the
parameter $\beta$ (eq.\ \ref{eq:qdef}) depends relatively weakly on
$a_\bullet$ ($\beta \propto a_\bullet^{-4/9}$), we simply adopt
$a_\bullet=0.5$ as a characteristic value.
There is strong circumstantial evidence for warps in several X-ray
binaries. The jets in the eclipsing X-ray binary SS 433 precess with a
162 d period, likely because the jet direction is normal to a
precessing warped accretion disc. The 35 d period of Her X-1 is
believed to be due to eclipses by a warped disc, and this is also
the likely explanation for some of the long-term periodicities
observed in other X-ray binaries, such as LMC X-4 and SMC X-1
\citep{cha08}. There is also evidence for misalignment between the
binary orbital angular momentum and BH spin angular momentum in GRO
J1655$-40$ and V4641 Sgr, if one assumes that the jet axis is aligned
with the BH spin axis \citep[e.g.,][]{fmw01,mac02}.
Most BH candidates with mass estimates are LMXBs, and only a handful
are HMXBs. In the Roche-lobe overflow systems that comprise the bulk
of LMXBs, it is thought that the tidal torque from the companion
truncates the accretion disc at an outer radius $r_{\rm out} \simeq
0.9 r_{L1}$ where $r_{L1}$ is the Roche radius\footnote{`Roche radius'
is defined as the radius of a sphere with the same volume as the
Roche lobe; the distance to the collinear Lagrange point from the
centre of the star is larger by $\sim 25$--$40$ per cent, depending on the
mass ratio. An analytic approximation to the Roche radius as a
function of mass ratio is given by \citet{1983ApJ...268..368E}.} of
the primary \citep{2002apa..book.....F}. Fitting of ellipsoidal
variations of LMXBs with BH primaries generally yields $r_{\rm out}$
values consistent with this assumption (J. Orosz, private
communication).
In LMXB systems, the secondaries are generally evolved F-K spectral types
with $\mstar \sim \msun$, so we scale the companion mass $\mstar$ to
$\msun$. Orbital periods $P$ range from a few hours to several days so
we scale the period to $10^5\mbox{\,s}=27.8\mbox{\,h}$. Then the
separation or semimajor axis is\comment{actual value 9.261}
\begin{equation}
\rstar=\left(\frac{P}{2\upi}\right)^{2/3}[G(\mbh+\mstar)]^{1/3}=9.3\,\rsun\left(\frac{P}{10^5\mbox{\,s}}\right)^{2/3}\left(\frac{\mbh+\mstar}{8\msun}\right)^{1/3}.
\label{eq:routlmxb}
\end{equation}
The large range of $P$ translates into a fairly broad range in $\rstar$. At the lower end of the
range, corresponding to periods of a few hours, we expect
$\rstar\simeq 2$--$3\rsun$; at the upper end $\rstar$ can be much
larger, as in GRS 1915+105, where $P=804\mbox{\,h}$ gives
$\rstar=87\rsun$ for $\mbh+\mstar=8\msun$.
For comparison, the warp radius (\ref{eq:kerr}) is \comment{0.1871}
\begin{equation}
r_w = 0.19\, \rsun
\left(\frac{a_\bullet}{0.5}\right)^{2/9}
\left(\frac{\mbh}{7\msun}\right)^{5/9}
\left(\frac{\msun}{\mstar}\right)^{2/9}
\left(\frac{\rstar}{10\,\rsun}\right)^{2/3} \label{eq:rwlmxb}.
\end{equation}
Assuming a mass ratio $\mbh/\mstar=7$, the primary's Roche radius is
$r_{L1}=0.55\rstar$, so if the outer disc edge is at $r_{\rm out}
\simeq 0.9 r_{\rm L1}$ we have $r_{\rm out} \simeq 0.5\rstar$. Hence,
for typical LMXBs the warp radius (\ref{eq:rwlmxb}) is well inside the
outer disc radius (cf.\ eq.\ \ref{eq:routlmxb}).
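These estimates are straightforward to reproduce; the sketch below
codes equations (\ref{eq:routlmxb}) and (\ref{eq:rwlmxb}) directly in
cgs units (default parameters ours):
\begin{verbatim}
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10   # cgs

def r_star(P, Mtot):
    # orbital separation, eq. (routlmxb)
    return (P / (2 * np.pi)) ** (2 / 3) * (G * Mtot) ** (1 / 3)

def r_warp(a=0.5, Mbh=7 * Msun, Mstar=Msun, rstar=10 * Rsun):
    # warp radius, scaling form of eq. (rwlmxb)
    return (0.19 * Rsun * (a / 0.5) ** (2 / 9)
            * (Mbh / (7 * Msun)) ** (5 / 9)
            * (Msun / Mstar) ** (2 / 9)
            * (rstar / (10 * Rsun)) ** (2 / 3))

rs = r_star(1e5, 8 * Msun)
print(rs / Rsun)                 # ~9.3, as quoted above
print(r_warp(rstar=rs) / Rsun)   # ~0.18, well inside r_out ~ 0.5 r_star
\end{verbatim}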
Similar conclusions hold for HMXBs. We consider the specific example
of M33 X-7 since it is the best-understood HMXB system due to its
X-ray eclipses and well-determined distance
\citep{2007Natur.449..872O,2008ApJ...679L..37L}. In this case we have
$\mstar = 70\pm7 \msun$, $\mbh=15.7\pm1.5\msun$, $\rstar=42\pm 2
\rsun$, $a_\bullet=0.84\pm0.05$, yielding a warp radius
$r_w=0.34\rsun$. Orosz et al.\ also find that the outer radius of the
disc is $r_{\rm out}=(0.45\pm0.04)r_{L1}$; for the observed mass ratio
$r_{L1}=0.5\rstar$ \citep{1983ApJ...268..368E} so $r_{\rm
out}=9.5\rsun$. Again, the warp radius is well inside the outer disc
radius\footnote{Note that the common assumption that $r_{\rm
out}=0.9r_{L1}$ is not confirmed in M33 X-7, where the eclipse
models give a result a factor of two smaller. In wind-fed HMXBs the
disc could plausibly be truncated at smaller radii via interactions
with the wind. Direct constraints on $r_{\rm out}$ in other HMXBs
are hampered by the dominance of the secondary in the optical band
\citep[see e.g.][]{2009ApJ...697..573O}.}.
The strength of the viscous torque can be parametrized through the
disc aspect ratio $H/r$, which is related to the sound speed through
$c_s=\Omega H$. The aspect ratio can be estimated using the standard
thin-disc model of \citet{1973A&A....24..337S}. In BH X-ray binaries,
the warp radius is much larger than the BH event horizon, so we can
ignore relativistic effects and corrections due to the inner
boundary condition; moreover at the warp radius the radiation pressure
is negligible. We can therefore use equation (\ref{eq:siggas})
below\footnote{Equation 2.16 of \citealt{1973A&A....24..337S} gives
the same result to within 30 per cent for their assumed efficiency $\epsilon=0.06$.} to
estimate\comment{I get 9.21, Shane gets 9.06}
\begin{equation}
\bigg(\frac{H}{r}\bigg)^2 \simeq 9.1\times10^{-5}
\bigg(\frac{L}{0.01L_{\rm Edd}}\frac{0.1}{\epsilon}\bigg)^{2/5}
\bigg(\frac{0.1}{\alpha}\bigg)^{1/5}
\bigg(\frac{7 \msun}{\mbh}\bigg)^{3/10}
\bigg(\frac{r}{\rsun}\bigg)^{1/10}. \label{eq:hrss}
\end{equation}
We assume that the Shakura--Sunyaev parameter $\alpha$ (eq.\
\ref{eq:alpha}) is approximately 0.1, based on modeling of dwarf novae and soft
X-ray transients \citep{2007MNRAS.376.1740K}.
This equation is determined by balancing local viscous heating with
radiative cooling. However, the spectra from the outer regions of
discs in LMXBs show evidence that irradiation by X-rays dominates over
local dissipation \citep{1994A&A...290..133V}. Simple models of the
X-ray irradiated outer disc imply only a weak dependence of $H/r$ on
$r$ \citep[e.g.,][]{1999MNRAS.303..139D}. So we make an alternative
estimate of the aspect ratio, valid for the outer parts of the disc,
by scaling to a characteristic temperature $T$ and assuming
hydrostatic equilibrium. Then we have approximately
\begin{equation}
\left(\frac{H}{r}\right)^2 \simeq \frac{k T r}{G \mbh m_p}
\simeq 2\times 10^{-4} \frac{r}{3\rsun}
\frac{7\msun}{\mbh}\frac{T}{10^4 \mbox{\,K}}.
\label{eq:hrir}
\end{equation}
Soft X-ray transient LMXBs are believed to be triggered by a disc
instability associated with hydrogen ionization
\citep{2001NewAR..45..449L} so one expects the outer disc has $T
\la 10^4 \mbox{\,K}$ at the beginning of an outburst, but the temperature
may rise to as high as $T \sim 10^5 \mbox{\,K}$ during outburst.
Taken together equations (\ref{eq:hrss}) and (\ref{eq:hrir}) imply $(H/r)^2
\simeq 10^{-5}$--$10^{-3}$ in most discs. Inserting the above estimates into
equation (\ref{eq:qdef}) we find \comment{122.26}
\begin{equation}
\beta =120\bigg(\frac{0.5}{a_\bullet}\bigg)^{2/3}
\bigg(\frac{\msun}{\mstar}\bigg)^{1/3}
\bigg(\frac{7\msun}{\mbh}\bigg)^{2/3} \frac{\rstar}{10
\rsun}\frac{(H/r)^2}{10^{-4}}
\label{eq:betaxrb}
\end{equation}
where $H/r$ is evaluated at the warp radius.
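To make this concrete, the sketch below evaluates $\beta$ from
equation (\ref{eq:betaxrb}) with $(H/r)^2$ taken from the irradiation
estimate (\ref{eq:hrir}); extrapolating that estimate down to the warp
radius is a simplifying assumption of ours:
\begin{verbatim}
def hr2_irr(r, Mbh=7.0, T=1e4):
    # (H/r)^2 from eq. (hrir); r in R_sun, Mbh in M_sun, T in K
    return 2e-4 * (r / 3.0) * (7.0 / Mbh) * (T / 1e4)

def beta(a=0.5, Mstar=1.0, Mbh=7.0, rstar=10.0, hr2=1e-4):
    # eq. (betaxrb); Mstar, Mbh in M_sun, rstar in R_sun
    return (120.0 * (0.5 / a) ** (2 / 3) * (1.0 / Mstar) ** (1 / 3)
            * (7.0 / Mbh) ** (2 / 3) * (rstar / 10.0) * (hr2 / 1e-4))

rw = 0.19                        # fiducial LMXB warp radius in R_sun
print(beta(hr2=hr2_irr(rw)))     # ~15: still larger than unity
\end{verbatim}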
Therefore, we generally expect $\beta \gg 1$, that is, viscous torques
are more important than the torque from the secondary star in
determining the warp shape. In order to have the companion torque
dominate the warp dynamics, we need $\alpha_\perp\beta \la 1$, which requires
a nearby companion (the shortest orbital periods of X-ray binaries are
a few hours, corresponding to $r_\star\sim3\rsun$) and, more
importantly, a cool disc with $H/r \la 10^{-3}$. This is
plausible for quiescent discs, with low accretion rates, as
long as irradiation by the central X-ray source does not enforce a
larger $H/r$ at the radius of the warp. One might even speculate that
the absence of a steady-state solution for warped discs with
$\beta\la 1$ is the process that drives disc instability and
outbursts in some X-ray binaries.
\subsection{Warped discs in active galactic nuclei}
\label{sec:agn}
\noindent
There is strong circumstantial evidence that warps are common in
AGN accretion discs. Maser discs having modest warps on 0.1--1 pc
scales are present in NGC 4258 \citep{her05}, Circinus
\citep{green03}, and four of the seven galaxies examined by
\cite{kuo11}. Warped discs may obscure some AGN and thus play a role
in unification models of AGN based on orientation \citep{nay05}. The
angular-momentum axis of material accreting onto the AGN, as traced by
jets or other indicators, is not aligned with the axis of the host
galaxy on large scales \citep{kin00}. Radio jets from AGN often show
wiggles or bends that may arise from precession of the jet source (e.g., 3C
31). Finally, frequent and variable misalignments of the BH spin axis
with the angular momentum of accreted gas are expected theoretically
because of clumpy gas accretion, inspiral of additional BHs, and rapid
angular-momentum transport within gravitationally unstable gas discs
\citep{hop12}.
AGN accretion discs are much less well-understood than X-ray binary
discs. There is no obvious source of external torque analogous to the
companion star in X-ray binaries -- except in the case of binary BHs,
which we defer to \S\ref{sec:bbh}. In the
absence of external torques, warping can arise from a misalignment
between the orbital angular momentum of the inflowing material at the
outer edge of the disc and the spin angular momentum of the BH at its
centre. Then in the absence of other torques the shape of the
warp is determined by the competition between viscous torques and the
Lense--Thirring torque (the Bardeen--Petterson approximation).
However, AGN discs are much more massive than X-ray binary discs
relative to their host BH, and this raises the possibility that the
self-gravity of AGN discs plays a prominent role in determining the
shape of the disc.
Self-gravitating\footnote{As described in the Introduction, by
`self-gravitating' we mean that the self-gravity of the warped disc
dominates the angular-momentum precession rate, not that the disc is
gravitationally unstable or that its mass is comparable to the BH
mass.} warped discs have mostly been investigated in the context of
galaxy discs, which are sometimes warped in their outer parts. There
is a large literature on the dynamics of galactic warps \citep[e.g.,][]{ht69,sc88,jjb92,nt96,
sell13}. Very few authors have examined the properties of
self-gravitating warped discs in the context of AGN. One notable
exception is \cite{ger09}, who computed the shapes of warped
self-gravitating discs orbiting a central mass, modeling the disc as a
set of concentric circular rings and computing the gravitational
torques between each ring pair. However, they did not include either
Lense--Thirring or viscous torques so their calculations do
not address the issues that are the focus of the present paper.
We first describe a simple analytic model for flat AGN accretion
discs, which we shall use to estimate the relative importance of
self-gravity and viscous stresses in warped discs. Our model is
similar to earlier analytic models by \cite{1973A&A....24..337S},
\cite{pri81}, \cite{cs90}, and others.
We assume that the density $\rho(r,z)$ in the disc is small compared
to $\mbh/r^3$. Then hydrostatic equilibrium requires
\begin{equation}
\frac{d p_t}{dz}=\Omega^2 \gothr_z \rho z,\label{eq:he}
\end{equation}
where $p_t=p_g + p_r$ is the sum of the gas and radiation pressure,
$\Omega^2=G\mbh/r^3$, and $\gothr_z$ is a dimensionless factor
discussed below. The equation of energy conservation is
\begin{equation}
F_r =\frac{3}{4}\Omega \frac{\gothr_R}{\gothr_T} \int dz\, \tau_{r\phi},\label{eq:ec}
\end{equation}
where $F_r$ is the emissivity from one surface of the disc and
$\tau_{r\phi}$ is the viscous stress tensor. Together with $\gothr_z$
above, $\gothr_R$ and $\gothr_T$ are dimensionless factors that depend
on radius and the BH spin parameter $a_\bullet$ and approach unity for
$r\gg R_g$, where as usual $R_g=G\mbh/c^2$ is the gravitational radius
of the BH. These quantities, defined in Chapter 7 of
\citet{1999agnc.book.....K}, account approximately for
general-relativistic effects and incorporate the assumption of no
torque at the radius $r_{\rm ISCO}$ of the innermost stable circular
orbit.
Coupling equation (\ref{eq:ec}) to the equation for conservation
of angular momentum in a flat steady-state disc allows one to solve for $F_r$,
\begin{equation}
F_r=\frac{3c^3 (L/L_{\rm Edd})}{2 \kappa R_g \epsilon (r/R_g)^3}
\gothr_R,
\label{eq:flux}
\end{equation}
where $L/L_{\rm Edd}$ is the ratio of the bolometric luminosity of the
disc to the Eddington luminosity, $\kappa$ is the electron scattering
opacity (assumed to be $\simeq 0.34\,\cm^2\mbox{ g}^{-1}$), and
$\epsilon=L/(\dot M_\bullet c^2)$ is the radiative efficiency.
We now make the standard $\alpha$-disc approximations that the
stress has the form (eq.\ \ref{eq:alpha})
\begin{equation}
\tau_{r\phi}=-\eta r\frac{d\Omega}{dr}=\frac{3}{2}\alpha p_t,
\end{equation}
and that the rate of energy dissipation per unit mass is independent
of $z$. Then the radiation pressure and the temperature at the midplane of the disc are
\begin{equation}
p_{r0}=\frac{F_r \kappa \Sigma}{4 c}, \quad T_0=\left(\frac{3F_r\kappa\Sigma}{16\sigma_B}\right)^{1/4},
\end{equation}
where $\sigma_B$ is the Stefan-Boltzmann constant.
The gas pressure at the midplane is
\begin{equation}
p_{g0}=\frac{\rho_0k_BT_0}{\mu}=\frac{\rho_0k_B}{\mu }\left(\frac{3 F_r \kappa\Sigma}{16 \sigma_B}\right)^{1/4}
=\left(\frac{3 F_r \kappa}{16 \sigma_B}\right)^{1/4}\frac{k_B}{\mu }\frac{\Sigma^{5/4}}{2H},
\end{equation}
where $k_B$ and $\rho_0$ are Boltzmann's constant and the
midplane density. The mean particle mass $\mu$ is taken to be the
proton mass times 0.62, appropriate for fully ionized hydrogen plus
30 per cent helium by mass. In the last equation
we have replaced $\rho_0$ by $\Sigma/(2H)$ where $H$ is the disc
thickness.
We now substitute these results into equations (\ref{eq:he}) and (\ref{eq:ec}) with
the replacements $d/dz \rightarrow 1/H$, $z \rightarrow H$, and $\int\,dz \rightarrow 2H$,
to obtain
\begin{equation}
\frac{F_r \kappa \Sigma}{4 c} +
\left(\frac{3 F_r \kappa}{16 \sigma_B}\right)^{1/4}\frac{k_B}{\mu }
\frac{\Sigma^{5/4}}{2H}-\frac{\Omega^2 \gothr_z H \Sigma}{2}=0\label{eq:he1z}
\end{equation}
and
\begin{equation}
\frac{H F_r \kappa \Sigma}{4 c} +
\left(\frac{3 F_r \kappa}{16 \sigma_B}\right)^{1/4}\frac{k_B}{2 \mu }
\Sigma^{5/4}
-\frac{4 F_r \gothr_T}{9 \Omega \alpha \gothr_R }=0.\label{eq:ec1z}
\end{equation}
For given values of the radius $r$, the gravitational radius $R_g$,
the efficiency $\epsilon$, and the Eddington ratio $L/L_{\rm Edd}$,
the second of these equations can be solved for the disc thickness
$H$. Then the result can be substituted into the first equation to
yield a tenth-degree polynomial in $\Sigma^{1/4}$, which can be solved
numerically to find the surface density \citep{2012MNRAS.424.2504Z}.
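Equivalently, the pair (\ref{eq:he1z})--(\ref{eq:ec1z}) can be handed
to a generic root-finder. The sketch below is a minimal stand-in for
the full calculation (relativistic factors set to unity, SciPy's
\textsc{fsolve} in place of the polynomial solve, variable names ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

G, c, kB = 6.674e-8, 2.998e10, 1.381e-16            # cgs
sigB, kappa, mu = 5.670e-5, 0.34, 0.62 * 1.673e-24
Msun = 1.989e33

def solve_disc(x, M=1e8 * Msun, ledd=0.1, eps=0.1, alpha=0.1):
    # solve eqs. (he1z), (ec1z) for H and Sigma at r = x R_g
    Rg = G * M / c**2
    Omega = np.sqrt(G * M / (x * Rg) ** 3)
    F = 3 * c**3 * ledd / (2 * kappa * Rg * eps * x**3)   # eq. (flux)
    A = (3 * F * kappa / (16 * sigB)) ** 0.25             # gas term prefactor

    def eqs(p):
        H, Sig = np.exp(p)       # work in logs to keep H, Sigma positive
        he = (F * kappa * Sig / (4 * c)
              + A * (kB / mu) * Sig**1.25 / (2 * H)
              - 0.5 * Omega**2 * H * Sig) / (0.5 * Omega**2 * H * Sig)
        ec = (H * F * kappa * Sig / (4 * c)
              + A * (kB / (2 * mu)) * Sig**1.25
              - 4 * F / (9 * Omega * alpha)) / (4 * F / (9 * Omega * alpha))
        return [he, ec]

    # initial guess: radiation-dominated analytic solution (eq. sigrad)
    H0 = 0.75 * ledd * Rg / eps
    Sig0 = (2**6 / 3**3) * eps / (alpha * kappa * ledd) * x**1.5
    return np.exp(fsolve(eqs, np.log([H0, Sig0])))

print(solve_disc(100.0))   # near H ~ 1.1e13 cm, Sigma ~ 7e4 g cm^-2
\end{verbatim}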
The analysis is simpler when the accretion disc is dominated by
radiation pressure or gas pressure. For radiation-pressure dominated
discs we set $p_g=0$ in equations (\ref{eq:he1z}) and (\ref{eq:ec1z}).
We then find\comment{69.7, 11.08}
\begin{align}
\Sigma_r &=\frac{2^6}{3^3}\,\frac{\gothr_z \gothr_T}{\gothr_R^2}\,\frac{\epsilon}{\alpha\kappa}\,\frac{L_{\rm
Edd}}{L}\left(\frac{r}{R_g}\right)^{3/2}= 70\mbox{\;g cm}^{-2}\;
\frac{\gothr_z\gothr_T}{\gothr_R^2}\;\frac{\epsilon}{0.1}\;\frac{0.1}{\alpha}\;\frac{0.1L_{\rm
Edd}}{L}\left(\frac{r}{R_g}\right)^{3/2}\nonumber \\[5pt]
H_r &= \frac{3 \gothr_R}{4\gothr_z}\;\frac{L}{L_{\rm Edd}}\;\frac{R_g}{\epsilon}= 1.1\times 10^{13}\cm\,
\frac{\gothr_R}{\gothr_z}\;\frac{0.1}{\epsilon}\;\frac{L}{0.1L_{\rm Edd}}\;\frac{\mbh}{10^8\msun}.
\label{eq:sigrad}
\end{align}
Similarly, when radiation pressure is negligible, \comment{1.402,2.469}
\begin{align}
\Sigma_g &=\frac{2^{14/5}\upi}{3^{7/5}5^{1/5}}\;\frac{\mu^{4/5}(G\mbh
c)^{1/5}}{\kappa^{4/5} h^{3/5}\alpha^{4/5}\epsilon^{3/5}}
\;\frac{\gothr_T^{4/5}}{\gothr_R^{1/5}} \left(\frac{L}{L_{\rm Edd}}\;\frac{R_g}{r}\right)^{3/5}
\nonumber \\[5pt]
&= 1.4\times10^7\; {\rm g \; cm^{-2}} \;
\frac{\gothr_T^{4/5}}{\gothr_R^{1/5}}\left(\frac{0.1}{\alpha}\right)^{4/5}\left(\frac{\mbh}{10^8\msun}\right)^{1/5}\left(\frac{0.1}{\epsilon}\;\frac{L}{0.1L_{\rm Edd}}\;\frac{R_g}{r}\right)^{3/5}
\label{eq:siggas}
\end{align}
and
\begin{align}
H_g &=\frac{3^{1/5}5^{1/10}}{2^{2/5}\upi^{1/2}}\;\frac{h^{3/10}(G\mbh)^{9/10}
}{\mu^{2/5}\kappa^{1/10}c^{21/10}\alpha^{1/10}\epsilon^{1/5}}\;\frac{\gothr_R^{1/10}\gothr_T^{1/10}}{\gothr_z^{1/2}}\left(\frac{L}{L_{\rm
Edd}}\right)^{1/5}\left(\frac{r}{R_g}\right)^{21/20} \nonumber \\[10pt]
&= 2.5\times 10^{10}\;\cm\; \frac{\gothr_R^{1/10}\gothr_T^{1/10}}{\gothr_z^{1/2}}\left(\frac{0.1}{\alpha}\right)^{1/10}\!\left(\frac{\mbh}{10^8\msun}\right)^{9/10}\!\left(\frac{0.1}{\epsilon}\right)^{1/5}\!\left(\frac{L}{0.1L_{\rm Edd}}\right)^{1/5}\!\left(\frac{r}{R_g}\right)^{21/20}\!\!.
\end{align}
With these scalings, we can compute most properties of interest
in the disc. For example, radiation pressure dominates when $H_r>H_g$
which occurs for radii less than\comment{4.955}
\begin{equation}
r_{pr} \simeq 5.0\times10^{15}\;\cm\;
\left(\frac{\alpha}{0.1}\right)^{2/21}
\left(\frac{0.1}{\epsilon}\;\frac{L}{0.1L_{\rm
Edd}}\right)^{16/21}\left(\frac{\mbh}{10^8\msun}\right)^{23/21}
\;\frac{\gothr_R^{6/7}}{\gothr_z^{10/21}\gothr_T^{2/21}}\bigg|_{r_{pr}}.\label{eq:rpr}
\end{equation}
The disc is gravitationally unstable if Toomre's (1964) $Q$ parameter
is less than unity; this parameter is approximately
\begin{equation}
Q=\frac{\Omega^2 H}{\upi G \Sigma}.
\label{eq:toomredef}
\end{equation}
In the radiation- and gas-pressure dominated regimes
(respectively) we have \comment{3.124,3.459}
\begin{align}
Q_r &= 3.1 \times 10^{12} \;\frac{\gothr_R^3}{\gothr_z^2\gothr_T}
\left(\frac{L}{0.1L_{\rm Edd}}\;\frac{0.1}{\epsilon}\right)^2\;\frac{10^8\msun}{\mbh}\;\frac{\alpha}{0.1}\left(\frac{R_g}{r}\right)^{9/2} \nonumber \\[10pt]
Q_g &= 3.5\times 10^{4}\; \frac{\gothr_R^{3/10}}{\gothr_T^{7/10}\gothr_z^{1/2}}
\left(\frac{0.1L_{\rm
Edd}}{L}\;\frac{\epsilon}{0.1}\right)^{2/5}\left(\frac{10^8\msun}{\mbh}\right)^{13/10}\left(\frac{\alpha}{0.1}\right)^{7/10}\left(\frac{R_g}{r}\right)^{27/20}.
\label{eq:toomre}
\end{align}
Similarly, we can compute the warp radius (eq.\ \ref{eq:rwself})\comment{4.296e15,3.874e15}
\begin{align}
r_{w,r} &= 4.3\times10^{15}\;\cm\;\left(\frac{a_\bullet}{0.5}\right)^{1/5}
\left(\frac{\alpha}{0.1}\;\frac{0.1}{\epsilon}\;\frac{L}{0.1L_{\rm
Edd}}\right)^{1/5}\left(\frac{\mbh}{10^8\msun}\right)^{4/5}
\;\frac{\gothr_R^{2/5}}{\gothr_T^{1/5}\gothr_z^{1/5}}\bigg|_{r_{w,r}}\nonumber
\\[10pt]
r_{w,g} &= 3.9\times10^{15}\,\cm\,
\left(\frac{a_\bullet}{0.5}\right)^{10/29}\!\!\left(\frac{\alpha}{0.1}
\right)^{8/29}\!\!\left(\frac{\epsilon}{0.1}\right)^{6/29}\!\!\left(\frac{0.1L_{\rm
Edd}}{L}\right)^{6/29}\!\!\left(\frac{\mbh}{10^8\msun}\right)^{17/29}\!\!
\frac{\gothr_R^{2/29}}{\gothr_T^{8/29}}\bigg|_{r_{w,g}}.
\label{eq:warprad}
\end{align}
Equation (\ref{eq:warprad}) gives implicit relations for $r_w$ because
of the radial dependence of the relativistic factors. However, this
dependence is rather weak for typical AGN disc models: for the case
$a_\bullet=0.5$, $M=10^8 \msun$, $\alpha=0.1$ and $L/L_{\rm
Edd}=0.1$, we have $\gothr_R=0.81$, $\gothr_T=0.81$, and
$\gothr_z=1.01$ at $r_w$, corresponding to values of 0.96 and 1.05 for
the products of relativistic factors in the radiation-pressure and
gas-pressure dominated limits of equation (\ref{eq:warprad}).
The characteristic ratio of the viscous and self-gravity torques is
(cf.\ eq.\ \ref{eq:betaself})\comment{0.1410,0.0563}
\begin{align}
\gamma&=\frac{c_s^2}{\upi G\Sigma r}\bigg|_{r_w}=\frac{H^2
\Omega^2}{\upi G \Sigma r}\bigg|_{r_w}\nonumber \\[5pt]
&=0.14\left(\frac{0.5}{a_\bullet}\right)^{11/10}
\left(\frac{0.1}{\alpha}\right)^{1/10}
\left(\frac{0.1}{\epsilon}\;\frac{L}{0.1L_{\rm
Edd}}\right)^{19/10}\left(\frac{\mbh}{10^8\msun}\right)^{1/10}
\;\frac{\gothr_R^{9/5}\gothr_T^{1/10}}{\gothr_z^{19/10}}\bigg|_{r_{w,r}}
\nonumber \\[5pt]
&=0.056\left(\frac{0.5}{a_\bullet}\right)^{13/29}
\left(\frac{\alpha}{0.1}\right)^{7/29}
\left(\frac{0.1}{\epsilon}\;\frac{L}{0.1L_{\rm
Edd}}\right)^{2/29}\left(\frac{10^8\msun}{\mbh}\right)^{25/29}
\;\frac{\gothr_R^{9/29}}{\gothr_T^{7/29}\gothr_z}\bigg|_{r_{w,g}}
\label{eq:gamxxx}
\end{align}
where as usual the two equations correspond to the radiation-pressure
dominated and the gas-pressure dominated regions.
Thus, in our fiducial case -- a disc surrounding a $10^8\msun$ BH
radiating at 10 per cent of the Eddington luminosity, with spin parameter
$a_\bullet=0.5$, efficiency $\epsilon=0.1$, and Shakura--Sunyaev
parameter $\alpha=0.1$ -- the gravitational radius is $R_g=1.48\times
10^{13}\cm$; the warp radius is just inside the radiation-pressure
dominated region at $r_w=4.3\times 10^{15}\cm=290R_g$; the disc becomes
gas-pressure dominated outside $r_{pr}=5.0\times10^{15}\cm\simeq 340R_g$; the
disc becomes gravitationally unstable outside
$3.4\times10^{16}\cm\simeq 2300R_g$; and the disc warp
is governed by Lense--Thirring and self-gravitational torques, with
viscous torques smaller by a factor of
$\gamma\alpha_\perp\simeq0.14\alpha_\perp$ where $\alpha_\perp\sim 1$
for a Shakura--Sunyaev parameter $\alpha\simeq0.1$.
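The fiducial numbers above follow directly from the scaling relations
(\ref{eq:rpr}) and (\ref{eq:warprad}); a minimal sketch (relativistic
factor products set to unity, which is accurate to a few per cent at
$r_w$ as noted above):
\begin{verbatim}
def r_warp_rad(a=0.5, alpha=0.1, eps=0.1, ledd=0.1, M8=1.0):
    # radiation-pressure branch of eq. (warprad), relativistic factors ~ 1
    return (4.3e15 * (a / 0.5) ** 0.2
            * (alpha / 0.1 * (0.1 / eps) * (ledd / 0.1)) ** 0.2
            * M8 ** 0.8)

def r_pr(alpha=0.1, eps=0.1, ledd=0.1, M8=1.0):
    # radius where gas pressure takes over, eq. (rpr)
    return (5.0e15 * (alpha / 0.1) ** (2 / 21)
            * ((0.1 / eps) * (ledd / 0.1)) ** (16 / 21)
            * M8 ** (23 / 21))

Rg = 1.48e13                       # gravitational radius of 1e8 Msun, cm
print(r_warp_rad() / Rg)           # ~290, as quoted above
print(r_pr() / Rg)                 # ~340
\end{verbatim}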
We supplement these formulae with three sets of plots. These plots are
based on the analysis in equations (\ref{eq:he})--(\ref{eq:ec1z}) with
three refinements to the analytic formulae
(\ref{eq:sigrad})--(\ref{eq:gamxxx}): (i) we include both gas and
radiation pressure at all radii; (ii) we include the effects of the
relativistic parameters $\gothr_z$, $\gothr_T$, and $\gothr_R$; (iii)
we compute the efficiency $\epsilon$ from the spin parameter $a_\bullet$
using the estimates from \citet{nt73}. Thus the plots assume thin-disc
accretion with no torque at the inner boundary, which is assumed to
lie at $r_{\rm ISCO}$, the radius of the innermost stable circular orbit.
\begin{figure}
\includegraphics[width=0.9\textwidth,bb=31 0 610 465]{fig9.ps}
\caption{Properties of AGN accretion discs with $\alpha=0.1$,
$a_\bullet=0.5$, $L/L_{\rm Edd}=0.1$, and BH masses $10^7 \msun$
(black), $10^8 \msun$ (red), and $10^9 \msun$ (blue). The plots
show Toomre's $Q$ parameter (top left panel), the ratio $\gamma$
(eq.\ \ref{eq:betaself}) of self-gravity to viscous torque (top
right), the aspect ratio $H/r$ (bottom left) and the surface density
(bottom right) versus radius in units of the gravitational radius
$R_g=G\mbh/c^2$. The solid curves are computed via direct numerical
solution of equations (\ref{eq:he1z}) and (\ref{eq:ec1z}), while the
dashed and dotted curves show the analytic approximations assuming
that radiation and gas pressure (respectively) dominate. The warp
radii are marked by filled circles. }
\label{fig:mass}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\textwidth,bb=31 0 610 465]{fig10.ps}
\caption{As in Fig.\ \ref{fig:mass}, except for BH mass $10^8\msun$
and Eddington ratios of $1$ (black), $0.1$ (red),
and $0.01$ (blue). }
\label{fig:ledd}
\end{figure}
Fig.\ \ref{fig:mass} shows Toomre's $Q$ (eq.\ \ref{eq:toomredef}),
the aspect ratio $H/r$, the surface density $\Sigma$, and the ratio
$\gamma$ of viscous and self-gravity torques for BH masses of
$10^7\msun$, $10^8\msun$, and $10^9\msun$. Fig.\ \ref{fig:ledd}
shows a similar plot for Eddington ratios $L/L_{\rm Edd}$ of 1, 0.1,
and 0.01. Figs.\ \ref{fig:mass} and \ref{fig:ledd} show that the
transition from radiation pressure to gas pressure dominance occurs in
the range of 100 to $10^4 R_g$, and depends more strongly on $L/L_{\rm
Edd}$ than $M_\bullet$. The radii where $Q$ declines below unity
(onset of local gravitational instability) and $\gamma$ declines below
unity (self-gravity torque stronger than viscous torque) are not very
different, so care must be taken when applying analytic formulae
that assume either radiation or gas pressure to dominate.
Fig.\ \ref{fig:comprad} compares the warp radius $r_w$ to three
characteristic disc radii for a range of disc parameters. We have
defined the self-gravity radius $r_Q$ as the radius where $Q=1$,
$r_{pr}$ as the radius where the gas and radiation pressure are equal
(cf.\ eq.\ \ref{eq:rpr}), and $r_{5000}$ as the half-light radius for
emission at $5000$~\AA, assuming that the disc radiates locally as a
blackbody. Since $\gamma$ is smaller than $Q$ by a factor of $ H/r$
(see discussion following eq.\ \ref{eq:betaself}), we always have $r_w
< r_Q$. The disc is generally in the radiation-dominated regime at
$r_w$, but can fall in the gas-pressure dominated region for smaller
BH mass $M_\bullet$, smaller Eddington ratio $L/L_{\rm Edd}$, or spin
parameter $a_\bullet$ near unity. The dependence of all the
characteristic radii on $a_\bullet$ is rather weak, except for
$a_\bullet \rightarrow 0$ or 1.
Note that for $\alpha\simeq0.1$ all of the discs shown in these
figures have $\alpha\gg H/r$ (except for $r\la 100 R_g$ when
$L/L_{\rm Edd}=1$) so the condition (\ref{eq:nonres}) for non-resonant
warp behavior is satisfied by a large margin.
For most of the parameter space we have examined the warp radius $r_w$
is just outside (1--3 times larger than) the optical
radius $r_{5000}$. However, if warping causes the disc to intercept a
larger fraction of the emission from smaller radii, the region where
the warp is strong may dominate the optical emission. The flux of
radiation coming from the inner disc that irradiates the outer disc is
approximately
\begin{equation}
F_{\rm irr} \approx \frac{L_{\rm in}}{4 \upi r^2} \cos \theta
\end{equation}
where $L_{\rm in}$ is the characteristic luminosity from the inner
disc and $\theta$ is the angle between the normal to the warped outer
disc and the incoming flux. For thin discs, $\cos \theta \simeq H/r
\ll 1$ and, since $H$ is independent of $r$ in the radiation-dominated
regime, $F_{\rm irr} \propto r^{-3}$. This is the same scaling as the
intrinsic disc emission (eq.\ \ref{eq:flux}) so disc irradiation has little effect on the
radial emission profile of an unwarped disc. However, if the disc
has a significant warp, $\cos \theta \gg H/r$ and the irradiating flux
can exceed the intrinsic disc emission. In this case the
characteristic disc temperature will be
\begin{equation}
T_{\rm irr} \approx \left(\frac{\chi L}{\upi \sigma_B r_{w,r}^2}\right)^{1/4}
\approx 1.1 \times 10^4 \; {\rm K} \; \left(\frac{\chi}{0.01}\right)^{1/4}
\left(\frac{0.5}{a_\bullet}\;\frac{0.1}{\alpha}\;\frac{\epsilon}{0.1}\right)^{1/10}
\left(\frac{L}{L_{\rm Edd}}\right)^{3/20}\left(\frac{10^8\msun}{\mbh}\right)^{3/20},
\end{equation}
where $\chi$ is a (poorly constrained) reduction factor added to
account for the fraction of the disc luminosity intercepted by the
warp, the characteristic emitting area of the warp, and the albedo.
The wavelength at which blackbody emission peaks for $T_{\rm irr}= 1.1
\times 10^4 \; {\rm K}$ is $\lambda \simeq c h/3 k_B T_{\rm
irr}=4400$~\AA. Since $r_w$ exceeds the nominal half-light radius
of the unirradiated disc, the reradiated emission at the
warp can easily dominate. If so, the true half-light radius for
optical emission should be roughly given by $r_w$ rather than $r_{5000}$.
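The peak wavelength above follows from Wien's displacement law in the form $\lambda\simeq ch/3k_BT$; a two-line check in Python (cgs units) reproduces it:
\begin{verbatim}
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants
T_irr = 1.1e4                               # K
lam = h * c / (3 * k_B * T_irr)             # peak wavelength, cm
print(lam * 1e8)                            # ~4360 Angstrom
\end{verbatim}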
\begin{figure}
\includegraphics[width=0.9\textwidth,bb=31 0 610 465]{fig11.ps}
\caption{Characteristic disc radii versus BH mass (top left panel),
Shakura--Sunyaev parameter $\alpha$ (top right), Eddington ratio
(bottom left), and BH spin (bottom right). The curves represent the
warp radius $r_w$ (eq.\ \ref{eq:rwself}; solid black line), radius
$r_Q$ at which the disc becomes gravitationally unstable (dotted red
line), transition radius from radiation-pressure to gas-pressure
dominated $r_{pr}$ (dashed blue line) and the half-light radius at
$5000$~\AA (dot-dashed green line). The fiducial model has
$M_\bullet =10^8 \msun$, $a_\bullet=0.5$, $L/L_{\rm Edd}=0.1$, and
$\alpha=0.1$, and is marked by filled circles on each curve. Only a
single parameter is varied away from the fiducial value to produce
each panel. All radii are measured in units of the gravitational
radius $R_g=G\mbh/c^2$.}
\label{fig:comprad}
\end{figure}
This result is relevant to recent constraints on the size of quasar
emission regions obtained by modeling the variability due to
gravitational microlensing in an intervening galaxy. In the majority
of cases that have been studied, the sizes inferred from microlensing
exceed the predicted half-light radii of flat $\alpha$-disc models by
factors of $\sim 3$--10 \citep[e.g.][]{mor05,poo07}. \citet{mor10}
find a best fit in which the microlensing size at $2500$\AA~ scales as
$\mbh^{0.8}$ for a sample of 11 sources with estimated $\mbh=4 \times
10^7 \msun$--$2.4 \times 10^9 \msun$. This is the same scaling as
$r_{w,r}$ with $\mbh$ in equation (\ref{eq:warprad}) and also agrees
well with the dependence of the warp radius on $\mbh$ found in Fig.\
\ref{fig:comprad}. Unfortunately this is not a very sensitive test:
for a flat disc, the radius at a given temperature scales as
$\mbh^{2/3}$, and in the Bardeen--Petterson approximation the warp
radius scales as $\mbh^{9/8}$. The absolute scale for the microlensing
size at 2500\AA~ is a factor of $\sim 6$ smaller than our estimate for
$r_{w,r}$, but this is subject to some uncertainty and might be
accounted for by bending waves excited interior to $r_w$ (compare
Fig.\ \ref{fig:sg}).
An important but poorly understood issue is what fraction of AGN
accretion discs are likely to be warped. Over long times, warps are
damped out as the BH spin axis aligns with the outer disc. A rough
estimate of this time-scale is $t_{\rm align}\simeq L_\bullet/(\upi
r^2\Sigma T_{\rm LT})_{r_w}$ where $L_\bullet$ is the spin angular momentum
of the BH and the quantity in parentheses is the Lense--Thirring
torque per unit mass $T_{\rm LT}$ times the disc mass evaluated at the
warp radius $r_w$. Using equation (\ref{eq:lt}) and the expression for
$L_\bullet$ given just above it, we find
\begin{equation}
t_{\rm align}\simeq
\frac{\mbh}{2\upi cR_g^{3/2}}\left(\frac{r^{1/2}}{\Sigma}\right)_{r_w}=\frac{r_w^4}{2ca_\bullet R_g^3},
\end{equation}
where in the second expression we have used (\ref{eq:rwself}) to
eliminate the surface density. For our fiducial
case -- $\mbh=10^8\msun$, $L=0.1L_{\rm Edd}$, $a_\bullet=0.5$,
$\epsilon=0.1$, $\alpha=0.1$ -- the warp radius is $\sim 300R_g$
and\comment{actual number 1.2674e5}
$t_{\rm align}=1.3\times10^5\yr(r_w/300R_g)^4$, much shorter than
the typical AGN lifetime (the Salpeter time, $5\times10^7\mbox{\;yr}$
for $\epsilon=0.1$). Much more uncertain is the time-scale on which
warps are excited. High-resolution simulations of the centres of
galaxies show order unity variations in the gas inflow rate at 0.1 pc
on time-scales less than $10^5\yr$ \citep[][fig.\ 6]{hop10} and these are presumably
accompanied by similar variations in the angular momentum of the
inflowing gas. In such an environment the orientation of the outer
parts of the accretion disc is likely to vary stochastically on
time-scales less than the damping time, and in this case most AGN
accretion discs will be warped.
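The alignment time quoted above can be reproduced directly from the second form of the equation for $t_{\rm align}$; a minimal Python sketch in cgs units gives the same number:
\begin{verbatim}
G, c = 6.674e-8, 2.998e10        # cgs
msun, yr = 1.989e33, 3.156e7
R_g = G * 1e8 * msun / c**2      # ~1.48e13 cm
a_spin = 0.5                     # BH spin parameter
r_w = 300 * R_g                  # warp radius, fiducial case
t_align = r_w**4 / (2 * c * a_spin * R_g**3)
print(t_align / yr)              # ~1.3e5 yr, as quoted above
\end{verbatim}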
\subsubsection{Binary black holes}
\label{sec:bbh}
\noindent
Most galaxies contain supermassive BHs at their centres, and when
galaxies merge these BHs will spiral to within a few parsecs of the
centre of the merged galaxy through dynamical friction
\citep[e.g.,][]{bbr80,yu02}. Whether they continue to spiral to
smaller radii remains unclear, but if the binary decays to a
sufficiently small semimajor axis -- typically 0.1--0.001 pc, depending
on the galaxy and the BH mass ratio -- the loss of orbital energy
through gravitational radiation will ensure that they merge. If one of
the BHs (the primary) supports an accretion disc, and the spin axis of
the primary is misaligned with the orbital axis of the binary, the
accretion disc will be warped\footnote{There can also be a
circumbinary accretion disc, which may also be warped, but the
structure of such discs is poorly understood and we will not discuss
them here.}. In this case both the self-gravity of the disc and the
tidal field from the secondary, as well as viscous stresses and the
Lense--Thirring effect, can play important roles in shaping the
warp. For the sake of simplicity, we do not examine all of these
torques simultaneously: here we first consider an AGN accretion disc
without self-gravity orbiting one member of a binary BH, then compare the strength of the
torques and the characteristic warp radius to those in an accretion
disc with self-gravity orbiting an isolated BH.
Let $\mbh$ be the mass of the primary and $\mu\mbh$ the mass of the
other BH (the secondary). We assume for simplicity that the orbit is
circular, with semimajor axis $\rstar$. The time required for the two
BHs to merge due to gravitational radiation is \citep{pet64}
\begin{equation}
t_{\rm merge}=\frac{5}{256}\frac{c^5\rstar^4}{G^3\mbh^3\mu(1+\mu)}.
\label{eq:peters}
\end{equation}
The numbers and orbital distribution of binary BHs are not
well-constrained, either observationally or theoretically \citep[see,
for example,][]{shen13}. In the absence of other information, a
natural place to prospect for binary BHs is where the merger time
(\ref{eq:peters}) is equal to the Hubble time. Thus we will use
equation (\ref{eq:peters}) to eliminate the unknown semimajor axis
$\rstar$ in favor of the ratio $t_{\rm merge}/10^{10}\yr$. With this
substitution and using the accretion disc models from earlier in this
Section, most properties of interest are straightforward to
calculate.
The binary semimajor axis is\comment{1.9876}
\begin{equation}
\rstar=2.0\times10^{17}\cm\;[\mu(1+\mu)]^{1/4}\left(\frac{\mbh}{10^8\msun}\right)^{3/4}\left(\frac{t_{\rm
merge}}{10^{10}\yr}\right)^{1/4}.
\end{equation}
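This expression follows from solving equation (\ref{eq:peters}) for $\rstar$; as a numerical check, a short Python sketch in cgs units reproduces the prefactor:
\begin{verbatim}
G, c = 6.674e-8, 2.998e10
msun, yr = 1.989e33, 3.156e7
M, mu, t_merge = 1e8 * msun, 1.0, 1e10 * yr
a = (256.0 / 5.0 * G**3 * M**3 * mu * (1 + mu) * t_merge / c**5) ** 0.25
print(a)   # ~2.4e17 cm, i.e. 2.0e17 cm x [mu(1+mu)]^(1/4) at mu = 1
\end{verbatim}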
The warp radius (\ref{eq:kerr}) is\comment{8.9078}
\begin{equation}
r_w=
8.9\times10^{15}\cm\;\frac{(1+\mu)^{1/6}}{\mu^{1/18}}\left(\frac{a_\bullet}{0.5}\right)^{2/9}\left(\frac{\mbh}{10^8\msun}\right)^{5/6}
\left(\frac{t_{\rm merge}}{10^{10}\yr}\right)^{1/6}.
\end{equation}
The viscosity parameter $\beta$ (eq.\ \ref{eq:qdef}) depends on
whether the warp radius is in the radiation-pressure dominated or the
gas-pressure dominated regime. In these two cases:\comment{0.02292,0.0785}
\begin{align}
\beta_r &=0.023
\;\frac{\mu^{1/36}}{(1+\mu)^{1/12}}\left(\frac{0.5}{a_\bullet}\right)^{10/9}\left(\frac{L}{0.1L_{\rm
Edd}}\right)^2\;\left(\frac{0.1}{\epsilon}\right)^2\;\left(\frac{\mbh}{10^8\msun}\right)^{1/12}\left(\frac{10^{10}\yr}{t_{\rm
merge}}\right)^{1/12}\frac{\gothr_R^2}{\gothr_T^2}\bigg|_{r_{w}}\nonumber \\[15pt]
\beta_g &=0.079
\;\frac{(1+\mu)^{4/15}}{\mu^{4/45}}\left(\frac{0.5}{a_\bullet}\right)^{29/45}\left(\frac{0.1}{\alpha}\right)^{1/5}\left(\frac{L}{0.1L_{\rm
Edd}}\right)^{2/5}\left(\frac{0.1}{\epsilon}\right)^{2/5}\left(\frac{10^8\msun}{\mbh}\right)^{7/15}\left(\frac{t_{\rm merge}}{10^{10}\yr}\right)^{4/15}\frac{\gothr_R^{1/5}\gothr_T^{1/5}}{\gothr_z}\bigg|_{r_{w}}.
\end{align}
For our fiducial case -- $\mbh=10^8\msun$, $L=0.1L_{\rm Edd}$,
$a_\bullet=0.5$, $\epsilon=0.1$, $\alpha=0.1$, $t_{\rm
merge}=10^{10}\yr$, $\mu=1$ -- the disc becomes gas-pressure
dominated at $\sim 330R_g$ (eq.\ \ref{eq:rpr}), the warp radius is
$\sim 700R_g$, the disc becomes gravitationally unstable at $2300R_g$
(eq.\ \ref{eq:toomre}), the binary semimajor axis is
$1.6\times10^4R_g$, and the viscosity parameter is
$\beta_g=0.094$. For comparison, including self-gravity leads to a
warp radius of $\sim300R_g$ in an isolated disc (see discussion
following eq.\ \ref{eq:gamxxx}), so self-gravity is likely to have a
stronger influence on the warp shape than torques from the companion
BH, at least in the fiducial disc. Companion torques become stronger
relative to self-gravity in binary BHs with shorter merger times
$t_{\rm merge}$; of course, such systems are relatively rare because
they last for less than a Hubble time.
\section{Summary}
\label{sec:summary}
\noindent
Warped accretion discs exhibit a remarkably rich variety of
behavior. This richness arises for several reasons. First, a
number of different physical mechanisms can lead to torques on the disc:
the quadrupole potential from the central body (e.g., an oblate planet
or a binary black hole), Lense--Thirring precession, the self-gravity
of the disc, the tidal field from a companion, angular-momentum
transport by viscous or other internal disc stresses, radiation pressure, and
magnetic fields (we do not consider the latter two effects). Second,
the geometry of the disc depends critically on whether the competing
mechanisms lead to prograde or retrograde precession of the disc angular
momentum around their symmetry axes. Third, a disc can support
short-wavelength bending waves even when the disc mass is much smaller
than the mass of the central body (as in Saturn's rings).
Most previous studies of warped accretion discs around black holes
have focused on Lense--Thirring and viscous torques (the
Bardeen--Petterson approximation). If a companion star is present in
the system, as in X-ray binary stars, the Bardeen--Petterson
approximation is valid (a `high-viscosity' disc) only if the disc
viscosity is sufficiently high, $\beta\alpha_\perp\ga 1$ where
$\beta$ is given in equation (\ref{eq:betaxrb}) for typical X-ray
binary parameters and $\alpha_\perp\sim 1$ is the Shakura--Sunyaev
$\alpha$ parameter for the internal disc stresses that damp the
warp. Our results suggest that the Bardeen--Petterson approximation is
not valid (a `low-viscosity' disc) for quiescent X-ray binaries.
Models of such low-viscosity discs using the Pringle--Ogilvie
equations of motion exhibit remarkable behavior: for a given obliquity
(angle between the black-hole spin axis and companion orbital axis)
there is {\em no} steady-state solution for $\beta$ smaller than some
critical value. We have argued at the end of \S\ref{sec:critical} that
the failure of these equations probably arises because they do not
allow hyperbolic behavior but the question of how warped
low-viscosity Lense--Thirring discs actually behave remains to be
answered.
The behavior of warped accretion discs around massive black holes is
equally rich. Here there is no significant companion torque (unless
the black hole is a member of a binary system), but the
Bardeen--Petterson approximation remains suspect because it neglects
the self-gravity of the disc. In fact we find that most plausible
models of AGN accretion discs have low viscosity in the sense that
viscous torques are smaller at all radii than one or both of the Lense--Thirring and
self-gravity torques. If the viscosity is sufficiently small, spiral
bending waves are excited at the warp radius and propagate inward with
growing amplitude until they are eventually damped by viscosity or
non-linear effects. The presence of such waves may contribute to
obscuration of the disc and the illumination of the warped disc by the
central source may affect the disc spectrum or apparent size at
optical wavelengths.
It is worth re-emphasizing that many of our conclusions are
based on a simple model of the internal stresses in the disc -- the
stress tensor is that of a viscous fluid and the viscosity is related
to the pressure through the Shakura--Sunyaev $\alpha$ parameter -- that
does not correspond to the actual stress tensor, which probably arises
mostly from anisotropic MHD turbulence. The available evidence on the
validity of this model from numerical MHD simulations, discussed at
the end of \S\ref{sec:visc}, suggests that it overestimates the rate
of viscous damping of warps; if correct, this would strengthen our
conclusions about the limited validity of the Bardeen--Petterson
approximation and the importance of tidal torques and self-gravity in
shaping warped accretion discs.
Our results suggest several avenues for future work. A better
treatment of self-gravitating warped discs would merge the
Pringle--Ogilvie equations (\ref{eq:ogthree}) with a description of
the mutual torques due to self-gravity as in
\cite{ger09}. Generalizing the Pringle--Ogilvie equations to include
wavelike behavior is also a necessary step for a complete description
of warped accretion discs. Understanding the actual behavior of
low-viscosity Lense--Thirring discs that exceed the critical obliquity
is important and challenging. Simple models of the emission from
warped discs may help to resolve current discrepancies between simple
flat $\alpha$-disc models and observations of AGN spectra and sizes.
We thank Julian Krolik, Jerome Orosz, and Jihad Touma for illuminating
discussions. We thank Gordon Ogilvie for many insights and for
providing the program used to calculate the viscosity coefficients
$Q_i$. ST thanks the Max Planck Institute for Astrophysics and the
Alexander von Humboldt Foundation for hospitality and support during a
portion of this work. This research was supported in part by NASA grant
NNX11AF29G.
\section{Conclusions}
\label{sec:conclusions}
We have presented a motion generation network that leverages the knowledge encapsulated in CLIP, allowing intuitive operations, such as text conditioned motion generation and editing. As demonstrated, training an auto-encoder on the available motion data alone struggles to generalize well, possibly due to data quality or the complexity of the domain. Nonetheless, we see that the same auto-encoder with the same data can lead to a significantly better understanding of the motion manifold and its semantics, merely by aligning it to a well-behaved knowledge-rich latent space.
We stress again the fascinating fact that even though CLIP has never seen anything from the motion domain, or any other temporal signal, its latent structure naturally induces semantics and disentanglement. This succeeds even though the connection between CLIP's latent space and the motion manifold is through sparse and inaccurate textual labeling. In essence, the alignment scheme transfers semantics by encouraging the encoder to place semantically similar samples closer together. Similarly, it induces the disentanglement built into the CLIP space, as can be seen, for example, in our latent-space arithmetic experiments.
Of course, MotionCLIP{} has its limitations, opening several novel research opportunities.
It struggles to understand directions (e.g., left, right and counter-clockwise), to capture some styles (such as heavy and proud), and is of course not consistent for out-of-domain cultural reference examples (e.g., it fails to produce \emph{Cristiano Ronaldo}'s goal celebration and \emph{Superman}'s signature pose).
Nonetheless, we believe MotionCLIP{} is an important step toward intuitive motion generation. Knowledge-rich disentangled latent spaces have already proven themselves as a flexible tool to novice users in other fields, such as facial images. In the future, we would like to further explore how powerful large-scale latent spaces could be leveraged to benefit additional domains. We would also like to explore more elaborate architectures and domain adaptation schemes for the main generation part, and to deepen our investigation into downstream tasks that could benefit from this powerful backbone.
\section{Introduction}
Human motion generation includes the intuitive description, editing, and generation of 3D sequences of human poses. It is relevant to many applications that require virtual or robotic characters.
Motion generation is, however, a challenging task.
Perhaps the most challenging aspect is the limited availability of data, which is expensive to acquire and to label.
Recent years have brought larger sets of motion capture acquisitions~\cite{AMASS:ICCV:2019}, sometimes sorted by classes~\cite{liu2019ntu,ji2018large} or even labeled with free text~\cite{BABEL:CVPR:2021,plappert2016kit}.
Yet, it seems that while this data may span a significant part of human motion, it is not enough for machine learning algorithms to understand the semantics of the motion manifold, and it is definitely not descriptive enough for natural language usage.
Hence, neural models trained using labeled motion data \cite{ahuja2019language2pose,lin2018generating,yamada2018paired,petrovich21actor,maheshwari2022mugl} do not generalize well to the full richness of the human motion manifold, nor to the natural language describing it.
\input{figures/diagram}
In this work, we introduce MotionCLIP{}, a 3D motion auto-encoder that induces a latent embedding that is disentangled, well behaved, and supports highly semantic and elaborate descriptions. To this end, we employ CLIP~\cite{radford2021learning}, a large scale visual-textual embedding model.
Our key insight is that even though CLIP has not been trained on the motion domain whatsoever, we can inherit much of its latent space's %
virtue by enforcing its powerful and semantic structure onto the motion domain. To do this, we train a transformer-based~\cite{vaswani2017attention} auto-encoder that is aligned to the latent space of CLIP, using existing motion textual labels. In other words, we train an encoder to find the proper embedding of an input sequence in CLIP space, and a decoder that generates the most fitting motion to a given CLIP space latent code.
To further improve the alignment with CLIP-space, we also leverage CLIP's visual encoder, and synthetically render frames to guide the alignment in a self-supervised manner (see Figure~\ref{fig:overview}). As we demonstrate, this step is crucial for out-of-domain generalization, since it allows finer-grained description of the motion, unattainable using text.
The merit of aligning the human motion manifold to CLIP space is two-fold:
First, combining the geometric motion domain with lingual semantics benefits the semantic description of motion. As we show, this benefits tasks such as text-to-motion and motion style transfer. More importantly however, we show that this alignment benefits the motion latent space itself, infusing it with semantic knowledge and inherited disentanglement.
Indeed, our latent space demonstrates unprecedented compositionality of independent actions, semantic interpolation between actions,
and even natural and linear latent-space based editing.
As mentioned above, the textual and visual CLIP encoders offer the semantic description of motion. In this aspect, our model demonstrates never-before-seen capabilities for the field of motion generation. For example, motion can be specified using arbitrary natural language, through abstract scene or intent descriptions instead of the motion directly, or even through pop-culture references. For example, the CLIP embedding for the phrase
``wings" is decoded into a flapping motion like a bird,
and ``Williams sisters" into a tennis serve,
since these terms are encoded close to motion seen during training, thanks to CLIP's semantic understanding. Through the compositionality induced by the latent space, the aforementioned process also yields clearly unseen motions, such as the iconic web-swinging gesture that is produced for the input ``Spiderman'' (see this and other culture references in Figure~\ref{fig:teaser}). Our model also naturally extends to other downstream tasks. In this aspect, we demonstrate motion interpolation to depict latent smoothness, editing to demonstrate disentanglement, and action recognition to point out the semantic structure of our latent space.
For all these applications, we show comparable or preferable results either through metrics or a user study, even though each task is compared against a method that was designed especially for it. Using the action recognition benchmark, we also justify our design choices with an ablation study.
\section{Method}
\label{sec:method}
Our goal is learning a semantic and disentangled motion representation that will serve as a basis for generation and editing tasks. To this end, we need to learn not only the mapping to this representation (encoding), but also the mapping back to explicit motion (decoding).
Our training process is illustrated in Figure~\ref{fig:overview}.
We train a transformer-based motion auto-encoder, while aligning the latent motion manifold to CLIP joint representation.
We do so using (i) a \textit{Text Loss}, connecting motion representations to the CLIP embedding of their text labels, and (ii) an \textit{Image Loss}, connecting motion representations to CLIP embedding of rendered images that depict the motion visually.
At inference time, semantic editing applications can be performed in latent space. For example,
to perform style transfer, we find a latent vector representing the style, and simply add it to the content motion representation and decode the result back into motion. Similarly, to classify an action, we can simply encode it into the latent space, and see to which of the class text embedding it is closest.
Furthermore, we use the CLIP text encoder to perform text-to-motion: an input text is encoded using the text encoder and then directly decoded by our motion decoder.
The implementation of these and other applications is detailed in Section~\ref{sec:results}.
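To illustrate the inference path concretely, a minimal sketch using OpenAI's released \texttt{clip} package might read as follows; \texttt{decoder} stands for MotionCLIP{}'s trained motion decoder and is an assumed handle, not part of the CLIP API:
\begin{verbatim}
import torch, clip

model, _ = clip.load("ViT-B/32")   # frozen, pretrained CLIP
tokens = clip.tokenize(["a person doing jumping jacks"])
with torch.no_grad():
    z = model.encode_text(tokens).float()  # CLIP-space latent code
motion = decoder(z)   # trained motion decoder (assumed handle)
\end{verbatim}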
We represent motion sequences using the SMPL body model~\cite{loper2015smpl}. A sequence of length $T$ is denoted $p_{1:T}$, where
$p_i \in \mathbb{R}^{24 \times 6}$ holds the 6D rotation representation~\cite{zhou2019continuity} of the global body orientation and the 23 SMPL joints at the $i^\textnormal{th}$ frame.
The mesh vertices locations $v_{1:T}$ are calculated according to SMPL specifications with $\beta=0$ and a neutral-gender body model following Petrovich et al.~\cite{petrovich21actor}.
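For completeness, the 6D-to-rotation-matrix map of Zhou et al.~\cite{zhou2019continuity} amounts to a Gram--Schmidt step; a NumPy sketch of the standard construction:
\begin{verbatim}
import numpy as np

def rot6d_to_matrix(x):
    """Map (..., 6) arrays of 6D rotation codes to (..., 3, 3) matrices."""
    a1, a2 = x[..., :3], x[..., 3:]
    b1 = a1 / np.linalg.norm(a1, axis=-1, keepdims=True)
    b2 = a2 - np.sum(b1 * a2, axis=-1, keepdims=True) * b1
    b2 = b2 / np.linalg.norm(b2, axis=-1, keepdims=True)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=-1)  # b1, b2, b3 as columns
\end{verbatim}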
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/render_and_text.pdf}
\caption{A sample of the rendered frames and their text description used during training.}
\label{fig:render}
\end{figure*}
To project the motion manifold into the latent space, we learn a transformer-based auto-encoder \cite{vaswani2017attention}, adapted to the motion domain \cite{petrovich21actor,wang2021multi,li2021dance}. MotionCLIP{}'s architecture is detailed in Figure~\ref{fig:architecture}.
\textbf{Transformer Encoder.} $E$ maps a motion sequence $p_{1:T}$ to its latent representation $z_p$. The sequence is embedded into the encoder's dimension by applying a linear projection to each frame separately, then adding standard positional embedding. The embedded sequence is the input to the transformer encoder, together with an additional learned prefix token $z_{tk}$. The latent representation $z_p$ is the first output (the rest of the sequence is dropped). Explicitly, $z_p = E(z_{tk}, p_{1:T})$.
\textbf{Transformer Decoder.} $D$ predicts a motion sequence $\hat{p}_{1:T}$ given a latent representation $z_p$. This representation is fed to the transformer as key and value, while the query sequence is simply the positional encoding of $1:T$. The transformer outputs a representation for each frame, which is then mapped to pose space using a linear projection. Explicitly, $\hat{p}_{1:T} = D(z_p)$.
We further use a differentiable SMPL layer to get the mesh vertices locations, $\hat{v}_{1:T}$.
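A minimal PyTorch rendition of the encoder described above (positional encoding omitted for brevity; the dimensions are illustrative assumptions) could read:
\begin{verbatim}
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    def __init__(self, pose_dim=24 * 6, d=512, layers=8, heads=8):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d)
        self.z_tk = nn.Parameter(torch.randn(1, 1, d))  # prefix token
        layer = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, layers)

    def forward(self, p):  # p: (B, T, 24*6)
        x = self.embed(p)  # add positional encoding here in practice
        x = torch.cat([self.z_tk.expand(p.size(0), -1, -1), x], dim=1)
        return self.enc(x)[:, 0]  # z_p is the prefix token's output
\end{verbatim}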
\textbf{Losses.} This auto-encoder is trained to represent motion via reconstruction $L2$ losses on
joint orientations, joint velocities and vertices locations.
Explicitly,
\begin{equation} \label{eq1}
\begin{split}
\mathcal{L}_\textnormal{recon} = \frac{1}{|p|T} \sum_{i =1}^{T} \| p_i - \hat{p}_i\|^{2} +
\frac{1}{|v|T} \sum_{i =1}^{T} \| v_i - \hat{v}_i\|^{2} \\ + \frac{1}{|p|(T-1)} \sum_{i =1}^{T-1} \| (p_{i+1} - p_i) - (\hat{p}_{i+1} - \hat{p}_{i})\|^{2}
\end{split}
\end{equation}
Given text-motion and image-motion pairs, $(p_{1:T}, t)$, $(p_{1:T}, s)$ correspondingly, we attach the motion representation to the text and image representations using cosine distance,
\begin{equation}
\mathcal{L}_\textnormal{text} = 1 - \cos(CLIP_\textnormal{text}(t), z_p)
\end{equation}
and
\begin{equation}
\mathcal{L}_\textnormal{image} = 1 - \cos(CLIP_\textnormal{image}(s), z_p)
\end{equation}
The motion-text pairs can be derived from a labeled motion dataset, whereas the images are obtained
by rendering a single pose from a motion sequence into a synthetic image $s$, in an unsupervised manner (more details in Section~\ref{sec:results}).
Overall, the loss objective of MotionCLIP{} is defined as
\begin{equation}
\mathcal{L} = \mathcal{L}_\textnormal{recon} + \lambda_\textnormal{text} \mathcal{L}_\textnormal{text} + \lambda_\textnormal{image} \mathcal{L}_\textnormal{image}
\end{equation}
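In code, this objective reduces to a few lines; the following PyTorch sketch assumes batched tensors and uses our own argument names, with mean-squared errors standing in for the normalized sums above:
\begin{verbatim}
import torch.nn.functional as F

def motionclip_loss(p, p_hat, v, v_hat, z_p, z_text, z_image,
                    lam_text=0.01, lam_image=0.01):
    recon = (F.mse_loss(p_hat, p) + F.mse_loss(v_hat, v)
             + F.mse_loss(p_hat[:, 1:] - p_hat[:, :-1],
                          p[:, 1:] - p[:, :-1]))
    l_text = 1 - F.cosine_similarity(z_text, z_p, dim=-1).mean()
    l_image = 1 - F.cosine_similarity(z_image, z_p, dim=-1).mean()
    return recon + lam_text * l_text + lam_image * l_image
\end{verbatim}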
\section{Related Work}
\subsection{Guided Human Motion Generation}
One means to guide motion generation is to condition on another domain.
An immediate, but limited, choice is conditioning on \emph{action} classes. ACTOR~\cite{petrovich21actor} and Action2Motion~\cite{guo2020action2motion} suggested learning this multi-modal distribution from existing action recognition datasets using Conditional Variational-Autoencoder (CVAE)~\cite{sohn2015learning} architectures. The MUGL model~\cite{maheshwari2022mugl} followed with an elaborate Conditional Gaussian-Mixture-VAE~\cite{dilokthanakul2016deep} that supports up to $120$ classes and multi-person generation, based on the NTU-RGBD-120 dataset~\cite{liu2019ntu}.
Motion can be conditioned on other domains. %
For example, recent works~\cite{li2021dance,aristidou2021rhythm} generated dance moves conditioned on music and the motion prefix. Edwards~et al.~\scite{edwards2016jali} generated facial expressions to fit a speaking audio sequence.
A more straightforward approach to control motion is using another motion, in particular for style transfer applications. Holden~et al.~\scite{holden2016deep} suggested encoding style using the latent code's Gram matrix, inspired by Gatys~et al.~\scite{Gatys_2016_CVPR}. Aberman~et al.~\scite{aberman2020unpaired} injected style attributes using a dedicated temporal-invariant AdaIN layer~\cite{huang2017arbitrary}. Recently, Wen~et al.~\scite{wen2021autoregressive} encoded style in the latent code of a Normalizing Flow generative model~\cite{dinh2014nice}.
We show that MotionCLIP{} also encodes style in its latent representation, without making any preliminary assumptions or using a dedicated architecture.
\input{figures/pastel_blocks}
\subsection{Text-to-Motion}
The KIT dataset~\cite{plappert2016kit} provides about 11 hours of motion capture sequences, each sequence paired with a sentence explicitly describing the action performed. KIT sentences describe the action type, direction and sometimes speed, but they lack details about the style of the motion and do not include abstract descriptions of it. Current text-to-motion research is heavily based on KIT. Plappert~et al.~\scite{plappert2018learning} learned text-to-motion and motion-to-text using a seq2seq RNN-based architecture.
Yamada~et al.~\scite{yamada2018paired} learned those two mappings by simultaneously training text and motion auto-encoders while binding their latent spaces using text and motion pairs. Lin~et al.~\scite{lin2018generating} further improved trajectory prediction by adding a dedicated layer. Ahuja~et al.~\scite{ahuja2019language2pose} introduced JL2P model, which got improved results with respect to nuanced concepts of the text, namely velocity, trajectory and action type. They learned joint motion-text latent space and apply training curriculum to ease optimization.
More recently, the BABEL dataset~\cite{BABEL:CVPR:2021} provided per-frame textual labels, ordered in $260$ classes, for the larger AMASS dataset~\cite{AMASS:ICCV:2019}, including about 40 hours of motion capture. Although its labels give only an explicit description of the action, often lacking any detail besides the action type, this data spans a larger variety of human motion.
MotionCLIP{} overcomes the data limitations by leveraging out-of-domain knowledge using CLIP~\cite{radford2021learning}.
\subsection{CLIP aided Methods}
Neural networks have successfully learned powerful latent representations coupling natural images with natural language describing it~\cite{he2017fine,ramesh2021zero}.
A recent example is CLIP~\cite{radford2021learning}, a model coupling images and text in a deep latent space using a contrastive objective~\cite{hadsell2006dimensionality,chen2020simple}. By training on hundreds of millions of images and their captions, CLIP gained a rich semantic latent representation of visual content.
This expressive representation enables high quality image generation and editing, controlled by natural language~\cite{patashnik2021styleclip,gal2021stylegan,frans2021clipdraw}.
Even more so, this model has shown that connecting the visual and textual worlds also benefits purely visual tasks~\cite{vinker2022clipasso}, simply by providing a well-behaved, semantically structured, latent space. %
Closer to our method are works that utilize the richness of CLIP outside the imagery domain. In the 3D domain, CLIP's latent space provides a useful objective that enables semantic manipulation \cite{sanghi2021clip,text2mesh,wang2021clip} where the domain gap is closed by a neural rendering.
CLIP is even adopted in temporal domains \cite{guzhov2021audioclip,Luo2021CLIP4Clip,fang2021clip2video} that utilize large datasets of video sequences that are paired with text and audio.
Unlike these works that focus on classification and retrieval, we introduce a generative approach that utilizes limited amount of human motion sequences that are paired with text.
\section{Results}
\label{sec:results}
To evaluate MotionCLIP{}, we consider its two main advantages. In Section~\ref{sec:text2motion}, we inspect MotionCLIP{}'s ability to convert text into motion. Since the motion's latent space is aligned to that of CLIP, we use CLIP's pretrained text encoder to process input text, and convert the resulting latent embedding into motion using MotionCLIP{}'s decoder. We compare our results to the state-of-the-art and report clear preference for both seen and unseen generation. We also show comparable performance to state-of-the-art style transfer work simply by adding the style as a word to the text prompt. Lastly, we exploit CLIP's expert lingual understanding to convert
abstract text into corresponding, and sometimes unexpected, motion.
In Section~\ref{sec:manifold_applications} we focus on the resulting auto-encoder, and the properties of its latent space. We inspect its smoothness and disentanglement. Smoothness is shown through well-behaved interpolations, even between distant motions. Disentanglement is demonstrated using latent space arithmetic; by adding and subtracting various motion embeddings, we achieve compositionality and semantic editing. Lastly, we leverage our latent structure to perform action recognition over the trained encoder. The latter setting is also used for an ablation study. In the following, we first lay out the data used, and other general settings.
\subsection{General Settings}
We train our model on the BABEL dataset~\cite{BABEL:CVPR:2021}. It comprises about 40 hours of motion capture data, represented with the SMPL body model~\cite{loper2015smpl}. Each frame is annotated with per-frame textual labels, and is categorized into one of 260 action classes. We downsample the data to $30$ frames per second and cut it into sequences of length $60$. We get a single textual label per sequence by listing all actions in a given sequence, then concatenating them into a single string. Finally, we choose for each motion sequence a random frame to be rendered using the \emph{Blender} software and the SMPL-X add-on~\cite{SMPL-X:2019} (see Figure~\ref{fig:render}). This process outputs triplets of (motion, text, synthetic image) which are used for training.
We train a transformer auto-encoder with $8$ layers for each encoder and decoder as described in Section~\ref{sec:method}. We align it with the \emph{CLIP-ViT-B/32} frozen model. Out of the data triplets, the text-motion pairs are used for the \emph{text loss} and image-motion pairs for the \emph{image loss}. Both $\lambda$ values are set to $0.01$ throughout our experiments.
\footnote{\url{https://github.com/GuyTevet/MotionCLIP}}
\include{figures/sports}
\begin{figure*}[t!]
\centering
\begin{overpic}[width=\textwidth,tics=10, trim=0mm 0 0mm 0,clip]{figures/kfir.pdf}
\put(25,-1){Aberman et al.~\shortcite{aberman2020unpaired}}
\put(76,-1){MotionCLIP{}}
\end{overpic}
\vspace{1pt}
\caption{Style generation. Left: style transfer by Aberman~et al.~\shortcite{aberman2020unpaired}, conditioned on action (green) and style (orange) motions. Right: MotionCLIP{} generating style from plain text input.}
\label{fig:style_user_study}
\end{figure*}
\subsection{Text-to-Motion}
\label{sec:text2motion}
\emph{Text-to-motion} is performed at inference time, using the CLIP text encoder and MotionCLIP{} decoder, without any further training.
Even though not directly trained for this task, MotionCLIP{} shows unprecedented performance in text-to-motion, dealing with explicit descriptions, subtle nuances and abstract language.
\textbf{Actions.}
We start by demonstrating the capabilities of MotionCLIP{} to generate explicit actions - both seen and unseen in training. We compare our model to JL2P~\cite{ahuja2019language2pose}. Since the two models were trained on different datasets, we define a new common ground for evaluation.
We define two new sets of samples for a user study:
(1) The \emph{in-domain set} comprises actions with textual labels that appear in at least $0.5\%$ of the labels of both datasets, and (2) the \emph{out-of-domain set} includes textual labels that do not appear in any of the labels of either dataset, hence unseen for both models. For fairness, we construct this set from the list of Olympic sports (both summer and winter) that are disjoint from both datasets.
We conduct a user study, comparing the generation of each model conditioned on a given textual label. For each example, we then ask users to choose which of the two motions best fits the label.
Table~\ref{table:actions} shows that MotionCLIP{} was clearly preferred by the users for both sets. Figure~\ref{fig:sports} demonstrates a variety of sports performed by MotionCLIP{}, as used in the user study. Note how even though this is not a curated list, the motions created according to all 30 depicted text prompts resemble the requested actions.
\textbf{Styles.}
We investigate MotionCLIP{}'s ability to represent motion style, without being explicitly trained for it.
We compare the results produced by MotionCLIP{} to the style transfer model by Aberman~et al.~\scite{aberman2020unpaired}. The latter receives two input motion sequences, one indicating content and the other style, and combines them through a dedicated architecture, explicitly trained to disentangle style and content from a single sequence. In contrast, we simply feed MotionCLIP{} with the action and style textual names (e.g., ``walk proud''). We show to users the outputs of the two models side-by-side and ask them to choose which one presents both style and/or action better (see Figure~\ref{fig:style_user_study}). Even though Aberman~et al.{} was trained specifically for this task and gets the actual motions as an input, rather than text, Table~\ref{table:style} shows comparable results for the two models, with an expected favor toward Aberman~et al.{}. This, of course, also means that MotionCLIP{} allows expressing style with free text, and does not require an exemplar motion to describe it. Such novel free text style augmentations are demonstrated in Figure~\ref{fig:free_style}.
\begin{figure*}
\centering
\includegraphics[width=.98\textwidth]{figures/clip_text_styles_free_texts_fig_100.pdf}
\caption{MotionCLIP{} expresses the style described as a free text.}
\label{fig:free_style}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=.98\textwidth]{figures/clip_text_abstract_lang_fig_100.pdf}
\caption{Abstract language. MotionCLIP{} generates the signature motions of culture figures and phrases.}
\label{fig:abstract_language}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{figures/clip_text_interp_final-by_clip_fig_100.pdf}
\caption{Latent space motion interpolation. MotionCLIP{} enables semantic interpolation between two motions.}
\label{fig:interp}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{figures/clip_edit.pdf}
\caption{Latent space motion editing. MotionCLIP{} enables semantic editing in latent space. Here we demonstrate two applications (1) upper and lower body action compositions (top two examples) and (2) style transfer (the two examples at the bottom).}
\label{fig:edit}
\end{figure*}
\textbf{Abstract language.}
One of the most exciting capabilities of MotionCLIP{} is generating motion given text that doesn't explicitly describe motion. This includes obvious linguistic connections, such as the act of sitting down, produced from the input text "couch". Other, more surprising examples include mimicking the signature moves of famous real and fictional figures, like \emph{Usain Bolt} and \emph{The Karate Kid}, and other cultural references like the famous ballet performance of \emph{Swan Lake} and the \emph{YMCA} dance (Figures~\ref{fig:teaser} and ~\ref{fig:abstract_language}). These results include motions definitely not seen during training (e.g., Spiderman in Figure~\ref{fig:teaser}), which strongly indicates how well the motion manifold is aligned to CLIP space.
\subsection{Motion Manifold Applications}
\label{sec:manifold_applications}
It is already well established that the CLIP space is smooth and expressive. We demonstrate that these merits carry over to the aligned motion manifold through the following experiments.
\textbf{Interpolation}
As can be seen in Figure~\ref{fig:interp}, linear interpolation between two latent codes yields semantic transitions between motions in both time and space. This is a strong indication of the smoothness of this representation.
Here, the source and target motions (top and bottom respectively) were sampled from the validation set, and between them are three transitions evenly sampled from the linear trajectory between the two motion representations, then decoded by MotionCLIP{}.
\textbf{Latent-Based Editing}
To demonstrate how disentangled and uniform MotionCLIP{} latent space is, we experiment with latent-space arithmetic to edit motion (see Figure~\ref{fig:edit}). As can be seen, these linear operations allow motion compositionality - the upper body action can be decomposed from the lower body one, and recomposed with another lower body performance. In addition, style can be added by simply adding the vector of the style name embedding. These two properties potentially enable intuitive and semantic editing even for novice users.
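The arithmetic behind these edits can be sketched in a few lines of Python; \texttt{E}, \texttt{D} and \texttt{encode\_text} below are assumed handles for our encoder, our decoder and the frozen CLIP text encoder, and the exact mixing coefficients are illustrative rather than the precise recipe used for the figure:
\begin{verbatim}
# E, D: trained MotionCLIP encoder/decoder; encode_text: frozen CLIP
# text encoder. Handles and mixing below are illustrative assumptions.
z_mix = 0.5 * (E(p_upper_action) + E(p_lower_action))
composed = D(z_mix)                      # upper/lower-body composition

z_styled = E(p_content) + encode_text("proud")
styled = D(z_styled)                     # same action, new style
\end{verbatim}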
\begin {table}[h]
\centering
\include{tables/user_study_action}
\vspace{5pt}
\caption{Action generation from text - user study. \emph{pref} is the preference score of each model (when compared side-by-side). \emph{Seen in train} notes whether or not the samples are taken from a distribution seen by each model during training.
MotionCLIP{} is clearly preferred by the users.
}
\label{table:actions}
\end {table}
\begin {table}[h]
\centering
\include{tables/user_study_style}
\vspace{5pt}
\caption{Style generation - user study (preference score side-by-side). We compare our style + action generation from text, to those of Aberman et al.~\scite{aberman2020unpaired} which gets style and content motions as input. Interestingly, although not
trained to generate style, our model wins twice and breaks even once.}
\label{table:style}
\end {table}
\begin {table}[ht]
\centering
\include{tables/action_recognition}
\vspace{5pt}
\caption{Action Recognition. Using MotionCLIP{} together with the CLIP text encoder for classification yields performance close to the dedicated 2s-AGCN~\cite{shi2019two} architecture on the BABEL-60 benchmark.}
\label{table:action_recognition}
\end {table}
\textbf{Action Recognition}
Finally, we further demonstrate how well our latent space is semantically structured. We show how, combined with the CLIP text encoder, the MotionCLIP{} encoder can be used for action recognition. We follow the BABEL $60$-class benchmark and train the model with BABEL class names instead of the raw text. At inference, we measure the cosine distance of a given motion sequence to all $60$ class name encodings and apply softmax, as suggested originally for image classification~\cite{radford2021learning}. In Table~\ref{table:action_recognition}, we compare the Top-1 and Top-5 accuracy of the MotionCLIP{} classifier to the 2s-AGCN classifier~\cite{shi2019two}, as reported by Punnakkal~et al.~\scite{BABEL:CVPR:2021}. As can be seen, this is another example where our framework performs similarly to dedicated state-of-the-art methods, even though MotionCLIP{} was not designed for it.
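This zero-shot classification rule amounts to a few lines of code; in the sketch below, \texttt{class\_names}, \texttt{E} and \texttt{motion} are assumed handles for the 60 BABEL class names, our trained encoder and an input sequence:
\begin{verbatim}
import torch, clip

model, _ = clip.load("ViT-B/32")
with torch.no_grad():
    z_cls = model.encode_text(clip.tokenize(class_names)).float()
z_cls = z_cls / z_cls.norm(dim=-1, keepdim=True)

z_p = E(motion)                          # MotionCLIP encoder (assumed)
z_p = z_p / z_p.norm(dim=-1, keepdim=True)
probs = (z_p @ z_cls.T).softmax(dim=-1)  # cosine similarity + softmax
pred = probs.argmax(dim=-1)
\end{verbatim}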
\section{Introduction}
Consider a \emph{Jordan curve} $\gamma$, that is, a simple, closed curve in the plane. We will denote by $\Int \gamma$ and $\Ext \gamma$, respectively, the interior and exterior of $\gamma$.
We say that $\gamma$ has \emph{bounded convex curvature} if for every point $x$ on $\gamma$, there is an open unit disk $U_x$ and $\varepsilon_x>0$ such that
\begin{align}
x\in\partial U_x\quad\text{and}\quad \ball{x}{\varepsilon_x}\cap U_x\subset\Int\gamma. \label{bccCond}
\end{align}
Here $\ball{x}{\varepsilon}$ is the open disk with center $x$ and radius $\varepsilon$.
Similarly, we say that $\gamma$ has \emph{bounded concave curvature} if for every point $x$ on $\gamma$, there is an open unit disk $V_x$ and $\varepsilon_x>0$ such that
\begin{align}
x\in\partial V_x\quad\text{and}\quad \ball{x}{\varepsilon_x}\cap V_x\subset\Ext\gamma. \label{bccCond2}
\end{align}
Finally we say that a curve has \emph{bounded curvature} if it has both bounded convex and concave curvature. Curves of bounded convex curvature are the focus of this article.
When we say that $\gamma$ is a curve of bounded convex curvature it will always be understood that $\gamma$ is a Jordan curve.
Figure~\ref{bccFig} shows an example of a curve of bounded convex curvature.
Note that there may be points on a curve of bounded convex (or concave) curvature where the tangent to the curve is not defined.
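To build intuition, observe that any circle $\gamma$ of radius $R\geq 1$ has bounded convex curvature: for each point $x$ on $\gamma$, one may take $U_x$ to be the open unit disk internally tangent to $\gamma$ at $x$, which is contained in $\Int\gamma$, so condition~\eqref{bccCond} holds for every $\varepsilon_x>0$. A circle of radius $R<1$ does not have bounded convex curvature, since near any point $x$ every open unit disk with $x$ on its boundary is locally flatter than $\gamma$ and thus leaves $\Int\gamma$.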
Our main goal is to prove the following theorem (generalizing a theorem by Pestov and Ionin~\cite{pestov1959largest} that we shall discuss later):
\begin{theorem}\label{MAINTHM}
The interior of any curve of bounded convex curvature contains an open unit disk.
\end{theorem}
The theorem does not hold if we replace the word ``convex'' with ``concave'' --- any circle of radius smaller than 1 provides a counterexample.
An appealing property of curves of bounded convex curvature is that they can be composed as described in the following observation (also see Figure~\ref{fig:union}).
\begin{figure}
\centering
\includegraphics{unionProp.pdf}
\caption{Illustration for Observation~\ref{obs:union}.
The fat curve is the composition $\gamma_3$ of $\gamma_1$ (black) and $\gamma_2$ (gray).}
\label{fig:union}
\end{figure}
\begin{observation}\label{obs:union}
Let $\gamma_1$ and $\gamma_2$ be two curves of bounded convex curvature.
Consider the unbounded connected component $R$ of $\Ext\gamma_1\cap\Ext\gamma_2$.
If the boundary $\partial R$ is a Jordan curve $\gamma_3$, then $\gamma_3$ has bounded convex curvature.
\end{observation}
Note that this result does not hold for curves of bounded curvature.
Indeed the Jordan curves $\gamma_1$ and $\gamma_2$ in Figure~\ref{fig:union} both have bounded curvature, whereas their composition $\gamma_3$ only has bounded convex curvature.
In Section~\ref{sec:app} we will explain how curves of bounded convex curvature naturally arise in problems related to computer-aided manufacturing, but first we discuss related work.
\subsection{Related work}\label{relatedWork}
All previously studied notions of bounded curvature are more restrictive, and moreover defined in terms of a parameterization of the curve, contrary to our notion of bounded convex curvature.
The curvature is often defined for curves $\gamma$ that are two times continuously differentiable and parameterized by arclength.
Then the (unsigned) curvature at $s$ is simply $\|\gamma''(s)\|$, and a curve $\gamma$ is defined to have bounded curvature if $\|\gamma''(s)\|\leq 1$ for all $s$.
We say that such curves have \emph{strongly bounded curvature} in order to avoid confusion with the curves of bounded curvature introduced in this article.
Pestov and Ionin~\cite{pestov1959largest} proved that the interior of every curve of strongly bounded curvature contains an open unit disk.
We denote this theorem as the \emph{Pestov--Ionin theorem}.
The Pestov--Ionin theorem has often been applied to problems in robot motion planning and related fields~\cite{abrahamsen2016finding,agarwal2002curvature,ahn2012reachability,ayala2015length,lazard1998complexity}.
In Section~\ref{sec:app}, we describe how curves of bounded convex curvature naturally arise in problems related to pocket machining.
Dubins~\cite{dubins1957curves} introduced the class of curves of \emph{bounded average curvature} as the curves $\gamma$ parameterized by arclength that are differentiable such that for all $s_1,s_2$, we have
\begin{align}\label{bacCond}
\|\gamma'(s_1)-\gamma'(s_2)\|\leq |s_1-s_2|.
\end{align}
For a curve $\gamma$ of bounded average curvature, the second derivative $\gamma''$ is not necessarily defined everywhere, but since $\gamma'$ satisfies the Lipschitz condition~\eqref{bacCond}, it follows that $\gamma''$ is defined almost everywhere.
Dubins mentioned that if $\gamma$ is a curve parameterized by arclength for which $\gamma''$ exists everywhere, then $\gamma$ has bounded average curvature if and only
if $\gamma$ has strongly bounded curvature.
Ahn et al.~\cite{ahn2012reachability} proved that the Pestov--Ionin theorem holds for curves of bounded average curvature, and their proof is analogous to that of Pestov and Ionin.
In particular, both proofs rely on the curve $\gamma$ being rectifiable, i.e., having finite length.
However, it is not at all clear from our more general definition that a curve $\gamma$ of bounded convex curvature is rectifiable, so that approach cannot easily be applied in our case.
Instead, our proof shows that if $\Int \gamma$ contains no unit disk, then there exists an $\alpha>0$ such that $\Int \gamma$ contains infinitely many pairwise disjoint disks of radius $\alpha$. As $\gamma$ is bounded, this is of course a contradiction.
Pankrashkin~\cite{Pankrashkin2015} gave a proof that the interior of a smooth Jordan curve of strongly bounded curvature has area at least $\pi$.
This of course follows from the Pestov--Ionin theorem, but Pankrashkin proved it by other means.
Note that the requirement on the curvature of curves of strongly bounded and bounded average curvature is completely symmetric with respect to the curve turning to the left and right when traversed in positive direction.
In contrast to that, Howard and Treibergs~\cite{howard1995reverse} introduced a class $\KKn^+$ of curves satisfying an asymmetric condition on the curvature, namely the curves $\gamma$ parameterized by arclength such that $\gamma'$ is absolutely continuous and
$$\langle \gamma'(s+h)-\gamma'(s),\mathbf n(s)\rangle\leq h$$
for all $s$ and $0<h<\pi$, where $\langle\cdot, \cdot\rangle$ is the dot-product and $\mathbf n(s)=\gamma'(s)^\bot$ is the unit normal.
They proved the Pestov--Ionin theorem for the Jordan curves in $\KKn^+$.
Abrahamsen and Thorup~\cite{abrahamsen2016finding} introduced a class of Jordan curves related to $\KKn^+$, but where the curves may have sharp concave corners without a well-defined tangent.
They gave a proof of a version of the Pestov--Ionin theorem for that class of curves.
It can be shown that each of the classes of Jordan curves mentioned here are subsets of the curves of bounded convex curvature.
It is therefore natural to investigate whether the Pestov--Ionin theorem holds for all curves of bounded convex curvature, which is exactly the statement of Theorem~\ref{MAINTHM}.
\section{Application to Pocket Machining}\label{sec:app}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{mill.png}\quad
\includegraphics[width=0.35\textwidth]{Face_Mill_Index_01.png}
\caption{Left: A milling machine.
The model is the Rabbit Mill v3.0 from SourceRabbit, who kindly provided permission to use the picture.
\copyright\ SourceRabbit.
Right: A milling tool. Picture by Rocketmagnet, licensed under CC BY-SA 3.0.}
\label{millFig}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{cuttingCloseups2.pdf}
\caption{In each of these four situations, the thick black curve is the boundary $\partial S$ of the pocket.
The remaining material in the pocket is ensured to be between the dashed black curve and $\partial S$.
The boundary of the tool $\mathcal D$ is the dashed circle, and the solid part of the circle between the two crosses is the maximum part that can be in engagement with the material, i.e., the largest possible portion of the tool boundary cutting away material.
In the third picture, the convex corner on the path in the second picture has been rounded by an arc, thus bounding the convex curvature and reducing the maximum engagement.
The two rightmost pictures show two ways of going around a concave corner of $\partial S$.
In both cases, the maximum engagement is smaller than when the tool
follows a line segment of $\partial S$ (the case of the first picture).}
\label{cuttingCloseups}
\end{figure}
In this section we explain why it is sometimes natural to restrict oneself to curves of bounded convex curvature when choosing toolpaths for pocket machining.
Pocket machining is the process of cutting out a pocket of some specified shape in a piece of material, such as a block of metal or wood, using a milling machine; see Figure~\ref{millFig} (left).
We are given a compact region $S$ of the plane whose boundary $\partial S$ is a Jordan curve.
The task is to remove the material in $S$ using a milling machine.
Suppose that we have already removed all material in $S$ except for a thin layer close to the boundary $\partial S$ (another coarser tool has removed most of the material, but is not fine enough to do the boundary itself).
In order to remove the remaining material, we are using a tool $\mathcal D$, which can be thought of as a disk of some radius $r$, and we have to specify the toolpath.
The toolpath is a curve that the center of $\mathcal D$ should follow, and the material removed is the area swept over by $\mathcal D$ as it does so.
In practice, the tool has sharp teeth that cuts away the material as the tool spins at high speed; see Figure~\ref{millFig} (right).
The maximum thickness of the layer of remaining material is some fraction of the tool radius $r$, carefully chosen in order to limit the load on the tool.
It is an advantage if the tool center moves with constant speed while the tool is removing material, since that gives a higher surface quality of the resulting part.
Since the tool moves at constant speed, the load on the tool is heavier in a neighborhood around a convex turn than when it follows a straight line, since it has to remove more material per unit time.
In contrast to this, the load is lighter in a neighborhood around a concave turn.
See Figure~\ref{cuttingCloseups} for an illustration of this.
If the load is too heavy, the accuracy and surface quality will be inferior, and the tool can even break~\cite{han2015precise}.
It has been recommended to round the convex corners of the toolpath by circular arcs of a certain radius in order to decrease the load~\cite{choy2003corner,pateloup2004corner}.
In our terminology, this is the same as requiring the toolpath to have bounded convex curvature.
By restricting the convex curvature, we will inevitably leave more material that cannot be removed by the tool.
This can be removed by other tools that are more expensive to use in terms of machining time.
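To make the corner-rounding step concrete, the following minimal sketch (ours, not taken from~\cite{choy2003corner,pateloup2004corner}; all names and the example geometry are illustrative) computes the standard fillet of a convex toolpath corner: the tangent points and center of an arc of radius $r$ tangent to both edges, so that the convex curvature of the modified path is bounded by $1/r$.
\begin{verbatim}
import numpy as np

def fillet_convex_corner(p_prev, v, p_next, r=1.0):
    # Replace the convex corner at v of the path p_prev -> v -> p_next by a
    # circular arc of radius r tangent to both edges.
    u1 = (v - p_prev) / np.linalg.norm(v - p_prev)   # direction into the corner
    u2 = (p_next - v) / np.linalg.norm(p_next - v)   # direction out of the corner
    alpha = np.arccos(np.clip(np.dot(-u1, u2), -1.0, 1.0))  # interior angle at v
    t = r / np.tan(alpha / 2.0)        # distance from v to each tangent point
    a = v - t * u1                     # arc start, on the incoming edge
    b = v + t * u2                     # arc end, on the outgoing edge
    bis = (u2 - u1) / np.linalg.norm(u2 - u1)        # interior bisector at v
    c = v + (r / np.sin(alpha / 2.0)) * bis          # center, at distance r from both edges
    return a, c, b

# A 90-degree corner rounded by a unit-radius arc:
a, c, b = fillet_convex_corner(np.array([0., -3.]), np.array([0., 0.]),
                               np.array([3., 0.]), r=1.0)
print(a, c, b)   # (0, -1), (1, -1), (1, 0)
\end{verbatim}
The material that the rounded path leaves uncut is exactly the region between the arc and the original corner, which is the trade-off described above.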
If the toolpath consists of all points at distance $r$ to $\partial S$, the concave curvature will be bounded by $1/r$, since the tool center will be ``rolling'' around any concave corner $v$ of $\partial S$ using a circular arc $A$ of radius $r$ (as in the fourth picture in Figure~\ref{cuttingCloseups}).
However, a recommended alternative way to get around $v$ is to follow the tangents to the endpoints of $A$ (as in the fifth picture in Figure~\ref{cuttingCloseups}---note that the tool will not remove any material when the center is in a neighborhood around the intersection point of the tangents).
Experience shows that this results in the corner $v$ being cut much sharper and more precisely~\cite{park2003mitered}.
This shows that the toolpaths arising in this context are required to have bounded convex curvature, whereas no bound can be given on the concave curvature.
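The tangent-extension at a concave corner admits an equally short sketch (again ours, with illustrative geometry): each of the two toolpath lines is offset a distance $r$ from its boundary edge, and the lines are extended to their intersection instead of being joined by the arc $A$.
\begin{verbatim}
import numpy as np

def miter_point(v, d1, n1, d2, n2, r):
    # Intersection of the two toolpath lines obtained by offsetting the edges
    # through v (unit directions d1, d2) a distance r along their inward unit
    # normals n1, n2.  Solve  v + r*n1 + s*d1 == v + r*n2 + t*d2  for s, t.
    A = np.column_stack((d1, -d2))
    s, _ = np.linalg.solve(A, r * (n2 - n1))
    return v + r * n1 + s * d1

# Concave corner of dS at the origin: the remaining material occupies the
# quadrant x >= 0, y <= 0, so the boundary edges run along +x and -y.
v = np.array([0., 0.])
d1, n1 = np.array([1., 0.]), np.array([0., 1.])    # edge along +x, pocket above
d2, n2 = np.array([0., 1.]), np.array([-1., 0.])   # edge along -y, pocket to the left
print(miter_point(v, d1, n1, d2, n2, r=1.0))       # [-1.  1.]
\end{verbatim}
Near the miter point the tool removes no material, but, as noted above, the corner $v$ itself is cut more precisely than with the rolling arc.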
Abrahamsen and Thorup~\cite{abrahamsen2016finding} studied the computational problem of computing the maximum region with a boundary of bounded convex curvature inside a given region in the plane, which defines the maximum region that can be cleared by the tool using a toolpath of bounded convex curvature.
A version of the Pestov--Ionin theorem (mentioned in the introduction) was used to establish the maximality of the region returned by the algorithm described in the article.
\section{Proving Theorem~\ref{MAINTHM}}
The proof of Theorem~\ref{MAINTHM} is by contradiction.
We assume that $\gamma$ is a curve of bounded convex curvature with an interior containing no open unit disk.
We then show that there exists an $\alpha>0$ such that $\Int \gamma$ contains infinitely many pairwise disjoint disks of radius $\alpha$.
As $\Int \gamma$ is bounded, this is a contradiction.
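For concreteness, the counting behind this contradiction is a simple area comparison: if $\Int\gamma\subset\ball{0}{R}$ for some $R>0$ and $\Int \gamma$ contains $N$ pairwise disjoint disks of radius $\alpha$, then
\begin{align*}
N\pi\alpha^2\leq \operatorname{area}(\Int\gamma)\leq \pi R^2, \qquad\text{so}\qquad N\leq (R/\alpha)^2,
\end{align*}
and no infinite family of such disks can exist.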
To construct these disks we need to prove a special property of curves of bounded convex curvature, namely that the radii of disks $D\subset \Int \gamma$ having $|\gamma\cap\partial D|\geq 2$ are lower bounded by some constant $\eta>0$ depending only on $\gamma$.
Our first step is to set up an alternative condition that guarantees that $\gamma$ \emph{does not} have bounded convex curvature, as stated in Lemma~\ref{diamLemma} below.
We start with the following lemma; see Figure~\ref{fig:suffNotCond}.
\begin{figure}
\centering
\includegraphics{lemma1fig.pdf}
\caption{The situation described in Lemma~\ref{suffNotCond}. The curve $\gamma$ does not have bounded convex curvature.}
\label{fig:suffNotCond}
\end{figure}
\begin{lemma}\label{suffNotCond}
Let $\gamma$ be a Jordan curve and consider a point $x$ on $\gamma$.
If there exists an open unit disk $D$ where $x\in\partial D$ such that
\begin{enumerate}
\item
there exists $ \varepsilon>0$ such that $\ball{x}{\varepsilon}\cap\Int\gamma\subset D$, and
\label{suffNotCond1}
\item
for all $\eta>0$ we have $\gamma\cap\ball{x}{\eta}\cap D\neq\emptyset$,
\label{suffNotCond2}
\end{enumerate}
then $\gamma$ does not have bounded convex curvature.
\end{lemma}
\begin{proof}
Assume for contradiction that $\gamma$ has bounded convex curvature, and choose $\varepsilon_x>0$ and $U_x$ such that condition~\eqref{bccCond} in the definition of bounded convex curvature is satisfied for $x$.
We must show that for any unit disk $D$ with $x \in \partial D$, either condition~\ref{suffNotCond1} or~\ref{suffNotCond2} of Lemma~\ref{suffNotCond} fails.
Let $D$ be such a unit disk and suppose $\varepsilon>0$ is such that condition~\ref{suffNotCond1} of the lemma is satisfied.
Let $\eta=\min\{\varepsilon_x,\varepsilon\}$.
Then,
$$\ball{x}{\eta}\cap U_x\subset
\ball{x}{\eta}\cap\Int\gamma \subset D.$$
This implies that $U_x=D$: Indeed, $U_x$ and $D$ are two unit disks with $x$ on the boundary, so if $U_x\neq D$, then $U_x\setminus D$ would contain points arbitrarily close to $x$, contradicting the inclusion $\ball{x}{\eta}\cap U_x\subset D$ just established.
Now $\ball{x}{\eta}\cap U_x \subset \Int \gamma$, so $\gamma \cap \ball{x}{\eta}\cap U_x =\emptyset$. As $U_x=D$, condition~\ref{suffNotCond2} is not satisfied. This completes the proof.
\end{proof}
For any Jordan curve $\gamma$ and two distinct points $a$ and $b$ on $\gamma$, we denote by $\gamma[a,b]$ the closed interval on $\gamma$ from $a$ to $b$ in the positive direction.
We may for example apply this notation to the boundary curve $\partial D$ for a disk $D$.
By $\gamma(a,b)$, we denote the open interval $\gamma[a,b]\setminus\{a,b\}$.
While it might be intuitively clear what it means to traverse $\gamma$ in the positive or negative direction, we give a precise definition in Appendix~\ref{sec:appen}.
We require the following lemma which, phrased informally, states that if $\gamma$ is traversed positively, the interior of $\gamma$ is ``to the left'' of the curve.
\begin{lemma}\label{leftrightlemma}
Let $p$ be a point on a Jordan curve $\gamma$, and let $U$ be an open disk with center $p$, sufficiently small so that $\gamma$ is not contained in $U$.
The intersection of $U$ and $\gamma$ is a collection of intervals of $\gamma$ of which one, say $\gamma(a,b)$, contains $p$. Consider the Jordan curves
$$
\alpha^+=\gamma[a,b]\cup \partial U[b,a] \quad \text{and} \quad \alpha^-=\gamma[a,b]\cup \partial U[a,b].
$$
Then $\Int \gamma$ and $\Int \alpha^+$ coincide near $p$, that is, there exists a small disk $V\subset U$ centered at $p$ such that $\Int \gamma \cap V=\Int \alpha^+ \cap V$. Similarly $\Ext \gamma$ and $\Int \alpha^-$ coincide near $p$.
\end{lemma}
We believe the result to be standard, but we were unable to find an equivalent one in the literature phrased for completely arbitrary Jordan curves, i.e., with no assumptions on the differentiability of the curve.
We will provide a proof in Appendix~\ref{sec:appen}.
Suppose $\gamma$ is a Jordan curve, $a,b$ are distinct points on $\gamma$, and $D$ is an open disk satisfying $a,b\in\partial D$ and $\gamma[a,b]\cap\overline D=\{a,b\}$.
If $R$ is the open region bounded by the Jordan curve $\gamma[a,b]\cup \partial D[a,b]$, then either $D\subset R$ or $D\cap R=\emptyset$.
In the former case we say that $\gamma$ \emph{winds negatively} around $D$ from $a$ to $b$ and in the latter that $\gamma$ \emph{winds positively} around $D$ from $a$ to $b$.
\begin{figure}
\centering
\includegraphics[scale=1]{NewDiamlemma.pdf}
\caption{The two cases in the proof of Lemma~\ref{diamLemma}.}
\label{fig:diamLemma}
\end{figure}
\begin{lemma}\label{diamLemma}
Let $\gamma$ be a Jordan curve and consider an interval $\gamma[a,b]$ of $\gamma$ such that $\gamma[a,b]$ is contained in an open unit disk $D$.
Suppose there is an open disk $D_r$ of radius $r\leq 1$ such that $\gamma[a,b]\cap\overline {D_r}=\{a,b\}$ and $\gamma$ winds positively around $D_r$ from $a$ to $b$.
Then $\gamma$ does not have bounded convex curvature.
\end{lemma}
\begin{proof}
The general outline of the proof is as follows: We first make a translation of $D$ into a disk $D''$ such that $\overline{D''}$ still contains $\gamma[a,b]$ and such that $\partial D''$ meets $\gamma(a,b)$ in at least one point. We then argue that we may choose a point $q\in \partial D'' \cap \gamma(a,b)$ for which Lemma~\ref{suffNotCond} applies to show that $\gamma$ does not have bounded convex curvature.
By translating and rotating, we may choose coordinates so that $D_r$ is centered at the origin and that $a=(s_0,t_0)$ and $b=(-s_0,t_0)$ for some $s_0,t_0$ with $s_0>0$ and $s_0^2+t_0^2=r^2$ (note that $t_0$ may be negative).
Suppose that $D$ is centered at $(s_1,t_1)$ and that $s_1\geq 0$ (the case $s_1\leq 0$ is dealt with in a symmetric way).
Also define the Jordan curve $\gamma'=\gamma[a,b]\cup \partial D_r[a,b]$.
We start the proof by showing the following two claims.
\begin{claim}\label{claim1}
Let $p$ be a point on the arc $\partial D_r(a,b)$. Let $m$ be the midpoint of segment $ab$ and $v=p-m$. Consider the ray $\ell_p=\{p+\alpha v: \alpha>0\}$. Then $\ell_p$ intersects $\gamma(a,b)$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim1}]
Let $V$ be an open disk centered at $p$, so small that $V\cap \gamma[a,b]=\emptyset$.
Further let $c,d\in \partial D_r$ be such that $V\cap \partial D_r=\partial D_r(c,d)$.
Then $V \setminus \partial D_r(c,d)$ is the disjoint union of two open connected sets $V_1$ and $V_2$ satisfying $V_1\subset D_r$ and $V_2\cap \ell_p \neq \emptyset$.
Moreover, $V_1$ and $V_2$ are both subsets of $\mathbb R^2\setminus \gamma'$ and, being connected, they are each fully contained in either $\Int \gamma'$ or $\Ext \gamma'$.
Now as $p\in \gamma'$ and $\gamma'=\partial (\Int \gamma')$ by the Jordan curve theorem, it follows that either $V_1 \subset \Int \gamma'$ or $V_2\subset \Int \gamma'$.
But by the assumption on the winding direction of $\gamma$ from $a$ to $b$, we have $V_1 \cap \Int \gamma' \subset D_r \cap \Int \gamma'=\emptyset$, and so $V_2\subset \Int \gamma'$.
It follows that $\ell_p \cap \Int \gamma' \neq \emptyset$.
Furthermore, we trivially have that $\ell_p \cap \Ext \gamma'\neq \emptyset$ and so $\ell_p$ must intersect $\gamma'$.
This cannot happen at a point of $D_r[a,b]$ so $\ell_p$ must intersect $\gamma(a,b)$ as claimed.
\end{proof}
\begin{claim}\label{claim2}
Let $D_0$ be an open disk such that $\gamma[a,b]\subset \overline{D_0}$. Then $\gamma'\subset \overline{D_0}$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim2}]
It clearly suffices to show that $\overline{D_0}$ contains $\partial D_r(a,b)$. Take any point $p\in \partial D_r(a,b)$ and consider the ray $\ell_p=\{p+\alpha v: \alpha>0\}$ from Claim~\ref{claim1} that intersects $\gamma(a,b)$ in some point $p+\alpha_0 v$ where $\alpha_0>0$.
Now by assumption $\overline{D_0}$ contains $\gamma[a,b]$, hence also $p+\alpha_0v$. Since $a,b\in \overline{D_0}$, and $\overline{D_0}$ is convex, $\overline{D_0}$ contains the midpoint $m$ of segment $ab$. Finally $p$ is on the line segment between $m$ and $p+\alpha_0v$ so by convexity $\overline{D_0}$ contains $p$. Since $p$ was arbitrary, this establishes the claim.
\end{proof}
We now let $D'$ be the disk $\ball{(0,t_1)}{1}$. We split the proof into two cases depicted in Figure~\ref{fig:diamLemma}.
\textbf{Case 1: $\gamma[a,b]\subset\overline{D'}$.}
In this case, we let $t''\in \mathbb R$ be minimal such that the closure of the unit disk $D''=\ball{(0,t'')}{1}$ contains $\gamma[a,b]$.
Consider the set of intersection points $P=\gamma[a,b]\cap\partial D''$, which is nonempty by construction.
We claim that $P$ contains neither $a$ nor $b$. To see this, note that the ray $\ell=\{(0,t):t> r\}$ intersects $\gamma(a,b)$ by Claim~\ref{claim1}. If $\partial D''$ contained $a$ (and thus, by symmetry, $b$) then, as $r\leq 1$, we would have $\ell\cap \overline{D''}=\emptyset$ and hence $\ell\cap \gamma(a,b)=\emptyset$, a contradiction. This proves the claim.
The set $\gamma(a,b)\setminus P$ is nonempty as $a,b\notin\partial D''$, and consists of pairwise disjoint open arcs.
Let $q\in P$ be an endpoint of such an arc.
We will now show that $q$ and $D''$ satisfy the conditions of Lemma~\ref{suffNotCond}, from which it follows that $\gamma$ does not have bounded convex curvature.
That $\gamma \cap \ball{q}{\varepsilon} \cap D''\neq \emptyset$ for all $\varepsilon>0$ is immediate as $q$ is an endpoint of one of the open arcs in $\gamma(a,b) \setminus P$.
It thus suffices to check condition~\ref{suffNotCond1}, as follows.
First note that either $\partial D_r[a,b]=\gamma'[a,b]$ or $\partial D_r[a,b]=\gamma'[b,a]$.
However, if $\partial D_r[a,b]=\gamma'[a,b]$, we could apply Lemma~\ref{leftrightlemma} to $\gamma'$ with $p=(0,r)$ to conclude that $D_r\subset \Int \gamma'$.
But we assumed that $D_r\cap \Int \gamma'=\emptyset$ and so it follows that $\partial D_r[a,b]=\gamma'[b,a]$.
Since $\gamma'=\partial D_r[a,b]\cup \gamma[a,b]=\gamma'[a,b]\cup \gamma'[b,a]$, we conclude that $\gamma[a,b]=\gamma'[a,b]$.
Another application of Lemma~\ref{leftrightlemma}, this time with $p=q$, gives that $\Int \gamma$ and $\Int \gamma'$ coincide locally near $q$, that is, there exists an $\varepsilon>0$ such that $\Int \gamma \cap \ball{q}{\varepsilon}=\Int \gamma' \cap \ball{q}{\varepsilon}$.
Now $\overline{D''}$ contains $\gamma[a,b]$ and hence $\gamma'$ by Claim~\ref{claim2}. Thus, $\Int \gamma' \subset D''$ and it follows that $\Int \gamma \cap \ball{q}{\varepsilon}\subset D''$, as desired.
\textbf{Case 2: $\gamma[a,b]\not\subset\overline{D'}$.}
In this case, let $s''>0$ be minimal such that the closure of $D''=\ball{(s'',t_1)}{1}$ contains $\gamma[a,b]$.
As $s''>0$, $\partial D''$ contains neither $a$ nor $b$.
Letting $P=\gamma[a,b]\cap\partial D''$, the same argument as in Case~1 finishes the proof.
\end{proof}
We are now setting the stage for the proof of Theorem~\ref{MAINTHM}. Intuitively, the following lemma is unsurprising. The lemma will be helpful for checking one of the conditions of Lemma~\ref{diamLemma}, hence making it easier to apply.
\begin{lemma}\label{positivelemma}
Let $\gamma$ be a Jordan curve and $D$ an open disk contained in $\Int \gamma$. Suppose that $a,b$ are distinct points on $\gamma$ such that $\gamma[a,b]\cap \overline{D}=\{a,b\}$. Then $\gamma$ winds positively around $D$ from $a$ to $b$, that is, $D\subset\Ext(\gamma[a,b]\cup \partial D[a,b])$. Similarly, $D\subset \Int(\gamma[a,b]\cup \partial D[b,a])$.
\end{lemma}
\begin{proof}
We only prove the first statement of the lemma, as the proof of the second is similar.
Letting $\gamma'=\gamma[a,b]\cup \partial D[a,b]$ we must show that $D\subset \Ext \gamma'$. As $D\subset \Int \gamma$ it must hold that $\Int \gamma' \subset \Int \gamma$. Now either $\gamma'[b,a]=\gamma[a,b]$ or $\gamma'[b,a]=\partial D[a,b]$. Suppose first that $\gamma'[b,a]=\gamma[a,b]$ and let $p$ be any point on $\gamma(a,b)$. Applying Lemma~\ref{leftrightlemma} we find that $\Int \gamma'$ and $\Ext \gamma$ coincide near $p$. This is a contradiction as $\Int \gamma'\subset \Int \gamma$. It follows that $\gamma'[b,a]=\partial D[a,b]$. Now choose any point $p\in \partial D(a,b)$. Again applying Lemma~\ref{leftrightlemma} we find that $\Ext \gamma'$ and $\Int \partial D$ coincide near $p$. As $D\subset \mathbb R^2\setminus \gamma'$, it immediately follows that $D\subset \Ext \gamma'$, as desired.
\end{proof}
Now we can prove that if $\gamma$ has bounded convex curvature, then certain maximal disks contained in $\Int \gamma$ cannot be too small.
\begin{lemma}\label{lowerLemma}
Let $\gamma$ be a curve of bounded convex curvature. There exists a constant $\eta>0$ with the following property: If $\ball{x}{r}\subset \Int \gamma$ is an open disk of radius $r$, and $\partial \ball{x}{r}$ meets $\gamma$ in at least two points, then $r\geq \eta$.
\end{lemma}
\begin{proof}
We show the contrapositive.
Suppose that no such $\eta$ exists and take a sequence of balls $\ball{x_n}{r_n}\subset\Int\gamma$ satisfying $|\gamma\cap\partial\ball{x_n}{r_n}|\geq 2$ for all $n$ and $\lim_{n\longrightarrow \infty }r_n=0$.
Discarding finitely many terms, we may further assume that $r_n<1$ for all $n$.
For each $n$, let $a_n,b_n $ be two distinct points in $\gamma\cap\partial\ball{x_n}{r_n}$.
Since $\gamma \times \gamma$ is compact, we may assume that $(a_n,b_n)\longrightarrow (a,b)$ for some $(a,b)\in \gamma \times \gamma$ by passing to an appropriate subsequence. As $r_n \longrightarrow 0$ we must have that $a=b$.
Let $V$ be an open ball centered at $a$ of radius $1/2$. Then $V\cap \gamma$ is a collection of open intervals one of which, say $\gamma(c,d)$, contains $a$. Let $W\subset V$ be an open ball centered at $a$ and so small that $W\cap \gamma[d,c]=\emptyset$.
As $a_n,b_n \longrightarrow a$ we must have that $a_n,b_n\in W$ for $n$ sufficiently large. But then $a_n,b_n\in \gamma(c,d)$, so either $\gamma[a_n,b_n]\subset \gamma(c,d)$ or $\gamma[b_n,a_n]\subset\gamma(c,d)$. In particular, either $\gamma[a_n,b_n]$ or $\gamma[b_n,a_n]$ is contained in an open unit disk.
We now wish to apply Lemma~\ref{diamLemma} to show that this implies that $\gamma$ does not have bounded convex curvature. Assume without loss of generality that $n$ is such that $\gamma[a_n,b_n]$ is contained in an open unit disk. Now $\gamma[a_n,b_n]\setminus\partial \ball{x_n}{r_n}$ is a collection of open intervals of $\gamma$. Moreover, the collection is nonempty as otherwise $\gamma[a_n,b_n]\subset \partial \ball{x_n}{r_n}$, and as $r_n<1$ and $\ball{x_n}{r_n}\subset \Int \gamma$, this would violate the bounded convex curvature condition. We may thus choose distinct $a_n',b_n'$ such that $\gamma(a_n',b_n')$ is such an interval.
Since $\gamma$ winds positively around $\ball{x_n}{r_n}$ from $a_n'$ to $b_n'$ by Lemma~\ref{positivelemma}, we are in a position to apply Lemma~\ref{diamLemma} and we conclude that $\gamma$ does not have bounded convex curvature.
\end{proof}
For the proof of Theorem~\ref{MAINTHM} we will also need the following easy lemma.
\begin{lemma}\label{upperLemma}
Let $\gamma$ be a Jordan curve.
Let $r_0$ be the supremum over all $r>0$ such that $\Int \gamma$ contains an open disk of radius $r$.
Then $\Int \gamma$ contains an open disk of radius $r_0$.
\end{lemma}
\begin{proof}
The proof is a standard compactness argument, using only that $\Int \gamma$ is a bounded open set. To be precise, let $f: \overline{\Int \gamma} \longrightarrow \mathbb{R}_{\geq 0}$ be defined by
\begin{align*}
f(x)=\sup \{r\geq 0: \ball{x}{r}\subset \Int \gamma\}.
\end{align*}
If we put $r':=f(x)$, then clearly $\ball{x}{r'}\subset \Int \gamma$, so we may in fact write $f(x)=\max \{r\geq 0: \ball{x}{r}\subset \Int \gamma\}$.
Now, $|f(x)-f(y)|\leq \|x-y\|$ for any $x,y\in \overline{\Int \gamma}$: indeed, if $\|x-y\|<f(x)$, then $\ball{y}{f(x)-\|x-y\|}\subset\ball{x}{f(x)}\subset\Int\gamma$, so $f(y)\geq f(x)-\|x-y\|$ (and this holds trivially when $\|x-y\|\geq f(x)$); exchanging $x$ and $y$ gives the inequality, and thus $f$ is continuous.
Furthermore, $\sup \{f(x): x\in \overline{\Int \gamma}\}=r_0$, and since $\overline{\Int \gamma}$ is compact, $f$ attains this maximum at some point $x_0$.
But then $\ball{x_0}{r_0}\subset \Int \gamma$.
\end{proof}
We are now ready to prove Theorem~\ref{MAINTHM}.
\begin{proof}[Proof of Theorem~\ref{MAINTHM}]
Let $\gamma$ be a curve of bounded convex curvature and assume for contradiction that $\Int \gamma$ contains no open unit disk.
By Lemma~\ref{lowerLemma} and Lemma~\ref{upperLemma}, we may choose $\eta_1$ and $\eta_2$ with $0<\eta_1\leq 1-\eta_2<1$, such that any disk $D\subset \Int \gamma$ with $|\gamma\cap\partial D|\geq 2$ satisfies $\eta_1\leq \rad D\leq 1-\eta_2$.
Let $z$ be any point of $\gamma$ and let the disk $D_0\subset \Int \gamma$ be tangent to $U_{z}$ at $z$ and of maximal radius.
We note that $\gamma\cap\partial D_0$ contains at least one point other than $z$: otherwise, $\textrm{dist}(\gamma\setminus \ball{z}{\varepsilon_{z}}, D_0)>0$, and we could enlarge $D_0$, contradicting its maximality.
Thus $\eta_1\leq \rad D_0\leq 1-\eta_2$.
The set $\gamma\setminus \partial D_{0}$ consists of some (at least two) open intervals of $\gamma$.
Let $x_0,y_0$ be distinct points on $\gamma$ such that $\gamma(x_0,y_0)$ is such an open interval.
In general for $n\geq 0$, we will recursively define distinct points $x_n,y_n\in \gamma$, and an open disk $D_n\subset \Int \gamma$ such that $\gamma[x_n,y_n]\cap \partial D_{n}=\{x_n,y_n\}$. Letting $A_n$ be the open region bounded by the Jordan curve $\gamma[x_n,y_n]\cup \partial D_n[x_n,y_n]$, the construction satisfies, for all $n\geq 0$, that
\begin{enumerate}[label=(\roman*)]
\item \label{step1} $A_{n+1}\subset A_n$, and
\item \label{step2} $A_n \setminus A_{n+1}$ contains an open disk $E_{n+1}$ of radius at least $\eta:=\min(\eta_1,\eta_2/2)$.
\end{enumerate}
The disks $(E_n)_{n>0}$ are pairwise disjoint and all contained in $\Int \gamma$, and moreover they have radius at least $\eta>0$. As $\Int \gamma$ is bounded, this gives the desired contradiction, thus completing the proof of the theorem.
\begin{figure}
\centering
\includegraphics{thmProof.pdf}
\caption{The construction in the proof of Theorem~\ref{MAINTHM}, where $\gamma$ is the black Jordan curve.
The region $A_{n+1}$ is bounded by the fat Jordan curve.
The small disk $E_{n+1}$ is contained in $A_n$ (not explicitly shown), but disjoint from $A_{n+1}$.}
\label{fig:thm}
\end{figure}
We have already constructed $x_0,y_0$ and $D_0$. We now describe the construction of $x_{n+1},y_{n+1}$, and $D_{n+1}$ given $x_n$, $y_n$, and $D_n$, and then argue that with this construction,~\ref{step1} and~\ref{step2} above are satisfied. Figure~\ref{fig:thm} illustrates the construction.
First of all, $\gamma[x_n,y_n]$ winds positively around $D_n$ from $x_n$ to $y_n$ by Lemma~\ref{positivelemma}, so we may apply Lemma~\ref{diamLemma} and conclude that no open unit disk contains $\gamma[x_n,y_n]$. In particular, this applies to the open unit disk having the same center as $D_n$, and as the radius of $D_n$ is at most $1-\eta_2$, there exists a point $z\in \gamma(x_n,y_n)$ with $\textrm{dist}(z,D_n)\geq \eta_2$.
Consider now the Jordan curve
$$
\gamma_1:=\gamma[x_n,y_n]\cup \partial D_n[y_n,x_n]
$$
which, by Lemma~\ref{positivelemma}, contains $D_n$. We let $D_{n+1}$ be the open disk of maximal radius contained in $\Int \gamma_1$ and tangent to $U_{z}$ at $z$.
By the same reasoning that we used to argue about $\partial D_0$ above, we must have that $\partial D_{n+1}$ meets $\gamma_1$ in at least two points.
None of these points can be in $\partial D_n(y_n,x_n)$ since this would imply that $D_{n+1}\subset D_n$ and hence that $z\in \overline{D_n}$, a contradiction. It follows that $|\gamma[x_n,y_n]\cap\partial D_{n+1}|\geq 2$. The set $\gamma\setminus \partial D_{n+1}$ is a collection of open intervals of $\gamma$, and since $|\gamma[x_n,y_n]\cap\partial D_{n+1}|\geq 2$, at least one of them, call it $\gamma(x_{n+1},y_{n+1})$, is contained in $\gamma(x_n,y_n)$. This completes the construction of $x_{n+1},y_{n+1}$, and $D_{n+1}$.
It remains to argue that with this construction, the conditions~\ref{step1} and~\ref{step2} are satisfied.
\begin{enumerate}[label=(\roman*)]
\item We make use of the following claim.
\begin{claim}\label{claimthm1}
We have that $\partial D_{n}[x_{n},y_{n}] \cap\partial D_{n+1}(x_{n+1},y_{n+1})=\emptyset$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claimthm1}]
Let the Jordan curve $\gamma_2$ be defined by
\begin{align*}
\gamma_2:=\gamma[x_{n+1},y_{n+1}]\cup \partial D_{n+1}[y_{n+1},x_{n+1}].
\end{align*}
By Lemma~\ref{positivelemma}, $D_{n+1}\subset\Int \gamma_2$, from which it follows that $\partial D_{n+1}(x_{n+1},y_{n+1})\subset \Int \gamma_2$.
Suppose for contradiction that $\partial D_{n}[x_{n},y_{n}]\cap\partial D_{n+1}(x_{n+1},y_{n+1})\neq \emptyset$. Since $x_n,y_n\notin \Int \gamma_2$, $\partial D_{n}[x_{n},y_{n}]$ must then intersect $\gamma_2$ at least twice.
But
$$
\partial D_{n}[x_{n},y_{n}]\cap \gamma(x_{n+1},y_{n+1}) \subset \partial D_{n}[x_{n},y_{n}]\cap \gamma(x_n,y_n)=\emptyset,
$$
so in fact $\partial D_{n}[x_{n},y_{n}]$ must intersect $\partial D_{n+1}[y_{n+1},x_{n+1}]$ at least twice.
It follows that $\partial D_{n}$ intersects $\partial D_{n+1}$ at least \emph{thrice}, which is a contradiction as $D_n\neq D_{n+1}$.
We conclude that $\partial D_{n}[x_{n},y_{n}] \cap \partial D_{n+1}(x_{n+1},y_{n+1})=\emptyset$, as desired.
\end{proof}
The arc $\partial D_n[x_n,y_n]$ separates $\Int \gamma_1$ into two regions, namely $D_n$ and $A_n$. The claim thus gives that either $\partial D_{n+1}(x_{n+1},y_{n+1})\subset D_n$ or $\partial D_{n+1}(x_{n+1},y_{n+1})\subset A_n$.
Now observe that $x_{n+1}\in \gamma(x_n,y_n)$ or $y_{n+1}\in \gamma(x_n,y_n)$: Indeed, $z\in \gamma(x_n,y_n)\cap \overline{D_{n+1}}$ but $\gamma(x_{n+1},y_{n+1})$ contains no point of $\overline{D_{n+1}}$.
In particular either $x_{n+1} \notin \overline{D_n}$ or $y_{n+1} \notin \overline{D_n}$ and it is therefore the case that $\partial D_{n+1}(x_{n+1},y_{n+1})\subset A_n$. Since now $\partial A_{n+1}\subset \overline{A_n}$, we get that $A_{n+1}\subset A_n$.
\item We define $E_{n+1}$ to be the disk of radius $\eta$, tangent to $U_{z}$ at $z$, and contained in $U_{z}$.
The radius of $E_{n+1}$ is at most $\eta_2/2$, and since $z\in \partial E_{n+1}$ has distance at least $\eta_2$ to $D_n$, it follows that $E_{n+1}\subset \Int \gamma_1 \setminus \overline{D_n}=A_n$.
Moreover, $E_{n+1}\cap A_{n+1}\subset D_{n+1}\cap A_{n+1}=\emptyset$, and we conclude that $E_{n+1}\subset A_n\setminus A_{n+1}$, as desired.
\end{enumerate}
Having argued that the conditions~\ref{step1} and~\ref{step2} are satisfied, the proof is complete.
\end{proof}
\section{Open problems}
We mention here two open problems that we find interesting.
\subsection{Are curves of bounded convex curvature rectifiable?}
As mentioned in the introduction, some earlier proofs of the Pestov--Ionin theorem have used that the length of $\gamma$ is finite.
In contrast, our proof relies on $\Int \gamma$ having finite area, which is an immediate property of Jordan domains.
It is, however, easy to verify that \emph{if} curves of bounded convex curvature are rectifiable, i.e., have finite length, then the proof given by Pestov and Ionin~\cite{pestov1959largest} would carry through almost unchanged.
We believe this to actually be the case.
Is there a (simple) proof that curves of bounded convex curvature are rectifiable?
\subsection{What is the picture in higher dimensions?}
The Jordan--Brouwer separation theorem states that if $\gamma$ is an $n$-dimensional topological sphere in $\mathbb R^{n+1}$, i.e., is obtained as the image of an injective continuous map $S^n\longrightarrow \mathbb R^{n+1}$, then the complement of $\gamma$ in $\mathbb R^{n+1}$ consists of exactly two connected components, one being bounded (the interior) and one being unbounded (the exterior).
It is easy to generalize our notion of bounded convex curvature to this setting.
We say that $\gamma$ has \emph{bounded convex curvature} if for every point $x$ on $\gamma$, there is an open $(n+1)$-dimensional unit ball $U_x$ and $\varepsilon_x>0$ such that
\begin{align}\label{bccCondGen}
x\in\partial U_x\quad\text{and}\quad \ball{x}{\varepsilon_x}\cap U_x\subset\Int\gamma.
\end{align}
The natural question is: If $\gamma$ has bounded convex curvature, does $\Int \gamma$ contain an open $(n+1)$-dimensional unit ball?
This turns out to be false.
Indeed, Lagunov and Fet~\cite{Lagunovsphere,Lagunovsphere2} studied connected $n$-dimensional $C^2$-hypersurfaces in $\mathbb R^{n+1}$ having all principal curvatures $|\kappa_i|\leq 1$.
They showed, for instance, that topological $n$-spheres with these properties all contain an $(n+1)$-ball in their interior of radius at least $r_0=\sqrt{3/2}-1 \cong 0.2247$, and that this is sharp when $n=2$.
Other relevant work is due to Lagunov~\cite{Lagunovsurf3,Lagunovsurf,Lagunovsurf2}, who showed that all compact, connected, $C^2$, $n$-dimensional hypersurfaces embedded in $\mathbb R^{n+1}$, for which all principal curvatures $\kappa_i$ satisfy $|\kappa_i|\leq 1$, contain a ball of radius $r_1=2/\sqrt{3}-1\cong 0.155$ in their interior, and that this is sharp.
As our class of hypersurfaces of bounded convex curvature is less restricted (there is no assumption on differentiability and we make no requirement that the concave curvature be bounded), it is natural to ask whether it still holds that
topological $n$-spheres of bounded convex curvature contain a ball of radius $r_0$ (or $r_1$ in the case of general compact, connected, $n$-dimensional hypersurfaces embedded in $\mathbb R^{n+1}$) in their interior. Even for $n=2$ we find this an interesting question.
\subsection*{Acknowledgments}
We thank Anders Thorup for his very careful reading of the manuscript and numerous suggestions for improving the presentation, in particular by pointing out steps in our proofs that seemed intuitively clear, but in fact required detailed arguments.
A big obstacle in our work has been that many of the relevant papers are written in Russian. We thank Richard Bishop for providing us copies of his English translations of~\cite{Lagunovsphere} and~\cite{Lagunovsphere2}.
We furthermore wish to thank the teams behind \href{http://www.i2ocr.com}{i2ocr.com} and \href{https://translate.google.com}{translate.google.com}, the first of which we used to convert the Cyrillic script in~\cite{pestov1959largest} to machine-encoded Cyrillic text and the second of which to translate the resulting text to English, together making it possible for us to understand the proof given by Pestov and Ionin.
\section{Acknowledgements}
This work was partially supported by the U.S. Department of Energy (DOE) Office of Science under contract number DE-AC02-05CH11231 and under grant number DE-SC0019066; by the U.S. National Science Foundation (NSF); by the U.K. Science \& Technology Facilities Council under award numbers ST/M003655/1, ST/M003981/1, ST/M003744/1, ST/M003639/1, ST/M003604/1, and ST/M003469/1; by the Portuguese Foundation for Science and Technology (FCT) under award number PTDC/FIS-PAR/28567/2017; and by the Institute for Basic Science, Korea (budget number IBS-R016-D1). University College London and Lawrence Berkeley National Laboratory thank the U.K. Royal Society for travel funds under the International Exchange Scheme (IE141517). We acknowledge additional support from the Boulby Underground Laboratory in the U.K.; the University of Wisconsin for grant UW PRJ82AJ; and the GridPP Collaboration, in particular at Imperial College London. This work was partially enabled by the University College London Cosmoparticle Initiative. Furthermore, this research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The University of Edinburgh is a charitable body, registered in Scotland, with the registration number SC005336. The research supporting this work took place in whole
or in part at the Sanford Underground Research Facility
(SURF) in Lead, South Dakota. The assistance of SURF
and its personnel in providing physical
access and general logistical and technical support is
acknowledged. SURF is a federally sponsored research facility under Award Number DE-SC0020216.
\section{Conclusion}
\label{sec:summary}
Considerable progress has been made towards
implementing the LZ conceptual and technical designs
described in Refs.~\cite{Mount:2017qzi,akerib:2015cja}.
The start of science operations is expected in
2020. The projected background rate enables a 1000~day exposure
of the 5.6~tonne fiducial mass, with a spin-independent
cross-section sensitivity
of 1.5$\times10^{-48}$~cm$^2$
(90\% C.L.) at 40 GeV/c$^2$. This will
probe a significant portion of the viable WIMP dark matter
parameter space.
LZ will also be sensitive to spin-dependent interactions,
through the odd neutron number isotopes $^{129}$Xe
and $^{131}$Xe (26.4\% and 21.2\% by mass, respectively).
For spin-dependent WIMP-neutron(-proton) scattering a sensitivity
of 2.7$\times10^{-43}$~cm$^2$ (8.1$\times10^{-42}$~cm$^2$) is expected
at 40~GeV/c$^2$.
\section{Overview}
\label{sec:overview}
In this article we describe the design and
assembly of the LUX-ZEPLIN (LZ) experiment, a
search for dark matter particles at the
Sanford Underground Research Facility (SURF) in Lead,
South Dakota, USA. LZ is capable of observing
low energy nuclear recoils, the
characteristic signature of the scattering
of WIMPs (Weakly Interacting Massive Particles).
It is hosted in the Davis Campus
water tank at SURF, formerly the home of the LUX
experiment~\cite{akerib:2012ys}. LZ features a
large liquid xenon (LXe) time projection
chamber (TPC), a well-established technology
for the direct detection of WIMP dark matter
for masses greater than a few GeV.
The detector design and experimental
strategy derive strongly from the
LUX and ZEPLIN--III experiments~\cite{Akerib:2016vxi,akimov:2011tj}.
A Conceptual Design Report and a Technical Design Report were completed in
2015 and 2017, respectively~\cite{Mount:2017qzi,akerib:2015cja}.
The projected cross-section
sensitivity of the experiment is $1.5\times
10^{-48}$ cm$^2$ for a 40 GeV/c$^2$ WIMP (90\%~C.L.)~\cite{Akerib:2018lyp}.
\begin{figure*}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.65\linewidth]{fig1.pdf}
\caption{Rendering of the LZ experiment, showing the major detector subsystems. At the center is the liquid xenon TPC (1), monitored by two arrays of PMTs and serviced by various cable and fluid conduits (upper and lower). The TPC is contained in a double-walled vacuum insulated titanium cryostat and surrounded on all sides by a GdLS Outer Detector (2). The cathode high voltage connection is made horizontally at the lower left (5). The GdLS is observed by
a suite of 8--inch PMTs (3) standing in the water (4) which provides shielding for the detector. The pitched conduit on the right (6) allows for neutron calibration sources to illuminate the detector.}
\label{fig:LZSolid}
\end{figure*}
A cutaway drawing of the experiment is shown in Fig.~\ref{fig:LZSolid}. The LZ TPC monitors 7 active tonnes (5.6 tonnes fiducial) of LXe above its cathode. Ionizing interactions in the active region create prompt and secondary scintillation signals (‘S1’ and ‘S2’), and these are observed as photo-electrons (PEs) by two arrays of photomultiplier tubes (PMTs). The nature of the interaction, whether electronic recoil (‘ER’) or nuclear recoil (‘NR’), is inferred from the energy partition between S1 and S2. The location of the event is measured from the drift time delay between S1 and S2 (z coordinate) and from the S2 spatial distribution (x and y coordinates). The TPC is housed in an inner cryostat vessel (ICV), with a layer of ‘skin’ LXe acting as a high voltage stand-off. The skin is separately instrumented with PMTs to veto gamma and neutron interactions in this region. The ICV is suspended inside the outer cryostat vessel (OCV), cooled by a set of LN thermosyphons, and thermally isolated by an insulating vacuum. Both the ICV and OCV are fabricated from low radioactivity titanium~\cite{Akerib:2017iwt}. The cryostat stands inside the Davis Campus water tank, which provides shielding from laboratory gammas and neutrons. An additional set of PMTs immersed in the water observe an Outer Detector (OD) comprised of acrylic vessels (AVs) surrounding the cryostat. The AVs contain organic liquid scintillator loaded with Gadolinium (GdLS) for efficient neutron and gamma tagging. The water tank and OD are penetrated by various TPC services, including vacuum insulated conduits for LXe circulation, instrumentation cabling, neutron calibration guide tubes, and the cathode high voltage (HV) connection.
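As an illustration of this reconstruction logic (a minimal sketch with placeholder constants; neither the code nor the numbers are LZ analysis tools or measured LZ parameters), the depth follows from the drift-time delay, $(x,y)$ from a crude centroid of the top-array S2 pattern, and the energy from the standard two-phase combined estimator $E = W(\mathrm{S1}/g_1 + \mathrm{S2}/g_2)$:
\begin{verbatim}
import numpy as np

DRIFT_MM_PER_US = 1.5   # illustrative electron drift speed in LXe at these fields
W_EV = 13.7             # xenon work function in the combined-energy model
G1, G2 = 0.12, 80.0     # assumed detected photons per S1 photon / per electron

def depth_mm(t_s1_us, t_s2_us):
    # Depth below the liquid surface from the S1-S2 time delay (z coordinate).
    return (t_s2_us - t_s1_us) * DRIFT_MM_PER_US

def xy_centroid(pmt_xy, s2_hits):
    # Crude (x, y): hit-weighted centroid of the top-array S2 light pattern.
    # (Real analyses fit a full light-response model instead.)
    w = np.asarray(s2_hits, float)
    return (w @ np.asarray(pmt_xy)) / w.sum()

def energy_kev(s1, s2):
    return W_EV * (s1 / G1 + s2 / G2) / 1000.0

print(depth_mm(0.0, 400.0))      # 400 us of drift -> 600 mm below the surface
print(energy_kev(30.0, 2400.0))  # ~3.8 keV for this illustrative (S1, S2) pair
\end{verbatim}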
One goal of the experimental architecture is to minimize the amount of underground fabrication and assembly of the various detector sub-systems. The LZ TPC is assembled and integrated into the ICV in a surface laboratory cleanroom at SURF, with the ICV outer diameter taking maximal advantage of the available space in the Yates shaft. The OCV and OD, being larger than the ICV, cannot be transported underground in monolithic form. Therefore the OCV is segmented into three flanged components and integrated and sealed in the Davis Campus water tank, while the OD is subdivided into ten hermetic AVs. This architecture does not require any underground titanium welding or acrylic bonding.
Besides the instrumented skin and OD, several other design choices distinguish LZ from its LUX predecessor. The cathode HV connection, for example, is made at a side port on the cryostat, while the PMT cables from the lower half of the TPC are pulled from the bottom. The heat exchanger for LXe condensation, evaporation, and circulation is located in a separate and dedicated cryostat outside the water tank, with LXe being circulated to and from the bottom of the detector through vacuum insulated transfer lines. To continuously reject heat from the LN thermosyphon systems, a cryocooler is installed above the water tank in the Davis Campus. This eliminates the need to transport LN to the underground, except during cryocooler maintenance and repair.
The experimental strategy is driven by the need to control radon, krypton, and neutron backgrounds. Control of dust on all xenon-wetted parts is essential, since it can be an important source of radon. Kr is removed from the vendor supplied xenon using an off-site charcoal chromatography facility. This purification step takes place prior to the start of underground science operations. Gamma backgrounds are highly suppressed by the self-shielding of the TPC, and by careful control and selection of detector materials. Neutrons from spontaneous fission and alpha capture on light nuclei are efficiently tagged and vetoed by the OD and skin.
\section{The Xenon Detector: TPC and Skin}
\label{sec:TPC}
The Xenon Detector is composed of the TPC and its Xe Skin Veto companion. The central TPC contains 7 tonnes of active LXe which constitutes the WIMP target. This volume measures approximately 1.5~m in diameter and height, and is viewed by two arrays of PMTs. The liquid phase produces prompt S1 pulses. This is topped by a thin layer of vapor (8 mm thick) where delayed S2 electroluminescence light is produced by ionization electrons emitted across the liquid surface. Around and underneath the TPC, the Xe Skin detector contains an additional $\sim$2~tonnes of liquid, also instrumented with PMTs. This space is required for dielectric insulation of the TPC but it constitutes an anti-coincidence scintillation detector in its own right. An overview of the Xenon Detector is shown in Fig.~\ref{fig:TPC-overview}.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.80\linewidth]{fig2a.jpg} \\
\vspace{0.5cm}
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.65\linewidth]{fig2b.jpg}
\caption{The assembled Xenon Detector. Upper panel labels: 1-Top PMT array; 2-Gate-anode and weir region (liquid level); 3-Side skin PMTs (1-inch); 4-Field cage; 5-Cathode ring; 6-Reverse field region; 7-Lower side skin PMTs (2-inch); 8-Dome skin PMTs (2-inch). Lower panel photo by Matthew Kapust, Sanford Underground Research Facility.}
\label{fig:TPC-overview}
\end{figure}
The design of the Xenon Detector optimizes: i) the detection of VUV photons generated by both S1 and S2, through carefully chosen optical materials and sensors both in the TPC and the Xe Skin; and ii) the detection of ionization electrons leading to the S2 response, through carefully designed electric fields in the various regions of the TPC. The hardware components involved in the transport and detection of photons and of electrons in the detector are described in Sections~\ref{sec:opticaltpc} and \ref{sec:electricaltpc}. Section~\ref{sec:fluidstpc} describes the flow and the monitoring of the LXe fluid itself.
\subsection{Optical Performance of the TPC}
\label{sec:opticaltpc}
Both the S1 and the S2 signals produced by particle interactions consist of vacuum ultraviolet (VUV) photons produced in the liquid and gas phases, respectively. It is imperative to optimize the detection of these optical signals. For the S1 response, the goal is to collect as many VUV photons as possible, as this determines the threshold of the detector. This is achieved primarily by the use of high quantum efficiency (QE) PMTs optimized for this wavelength region, viewing a high-reflectance chamber covered in PTFE, and by minimizing sources of photon extinction in all materials. Good photocathode coverage is also essential. For the S2 response, the gain of the electroluminescence process makes it easier to collect enough photons even at the lowest energies, and the main design driver is instead to optimize the spatial resolution, especially for peripheral interactions.
The TPC PMTs are 3--inch diameter Hamamatsu R11410--22, developed for operation in the cold liquid xenon and detection of the VUV luminescence. The “-22” variant was tuned for LZ in particular: both for ultra-low radioactivity and for resilience against spurious light emission observed at low temperature in previous variants~\cite{Akimov:2015}. The average cold QE is 30.9\% after accounting for the dual photoelectron emission effect measured for xenon scintillation~\cite{Paredes:2018hxp}. Key parameters were tested at low temperature for all tubes, including pressure and bias voltage resilience, gain and single photoelectron response quality, afterpulsing and dark counts. These are critical parameters that directly influence the overall performance of the detector. The procurement, radioassay and performance test campaign lasted for nearly three years. The PMTs are powered by resistive voltage divider circuits attached to the tubes inside the detector. The voltage ladder is that recommended by Hamamatsu, using negative bias to extract the signal near ground potential (two independent cables are used for signal and bias). The nominal operating gain of the PMTs is $3.5\times 10^6$ measured at the end of the signal cables.
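For scale, the quoted gain corresponds to the following single-photoelectron charge and, with an assumed pulse width (our back-of-the-envelope numbers, not measured LZ values), a millivolt-scale amplitude on the 50-Ohm cable:
\begin{verbatim}
E_CHARGE = 1.602e-19   # electron charge, C
gain = 3.5e6           # nominal PMT gain at the end of the signal cable

q_spe = gain * E_CHARGE          # single-photoelectron charge: ~0.56 pC
width_s = 10e-9                  # assumed single-PE pulse width (illustrative)
amp_v = q_spe / width_s * 50.0   # mean amplitude into 50 Ohm: ~2.8 mV
print(q_spe, amp_v)
\end{verbatim}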
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.85\linewidth]{fig3a.jpg} \\
\vspace{0.5cm}
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.85\linewidth]{fig3b.jpg}
\caption{Arrays of R11410--22 PMTs viewing the TPC. Upper panel: front view of the top PMT array within its assembly and transportation enclosure. Note the circular PMT arrangement at the periphery transitioning to compact hexagonal towards the center, and the coverage of non-sensitive surfaces by interlocking pieces of highly reflective PTFE. Photo by Matthew Kapust, Sanford Underground Research Facility. Lower panel: Back view of bottom array PMTs in hexagonal arrangement, showing cable connections and routing as well as 18 2-inch dome PMTs, which are part of the skin veto system. Also visible are the titanium support trusses and the LXe distribution lines. Most surfaces are covered in PTFE to aid light collection.}
\label{fig:PMT-arrays}
\end{figure}
Two PMT arrays detect the xenon luminescence generated in the TPC. These are shown in Fig.~\ref{fig:PMT-arrays}. An upward-looking “bottom” array immersed in the liquid contains 241 units arranged in a close-packed hexagonal pattern. A downward-looking “top” array located in the gas phase features 253~units arranged in a hybrid pattern that transitions from hexagonal near the center to circular at the perimeter. This design was chosen to optimize the position reconstruction of the S2 signal for interactions near the TPC walls, a leading source of background in these detectors. The structural elements of the arrays are made from low-background titanium. These include a thin plate reinforced by truss structures with circular cut-outs to which the individual PMTs are attached. In the bottom array this plate sits at the level of the PMT windows. The exposed titanium is covered with interlocking PTFE pieces to maximize VUV reflectance. The PMTs are held by Kovar belts near their mid-point and attached to this plate by thin PEEK rods. In the top array the structural plate is located at the back of the tubes, and the gaps between PMT windows are covered with more complex interlocking PTFE pieces secured to the back plate. The array designs ensure that mechanical stresses induced by the thermal contraction of PTFE and other materials do not propagate significantly to the PMT envelopes. A number of blue LEDs (Everlight 264-7SUBC/C470/S400) are installed behind plastic diffusers between PMTs at the face of both arrays. These are used to help optimize and calibrate the PMT gains and the timing response of the detector. The assembly and transport of the PMT arrays required a robust QA process to prevent mechanical damage, dust contamination, and radon-daughter plate-out. At the center of this program were specially-designed hermetic enclosures that protected the arrays during assembly, checkout, transport and storage until assembly into the TPC at SURF.
A key element of the optical systems is the $\sim$20~km of cabling used for PMT and sensor readout. A 50-Ohm coaxial cable from Axon Cable S.A.S. (part no. P568914A\textasciicircum) was selected for electrical and radioactivity performance. This cable has a copper inner conductor and braid, and an extruded FEP insulator and outer sleeve (1.3-mm nominal diameter). The 12~m span from the detector to the external feed-throughs means that signal attenuation, dispersion and cross-talk are important considerations. The individual cables were pre-assembled into bundles which are routed together through two conduits that carry the cables from the top and bottom of the detector. An additional consideration is the potential for radon emanation. This is especially important for the fraction of the cabling located near room temperature. Low intrinsic radioactivity of the cable materials can be easily achieved, but dust and other types of contamination trapped within the fine braid during manufacture can be problematic. We developed additional cleanliness measures with Axon to mitigate this and have opted for a jacketed version which acts as a further radon barrier. In addition, the xenon flow purging the cable conduits is directed to the inline radon-removal system described in Sec.~\ref{sec:XeHandling}.
After the PMTs, the next main optical component of the detector is the PTFE that defines the field cage and covers the non-sensitive detector components. The optical performance of the detector depends strongly on its VUV reflectivity -- VUV photons reflect several times off PTFE surfaces before detection -- and the radiopurity of this material is also critical due to its proximity to the active volume. We identified both the PTFE powder and process that optimized radiological purity and VUV reflectivity in liquid xenon during a long R\&D campaign~\cite{Neves:2016tcw,Haefner:2016ncn}. The PTFE selected was Technetics 8764 (Daiken M17 powder), whose reflectivity when immersed in LXe we measured as 0.973 ($>$0.971 at 95\% C.L.). Our data are best fitted by a diffuse-plus-specular reflection model for this particular material, which was tested using the procedures described in Ref.~\cite{Neves:2016tcw}. Most PTFE elements were machined from moulds of sintered material, while thinner elements are `skived' from cast cylinders manufactured from the same powder.
Other factors influence the photon detection efficiency (PDE) for S1 and S2 light in the TPC. These include photon absorption by the electrode grids (wires and ring holders) and absorption by impurities in the liquid bulk. With realistic values for these parameters our optical model predicts a photon detection efficiency of around 12\% for S1 light.
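The importance of high reflectivity can be seen with a toy survival model (ours, not the collaboration's optical simulation): a photon undergoing $n$ wall reflections before reaching a photocathode survives with probability $R^n$, after which grid shadowing, bulk absorption, and the PMT quantum efficiency further reduce the detection probability.
\begin{verbatim}
for R in (0.95, 0.973, 0.99):
    print(R, [round(R**n, 2) for n in (5, 10, 20)])
# 0.95  [0.77, 0.6, 0.36]
# 0.973 [0.87, 0.76, 0.58]
# 0.99  [0.95, 0.9, 0.82]
\end{verbatim}
Already at ten bounces, the difference between $R=0.95$ and the measured $R=0.973$ is a roughly 27\% relative gain in surviving photons.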
The optical design of the S2 signal is optimized for robust reconstruction of low energy events at the edge of the TPC, in particular from the decay of Rn daughters deposited on the field cage wall, termed ``wall events''. These events may suffer charge loss, thus mimicking nuclear recoils~\cite{lee2015}. If mis-reconstructed further into the active region, they can be a significant background. Our aim is to achieve $\sim$10$^6$:1 rejection for this event topology in the fiducial volume. A detailed study of this issue led to the adoption of a circular PMT layout near the detector edge, with the final PMT row overhanging the field cage inner walls, an optimized distance between the top PMT array and the liquid surface, and an absorbing optical layer (Kapton foil) covering the lateral conical wall in the gas phase.
\subsection{The Xe Skin Detector}
An important component of the Xenon Detector is the Xe Skin, the region containing around 2 tonnes of LXe between the field cage and the inner cryostat vessel. A primary motivation for this liquid is to provide dielectric insulation between these two elements. In addition to its electrical standoff function, it is natural to instrument this region for optical readout so that it can act as a scintillation-only veto detector, especially effective for gamma-rays. Also, if the skin were not instrumented, light from particle interactions or electrical breakdown in this region could leak into the TPC unnoticed and create difficult background topologies. To further suppress this pathology, the LZ field cage is designed to optically isolate the skin from the TPC.
The side region of the skin contains 4~cm of LXe at the top, widening to 8~cm at cathode level for increased standoff distance. This is viewed from above by 93 1--inch Hamamatsu R8520-406 PMTs. These are retained within PTFE structures attached to the external side of the field cage, located below the liquid surface. At the bottom of the detector a ring structure attached to the vessel contains a further 20 2--inch Hamamatsu R8778 PMTs viewing upward into this lateral region, as shown in Fig.~\ref{fig:SkinDome}.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.83\linewidth]{fig4a.jpg} \\
\vspace{0.5cm}
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.83\linewidth]{fig4b.jpg}
\caption{Upper panel: CAD section of the TPC below the cathode showing the location of the 2--inch bottom side skin (1) and lower dome (2) PMTs. Lower panel: Photograph showing the PTFE panelling attached to the ICV that ensures high reflectance in the skin region and the lower side skin PMT ring at the bottom of the vessel.}
\label{fig:SkinDome}
\end{figure}
The dome region of the skin at the bottom of the detector is instrumented with an additional 18 2--inch R8778 PMTs. These are mounted horizontally below the bottom array, with 12 looking radially outward and 6 radially inward. To enhance light collection, all PMTs in that region and array truss structures are dressed in PTFE. Moreover, PTFE tiles line the ICV sides and bottom dome. To attach the PTFE lining, low profile titanium buttons designed to minimize field effects were epoxied to the ICV wall with MasterBond EP29LPSP cryogenic epoxy. Holes were machined into the PTFE tiles to fit around the buttons, and PTFE washers were attached to the buttons with PTFE screws to secure the tiles in place. These are visible in Fig.~\ref{fig:SkinDome}.
\subsection{TPC Electrostatic Design}
\label{sec:electricaltpc}
The S2 signature detected from particle interactions in the liquid xenon comes from the transport of ionization electrons liberated by the recoiling nucleus or electron, and their subsequent emission into the gas phase above the liquid, where the signal is transduced into a second VUV pulse via electroluminescence. Great care is required to ensure that the various electric field regions in the detector achieve this with high efficiency and with low probability for spurious responses. The LZ detector is instrumented as a traditional three-electrode two-phase detector, with cathode and gate wire-grid electrodes establishing a drift field in the bulk of the liquid, and a separate extraction and electroluminescence region established between the gate and an anode grid. The former sits 5~mm below the surface and the latter is 8~mm into the gas. The nominal operating pressure of the detector is 1.8~bara. At nominal fields, each electron emitted into the gas generates $\sim$820 electroluminescence photons.
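The electroluminescence gain sets the scale of the S2 signal; a rough photon budget (our arithmetic, with an assumed photon detection efficiency as a placeholder) is:
\begin{verbatim}
PHOTONS_PER_ELECTRON = 820   # electroluminescence yield quoted above
PDE_S2 = 0.1                 # assumed fraction of S2 photons detected

def s2_detected(n_electrons):
    return n_electrons * PHOTONS_PER_ELECTRON * PDE_S2

print(s2_detected(1))    # a single extracted electron -> ~80 detected photons
print(s2_detected(25))   # a small ionization signal  -> ~2000
\end{verbatim}
Even a single extracted electron therefore produces a clearly resolvable S2 pulse.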
The nominal 300~V/cm drift field established in the active region of the detector requires application of an operating voltage of $-$50~kV to the cathode grid, which allows LZ to meet its baseline performance for particle discrimination. The design goal is $-$100 kV, the maximum operating voltage for the system. The system to deliver the HV to the cathode grid contains some of the highest fields in the detector. The HV is delivered from the power supply (Spellman SL120N10, 120~kV) via a room-temperature feed-through and into a long vacuum-insulated conduit entering the detector at the level of the cathode grid, as shown in Fig.~\ref{fig:CathodeCone}. Most of the system was tested to $-$120~kV in liquid argon, except for the flexible component connecting the grading structure to the cathode, for which a similarly-shaped part was tested in liquid argon to surface fields 30\% higher than those needed to meet the design goal.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig5.jpg}
\caption{The interface of the high voltage system with the cathode. 1-Polyethylene high voltage cable; 2-LXe displacer; 3-LXe space; 4-Stress cone; 5-Grading rings.}
\label{fig:CathodeCone}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig6.jpg}
\caption{The electron extraction region assembly on the loom. 1-Anode grid; 2-Gate grid. During TPC operations the anode is above the LXe surface and the gate is below. The liquid level is registered to this assembly by three weir spillovers.}
\label{fig:FullSizeGrid}
\end{figure*}
The cable enters the xenon space through two o-rings at the core of the feed-through system mounted on top of the water tank. The space between them is continuously pumped to provide a vacuum guard, monitored by a Residual Gas Analyzer. Located at room temperature and far from the detector, a leak-tight seal to the xenon space can be reliably achieved and the feed-through materials are not a radioactivity concern. Another key feature of the cathode HV system is the reliance on a single span of polyethylene cable (Dielectric Sciences SK160318), connecting the power supply all the way to the cathode in the liquid xenon many meters away. This 150~kV-rated cable features a conductive polyethylene sheath and center core and contains no metal components, avoiding differential contraction and thermal stress issues, and precluding the appearance of insulation gaps between the dielectric and the sheath, which contract equally. The HV line ends in a complex voltage-grading structure near the cathode grid ring where the conductive sheath splays away. This grades the potential along the dielectric, preventing very high field regions inside the detector. The maximum field in the liquid xenon is 35~kV/cm. This voltage grading system ends in a bayonet connector that allows rapid engagement to the cathode ring during installation, minimizing exposure of the detector to radon.
Inside the ICV there is a significant insulation stand-off distance of 4--8 cm between the field cage and the inner vessel, and there is no instrumentation or cabling installed along the length of the skin region, where the field is high and the possibility of discharges and stray light production would be concerning. This region is instead optimized for optical readout, becoming an integral part of the LZ veto strategy.
The drift region is 145.6~cm long between cathode and gate electrodes, and 145.6~cm in diameter, enclosed by a cylindrical field cage which defines the optical environment for scintillation light and shapes the electric field for electron transport. The field cage is constructed of 58 layers of PTFE, which provides insulation and high reflectivity, with a set of embedded titanium electrode rings. The layers are 25~mm tall, the Ti rings 21~mm tall, and each layer of PTFE is azimuthally segmented 24 times. Due to their proximity to the active volume, these are critical materials for LZ. The metal rings are made from the same batch of titanium used for the cryostat~\cite{Akerib:2017iwt} (the PTFE is described above). A key design driver was to achieve a segmented field cage design: to prevent the excessive charge accumulation observed in continuous PTFE panels, and to better cope with the significant thermal contraction of PTFE between ambient and low temperature. The field cage embeds two resistive ladders connecting the metal rings, each with two parallel 1~G$\Omega$ resistors per section (the first step has 1~G$\Omega$ in parallel with 2~G$\Omega$ to tune the field close to the cathode ring). This ladder ensures a vertical field with minimal ripple near the field cage.
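To illustrate how such a ladder grades the potential, the following minimal sketch treats it as a series divider with negligible load current; the section count (57, inferred from the 58 layers) and the endpoint potentials are illustrative assumptions rather than as-built values.

\begin{verbatim}
# Hedged sketch: potentials along a resistive grading ladder, treated as
# a simple series divider with negligible load current. Section count and
# endpoint voltages are illustrative, not as-built values.

def ladder_potentials(v_top, v_bottom, section_resistances):
    """Return node potentials from v_top to v_bottom along the ladder."""
    r_total = sum(section_resistances)
    potentials, v = [v_top], v_top
    for r in section_resistances:
        v += (v_bottom - v_top) * r / r_total
        potentials.append(v)
    return potentials

# Gate side first; the final section next to the cathode is
# 1 GOhm || 2 GOhm = 2/3 GOhm, tuning the field near the cathode ring.
sections = [0.5] * 56 + [2.0 / 3.0]      # GOhm; count is illustrative
nodes = ladder_potentials(-5.75, -50.0, sections)   # kV
print(f"{len(nodes)} nodes; uniform step ~ {nodes[1] - nodes[0]:.3f} kV")
\end{verbatim}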
The lower PMT array cannot operate near the high potential of the cathode and so a second, more compact ladder is required below that electrode. This reverse field region (RFR) contains only 8 layers with two parallel 5~G$\Omega$ resistors per section, and terminates 13.7~cm away at a bottom electrode grid which shields the input optics of the PMTs from the external field.
The electrode grids are some of the most challenging components of the experiment, both to design and to fabricate. Mechanically, these are very fragile elements that nonetheless involve significant stresses and require very fine tolerances for wire positioning. Besides optimizing the conflicting requirements of high optical transparency and mechanical strength, electrical resilience was an additional major driver -- spurious electron and/or light emission from such electrodes is a common problem in noble liquid detectors~\cite{rebel:2014uia,Tomas:2018}.
The anode and gate grids are depicted in Fig.~\ref{fig:FullSizeGrid}.
All grids are made from 304 stainless steel ultra-finish wire~\cite{cfw} woven into meshes with a few mm pitch using a custom loom. Key parameters of the four LZ grids are listed in Table~\ref{tab:Grids}. Each wire is tensioned with 250~g weights on both ends, and the mesh is glued onto a holder ring. The glue, MasterBond EP29LPSP cryogenic epoxy, is dispensed by a computer-controlled robotic system. It includes acrylic beads that prevent external stresses from being transferred to the wire crossings. A second metal ring captures the glued region, and the tensioning weights are released after curing. This woven mesh has several advantages over wires stretched in a single direction. The load on the ring set is azimuthally uniform and purely radial, allowing the mass of the rings to be minimized. The region of non-uniform field near the wires is smaller for a mesh, which improves the uniformity and hence energy resolution obtained in the S2 channel. Finally, a mesh grid has lower field transparency than stretched wires, resulting in a more uniform overall drift field. To preserve high S2 uniformity, it is important that the woven mesh have uniform wire spacing. The loom design included several features to achieve high uniformity during fabrication, and great care was taken in subsequent grid handling to avoid displacing wires.
\begin{table}[tbh]
\setlength{\extrarowheight}{3pt}
\caption[TPC electrode grid parameters]{TPC electrode grid parameters (all \SI{90}{\degree} woven meshes).}
\centering
\begin{tabular} {lrrcc}
\hline
Electrode & Voltage & Diam. & Pitch & Wires \\
 & (kV) & ($\mu$m) & (mm) & \\
\hline
Anode & $+$5.75 & 100 & 2.5& 1169\\
Gate & $-$5.75 & 75 & 5.0& 583\\
Cathode & $-$50.0 & 100 & 5.0& 579\\
Bottom & $-$1.5 & 75 & 5.0& 565\\
\hline
\end{tabular}
\label{tab:Grids}
\end{table}
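The geometric (optical) transparency of a woven mesh follows directly from the Table~\ref{tab:Grids} parameters: for wires of diameter $d$ and pitch $p$ crossing in two directions, the open-area fraction is approximately $(1-d/p)^2$. The sketch below evaluates this for the four grids; weave crimp and viewing-angle effects are neglected.

\begin{verbatim}
# Hedged sketch: geometric open-area fraction of a 90-degree woven mesh,
# (1 - d/p)^2, evaluated with the wire diameters and pitches listed in
# the grid table above. Crimp and non-normal viewing angles neglected.

grids = {           # name: (wire diameter [um], pitch [mm])
    "Anode":   (100, 2.5),
    "Gate":    ( 75, 5.0),
    "Cathode": (100, 5.0),
    "Bottom":  ( 75, 5.0),
}

for name, (d_um, p_mm) in grids.items():
    t = (1.0 - (d_um * 1e-3) / p_mm) ** 2
    print(f"{name:8s} geometric transparency ~ {t:.1%}")
# All grids are >92% open, with the fine-pitch anode the least transparent.
\end{verbatim}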
At the top of the detector, the electron extraction and electroluminescence region, which contains the gate-anode system, is one of the most challenging aspects of the design (see Fig.~\ref{fig:ER}). It establishes the high fields that extract electrons from the liquid and then produce the S2 light. The quality of the S2 signal is strongly dependent on both the small- and large-scale uniformity achieved in this region. In particular, the anode contains the finest mesh of any LZ grid (2.5~mm pitch), as this drives the S2 resolution.
\begin{figure}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig7.jpg}
\caption{The electron extraction and electroluminescence region. 1-TPC PMT; 2-Anode grid; 3-Gate grid; 4-Weir; 5-Xe Skin PMT.}
\label{fig:ER}
\end{figure}
An important consideration was the electrostatic deflection of the gate-anode system. We have directly measured the deflection of the final grids as a function of field using a non-contact optical inspection method. This matches expectations from electrostatic and mechanical modeling, and predicts a $\sim$1.6~mm decrease in the 13~mm gap at the 11.5~kV nominal operating voltage. As a consequence the field in the gas phase varies from 11.5~kV/cm at the center to 10.1~kV/cm at the edge. The combined effect of field increase and gas gap reduction increases the S2 photon yield by 5\% at the center. This effect can be corrected in data analysis. The gate wires sustain the strongest surface fields of any cathodic element in the detector ($\simeq$52 kV/cm, with no grid deflection, and $\simeq$58~kV/cm with 1.6 mm combined gate/anode deflection).
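A useful cross-check on these numbers is the ideal two-dielectric parallel-plate estimate: with the gate--anode potential difference $V$ dropped across a liquid layer of thickness $d_l$ and a gas gap $d_g$, continuity of the displacement field gives $E_{\rm gas} = V/(d_g + d_l/\varepsilon_r)$. The sketch below evaluates this with an assumed textbook value $\varepsilon_r \approx 1.96$ for LXe, ignoring mesh field leakage and grid deflection, and lands inside the quoted range.

\begin{verbatim}
# Hedged sketch: ideal parallel-plate estimate of the extraction-region
# field in the gas, ignoring mesh field leakage and grid deflection.
# eps_r ~ 1.96 for LXe is an assumed textbook value, not an LZ number.

def gas_field_kv_cm(v_kv, d_liquid_cm, d_gas_cm, eps_r=1.96):
    """E in the gas for a liquid/gas stack between parallel plates."""
    return v_kv / (d_gas_cm + d_liquid_cm / eps_r)

# 11.5 kV between gate (-5.75 kV) and anode (+5.75 kV);
# gate 5 mm below the liquid surface, anode 8 mm above it.
print(f"E_gas ~ {gas_field_kv_cm(11.5, 0.5, 0.8):.1f} kV/cm")
# ~10.9 kV/cm, between the quoted 10.1 kV/cm (edge) and
# 11.5 kV/cm (center) values.
\end{verbatim}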
A major QA program was implemented to ensure the high quality of the grids throughout manufacture, cleaning, storage and transport, and to prevent damage and dust contamination. A key feature of this program was a series of measurements of the high voltage behavior in Xe gas of $\sim$1/10th-scale and full-scale prototype grids, and of the final cathode, gate and anode grids. Emission of single electrons at rates as low as $\sim$1~Hz was measured via an S2-like electroluminescent signal with PMTs. These measurements confirmed earlier work~\cite{Tomas:2018} showing that electron emission is strongly reduced by passivation; the production gate grid was therefore passivated in citric acid for 2 hours at $\sim$125$^{\circ}$F at AstroPak~\cite{Astropak}.
\subsection{Fluid Systems and Sensors}
\label{sec:fluidstpc}
Purified and sub-cooled LXe is prepared by the LXe tower and delivered to the Xenon Detector through two vacuum-insulated supply lines that connect at the ICV bottom center flange (see Sec.~\ref{sec:XeHandling}). One line flushes the lower dome and side skin, the other fans out through a manifold into seven PTFE tubes that penetrate the lower PMT array and supply LXe to the TPC. The fluid returns to the external purification system by spilling over three weirs that establish the liquid surface height. The weirs have 23.3~cm circumference, are uniformly spaced in azimuth around the top of the TPC, and are mounted to the gate grid ring so that the liquid level is well registered to the location of the gate and anode grids. The weirs drain through three tubes that penetrate the ICV in the side skin region and descend in the insulating vacuum space. The three lines are ganged together near the bottom of the ICV and return liquid to the purification circuit through a common vacuum-insulated transfer line.
A variety of sensors monitor the behavior and performance of the TPC. Six Weir Precision Sensors (WPS) measure the liquid level to within $\approx$20~$\mu$m in the gate-anode region. An additional WPS is installed in the lower dome to monitor filling and draining of the detector. Long level sensors (LLS) are installed in the LXe tower to provide information during detector filling and for monitoring during normal operation. RF loop antennae (LA) and acoustic sensors (AS) monitor the electrostatic environment of the detector during electrode biasing and thereafter during operation. The combination of WPS and AS sensors will also be used to detect disturbances of the fluid system and especially the liquid surface, such as bubbling or dripping. These will be aided by dedicated resistors installed in the bottom array that will be used to create bubbles in the LXe so that their signature can be characterized. At the top of the detector, a hexapod structure connects the top PMT array to the ICV lid through six displacement sensors, allowing the relative displacement and tilt between these two elements to be monitored, the tilt to within 0.1~degrees. This is especially important to prevent major stresses arising during cool-down of the TPC. Finally, PT100 thermometers are distributed at both ends of the detector. By design, all sensors and their cabling are excluded from the side skin region and other high electric field regions. All sensors are read out by dedicated electronics attached to flanges enclosed in the signal breakout boxes.
\section{Cryogenics and Xe Handling}
\subsection{Cryostat and cryogenics}
\label{sec:cryostat}
The Xenon Detector and its LXe payload are
contained in the Inner Cryostat Vessel. The Outer Cryostat Vessel
provides its vacuum jacket. As shown in Fig.~\ref{fig:CryostatAssembly},
the OCV is supported at the bottom by three legs. The same assembly
provides shelves for the GdLS acrylic vessels (AVs)
located underneath the OCV. The ICV is suspended from the top head of
the OCV with a mechanism enabling its levelling from above.
Three long tubes run vertically to deploy calibration sources
into the insulating vacuum space between the vessels (see Sec.~\ref{sec:Calibrations}).
Both vessels were designed in
compliance with the ASME Boiler and
Pressure Vessel Code Section VIII Div. 1.
The ICV consists of a top head and a bottom vessel connected by a
large flange near the top. The maximum outer
diameter of the ICV is constrained
by the cross-section of the Yates
shaft. Its tapered shape reduces the electric field near the cathode.
The TPC structure is anchored to the bottom of the ICV
through six dedicated ports in
the dished end. Three angled ports below the main flange are provided
for the LXe weir drain return lines.
Two ports at the top head and the central port
at the bottom are for the PMT and instrumentation cables. The high voltage
port has been machined on the inside to form a curvature minimizing the
electric field around the cathode HV feed-through. Five plastic blocks are
attached to the tapered part of the ICV wall to prevent the ICV from
swinging during a seismic event.
The OCV consists of three segments in order to fit the
largest into the conveyance of the Yates shaft. A port in the
center of the top head hosts the low energy neutron source
deployed in a large tungsten ``pig" (see Sec.~\ref{sec:Calibrations}).
A reinforcing ring allows the top
AVs to rest on the OCV head.
The entire cryostat assembly is made out of a carefully selected
ultra-radiopure Grade-1 titanium sourced from a single titanium
supplier~\cite{Akerib:2017iwt}.
After a comprehensive material search campaign, a 5 metric ton
Titanium Gr-1 slab was procured from Timet and used to
fabricate all the stock material
required for the cryostat. Initially it was cut into
three pieces so that plates of multiple thicknesses
could be rolled, all the flanges and ports forged,
and the welding wire drawn. The ICV and OCV were
fabricated from this material at Loterios in Milan, Italy (see
Fig.~\ref{fig:CryostatAssembly}). The cleaning and etching
of the ICV and OCV is described in Sec.~\ref{sec:Materials}.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.80\linewidth]{fig8.pdf}
\caption{The ICV and OCV during a test assembly at Loterios in Italy, prior to cleaning and etching. The ICV is suspended from the top dome of the OCV. 1-ICV; 2-middle section OCV; 3-top dome section OCV; 4-ICV weir drain port; 5-OCV cathode high voltage port.}
\label{fig:CryostatAssembly}
\end{figure}
The ICV is maintained at its operating temperature by a set of
closed-loop thermosyphon heat pipes
utilizing nitrogen as the process fluid. The thermosyphons deliver heat
from the ICV
to the Cryogen On Wheels (COW), a bath of LN located
above the water tank in the Davis Cavern. A cryocooler, model
SPC-1 from DH industries, re-condenses the boil-off nitrogen from
the COW and transfers the heat to the chilled
water system.
During cryocooler maintenance and repair,
the COW can be filled by transporting LN to the Davis Cavern
from the surface. Four 450 liter
storage dewars located underground
act as an intermediate LN repository to enable this mode of operation.
Six copper coldheads are bolted to welded titanium fins on the ICV
exterior and are serviced by three thermosyphon lines.
The coldheads are placed
at a height just below the LXe level.
The cooling power of each
thermosyphon is set by adjusting the amount of process
nitrogen in each circuit. Fine adjustment is provided by PID-controlled
trim heaters located on each coldhead. Two additional
thermosyphon circuits remove heat from the LXe tower
(see Sec.~\ref{sec:XeHandling}).
The total heat budget of the experiment is
estimated to be 700~W. The largest contributing item,
at 349~W, is due to the inefficiency of the primary two-phase xenon
circulation heat exchanger.
The thermosyphon trim heaters and the heat leak into the
ICV each account for about 115~W.
\subsection{Online Xe handling and purification}
\label{sec:XeHandling}
The online Xe purification system
continuously removes electronegative impurities from the Xe while also
providing some measure of Rn removal and control.
Rejection of electronegatives begins during the final assembly of
the detector with a TPC outgassing campaign
described in Sec.~\ref{sec:UGAssembly}. The electron
lifetime goal is 800 $\mu$s, sufficient to drift charge
from the cathode to the gate while suffering an acceptable
signal reduction factor of 1/e.
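The 1/e figure follows from simple exponential attenuation: a charge cloud drifting for time $t$ with electron lifetime $\tau$ survives with probability $e^{-t/\tau}$. The minimal sketch below uses the 800~$\mu$s maximum drift time quoted in Sec.~\ref{sec:FrontEnd}; the implied drift velocity is derived from the document's own numbers rather than measured.

\begin{verbatim}
import math

# Hedged sketch: exponential attenuation of drifting electrons.
# The 800 us maximum drift time and 145.6 cm drift length are document
# values; the implied drift velocity (~1.8 mm/us) is derived, not measured.

TAU_US = 800.0            # electron lifetime goal
MAX_DRIFT_US = 800.0      # maximum drift time (full 145.6 cm drift)
V_DRIFT_MM_US = 1456.0 / MAX_DRIFT_US   # implied drift velocity

for depth_cm in (10, 50, 100, 145.6):
    t_us = depth_cm * 10.0 / V_DRIFT_MM_US
    survival = math.exp(-t_us / TAU_US)
    print(f"depth {depth_cm:6.1f} cm: survival {survival:.2f}")
# At the cathode the survival fraction is exp(-1) ~ 0.37, i.e. the
# quoted 1/e signal reduction.
\end{verbatim}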
\begin{figure*}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.75\linewidth]{fig9.pdf}
\caption{Overview of the online Xe purification system. LXe
in the Xenon Detector (right) spills over a weir drain and
flows horizontally to the Liquid Xenon tower, which stands outside
the water tank. It is vaporized in a two-phase heat exchanger,
pumped through a hot zirconium getter, and returned to the detector
after condensing. Cryovalves control the flow of LXe between
the LXe tower and the Xenon Detector. A radon removal system
treats Xe gas in the cable conduits and breakout feed-throughs
before sending it to the compressor inlet.}
\label{fig:XeCirculation}
\end{figure*}
An overview of the system
is shown in Fig.~\ref{fig:XeCirculation}. Xe gas
is pumped through a hot zirconium getter at a
design flow rate of 500 standard liters per
minute (SLPM), taking 2.4~days
to purify the full 10 tonne Xe inventory in a single pass.
The getter, model PS5-MGT50-R from SAES~\cite{SAES},
operates at 400 $^{\circ}$C. For thermal efficiency, the getter
features a heat exchanger to couple the inlet and outlet
Xe gas streams, substantially reducing the 3~kW heat burden at
500 SLPM. A pre-heater ensures that the gas strikes
the getter bed at the operating temperature. Besides electronegative
removal, the getter bed also serves as a permanent repository
for the tritium and $^{14}$C radio-labeled methane species
that calibrate the beta decay response of
the TPC (see Sec.~\ref{sec:Calibrations}).
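The 2.4-day figure can be reproduced from the flow rate and inventory alone, assuming a standard-condition Xe gas density of roughly 5.9~g per standard liter (an assumed handbook value, not an LZ-specific number):

\begin{verbatim}
# Hedged sketch: single-pass purification time from the quoted flow rate.
# The Xe gas density at standard conditions (~5.89 g/SL) is an assumed
# handbook value; the inventory and flow rate are document numbers.

INVENTORY_G = 10e6          # 10 tonnes
RHO_G_PER_SL = 5.89         # approximate Xe density, g per standard liter
FLOW_SLPM = 500.0           # design circulation flow

minutes = INVENTORY_G / RHO_G_PER_SL / FLOW_SLPM
print(f"single pass ~ {minutes / 60 / 24:.1f} days")   # ~2.4 days
\end{verbatim}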
Circulation flow is established by two all-metal
diaphragm gas compressors,
model A2-5/15 from Fluitron~\cite{fluitron}.
The two compressors operate in parallel, each
capable of 300 SLPM at 16 PSIA inlet pressure.
The system operates
with one compressor at reduced flow rate during periodic maintenance.
The total achieved gas flow is trimmed by a bellows-sealed
bypass proportional valve, model 808 from RCV.
Both circulation compressors have two stages,
each featuring copper seals plated onto stainless steel
diaphragms. All-metal sealing technology was chosen to
limit radon ingress from air.
The LXe tower is a cryogenic device standing
on the floor of the Davis Cavern
outside the water tank and at a height somewhat
below the Xenon Detector. Its primary purpose
is to interface the liquid and gaseous portions of the online
purification circuit and to efficiently exchange heat between them. There are four vessels in the tower:
the reservoir vessel, the two-phase heat exchanger (HEX),
the subcooler vessel, and the subcooler HEX.
The reservoir vessel collects LXe departing
the Xenon Detector via the weir system. It features a standpipe
construction to decouple its liquid level from that
in the weir drain line.
LXe flows from the bottom of the reservoir into the
two-phase HEX, where it vaporizes after exchanging
heat with purified Xe gas returning from the getter.
The two-phase HEX is an ASME-rated brazed plate
device made by Standard Xchange
consisting of corrugated stainless steel.
On its other side, condensing LXe flows into
the subcooler vessel and subcooler HEX. The vessel
separates any remaining Xe gas from the LXe, while
the HEX cools the LXe to below its saturation temperature.
The HEX consists of three isolated elements:
the LXe volume, an LN thermosyphon coldhead
cooled to 77~K, and a thin thermal coupling gap.
The power delivered to the LXe
can be varied between 90~W and 480~W by adjusting the composition
of the He/N$_2$ gas mixture in the gap.
An additional thermosyphon coldhead integrated with the
reservoir removes excess heat
during cooldown and operations.
Both the reservoir and the subcooler vessels
are equipped with LXe purity monitors (LPMs) to monitor
electronegatives entering and exiting the Xenon Detector.
Each LPM is a small, single-phase TPC which drifts free electrons
over a distance, and measures the attenuation of the electrons
during that transit.
LXe flows between the LXe tower and the Xenon Detector through
three vacuum insulated transfer lines that run across the bottom
of the Davis Cavern water tank. Two lines connect to the bottom
of the ICV and supply sub-cooled LXe to the TPC and
skin regions of the Xenon Detector. The third line
returns LXe from the ICV weir drain
system to the reservoir. The lines are constructed
by Technifab with an integrated vacuum insulation jacket.
They are further insulated
from the water by an additional vacuum shell.
Cryogenic control valves
from WEKA regulate the LXe flow in each of the three lines.
Conduits connect to the ICV at its
lower flange and upper dome to service PMT and instrumentation cables
to the TPC. The lower conduit, which is vacuum insulated and filled with
LXe, travels across the water tank floor,
penetrates its side wall, and mates with a
vertical LXe standpipe. Its cables emerge
into gaseous Xe and then connect to breakout feed-throughs
at the standpipe top. Two upper conduits filled with gaseous
Xe connect the ICV top head
to breakout feed-throughs and service cables
to the upper part of the Xenon Detector.
The Xe gas in the cable conduits and breakout feed-throughs is treated
for radon removal by a cold synthetic charcoal column
drawing 0.5 SLPM of Xe gas flow. The system is designed to
sequester $^{222}$Rn for 3.3 half-lives,
or 12.7 days, allowing 90\% of these atoms to decay.
The sequestration is accomplished by a gas chromatographic process that
employs 10 kilograms of synthetic charcoal (Saratech Spherical Adsorbent,
Blücher GmbH) cooled to 190~K~\cite{Pushkin:2018wdl}.
The technique was previously
demonstrated in Ref.~\cite{abe201250}.
The charcoal was etched in nitric acid and rinsed with distilled water
to reduce its radon emanation rate.
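The 90\% figure is just radioactive decay during the chromatographic holdup: for a residence time $t$ and half-life $T_{1/2}$, the decayed fraction is $1 - 2^{-t/T_{1/2}}$. A minimal check, using the accepted $^{222}$Rn half-life of 3.82 days:

\begin{verbatim}
# Hedged sketch: fraction of Rn-222 decaying during the charcoal holdup.
# T_half = 3.82 d is the standard Rn-222 half-life; 12.7 d is the quoted
# design residence time of the chromatographic column.

T_HALF_D = 3.82
residence_d = 12.7

decayed = 1.0 - 2.0 ** (-residence_d / T_HALF_D)
print(f"{residence_d} d holdup = {residence_d / T_HALF_D:.1f} half-lives "
      f"-> {decayed:.0%} of Rn-222 atoms decay")   # ~90%
\end{verbatim}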
Besides the LPMs, surveillance of the
impurity content of the Xe is also provided
by two coldtrap mass spectrometry
systems~\cite{leonard:2010zt,dobi:2011vc}.
These devices monitor for the presence
of stable noble gas species such as
$^{84}$Kr and $^{40}$Ar and also for electronegatives
such as O$_2$.
Ten standard liter samples of Xe gas are collected
and passed through a coldtrap cooled to 77~K,
a temperature at which Xe is retained while many impurity species
pass through. The outlet of the coldtrap is monitored
by a Residual Gas Analyzer (an RGA200 from SRS).
The sensitivity for detecting $^{84}$Kr in Xe
is better than 10 parts-per-quadrillion (g/g).
The coldtrap is cooled either with a pulse tube
refrigerator (model PT60 from Cryomech Inc.) or with an
open flask dewar of liquid nitrogen. One of these systems is
permanently plumbed to fixed locations
in the Xe handling system; the other acts as a mobile
utility system to be deployed as needed. Both are highly automated
and allow for multiple measurements per day.
To recover the ten tonne Xe
inventory to long term storage,
two high pressure gas compressors
(Fluitron model D1-20/120) pump Xe gas into 12 Xe
storage packs. The recovery compressors use the same all-metal
diaphragm technology as the circulation compressors.
Each Xe storage pack consists of 12
DOT-3AA-2400 49.1 liter cylinders sealed with
Ceodeux D304 UHP tied diaphragm valves and ganged
together in a steel frame. Each pack
weighs 1,800~kg when full. During Xe recovery
heat is added to the LXe
by electrical heaters and by softening the insulating
vacuum of the lower cable conduit. Two backup diesel
generators are provided in case of a power outage.
The emergency recovery logic is described in Sec.~\ref{subsec:Controls}.
All elements of the online Xe handling system were cleaned
for ultra high vacuum with solvents and rinsed in de-ionized water.
Orbital welds conform to ASME B31.3. Where possible,
stainless steel components have been etched in citric acid
to reduce radon emanation.
\subsection{Removal of Kr from Xe}
Beta decay of $^{85}$Kr in the LXe is a challenging
ER background source.
The acceptable Kr concentration, derived by
assuming an isotopic abundance of
$^{85}$Kr/Kr $\sim 2\times10^{-11}$, is
Kr/Xe $<$ 0.3~parts-per-trillion (g/g).
This concentration is achieved prior to the start of LZ operations
by separating trace Kr from the Xe inventory
with a gas charcoal chromatography process.
A total of 800~kg of Calgon PCB activated
charcoal is employed, divided evenly into two
columns. The charcoal was washed with water
to remove dust and baked under an N$_2$ purge
for 10 days, ultimately achieving
a charcoal temperature of 150 $^\circ$C.
During processing the Xe inventory is mixed with He
carrier gas circulated by a compressor
(model 4VX2BG-131 from RIX).
The Xe/He mixture is passed
through one of the two charcoal columns
at a pressure of 1.7 bara.
Trace Kr reaches the column outlet first and
is directed to an LN-cooled charcoal trap where it
is retained.
A Leybold-Oerlikon Roots blower and screw pump located
at the charcoal column outlet then activates, dropping
the column pressure to 10~mbar. This purges
the purified Xe from the column. The Xe is
separated from the He carrier gas by freezing it
at 77~K in an LN-cooled heat exchange vessel.
This freezer is periodically warmed to vaporize
the Xe ice, and the recovered
Xe gas is transferred at room temperature by
a Fluitron Xe recovery compressor
to one of the twelve Xe storage packs described above.
The entire 10 tonne
Xe inventory is processed in 16~kg batches. The chromatographic
and Xe purge cycles each take about 2 hours.
Two charcoal columns are employed to allow processing
and column purging to proceed in parallel.
A Kr rejection factor greater than
1000 can be achieved in a single pass through the system;
two passes are envisioned to achieve the required concentration.
The processing is monitored by a coldtrap Xe mass spectrometry
system for quality assurance. After processing, the Xe storage
packs are shipped from SLAC to SURF in preparation for
condensing into the Xenon Detector.
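Under the stated batch size and cycle times, the implied processing schedule can be sketched as follows; perfect pipelining of the two columns and zero overhead between batches are assumptions, so the result is a lower bound on calendar time.

\begin{verbatim}
# Hedged sketch: implied Kr-removal schedule from the quoted numbers.
# Assumes perfect pipelining of the two charcoal columns and no overhead
# between batches, so the result is a lower bound on calendar time.

INVENTORY_KG = 10_000.0
BATCH_KG = 16.0
HOURS_PER_BATCH = 2.0       # chromatography and purge run in parallel
REJECTION_PER_PASS = 1_000.0

batches = INVENTORY_KG / BATCH_KG
days_per_pass = batches * HOURS_PER_BATCH / 24.0
print(f"{batches:.0f} batches, ~{days_per_pass:.0f} days per pass")
print(f"two passes -> Kr reduction ~ {REJECTION_PER_PASS**2:.0e}")
\end{verbatim}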
\section{Outer Detector}
\label{sec:OD}
The principal purpose of the Outer Detector is to tag neutron scattering events in the TPC. Most neutrons originate from radioactive impurities in material immediately adjacent to the TPC, such as those from ($\alpha$,n) processes on PTFE. The OD is a near-hermetic liquid scintillator detector designed to capture and tag neutrons within a time window that allows the signals to be correlated with the NR in the TPC.
The detection medium for the OD is gadolinium-doped liquid scintillator (GdLS) contained within segmented acrylic vessels that surround the OCV.
Neutrons are detected predominantly through capture on $^{155}$Gd and $^{157}$Gd; a total of 7.9 MeV ($^{155}$Gd) or 8.5 MeV ($^{157}$Gd) is released in a post-capture cascade of, on average, 4.7 gammas. About 10\% of neutrons capture on hydrogen, emitting a single 2.2~MeV gamma. Gammas induce scintillation within the LS which is subsequently collected by the 120 8--inch PMTs that view the OD from a support system inside the water tank. To maximize light collection efficiency, there is both a Tyvek curtain behind, above and below the PMTs, and a layer of Tyvek surrounding the cryostat.
The OD has been designed to operate with a neutron detection efficiency of greater than 95\%. To optimize this efficiency, the concentration of Gd was chosen such that capture on H is sub-dominant. Furthermore, the time between a signal in the TPC and a neutron capture in the OD impacts the efficiency (see Fig.~\ref{fig:ODInefficiency}). The level of Gd chosen for LZ reduces the average capture time of thermal neutrons in liquid scintillator from 200~$\mu$s to 30~$\mu$s. However, there is a significant population of neutrons that survive several times longer than 30~$\mu$s. Simulations demonstrate that neutrons can spend significant time scattering and thermalizing within the acrylic walls of the OD vessels. To minimize
this effect, the acrylic walls are designed to be as thin as is structurally possible. Using less acrylic also reduces the number of H-captures. The use of a 500~$\mu$s time window allows for an efficiency of 96.5\% for a 200 keV threshold, while achieving a deadtime of less than 5\%.
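The quoted deadtime can be checked with Poisson statistics: each TPC trigger is vetoed if an OD signal above threshold falls within the veto window, so the accidental-veto (deadtime) fraction is $1 - e^{-R\,\Delta t}$ for an OD rate $R$. A minimal check, using the $\sim$51~Hz total rate predicted in Table~\ref{tab:ODrate}:

\begin{verbatim}
import math

# Hedged sketch: accidental veto (deadtime) fraction for a Poisson OD
# rate. R ~ 51 Hz is the predicted total OD rate (see the OD rate table
# below); the 500 us window is the design veto window.

R_HZ = 51.0
WINDOW_S = 500e-6

deadtime = 1.0 - math.exp(-R_HZ * WINDOW_S)
print(f"accidental deadtime ~ {deadtime:.1%}")  # ~2.5%, below the 5% budget
\end{verbatim}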
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig10.pdf}
\caption{Monte Carlo derived OD inefficiency as a function of veto window (time between S1 in the TPC and signal in the OD). The energy thresholds referenced in the legend are for electron recoils. \label{fig:ODInefficiency}}
\end{figure}
\subsection{Outer Detector systems}
\begin{figure}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.85\linewidth]{fig11.jpg}
\caption{The outer detector system in an exploded view. The four large side vessels are shown in green and the 5 smaller top and bottom vessels are shown in blue. Also shown are water displacers in red, and the stainless steel base in grey.}
\label{fig:explodedOD}
\end{figure}
A total of ten ultra-violet transmitting (UVT) Acrylic Vessels have been fabricated by Reynolds Polymer Technology in Grand Junction, Colorado. These consist of four side vessels, three bottom vessels, two top vessels and a `plug' which can be removed for photoneutron source deployment, see Fig.~\ref{fig:explodedOD}. The AVs are segmented to allow transport underground and into the water tank with no acrylic bonding necessary on site.
All acrylic walls for the side AVs are nominally 1--inch thick. For the top and bottom AVs, the side and domed acrylic walls are 0.5--inch thick, whereas the flat tops and bottoms are 1--inch thick for structural reasons. The AVs contain various penetrations for conduits and cabling.
All sheets of acrylic used for fabrication were tested for optical transmission and were found to exceed 92\% between 400 and 700~nm, meeting requirements. Acrylic samples were screened with ICP-MS (see Sec.~\ref{sec:HPGeMS}) and found to be sufficiently low in radioactive contaminants.
The liquid scintillator used in the OD consists of a linear alkyl benzene (LAB) solvent doped at 0.1\% by mass with gadolinium. The full chemical make-up of the GdLS cocktail is shown in Table~\ref{tab:GdLS}.
Gd is introduced into LAB using trans-3-Methyl-2-hexenoic acid (TMHA) as a chelation agent. Other components are the fluor, 2,5-Diphenyloxazole (PPO), and a wavelength shifter, 1,4-Bis(2-methylstyryl)-benzene (bis-MSB). The emission spectrum of this mix spans 350 to 540~nm, with peaks at 400 and 420~nm, and the absorption length in this range is of order 10~m.
LAB, TMHA and PPO are purified in order to remove metallic and coloured impurities to improve optical transmission; LAB and TMHA by thin-film distillation and PPO by water-extraction and recrystallization. For the chelated Gd product, a self-scavenging method is used to induce precipitation of uranium and thorium isotopes to improve radiopurity.
Twenty-two tonnes of GdLS contained in 150 55-gallon drums are shipped to SURF and transferred into the AVs through a reservoir. Exposure to air is minimized, as oxygen, radon and krypton negatively impact the GdLS performance. The GdLS is bubbled with nitrogen while in the reservoir in order to remove dissolved oxygen and maximize the light yield. Furthermore, a light yield test is performed on each drum of GdLS before transfer into the AVs. The test apparatus consists of a dark box containing a radioactive source, one PMT, and a small sample of the GdLS.
\begin{table*} [ht]
\caption{Chemical components in \SI{1}{\liter} of GdLS.}
\centering
\begin{tabular}
{c c c c}
\hline
{\bfseries Acronym} &
{\bfseries Molecular Formula} &
{\bfseries Molecular Weight (g/mol)} &
{\bfseries Mass (g)} \\
\hline
\vphantom{\Large L}LAB & C$_{17.14}$H$_{28.28}$ & 234.4 & 853.55 \\
PPO & C$_{15}$H$_{11}$NO& 221.3 & 3.00 \\
bis-MSB & C$_{24}$H$_{22}$ & 310.4 & 0.015 \\
TMHA & C$_9$H$_{17}$O$_2$ & 157.2 & 2.58 \\
Gd & Gd & 157.3 & 0.86 \\ \hline
\vphantom{\Large G}
GdLS & C$_{17.072}$H$_{28.128}$O$_{0.0126}$N$_{0.0037}$Gd$_{0.0015}$ & 233.9 & 860.0 \\ \hline
\end{tabular}
\label{tab:GdLS}
\end{table*}
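The 0.1\% loading can be read directly off Table~\ref{tab:GdLS}; a trivial consistency check of the per-liter component masses:

\begin{verbatim}
# Hedged sketch: consistency check of the GdLS recipe in the table above.
# Masses are the per-liter values quoted for each component.

components_g = {"LAB": 853.55, "PPO": 3.00, "bis-MSB": 0.015,
                "TMHA": 2.58, "Gd": 0.86}

total = sum(components_g.values())
print(f"total {total:.1f} g/L; Gd fraction {components_g['Gd']/total:.2%}")
# -> total ~860 g/L and a Gd mass fraction of ~0.1%, as quoted.
\end{verbatim}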
A total of 120 Hamamatsu R5912 8--inch PMTs view the GdLS and AVs from a Tyvek curtain situated 115~cm radially from the outer wall of the acrylic (see Fig.~\ref{fig:LZSolid}). The interaction rate in the OD from radioisotopes in the PMT system is predicted to be only 2.5~Hz due to the shielding provided by the water gap. The PMTs are arranged in 20 ladders spaced equally around the water tank in $\phi$ with 6 PMTs per ladder. Each PMT is held by a `spider' support and covered by a Tyvek cone.
A dedicated Optical Calibration System (OCS) has been designed for PMT monitoring and measurement of the optical properties of the GdLS and acrylic. Thirty duplex optical fibres, illuminated by LEDs, are mounted with the PMT supports, with an additional five beneath the bottom AVs placed specifically to check transmission through the acrylic. Short, daily calibrations with the OCS will be performed in order to check the PE/keV yield at the veto threshold, and weekly calibrations will be used to check PMT gains and optical properties.
\subsection{Performance}
The performance of the OD strongly depends on its event rate. The sources can be divided into internal and external: internal backgrounds are contamination intrinsic to the OD, i.e. inside the GdLS, while external sources can be subdivided again into radioactivity from LZ components and radioactivity from the Davis Cavern itself (see Table~\ref{tab:ODrate}).
Radioactive contaminants internal to the GdLS have been measured through a campaign with an LS Screener, a small detector containing 23~kg of liquid scintillator viewed by three low background LZ PMTs, fully described in Ref.~\cite{Haselschwardt:2018}. The LS Screener took data with both loaded (with Gd) and unloaded (no Gd) samples of the liquid scintillator that is used in the Outer Detector. The unloaded sample allowed a clear determination of which contaminants were introduced during the Gd-doping process, as well as a clearer low energy region for a measurement of the $^{14}$C concentration, particularly important as its rate influences the choice of energy threshold. Use of pulse shape discrimination allowed for efficient separation of alpha decays from betas and gammas, and constraints were placed on activity from the $^{238}$U, $^{235}$U and $^{232}$Th decay-chains, $^{40}$K, $^{14}$C, $^{7}$Be (from cosmogenic activation of carbon in the LS), $^{85}$Kr and $^{176}$Lu. Two surprising and significant findings of the LS Screener were the dominance of the rate from nuclides within the $^{235}$U chain and the presence of $^{176}$Lu, now known to be introduced when doping with gadolinium; neither was observed in the unloaded LS sample. A more aggressive purification of the GdLS resulted in a decrease in activity of almost all contaminants. The new, lower activities were used in combination with the LS Screener results to predict a rate of 5.9~Hz above the nominal 200~keV veto threshold for the OD.
\begin{table}[h]
\centering
\footnotesize
\caption{\small Predictions for the event rate in the Outer Detector. Rates are given in the case of the nominal 200~keV threshold. \label{tab:ODrate}}
\begin{tabular}{ l l c } \hline
\textbf{Type} & \textbf{Component} & \textbf{OD Rate (Hz)} \\ \hline
\multirow{6}{*}{External} & PMTs \& Bases & 0.9 \\
& TPC & 0.5 \\
& Cryostat & 2.5 \\
& OD & 8.0 \\
& Davis Cavern & 31 \\ \hline
Internal & GdLS & 5.9 \\ \hline
\textbf{Total} & & \textbf{51} \\ \hline
\end{tabular}
\end{table}
The biggest contribution to the rate in the OD is from the radioactivity within the Davis Cavern. Contamination of the cavern walls at the level of tens of Bq/kg for $^{40}$K, $^{238}$U and $^{232}$Th has been established using dedicated measurements of the $\gamma$-ray flux with a NaI detector~\cite{Akerib:2019sek}, and simulation studies suggest a rate above 200~keV of $27\pm7$~Hz, concentrated in the top and bottom AVs.
With an expected overall rate of $\sim$50~Hz, the OD should operate with an efficiency of 96.5\% for a 200~keV threshold. The energy threshold of the OD is nominally the number of photoelectrons corresponding to an energy deposit of 200~keV, predicted to be 10~PE by photon transport simulations. The threshold is chosen to eliminate the rate from internal $^{14}$C contamination, a low energy $\beta$-decay with an endpoint of 156~keV. The OD may be operated instead with a 100~keV threshold, depending on the observed rate, which would decrease the inefficiency at a window of 500~$\mu$s from 3.5\% to 2.8\%.
The impact of the OD on NR backgrounds is characterized through neutron Monte Carlo simulations. The total NR background in 1000 livedays is predicted to be reduced from 12.31 to 1.24 NR counts when the OD and skin vetoes are applied, with the OD providing most of the vetoing power. Due to the spatial distribution of these NRs in the LXe TPC, the OD is necessary to utilize the full 5.6 tonne fiducial volume.
\section{Calibrations}
\label{sec:Calibrations}
\newcommand{\isot}[2]{$^{\textrm{#2}}$#1}
Many attributes of the LZ detector response require \emph{in situ} calibration. Calibration goals range from low-level quantities such as PMT gain and relative timing to high-level quantities such as models of electron recoil and nuclear recoil response. To these ends, the LZ detector includes significant accommodations for a suite of calibrations. Large-scale accommodations (some visible in Fig.~\ref{fig:LZSolid}) include three small-diameter conduits to transport external sources to the cryostat side vacuum region, one large-diameter conduit to transport large ($\gamma$,n) sources to the cryostat top, and two evacuated conduits to enable neutron propagation from an external neutron generator to the cryostat side.
\renewcommand{\arraystretch}{1.1}
\begin{table} [t]
\caption{Overview of radioactive nuclide sources planned for LZ calibration, grouped according to deployment method. A: gaseous sources released into GXe circulation, B: sealed sources lowered down small-diameter conduits to cryostat side vacuum, C: ($\gamma$,n) sources requiring dense shielding, lowered down a large-diameter conduit to the cryostat top, D: DD generator sources, in which neutrons travel through conduits from the generator, through the water tank and outer detector.}
\centering
\begin{tabular}
{|l|l|l|l|l|}
\hline
& Nuclide & Type & Energy [keV] & $\tau_{1/2}$ \\
\hline
& \isot{Kr}{83m} & $\gamma$ & 32.1, 9.4 &1.83~h \\
& \isot{Xe}{131m} & $\gamma$ & {164} &11.8~d\\
A & $^{220}$Rn & $\alpha, \beta, \gamma$ & various & 10.6~h \\
& $^3$H & $\beta$ & 18.6 endpoint &12.3~y \\
& $^{14}$C & $\beta$ & 156 endpoint &5730~y\\
\hline
& $^{241}$AmLi & ($\alpha$,n) & 1500 endpoint &432~y \\
& $^{252}$Cf & n & Watt spectrum &2.65~y \\
& $^{241}$AmBe & ($\alpha$,n) & 11,000 endpoint &432~y \\
&$^{57}$Co & $\gamma$ & {122} &0.74~y \\
B &$^{228}$Th & $\gamma$ & {2615} &1.91~y \\
&$^{22}$Na & $\gamma$ & {511,1275} &2.61~y \\
&$^{60}$Co & $\gamma$ & 1173, 1333 &5.27~y \\
&$^{133}$Ba & $\gamma$ & {356} &10.5~y \\
&$^{54}$Mn & $\gamma$ & {835} &312~d \\
\hline
& $^{88}$YBe & ($\gamma$,n) & 152 &107~d \\
C & $^{124}$SbBe & ($\gamma$,n) & 22.5 &60.2~d \\
& $^{205}$BiBe & ($\gamma$,n) & 88.5 &15.3~d \\
& $^{206}$BiBe & ($\gamma$,n) & 47 &6.24~d \\
\hline
D & DD & n & 2450 &$-$ \\
& D Ref. & n & $272\rightarrow400$ &$-$ \\
\hline
\end{tabular}
\label{table:sourcelist}
\end{table}
\subsection{Internal sources}
Gaseous sources can mix with the LXe in order to reach the central active volume, where self-shielding limits calibration via external gamma sources. The baseline suite of such `internal' sources is listed in Group A of Table~\ref{table:sourcelist}.
Long-lived gaseous sources (\isot{H}{3}, \isot{C}{14}) can be stored as a pressurized gas, with purified Xe serving as the carrier. Because the nuclide is long-lived, it must be in a chemical form that can be efficiently removed by the getter (see Sec.~\ref{sec:XeHandling}). The LZ implementation builds on the successful example of LUX, in which isotopically-labeled CH$_4$ served as the calibration gas. CH$_4$ was seen to be efficiently removed, as long as it did not contain trace amounts of other labeled hydrocarbons~\cite{dobi2014, akerib:2015wdi}.
The short-lived gaseous sources (\isot{Kr}{83m}, \isot{Xe}{131m}, $^{220}$Rn) are stored in the form of their parent nuclide, which can be handled and stored in a compact solid form and placed within `generator' plumbing in which it emanates the calibration daughter. \isot{Rb}{83} serves as the parent nuclide of \isot{Kr}{83m}, and is deposited in aqueous solution on high purity, high surface area charcoal before baking (as in~\cite{akerib:2017eql}). \isot{I}{131} serves as the parent nuclide of \isot{Xe}{131m}, and is commercially available in a pill form of high Xe emanation efficiency. \isot{Th}{228} serves as the parent nuclide of \isot{Rn}{220}, and is available commercially from Eckert \& Ziegler as a thin electroplated film for optimal Rn emanation. In the LZ implementation, these generator materials are housed in transportable and interchangeable plumbing sections (see Fig.~\ref{fig:generatorplumbing}). These assemblies contain both a port for material access and a pair of sintered nickel filter elements (3~nm pore size, Entegris WG3NSMJJ2) to prevent contamination by the parent nuclide of the active Xe.
Both long-lived and short-lived gaseous sources require precise dose control on the injected activity, accomplished via a gas handling system dedicated to injection control. A specific GXe cylinder supplies the carrier gas to transport small quantities of calibration gas through a series of high-precision Mass Flow Controllers (Teledyne-Hastings HFC-D-302B) and volumes of precise pressure measurement (MKS 872). Once a dose of calibration gas has been isolated, the volume containing the dose is flushed into the main GXe circulation flow path, either before the getter for noble-element calibration species or after the getter for long-lived CH$_4$-based species.
\begin{figure}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.80\linewidth]{fig12.jpg}
\caption{TOP: Three example solid materials containing parent nuclides that emit daughter calibration gaseous sources. Charcoal dosed with \isot{Rb}{83} is fixed to a 1/2-inch VCR plug (1). A gas-permeable pill (2) containing \isot{I}{131} and a disk source (3) of electroplated \isot{Th}{228} can also be fixed in place. BOTTOM: Photograph of a typical gaseous source generator. Carrier Xe gas flows from left to right. The active parent material is stored in the central region (4), accessed via a 1/2-inch VCR port. This region is bounded by a pair of filter elements (5) of 3~nm pore size sintered nickel and then a pair of lockable manual valves for isolation during shipping and installation.}
\label{fig:generatorplumbing}
\end{figure}
\subsection{External sources}
External sources are lowered through three 23.5~mm ID, 6~m long conduits to the vacuum region between the ICV and OCV. Each conduit is capped by a deployment system (Fig.~\ref{fig:sourcedeployment}) which raises and lowers the sources with final position accuracy of $\pm$5~mm. The position measurement is accomplished via an ILR1181-30 Micro-Epsilon laser ranger (visible at the top of Fig.~\ref{fig:sourcedeployment}), supplying live data for an active feedback protocol to a SH2141-5511 (SANYO DENKO Co) stepper motor. A $\sim$100~$\mu$m nylon composite filament, rated to a maximum load of 12~kg, suspends the sources. The external sources themselves are in most cases commercial sources (Eckert \& Ziegler type R). A special case is the AmLi source, custom fabricated but of the same form factor. To enable smooth transport up and down the conduit, each source is epoxied and encapsulated at the lower end of a 5" long by 0.625" diameter acrylic cylinder. The top end contains the capsule holder allowing connection to the filament and includes a ferromagnetic connection rod for recovery in case of filament breakage.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig13.jpg}
\caption{LEFT: One of three external source deployment systems, including the laser ranging system (top black component) and the stepper motor and gear/winding assembly (enclosed in a KF50 Tee at the back). A transparent plate makes visible the region in which the sources are installed and removed. RIGHT: An external source assembly, showing the acrylic body, the source region (at bottom), and the filament connection, black skids, and laser reflector (at top).}
\label{fig:sourcedeployment}
\end{figure}
\subsection{Photoneutron sources}
A selection of photoneutron ($\gamma$,n) sources, including $^{88}$YBe, $^{124}$SbBe, $^{205}$BiBe and $^{206}$BiBe, are planned to calibrate the nuclear recoil energy range from below 1 keV up to about 4.6 keV. This range corresponds to the expected energy depositions from \isot{B}{8} solar neutrino coherent scattering. Only about one neutron is produced for every $10^{4}$ gammas emitted, so a significant quantity of gamma shielding is required (see Fig.~\ref{fig:neutronsources}). The neutrons are quasi mono-energetic at production (within a few percent) but undergo additional scatterings before they reach the liquid xenon. The utility of this calibration source derives from the endpoint of the nuclear recoil energy spectrum, which simulations indicate will be clearly distinguishable after a few days of calibration.
A ${\sim}$140 kg tungsten shield block is designed to be deployed at the top of LZ via a crane. In the unlikely event the shield block were to become lodged inside the LZ water tank, it would be possible to separately remove the conical structure which contains the gamma source and the Be.
\subsection{Deuterium-deuterium neutron sources}
An Adelphi DD-108 deuterium-deuterium (DD) neutron generator produces up to $10^8$ neutrons per second. A custom upgrade will allow up to $10^9$ n/s. The neutrons are delivered through the Davis Cavern water tank and Outer Detector via dedicated neutron conduits. There are two sets of conduits, one level and one inclined at 20 degrees from the horizontal. Each conduit assembly includes a 2--inch diameter and a 6--inch diameter path, and all are filled with water during dark matter searches. As shown in Fig.~\ref{fig:neutronsources}, the generator is permanently mounted on an Ekko Lift (model EA15A), and surrounded by custom neutron shielding material. A kinematic mounting plate, located between the forks of the lift, will bolt to threaded inserts in the concrete floor. This is designed to provide precise, repeatable positioning.
The DD-108 produces 2450 keV mono-energetic neutrons. This source has already been used by LUX to obtain a precise, \emph{in-situ} calibration of the low-energy nuclear recoil
response~\cite{akerib:2016mzi}. In addition to this mode of operation, LZ obtains 272~keV quasi mono-energetic neutrons by reflecting the 2450 keV beam from a deuterium oxide (D$_2$O) target. This allows the lowest nuclear recoil energies to be calibrated with decreased uncertainty.
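The quoted endpoint energies follow from elastic-scattering kinematics: a neutron of energy $E_n$ on a nucleus of mass number $A$ produces a maximum recoil energy $E_R^{\rm max} = \frac{4A}{(1+A)^2}\,E_n$. A minimal check for xenon, using $A \approx 131$ (the mean Xe mass number, an approximation):

\begin{verbatim}
# Hedged sketch: maximum elastic nuclear-recoil energy in xenon for the
# neutron sources above. A ~ 131 (mean Xe mass number) is approximate.

A_XE = 131.0

def max_recoil_kev(e_n_kev, a=A_XE):
    """Kinematic endpoint of the recoil spectrum for elastic scattering."""
    return 4.0 * a / (1.0 + a) ** 2 * e_n_kev

for label, e_n in [("88YBe", 152.0), ("124SbBe", 22.5),
                   ("DD", 2450.0), ("reflected DD", 272.0)]:
    print(f"{label:12s} E_n = {e_n:6.1f} keV -> "
          f"E_R(max) ~ {max_recoil_kev(e_n):.2f} keV")
# 88YBe gives ~4.6 keV, matching the quoted photoneutron reach; the DD
# and reflected-DD beams give endpoints of ~74 keV and ~8 keV.
\end{verbatim}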
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig14.jpg}
\caption{LEFT: Brass mockup of the photoneutron source shielding assembly, which will be made of tungsten alloy. The main body is at top. Depicted below is a conical insert that houses radioactive source. RIGHT: Rendering of the DD generator in its boron-doped shielding assembly, all mounted on a movable positioning system.}
\label{fig:neutronsources}
\end{figure}
\section{Electronics and Controls}
\label{sec:FrontEnd}
The signal processing electronics, described in detail in Sec.~\ref{subsec:SignalFlow}, processes the signals from 494 TPC PMTs, 131 skin PMTs, and 120 outer-detector PMTs.
The electronics is designed to ensure a detection efficiency for single photoelectrons (PEs) of at least 90\%. The PMT signals are digitized with a sampling frequency of 100~MHz and 14-bit accuracy. The gain and shaping parameters of the amplifiers are adjusted to optimize the dynamic range for the PMTs. The dynamic range for the TPC PMTs is defined by the requirement that the S2 signals associated with a full-energy deposition of the 164 keV $^{131m}$Xe activation line do not saturate the digitizers. Larger energy depositions will saturate a number of channels of the top PMT array, but the size of the S2 signal can be reconstructed using the S2 signals detected with the bottom PMT array. The saturation of a few top PMTs by large S2 signals will not impact the accuracy of the position reconstruction. For the skin PMTs, the dynamic range is defined by the requirement that the skin PMT signals associated with the interaction of a 511-keV $\gamma$-ray in the skin do not saturate the digitizers. Simulations show that such an interaction can generate up to 200 PEs in a single PMT, depending on the location of the interaction. The dynamic range for the outer-detector PMTs is defined by the requirement that the size of the outer-detector PMT signals associated with neutron capture on Gd, which generates a $\gamma$-ray cascade with a total energy between 7.9 and 8.5~MeV, do not saturate the analog and digital electronics. Such events generate at most 100 PEs in a single PMT. For muon interactions in the outer detector, a few PMTs may saturate, depending on the location of the muon track.
The Data Acquisition system (DAQ) is designed to allow LED calibrations of the TPC PMTs in about 10 minutes. This requires an event rate of 4~kHz, resulting in a $\sim$340~Mb/s total waveform data rate. Monte Carlo simulations predict a total background rate of about 40~Hz. The background rate between zero and 40~keV is about 0.4~Hz. Due to the maximum drift time of 800~$\mu$s in the TPC, the rate for TPC source calibrations is limited to 150~Hz. A 150~Hz calibration rate results in a 10\% probability of detecting a second calibration event within the drift time of the previous calibration event.
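The 10\% figure is consistent with Poisson pileup: the chance of at least one additional calibration event within one maximum drift time is $1 - e^{-R\,t_{\rm drift}}$, which for the numbers above is of order 10\%.

\begin{verbatim}
import math

# Hedged sketch: probability of a second calibration event landing within
# one maximum drift time, assuming Poisson-distributed triggers.

RATE_HZ = 150.0
DRIFT_S = 800e-6

p_pileup = 1.0 - math.exp(-RATE_HZ * DRIFT_S)
print(f"pileup probability ~ {p_pileup:.0%}")   # ~11%, i.e. roughly 10%
\end{verbatim}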
The slow control system is responsible for controlling and monitoring all LZ systems. It is described in detail in Sec.~\ref{subsec:Controls}.
\subsection{Signal Flow}
\label{subsec:SignalFlow}
The processing of the signals generated by the TPC PMTs is schematically shown in Fig.~\ref{fig:signalFlow}. The TPC and skin PMTs operate at a negative HV, supplied by the HV system, using MPOD EDS 20130n\_504 and MPOD EDS 20130p\_504 HV distribution modules from WIENER Power Electronics~\cite{wiener}. HV filters are installed at the HV flange on the breakout box. The PMT signals leave the breakout box via a different flange and are processed by the analog front-end electronics. The amplified and shaped signals are connected to the DAQ. The digitized data are sent to Data Collectors and stored on local disks.
The PMTs of the outer-detector system operate at positive HV. The same type of amplifier used for the TPC and skin PMTs is also used for the outer-detector PMTs.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig15.pdf}
\caption{Schematic of the signal processing of the TPC PMTs. The TPC and outer-detector PMTs use dual-gain signal processing. The skin PMTs only utilize the high-gain section of the amplifiers. The signals are digitized using the DDC-32 digitizer, developed for LZ in collaboration with SkuTek~\cite{skutek}. Event selection is made by the Data Sparsification (DS) system~\cite{druszkiewicz:2015pcl}. }
\label{fig:signalFlow}
\end{figure}
The PMT signals are processed with dual-gain amplifiers. The low-gain channel has a pulse-area gain of four and a 30-ns full width at tenth maximum (FWTM) shaping-time constant. The high-gain channel has a pulse-area gain of 40 and a shaping time of 60-ns (FWTM). The high-gain channel is optimized for an excellent single PE response. The shaping times and gains are derived from one assumption: the DAQ has a usable dynamic range of 1.8~V at the input. A 0.2~V offset is applied to the digitizer channels in order to measure signal undershoots of up to 0.2~V. Measurements with prototype electronics and the LZ PMTs have shown that the same amplifier parameters can be used for all of them. For the TPC and the Outer Detector PMTs, both high-gain and low-gain channels are digitized; for the skin PMTs, only the high-gain channel is digitized.
The top-level architecture of the DAQ system is shown schematically in Fig.~\ref{fig:signalFlow}. The DDC-32 digitizers, developed for LZ in collaboration with SkuTek~\cite{skutek}, continuously digitize the incoming PMT signals and store them in circular buffers. When an interesting event is detected, the Data Extractors (DE) collect the information of interest from the DDC-32s. The DEs compress and stack the extracted data using their FPGAs and send the data to Data Collectors for temporary storage. The Event Builder takes the data organized by channels and assembles the buffers into full event structures for online and offline analysis. The DAQ operation is controlled by the DAQ Master for high-speed operations such as system synchronization and waveform selection, and by the DAQ Expert Control/Monitoring system, not shown in Fig.~\ref{fig:signalFlow}, for slow operations such as running setup/control and operator diagnostics. The entire system runs synchronously with one global clock.
The performance of the entire signal processing chain has been evaluated in an electronics chain test. Pre-production prototypes of the analog and digital electronics, as well as signal cables of the same type and length as those to be installed at SURF, were used. The measured response of a four-sample wide S1 filter is shown in Fig.~\ref{fig:spheEff}. The measured single photoelectron (PE) efficiency is 99.8\%, much better than the requirement of 90\%. The threshold at which the false-trigger rate is 1~Hz is 43 ADC Counts (ADCC), or 16\% of a single PE.
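To put the threshold in physical units, one can fold in the digitizer scale; the sketch below assumes the full 14-bit code range maps onto a 2~V input window (the 1.8~V usable range plus the 0.2~V offset), which is an assumption about the range mapping rather than a documented specification.

\begin{verbatim}
# Hedged sketch: converting the S1-filter threshold from ADC counts to mV.
# Assumes the 14-bit code range maps onto a 2 V window (1.8 V usable range
# plus 0.2 V offset); the filter output is a four-sample quantity, so this
# is only an approximate voltage equivalent.

FULL_SCALE_MV = 2000.0
N_BITS = 14

mv_per_adcc = FULL_SCALE_MV / 2 ** N_BITS
threshold_adcc = 43
print(f"{mv_per_adcc:.3f} mV/ADCC; "
      f"43 ADCC ~ {threshold_adcc * mv_per_adcc:.1f} mV")
print(f"implied single-PE filter output ~ {threshold_adcc / 0.16:.0f} ADCC")
\end{verbatim}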
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig16.pdf}
\caption{Measurements of the TPC PMT response to a single PE. The output of the four-sample wide S1 filter, in units of ADC Counts (ADCC), is shown.}
\label{fig:spheEff}
\end{figure}
\subsection{Data Flow and Online Data Quality}
\label{subsec:DataFlow}
The data flow is schematically shown in Fig.~\ref{fig:dataFlow}. Five event builders (EB) assemble the events by extracting the relevant information from the Data Collector disks, DAQ1-DAQ15. A 10 Gigabit per second (Gbs) line connects the Data Collectors to the Event Builders. The event files are stored on the 16~TB local disk array of each EB, before being transferred to the 192~TB RAID array installed on the surface. From there, the event files are distributed to the data-processing centers for offline data processing and analysis.
\begin{figure}[h!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.95\linewidth]{fig17.pdf}
\caption{A schematic of the LZ data flow.}
\label{fig:dataFlow}
\end{figure}
Redundant connections exist between the surface and the Davis Cavern. A pair of managed 10~Gbs switches is installed on the surface and underground. Each switch on the surface is connected via two fibers, travelling through the Yates and the Ross shafts, to the two underground switches. Since each link supports 10~Gbs, this configuration can support data transfer rates of up to 40~Gbs.
A fraction of the data is analyzed online by the Detector Quality Monitor (DQM), running on a dedicated online server installed in the Davis Laboratory. The DQM applies elements of the offline analysis to the data being collected, in order to monitor the performance of the detector. For example, during $^{83m}$Kr calibrations, the DQM will monitor the electron lifetime and the energy resolution. Various detector parameters, such as PMT multiplicity, hit distributions across both PMT arrays, and trigger rates, will be monitored. If significant deviations from prior observed patterns are seen, experts will be automatically notified via the slow control system.
\subsection{Controls }
\label{subsec:Controls}
The Controls system performs supervisory control and monitoring of all the major subsystems of the experiment, including cryogenics, fluid handling, detector diagnostic sensors, high voltage, and electronics monitoring. Not included in this system are the SURF-managed controls ensuring personnel safety (for example, oxygen deficiency alarms and sensors). The functionality provided by the Controls system can be classified in the following four categories:
1) protection against xenon loss and contamination,
2) experiment parameter monitoring and logging,
3) control over LZ subsystems, and
4) providing the interface to operators.
In order to minimize risks associated with possible xenon loss or contamination, instruments and subsystems are divided into two major groups with respect to the perceived impact of their possible malfunction on the integrity of the xenon supply. Those where equipment failure or operator error can lead to xenon loss or contamination are designated ``critical'' and the rest are ``non-critical''. During a power failure, a combination of UPS and generator power will provide continuity to the critical components of the slow control system.
The PLC system provides automatic protection in the case of an emergency. If the Xe pressure inside the ICV reaches a threshold value, or if there is an extended power outage in the Davis Cavern, the Xe compressors will activate and the vaporized xenon will be safely transferred to the Xe storage packs. In these scenarios Xe transfer occurs automatically without assistance from a human operator. Additionally, the PLC is programmed with a set of interlocks to protect the experiment from erroneous operator commands or equipment failure during routine operations.
The functional diagram of the slow control system and its interaction with the experiment subsystems and infrastructure is shown in Fig.~\ref{fig:SlowControl}. The core of the slow control system is composed of three components: (1) the integrated supervisory control and data acquisition (SCADA) software platform \emph{Ignition} from Inductive Automation~\cite{sqlbridge:2015}, (2) a Siemens SIMATIC S7-410H dual-CPU PLC with Redundant Hot Backup, allowing bumpless transfer from one active CPU to the backup, and (3) associated I/O modules~\cite{Siemens410-5h}. Non-critical instruments connect directly to the Ignition server, typically using MODBUS-TCP over the slow control local network. Critical instruments are managed by the PLCs, which in turn communicate with the Ignition server.
The Ignition server provides a single operator interface, alarm system, and historical record for all slow control instrumentation. It provides authorized users with access to specific controls. In addition, the Ignition server provides the scripting engine for experiment automation. Ignition also provides a GUI for accessing historical data in the form of plots by local and remote clients. The configuration data (properties of sensors and controls, alarms, and user preferences) are stored in the local configuration database.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.85\linewidth]{fig18.pdf}
\caption{Slow control functional diagram.}
\label{fig:SlowControl}
\end{figure*}
For critical systems, which include xenon handling, cryogenics, and grid high voltage, PLCs add an additional layer between the slow control software and the physical instruments. This ensures uninterrupted control of critical instruments and separates the low level logic governed by PLCs from the higher level operations run by the Ignition scripting engine.
The PLC system is responsible for the automated recovery of xenon in emergency scenarios and must be robust against single-point failures. This begins with the Siemens dual-CPU system. The system is powered by redundant 24~VDC supplies fed by two separate electrical panels, one of which is supported by an 8~kVA APC Symmetra UPS. Both panels are also supported by a backup diesel-powered electrical generator, capable of powering the PLC system and xenon recovery compressors for the time required to complete xenon recovery.
Each Siemens CPU has its own connection to a common set of S7-300 I/O Modules, which connect directly to individual instruments/sensors, as well as to a set of PROFIBUS-DP Y-links, allowing for redundant control of PROFIBUS instruments. In either case, redundancy does not in general extend to the I/O modules and instruments. However, those instruments critical for xenon recovery are duplicated to eliminate single-point failures.
Several integrated instruments, including the xenon compressors, vacuum pump systems, and the liquid nitrogen generator, have their own dedicated PLCs which function as PROFIBUS slaves to the Master PLC. These smaller PLCs are either provided and programmed by the instrument vendor or, in the case of the xenon compressors, built and programmed by LZ.
Each of the four xenon compressors (two circulation and two recovery) is run by a dedicated Beckhoff CX8031 PLC, which governs compressor startup and shutdown sequences. The PLCs running the two xenon recovery compressors have the extra feature that they are capable of initiating xenon recovery in response to an over-pressure scenario, triggered by xenon pressure transducers connected directly to each PLC. The intent is that emergency xenon recovery will be initiated and coordinated by the Master PLC, with this slave-initiated recovery as a backup.
The choice of Ignition as the software platform for LZ Controls is based on the wide range of highly customizable, easy-to-use core functions and tools for development of efficient and robust SCADA systems. Ignition also comes with a versatile toolbox for GUI design and a comprehensive library of device drivers supporting most of the hardware used in LZ. One of these drivers supports Siemens PLCs, allowing the PLC tags (internal variables) to be exported. Devices connected to the PLC can be exposed to Ignition as sensors and controls. For the rest of the devices the preferred protocol is MODBUS, which is also supported by the Ignition server. For those devices that do not support MODBUS (e.g. RGA units), two interfacing methods are envisaged: 1) custom Java drivers written in the Ignition software development kit, and 2) a Python MODBUS server with a plugin system or custom Python drivers.
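For reference, a MODBUS-TCP register read is simple enough to express in a few lines. The following sketch issues a function-3 (read holding registers) request over a raw socket; the host, unit ID, and register addresses are hypothetical, and production code would add error handling and retries.

\begin{verbatim}
import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    """Minimal MODBUS-TCP function-3 read (sketch, no error handling)."""
    # MBAP header: transaction id, protocol id (0), length, unit id,
    # then the PDU: function 0x03, start address, register count.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 3, start, count)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(request)
        header = sock.recv(9)            # MBAP (7) + function + byte count
        payload = sock.recv(header[8])   # big-endian 16-bit registers
    return struct.unpack(">" + "H" * count, payload)

# e.g. poll two registers from a (hypothetical) gauge at 10.0.0.42:
# values = read_holding_registers("10.0.0.42", start=0, count=2)
\end{verbatim}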
The operator interface comprises a set of panels, organized into a tree view. At a lower GUI level, the panels represent the experiment subsystems via functional diagrams, with relevant information on the state of sensors and controls displayed in real time. Authorized system experts can also use these GUI panels to alter the controls not protected by the PLC interlocks. At a higher level, several panels show the real-time status of the system and subsystems, from which a trained operator is able to tell the overall health of the system. These panels also allow operators to assess high-level information, for example alarm status, current operation mode, status of automation scripts, etc. At a very high level, a single summary plot of the entire system can be prepared by slow control and sent to run control for display in a single status panel on the run control GUI.
\section{Detector Assembly}
\label{sec:UGAssembly}
LZ is being installed inside the existing 25' diameter, 20'
tall Davis Cavern water tank used for the LUX experiment.
Access to the water tank is down the 4850' vertical
Yates shaft and through horizontal drifts.
Some large items of equipment are segmented and
transported underground in sections, including the
OCV (three sections), and the tall AVs (four sections).
Other items, like the ICV and its TPC payload, and the
LXe tower, are transported and installed after being
fully assembled on the surface.
Substantial changes to the infrastructure of the Davis Campus were
required to accommodate the detector size increase from LUX to LZ. This included
creating more floor space with a platform for cable breakouts,
conversion of the compressor room roof to a space for purification and
Xe sampling equipment, and converting two previously unused excavations to
spaces for Xe storage and Rn removal equipment.
Underground assembly started in 2018 with the transport of the 12-foot
tall AVs. Each AV was contained in a heavy steel frame.
At the top of the Yates shaft the frame was mounted
in a rotatable assembly,
slung under the cage, and lowered to the 4850' level.
The rotatable assembly was used to move each 5000 pound unit around
duct work and other obstacles between the cage and the
Davis Cavern upper deck.
Each AV was wrapped in two plastic bags to keep dust from contaminating
the acrylic and the water tank. The AVs
are sensitive to large temperature
changes so timing and speed were also important
considerations during transport.
\begin{figure}[t!]
\centering
\includegraphics[trim={0 0.0cm 0 0.0cm},clip,width=0.90\linewidth]{fig19.jpg}
\caption{Intermediate assembly. A section through the Davis Campus showing the OCV ready to receive the ICV, and the ICV suspended above with cable conduits. 1-Deck; 2-Water tank.}
\label{fig:intermediateassembly}
\end{figure}
After the four tall AVs were placed inside
the water tank, the three sections
of the OCV were transported underground and installed on
support legs inside. After a leak check, and
a final washing of the water tank and its equipment,
the OCV top was removed and stored to prepare for ICV installation,
and cleanroom protocols were instituted for entering and exiting the area.
An underground radon reduction system treats the air supplied
to the water tank during the remainder of the assembly.
Much of the TPC assembly work was done in the Surface Assembly Lab
(SAL) at SURF because of its superior logistics
compared to the underground.
A well sealed class 1000 clean
room with reduced radon air supply~\cite{ateko}
and a clean monorail crane was used
for cryostat acceptance, TPC assembly, and insertion of the TPC into the
ICV. The Reduced Radon System (RRS) has been
measured to reduce the radon specific activity
of the supplied air from about 8.5~Bq/m$^3$ at its
input to 4~mBq/m$^3$ at its output.
The PMT and instrumentation cables are routed through
three reinforced bellows attached
to the ICV, and the entire assembly is transported underground
together. During transport the bellows are sealed
and the ICV is purged with boil-off LN. This
begins the process of removing H$_2$O, O$_2$, and Kr from
the large mass of PTFE in the detector. All subsequent assembly
steps are designed to keep the TPC under nitrogen purge until it
can be evacuated and filled with Xe gas.
The TPC is assembled and installed into the ICV vertically.
To move underground, the assembly is rotated
to horizontal, set in a transport frame,
and moved to the Yates headframe.
Slings attached to the underside of the cage
are used to lift the assembly into a vertical position
underneath the cage.
This process is reversed at the 4850' level,
returning it to horizontal, for transport down the drift.
It is set vertical again
on a shielding platform over the water tank. A temporary clean room is
constructed around the ICV, and the last protective bag is removed.
The ICV is connected to the
OCV top by three tie bars (instrumented threaded rods) that allow for
precision leveling of the TPC relative to the liquid Xe surface while
minimizing thermal losses. During final lowering into the
OCV the ICV is supported by the tie bars.
Once sealed, the ICV is evacuated, and the AVs
and OD PMTs are assembled around the OCV in the water tank.
A rendering of the ICV just prior to nesting within the OCV is shown in Fig.~\ref{fig:intermediateassembly}.
A second radon reduction system is available in the Davis Cavern to supply
air depleted of radon. It has demonstrated an output of 100~mBq/m$^3$
given input air of 70~Bq/m$^3$.
The water and GdLS are
co-filled to minimize stress in the AV
walls. The GdLS will be transported underground in 150 sealed
barrels. Each is placed on a scale in a clean tent before connection to
the liquid transfer system. The liquid is pumped into the GdLS
reservoir and distributed to the AVs. Once the liquids are filled,
detector commissioning can begin.
\section{Materials Screening \& Backgrounds}
\label{sec:Materials}
Material screening is the primary route to controlling the ER and NR backgrounds resulting from radioactivity in the experiment. Measurements of radioactive nuclides in and on all detector components are required. The ubiquitous and naturally occurring radioactive materials (NORM) of particular concern are the \Pgg-ray emitting nuclides $^{40}$K, $^{137}$Cs, and $^{60}$Co, as well as $^{238}$U, $^{235}$U, $^{232}$Th, and their progeny. The U and Th chains are also responsible for neutron production following spontaneous fission and ($\alpha$,n) reactions. Kr and Rn outgassing from materials into the Xe also results in ER backgrounds, and $\alpha$-emitting Rn daughters can contribute to neutron backgrounds when deposited on certain materials.
Fixed contamination, referring to non-mobile NORM nuclides embedded within materials, is the dominant source of neutron and $\gamma$-ray emission in LZ. To ensure a sub-dominant contribution from fixed contaminants, relative to irreducible backgrounds, all materials considered for use in the construction of the experiment are screened for NORM nuclides to $\approx$0.2~mBq/kg sensitivities, equivalent to tens of parts per trillion (ppt) g/g for $^{238}$U and $^{232}$Th. This ensures a maximum contribution from fixed contamination of less than 0.4~NR counts and an ER rate of $1\times 10^{-6}$~events/(keV $\cdot$ kg $\cdot$ year) in the LZ exposure. For materials such as PTFE, which are produced in granular form before being sintered in molds, plate-out constitutes an additional risk because surface contamination of the granular form becomes contamination in bulk when the granules are poured into molds. A limit of 10~mBq/kg of $^{210}$Po and $^{210}$Pb in PTFE maintains an NR contribution of $<$0.1 count in the LZ exposure.
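The quoted equivalence between mBq/kg and ppt g/g follows directly from the specific activities of the progenitor isotopes; the short calculation below reproduces it using standard nuclear data (nothing LZ-specific is assumed).

\begin{verbatim}
import math

N_A = 6.02214e23          # Avogadro's number [1/mol]
SEC_PER_YR = 3.156e7

def specific_activity_Bq_per_g(half_life_yr, mass_number):
    lam = math.log(2) / (half_life_yr * SEC_PER_YR)   # decay constant [1/s]
    return lam * N_A / mass_number                    # Bq per gram of isotope

for iso, t_half_yr, a_num in [("U-238", 4.468e9, 238),
                              ("Th-232", 1.405e10, 232)]:
    sa = specific_activity_Bq_per_g(t_half_yr, a_num)
    # 0.2 mBq/kg of material = 2e-7 Bq per gram of material
    ppt = 2.0e-7 / sa / 1.0e-12
    print(f"{iso}: 0.2 mBq/kg corresponds to about {ppt:.0f} ppt g/g")
# -> roughly 16 ppt for U-238 and 49 ppt for Th-232, i.e. "tens of ppt"
\end{verbatim}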
Radon emanated from within detector components is the dominant contributor to background in LZ, primarily through the ``naked'' beta emission from $^{214}$Pb in the $^{222}$Rn sub-chain as it decays to $^{214}$Bi. The $^{214}$Bi beta decay itself is readily identified by the subsequent $^{214}$Po alpha decay that would be observed within an LZ event timeline (T$_{1/2}$=160~$\mu$s). Similar coincidence rejection also occurs where beta decay is accompanied by a high-energy $\gamma$-ray, which may still be tagged by the Xe Skin or OD vetoes even if it leaves the active Xe volume. $^{220}$Rn generates $^{212}$Pb, resulting in $^{212}$Bi-$^{212}$Po sequential events which can be tagged. Radon daughters are readily identified through their alpha decay signatures and can be used to characterize the $^{222}$Rn and $^{220}$Rn decay chain rates and distributions in the active region, providing a useful complement to estimating radon concentration from the beta decay contribution to the ER background.
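The delayed-coincidence tagging described here amounts to searching for event pairs separated by a few $^{214}$Po half-lives. A toy version of such a tagger is sketched below; a real implementation would also apply energy and position cuts, which are omitted for brevity.

\begin{verbatim}
import numpy as np

T_HALF_PO214_S = 160e-6

def tag_bipo(event_times_s, n_half_lives=5):
    """Flag candidate Bi-214/Po-214 pairs by time proximity (toy sketch)."""
    t = np.sort(np.asarray(event_times_s, dtype=float))
    close = np.diff(t) < n_half_lives * T_HALF_PO214_S
    tagged = np.zeros(t.size, dtype=bool)
    tagged[:-1] |= close   # the prompt Bi-214 beta candidate
    tagged[1:] |= close    # the delayed Po-214 alpha candidate
    return t, tagged
\end{verbatim}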
The specific activity of $^{222}$Rn in LZ is required to be less than 2~$\si{\micro}$Bq/kg of LXe, equivalent to 20~mBq in the full xenon inventory. All xenon-wetted components in LZ have been constructed from low-radon materials selected through dedicated measurement facilities. Due to the large number of expected emitters, these facilities were required to achieve a sensitivity of 0.2~mBq to $^{222}$Rn. Measurements are made at room temperature; however, the expected emanation can depend strongly on temperature, depending on the source material. A conservative approach is adopted in estimating radon emanation in our model, taking credit for a reduction at LXe temperatures wherever this is supported in the literature. Significant background from $^{220}$Rn is not expected given its very short half-life; we conservatively include in our background model a contribution from $^{220}$Rn of 20\% of the ER counts from $^{222}$Rn.
The accumulation of $^{222}$Rn daughters plated out during the manufacture and assembly of components, as well as dust and particulates, contributes to the LZ background. The $\alpha$-particle emitting Rn daughters can induce neutrons via ($\alpha$,n) processes, which is particularly problematic for materials with large cross-sections for this process such as fluorine, present in the TPC walls (PTFE). Plate-out on the inside of the TPC walls causes $\alpha$-particles and recoiling ions to enter the active volume. The risk of mis-reconstructing these recoils into the fiducial volume sets particularly stringent constraints on plate-out on the PTFE. LZ instituted a target for plate-out of $^{210}$Pb and $^{210}$Po of less than 0.5~mBq/m$^{2}$ on the TPC walls and below 10~mBq/m$^{2}$ everywhere else.
Generic dust containing NORM also releases $\gamma$-rays and induces neutron emission. Dust is further expected to be the single largest contributor to radon emanation. Dust contamination is limited to less than 500~ng/cm$^{2}$ on all wetted surfaces in the detector and xenon circulation system. Under the conservative assumption that 25\% of $^{222}$Rn is released from the dust, via either emanation or recoils out of small grain-size particulates, this limits the $^{222}$Rn activity from dust to less than 10~mBq.
\renewcommand{\arraystretch}{1.8}
\begin{table*}[t]
\footnotesize
\caption{Primary material radio-assay techniques, indicating isotopic sensitivity and detection limits, as well as typical throughput or single-sample measurement duration.}
\centering
\begin{tabular}{ |>{\centering\arraybackslash}m{1.8cm} | >{\centering\arraybackslash}m{1.7cm}|>{\centering\arraybackslash}m{2.1cm}|>{\centering\arraybackslash}m{1.0cm}|>{\centering\arraybackslash}m{1.4cm}|>{\centering\arraybackslash}m{2.8cm}|>{\centering\arraybackslash}m{2.6cm}|>{\centering\arraybackslash}m{1.4cm}|}
\hline
\textbf{Technique} & \textbf{Isotopic Sensitivity} & \textbf{Typical Sensitivity} & \textbf{Sample \newline Mass} & \textbf{Sampling Duration} & \textbf{Destructive/Non-destructive and Notes} & \textbf{Locations (and Number of Systems if $>1$)} & \textbf{Samples Assayed}\\
\hline
\textbf{HPGe} &
$^{238}$U, $^{235}$U, $^{232}$Th chains, $^{40}$K, $^{60}$Co, $^{137}$Cs any $\gamma$-ray emitter &
\SI{5e-11}{g/g}~U, \SI{e-10}{g/g}~Th &
\si{\kg} &
Up to 2 weeks &
Non-destructive, very versatile, not as sensitive as other techniques, large samples & SURF~$\times6$, LBNL~$\times1$, U.~Alabama~$\times2$, Boulby~$\times7$
& 926 \\
\hline
\textbf{ICP-MS} &
$^{238}$U, $^{235}$U, $^{232}$Th (top of chain) &
\SI{e-12}{g/g}& \si{mg} to \si{g} &
Days &
Destructive, requires sample digestion, preparation critical & UCL, IBS, BHUC, U.~Alabama
& 157\\
\hline
\textbf{NAA} &
$^{238}$U, $^{235}$U, $^{232}$Th (top of chain), K &
\SIrange{e-12}{e-14}{g/g} &
\si{g} &
Days to weeks &
Destructive, useful for non-metals, minimal sample preparation & Irradiated at MITR-II, HPGe assay at U.~Alabama
& 3\\
\hline
\textbf{GD-MS} &
$^{238}$U, $^{235}$U, $^{232}$Th (top of chain) & \SI{e-10}{g/g} &
\si{mg} to \si{g} &
Days &
Destructive, minimal matrix effects, cannot analyze ceramics and other insulators & National Research Council Canada
& 2 \\
\hline
\textbf{Radon Emanation} &
$^{222}$Rn &
\SI{0.1}{mBq} &
\si{\kg} &
1 to 3 weeks &
Non-destructive, large samples, limited by size of emanation chamber & UCL~$\times2$, U.~Maryland, SDSM\&T~$\times2$, U.~Alabama~$\times2$
& 175 \\
\hline
\textbf{Surface $\alpha$}& $^{210}$Pb, $^{210}$Bi, $^{210}$Po & 120~$\alpha$/(m$^{2} \cdot$ day) & g to kg & $<$1 week & Non-destructive, thin samples, large surface area required & SDSM\&T~(Si), Brown~(XIA), Boulby~(XIA), U.~Alabama~(Si)
& 306 \\[1ex] \hline
\end{tabular}
\label{table:nonlin}
\end{table*}
\subsection{HPGe + MS Techniques and Instruments}
\label{sec:HPGeMS}
The LZ screening campaign deploys several mature techniques for the identification and characterization of radioactive species within these bulk detector materials, primarily \Pgg-ray spectroscopy with High Purity Germanium (HPGe) detectors and Inductively-Coupled Plasma Mass Spectrometry (ICP-MS), supported by Neutron Activation Analysis (NAA). These complementary techniques collectively produce a complete picture of the fixed radiological contaminants.
Sensitivity to U and Th decay chain species down to $\approx$10~ppt has been demonstrated using ultralow-background HPGe detectors. HPGe can also assay $^{60}$Co, $^{40}$K, and other radioactive species emitting $\gamma$-rays. This technique is nondestructive and, in addition to screening of candidate materials, finished components can be assayed prior to installation. Under the assumption of secular equilibrium and natural terrestrial abundance ratios, the U and Th content may be inferred from the measurement of isotopic decay emissions lower in their respective chains. However, secular equilibrium can be broken through removal of reactive nuclides during chemical processing or through emanation. HPGe readily identifies the concentrations of nuclides from mid- to late-chain $^{238}$U and $^{232}$Th, particularly those with energies in excess of several hundred keV. Background-subtracted $\gamma$-ray counting is performed around specific energy ranges to identify radioactive nuclides. Taking into account the detector efficiency at that energy for the specific sample geometry allows calculation of isotopic concentrations. A typical assay lasts 1--2 weeks per sample to accrue statistics at the sensitivities required for the LZ assays. These direct $\gamma$-ray assays probe radioactivity from the bulk of materials and may identify equilibrium states.
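The conversion from background-subtracted peak counts to a specific activity, and the corresponding minimum detectable activity, can be written compactly using the standard Currie formulation. The sketch below is illustrative; the example numbers are not from any particular LZ assay.

\begin{verbatim}
import math

def specific_activity(net_counts, eff, branching, live_time_s, mass_kg):
    """Background-subtracted peak counts -> activity in Bq/kg."""
    return net_counts / (eff * branching * live_time_s * mass_kg)

def mda(bkg_counts, eff, branching, live_time_s, mass_kg):
    """Currie minimum detectable activity for the same peak window."""
    detection_limit = 2.71 + 4.65 * math.sqrt(bkg_counts)
    return detection_limit / (eff * branching * live_time_s * mass_kg)

# Illustrative two-week assay of a 1 kg sample at 5% peak efficiency:
print(mda(bkg_counts=50, eff=0.05, branching=0.998,
          live_time_s=14 * 86400, mass_kg=1.0))   # ~0.6 mBq/kg
\end{verbatim}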
Sixteen HPGe detectors located in facilities both above and underground have been used for LZ, with differences in detector types and shielding configurations providing useful dynamic range, both in terms of sensitivity to particular nuclides and to varying sample geometries. The detectors are typically several hundred grams to several kilograms in mass, with a mixture of n-type, p-type, and broad energy Ge (BEGe) crystals, providing relative efficiencies ranging from tens of percent to in excess of 100\% (as compared to the detection efficiency of a ($3\times3$)-inch NaI crystal for 1.33~MeV $\gamma$-rays from a $^{60}$Co source placed 25~cm from the detector face). While p-type crystals can be grown to larger sizes and hence require less counting time due to their high efficiency, the low-energy performance of the n-type and broad energy crystals is superior due to less intervening material between source and active Ge. Clean samples are placed close to the Ge crystal and assayed for several days to weeks in order to accrue sufficient statistics, depending on the minimum detectable activity (MDA). The detectors are generally shielded with low-activity Pb and Cu, flushed with dry nitrogen to displace the Rn-carrying air, and sometimes surrounded by veto detectors to suppress background from Compton scattering, which dominates the MDA for low-energy $\gamma$-rays. To reduce backgrounds further, most of the detectors are operated in underground sites~\cite{Scovell:2017srl,Mount:2017iam}, lowering the muon flux by several orders of magnitude. We also utilize a number of surface counters, some of them employing active cosmic-ray veto systems, that are particularly useful for pre-screening before more sensitive underground assays.
To ensure uniform analysis outputs for all HPGe detectors, a cross-calibration program was performed using all detectors active in 2014. This involved the blind assay of a Marinelli beaker containing $\approx2$~kg of rhyolite sourced from the Davis Cavern at SURF. This sample had previously been characterized using the MAEVE p-type HPGe detector at LBNL. Across the eight detectors online at the time, assays for both $^{238}$U and $^{232}$Th were within $1\sigma$ of each other. As additional detectors have been brought online, consistency has been assured by intra-facility cross-calibration.
ICP-MS offers precise determination of elemental contamination, with potentially up to 100$\times$ better sensitivity to the progenitor U and Th concentrations compared to \Pgg-ray spectroscopy. Since ICP-MS directly assays the $^{238}$U, $^{235}$U, and $^{232}$Th progenitor activity, it informs the contribution to the neutron flux from ($\alpha$,n) reactions in low-Z materials, as well as the contribution from spontaneous fission, which in specific materials can dominate. However, it cannot identify daughter nuclides in the U and Th decay chains, which are better probed by HPGe. The ICP-MS technique assays very small samples that are atomized and measured with a mass spectrometer. As a destructive technique, it is not used on finished components. A further limitation of ICP-MS is that the sample must be acid soluble and that several samples of each material must be screened to probe contamination distribution and homogeneity. Assays take 1--2 days per material, dominated by the sample preparation time, where extreme care must be taken to avoid contamination of solvents and reactants.
ICP-MS assays for LZ materials have been performed using several facilities, the majority of which operate Agilent 7900 ICP-MS systems within minimum ISO Class 6 cleanrooms~\cite{Dobson:2017esw}. These are capable of achieving sensitivity to U and Th in materials at the level of several ppt. Protocols and methodologies for sample preparation are largely based on well established procedures~\cite{Leonard:2007uv,LaFerriere:2014rva,Grinberg:2005}.
These measurements take significant time owing to the need for high statistics, standard-addition calibration of high-concentration samples, and frequent machine cleaning. For more routine measurements, the backgrounds are simply monitored and reported as an equivalent concentration; lower-concentration samples allow for external calibrations and a significantly relaxed cleaning schedule, allowing less demanding samples to be measured at a rate of several per day with sensitivities to U and Th on the order of a few hundred ppt. Finally, some initial work has been done to develop measurements of potassium using the cold plasma configuration, leading to measurements of a few hundred ppb of K.
NAA has been used by LZ to assay PTFE to sub-ppt g/g levels of $^{238}$U and $^{232}$Th, a material suited neither to HPGe, due to the sensitivity required, nor to ICP-MS, due to the difficulty of digesting PTFE. However, as with ICP-MS, this technique requires small sample masses, does not assay finished components, and assumptions of secular equilibrium need to be made since it measures the top of the U and Th chains. Samples are irradiated with neutrons from a reactor to activate some of the stable nuclides, which subsequently emit $\gamma$-rays of well-known energy and half-life that are detected through $\gamma$-ray spectroscopy. Elemental concentrations are then inferred using tabulated neutron-capture cross sections convolved with the reactor neutron spectra. Depending on the surface treatment, NAA can probe both bulk and surface contamination. Its application and sensitivity are limited by the composition of the material.
\subsection{Radon Emanation Techniques and Instruments}
Radon emanation measurements of all xenon-wetted components within the inner cryostat and those in the gas system that come into contact with Xe during experimental operation have been performed using four facilities available to LZ, listed in Table~\ref{table:nonlin}. In all of the facilities radon is accumulated in emanation chambers, where samples typically remain for two weeks or more. In many cases multiple measurements are performed per sample. The radon is then transferred using a carrier gas to a detector to be counted.
In one of the stations, the radon atoms are collected and counted by passing the radon-bearing gas through liquid scintillator, which dissolves most of the radon. The scintillator is then used to count Bi-Po coincidences, detected through gated coincidence logic, using one PMT viewing the scintillator. In the other three stations, radon atoms and daughters are collected electrostatically onto silicon PIN diode detectors to detect $^{218}$Po and $^{214}$Po alpha decays. The radon screening systems have been developed such that anything from individual components through to sections of LZ pipework may be assayed.
All stations were initially evaluated using calibrated sources of radon, with a cross-calibration program performed to ensure the accuracy of each system's overall efficiency and ability to estimate and subtract backgrounds.
The radon-emanation screening campaign extends beyond the material selection and construction phase and into detector integration and commissioning phases. A system is available underground at SURF in order to screen large-scale assembled detector elements and plumbing lines. As pieces or sections are completed during installation of gas pipework for the LZ experiment, they are isolated and assessed for Rn emanation and outgassing for early identification of problematic seals or components that require replacement, cleaning, or correction.
\subsection{Surface Assays (XIA, Si, dust microscopy) }
Two sensitive detectors have been used to carry out assays of Rn plate-out to ensure the requirements are met and inform the experiment background model. The first is the commercial XIA Ultralo-1800 surface alpha detector system, suitable for routine screening of small samples including witness plates and coupons deployed during component assembly and transport to track exposure. The second detector employs a panel of large-area Si detectors installed in a large vacuum chamber. These systems exceed the requisite sensitivity to $^{210}$Po at the level of 0.5~mBq/m$^{2}$. Dust assays are performed using high-powered microscopy and x-ray fluorescence techniques.
\renewcommand{\arraystretch}{1.4}
\begin{table*}[]
\centering
\footnotesize
\caption[BackgroundsTable]{The estimated backgrounds from all significant sources in the LZ 1000~day WIMP search exposure. Mass-weighted average activities are shown for composite materials. Solar $^{8}$B and hep neutrinos are only expected to contribute at very low energies (i.e., for low WIMP masses) and are excluded from the table.}
\label{table:backgrounds_condensed}
\begin{tabular}{| m{12.6cm} | >{\centering\arraybackslash}m{0.7cm} | >{\centering\arraybackslash}m{0.7cm} |}
\hline
\multirow{ 2}{*}{\textbf{Background Source}} & \textbf{ER} & \textbf{NR} \\
& \textbf{(cts)} & \textbf{(cts)} \\
\hline \hline
\textbf{Detector Components} & \textbf{9} & \textbf{0.07} \\
\hline
\hline
{\textbf{Surface Contamination}} & \textbf{40} & \textbf{0.39} \\
\hline
{Dust (intrinsic activity, 500 ng/cm$^{2}$)}& 0.2 & 0.05 \\
{Plate-out (PTFE panels, 50 nBq/cm$^{2}$)} & - & 0.05 \\
{$^{210}$Bi mobility (0.1 $\mu$Bq/kg)} & 40.0 & - \\
{Ion misreconstruction (50 nBq/cm$^{2}$)} & - & 0.16 \\
{$^{210}$Pb (in bulk PTFE, 10 mBq/kg)} & - & 0.12 \\
\hline \hline
{\textbf{Laboratory and Cosmogenics}} & \textbf{5} & \textbf{0.06} \\
\hline
{Laboratory Rock Walls} & 4.6 & 0.00 \\
{Muon Induced Neutrons} & - & 0.06 \\
{Cosmogenic Activation} & 0.2 & - \\
\hline \hline
{\textbf{Xenon Contaminants}} & \textbf{819} & \textbf{0} \\
\hline
{$^{222}$Rn (1.81 $\mu$Bq/kg)} & 681 & - \\
{$^{220}$Rn (0.09 $\mu$Bq/kg)} & 111 & - \\
{$^{nat}$Kr (0.015 ppt)} & 24.5 & - \\
{$^{nat}$Ar (0.45 ppb)} & 2.5 & - \\
\hline \hline
{\textbf{Physics}} & \textbf{322} & \textbf{0.51} \\
\hline
{$^{136}$Xe 2$\nu\beta\beta$} & 67 & - \\
{Solar Neutrinos: $pp$+$^{7}$Be+$^{13}$N} & 255 & - \\
{Diffuse Supernova Neutrinos} & - & 0.05 \\
{Atmospheric Neutrinos} & - & 0.46 \\
\hline \hline
{Total} & 1,195 & 1.03 \\
{Total (with 99.5~$\%$ ER discrimination, 50~$\%$ NR efficiency)} & 5.97 & 0.51 \\
\hline
{\textbf{Sum of ER and NR in LZ for 1000 days, 5.6 tonne FV, with all analysis cuts}} & \multicolumn{2}{c|}{\textbf{6.49}} \\
\hline
\end{tabular}
\end{table*}
\subsection{Cleaning procedures and protocols (ASTM standards)}
A rigorous program of cleanliness management is implemented to ensure that the accumulated surface and dust contamination are monitored, tracked and do not exceed requirements. All detector components that contact xenon must be cleaned and assembled according to validated cleanliness protocols to achieve the dust deposition levels below 500~ng/cm$^{2}$ and plate-out levels below 0.5~mBq/m$^{2}$ for the TPC inner-walls, and 10~mBq/m$^{2}$ everywhere else. Witness plates accompany the production and assembly of all detector components to ensure QC and demonstrate QA through the plate-out and dust assays. The titanium
cryostat was cleaned by AstroPak~\cite{Astropak} to
the IEST-STD-CC1246 (rev E) cleanliness standard,
level 50R1 (ICV) and level VC-0.5-1000-500UV (OCV).
The ICV cleaning standard is equivalent to the requirement
that the mass density of dust be less than 100~ng/cm$^2$.
The vessels were etched according to ASTM B-600.
As described in Sec.~\ref{sec:UGAssembly}, detector integration is done in a reduced-radon cleanroom at the SAL at SURF. Dust and plate-out monitoring on-site was continuously performed to measure and maintain compliance with tolerable dust and plate-out levels.
\subsection{Backgrounds summary }
Measured material radioactivity and anticipated levels of dispersed and surface radioactivity are combined with the Monte Carlo simulations and analysis cuts to determine background rates in the detector. Table~\ref{table:backgrounds_condensed} presents integrated background ER and NR counts in the 5.6~tonne fiducial mass for a 1000 live day run using a reference cut-and-count analysis, both before and after ER discrimination cuts are applied.
For the purposes of tracking material radioactivity throughout the design and construction of LZ, Table~\ref{table:backgrounds_condensed} is based on a restricted region of interest relevant to a 40~GeV/c$^{2}$ WIMP spectrum, equivalent to approximately 1.5--6.5~keV for ERs and 6--30~keV for NRs.
The expected total from all ER (NR) background sources is 1195 (1.03) counts in the full 1000~live day exposure. Applying discrimination against ER at 99.5\% for an NR acceptance of 50\% (met for all WIMP masses given the nominal drift field and light collection efficiency in LZ~\cite{Mount:2017qzi}) suppresses the ER (NR) background to 5.97 (0.51) counts. Radon presents the largest contribution to the total number of events. Atmospheric neutrinos are the largest contributor to NR counts, showing that LZ is approaching the irreducible background from coherent neutrino scattering~\cite{billard:2013qya}.
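The post-discrimination numbers in Table~\ref{table:backgrounds_condensed} follow from the pre-discrimination totals by simple scaling, as the short check below demonstrates.

\begin{verbatim}
er_total, nr_total = 1195.0, 1.03     # pre-discrimination counts (table)
er_leakage = er_total * (1.0 - 0.995) # 99.5% ER discrimination
nr_accepted = nr_total * 0.50         # 50% NR acceptance
print(er_leakage, nr_accepted, er_leakage + nr_accepted)
# -> 5.975 0.515 6.49, matching the last rows of the table
\end{verbatim}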
\section{Offline Computing} \label{sec:Offline}
The LZ data is stored, processed and distributed using two data centers, one in the U.S.
and one in the U.K. Both data centers are capable of storing, processing, simulating and
analyzing the LZ data in near real-time. Resource optimization, redundancy, and ease of
data access for all LZ collaborators are the guiding principles of this dual data-center design.
LZ raw data are initially written to the surface staging computer at SURF, which was designed
with sufficient capacity to store approximately two months of LZ data, running in WIMP-search
mode. This guarantees continuity of operations in case of a major network outage. The surface
staging computer at SURF transfers the raw data files to the U.S. data center, where initial
processing is performed. The reconstructed data files are made available to all groups in the
collaboration and represent the primary input for the physics analyses. The U.S. data center
is hosted at the National Energy Research Scientific Computing (NERSC) center~\cite{NERSC:web}.
Data simulation and reconstruction at NERSC is performed using a number of Cray
supercomputers~\cite{Perlmutter:web}.
The raw and reconstructed files are mirrored to the U.K. data center (hosted at Imperial College London)
both as a backup, and to share the load of file access and processing. Subsequent
reprocessing of the data (following new calibrations, updated reconstruction and identification
algorithms, etc.) is expected to take place at one or both locations, with the newly generated
files mirrored at both sites and made available to the collaboration. Simulation and data
processing at the U.K. data center are performed using distributed GridPP
resources~\cite{Faulkner:2006:gridpp,Britton:2009:gridpp}.
\subsection{The Offline Software Stack}
The LZ Offline software stack is based on standard HEP frameworks, specifically Geant4 for
simulations~\cite{agostinelli:2002hh} and Gaudi for reconstruction~\cite{barrand:2001ny}.
The simulation package is called BACCARAT~\cite{Akerib:sims-2019}. This software provides object-oriented
coding capability specifically tuned for noble liquid detectors, working on top of the Geant4 engine.
BACCARAT can produce detailed, accurate simulations of the LZ detector response and backgrounds,
which are crucial both for detector design and during data analysis. The physics accuracy of the LZ
simulations package was validated during the science run of LUX, as described in~\cite{Akerib:2011ec}.
BACCARAT is integrated into the broader LZ analysis framework, from production to validation and analysis.
Two output formats are supported: a raw simulation output at the interaction level, and a reduced tree
format at the event level. Both output files are written in the {\normalfont\ttfamily ROOT} data format~\cite{Brun:1997pa}.
A Detector Electronics Response package can be used to emulate the signal processing done by the front-end
electronics of LZ. It reads raw photon hits from BACCARAT to create mock digitized waveforms, organized and
written in an identical format to the output of the LZ data acquisition system. These can be read in by
the analysis software, providing practice data for framework development and analysis.
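A toy version of this waveform emulation is sketched below: photon hit times are convolved with a single-photoelectron pulse shape, noise is added, and the trace is digitized. The Gaussian pulse model, sampling period, and gain are placeholders rather than the actual LZ electronics parameters.

\begin{verbatim}
import numpy as np

def spe_pulse(t_ns, t0_ns, sigma_ns=10.0, gain_adc=20.0):
    """Toy single-photoelectron response (Gaussian placeholder shape)."""
    return gain_adc * np.exp(-0.5 * ((t_ns - t0_ns) / sigma_ns) ** 2)

def mock_waveform(photon_times_ns, n_samples=1000, dt_ns=10.0, noise_adc=1.0):
    """Sum SPE pulses over a noisy baseline and digitize to ADC counts."""
    t = np.arange(n_samples) * dt_ns
    trace = np.random.normal(0.0, noise_adc, n_samples)  # electronics noise
    for t0 in photon_times_ns:
        trace += spe_pulse(t, t0)
    return np.round(trace).astype(np.int16)

# e.g. a small S1-like cluster of five photons near 2 microseconds:
wf = mock_waveform([2000.0, 2005.0, 2010.0, 2030.0, 2050.0])
\end{verbatim}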
The data processing and reconstruction software (LZ analysis package, or LZap for short) extracts the PMT
charge and time information from the digitized signals, applies the calibrations, looks for S1 and S2
candidate events, performs the event reconstruction, and produces the so-called reduced quantity (RQ) files,
which represent the primary input for the physics analyses.
LZap is based on the Hive version of the Gaudi code, which is specially designed to provide
multi-threading at the sub-event level. The framework supports the development of different physics
modules from LZ collaborators and automatically takes care of the basic data handling (I/O, event/run
selection, DB interfaces, etc.). Gaudi features
a well established mechanism for extending its input and output modules to handle custom formats. This
functionality has been exploited in the design of the LZ raw data and RQ formats.
All non-DAQ data, i.e. any data that is not read out by the DAQ with each event, is stored in a database
known as the ``conditions database''. This database automatically tracks the interval of validity
for each piece of data (based on timestamps), and supports data versioning for instances such as when better
calibrations become available. This design implements a hierarchy of data sources, which means that during
development of code or calibrations it is possible to specify alternate sources, allowing for the validation
of updated entries.
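The interval-of-validity and versioning behaviour can be captured by a small lookup structure. The sketch below is a toy stand-in for the conditions database, not its actual schema or API; the version cap illustrates how alternate or historical sources can be selected during code and calibration validation.

\begin{verbatim}
class ConditionsDB:
    """Toy conditions store: interval-of-validity lookup with versioning."""

    def __init__(self):
        self.entries = []   # list of (start_ts, end_ts, version, payload)

    def insert(self, start_ts, end_ts, version, payload):
        self.entries.append((start_ts, end_ts, version, payload))

    def get(self, ts, max_version=None):
        """Return the highest-version payload whose IOV contains ts."""
        valid = [e for e in self.entries
                 if e[0] <= ts < e[1]
                 and (max_version is None or e[2] <= max_version)]
        if not valid:
            raise KeyError(f"no conditions valid at {ts}")
        return max(valid, key=lambda e: e[2])[3]

db = ConditionsDB()
db.insert(0, 1000, 1, {"e_lifetime_us": 650.0})
db.insert(0, 1000, 2, {"e_lifetime_us": 800.0})   # improved calibration
assert db.get(500)["e_lifetime_us"] == 800.0      # latest version wins
assert db.get(500, max_version=1)["e_lifetime_us"] == 650.0
\end{verbatim}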
All LZ software is centrally maintained through a software repository based on GitLab~\cite{gitlab:web}.
GitLab provides a continuous integration tool, allowing for
automatic testing and installation of the offline codebase on the U.S. and U.K. data center servers. Build
automation is inherited from the Gaudi infrastructure and supported via CMake. Release Management and Version
Control standards were strictly enforced from a very early stage of the experiment to ensure sharing,
verifiability and reproducibility of the results. Each code release undergoes a battery of tests before being
deployed to production. Release management ensures that all the changes are properly communicated and documented,
to achieve full reproducibility.
Software distribution is achieved via CernVM File System (CVMFS)~\cite{Blomer:2011zz}. CVMFS is a CERN-developed
network file system based on HTTP and optimized to deliver experiment software in a fast, scalable, and reliable way.
Files and file metadata are cached and downloaded on demand. CVMFS features robust error handling and
secure access over untrusted networks~\cite{Dykstra:2014kea}. The LZ CVMFS server is
visible to all the machines in the U.S. and U.K. data centers. All the LZ software releases and external
packages are delivered via CVMFS: this ensures a unified data production and analysis stream, because every user can
access identical builds of the same executables, removing possible dependencies on platform and configuration.
A rigorous program of Mock Data Challenges has been enacted, in order to validate the entire software stack and to
prepare the collaboration for science analysis. The first data challenge simulated the initial LZ commissioning and
tested the functionality of the reconstruction framework. The second data challenge covered the first science
run of LZ and tested the entire data analysis chain, including calibrations, detailed backgrounds and potential signals.
The third and final data challenge is currently underway (2019) and will test the complete analysis strategy, validating
the readiness of the offline system just before the underground installation of LZ. |
In their seminal 1975 paper, \citet{EL75} introduced what is now called the Lov\'asz Local Lemma. This tool is one of the most powerful probabilistic techniques for proving the existence of combinatorial objects. Their motivation was hypergraph colouring. A \defn{hypergraph} $G$ consists of a set $V(G)$ of \defn{vertices} and a set $E(G)$ of \defn{edges}, each of which is a subset of $V(G)$. A \defn{colouring} of a hypergraph $G$ is a function that assigns a `colour' to each vertex of $G$. A colouring of $G$ is \defn{proper} if no edge of $G$ is monochromatic. The \defn{chromatic number $\chi(G)$} is the minimum number of colours in a proper colouring of $G$. The \defn{degree} of a vertex $v$ in a hypergraph $G$ is the number of edges that contain $v$. A hypergraph is \defn{$r$-uniform} if each edge has size $r$. \citet{EL75} proved (using the Lov\'asz Local Lemma) that $\chi(G) \leq \ceil{(4r\Delta)^{1/(r-1)}}$ for every $r$-uniform hypergraph $G$ with maximum degree $\Delta$. The following result is a consequence of the strengthened Lov\'asz Local Lemma first stated by \citet{Spencer77}; see the book by \citet{MR02} for a comprehensive treatment.
\begin{thm}[\citep{EL75,Spencer77}]
\label{EL}
For every $r$-uniform hypergraph $G$ with maximum degree $\Delta$,
$$\chi(G) \leq \ceil{(e(r(\Delta-1)+1))^{1/(r-1)}}.$$
\end{thm}
This paper presents a general theorem for colouring hypergraphs which, in the special case of proper hypergraph colouring, (slightly) improves the upper bound in \cref{EL}. Moreover, the proof directly shows that there are exponentially many such colourings. The proof uses a simple counting argument inspired by a recent result for nonrepetitive colourings by \citet{Rosenfeld20}, which in turn is inspired by the power series method for pattern avoidance~\citep{Rampersad11,Ochem16,BW13}.
It is well known that the proof of \cref{EL} works in the setting of list colourings, which we now introduce. Let $G$ be a hypergraph. A \defn{list-assignment} for $G$ is a function $L$ that assigns each vertex $v$ of $G$ a set $L(v)$, whose elements are called \defn{colours}. If $|L(v)|= c$ for each vertex $v$ of $G$, then $L$ is a \defn{$c$-list-assignment}. An \defn{$L$-colouring} of $G$ is a function $\phi$ such that $\phi(v)\in L(v)$ for each vertex $v$ of $G$. The \defn{choosability} $\chi_{\textup{ch}}(G)$ is the minimum integer $c$ such that $G$ has a proper $L$-colouring for every $c$-list-assignment $L$ of $G$. For a list assignment $L$ of a hypergraph $G$, let $P(G,L)$ be the number of proper $L$-colourings of $G$.
The following theorem is our first contribution.
\begin{thm}
\label{Main}
For all integers $r\geq 3$ and $\Delta \geq 1$, and for every $r$-uniform hypergraph $G$ with maximum degree $\Delta$,
$$\chi_{\text{ch}}(G) \leq c := \ceil*{ \Big( \frac{r-1}{r-2} \Big) \big((r-2)\Delta\big)^{1/(r-1)} }.$$
Moreover, for every $c$-list assignment $L$ of $G$,
$$P(G,L) \geq \big( (r-2)\Delta \big)^{|V(G)|/(r-1)}.$$
\end{thm}
We now compare the above-mentioned bounds. Since $(\frac{r-1}{r-2})^{r-2} < e$, it follows that $\big(\frac{r-1}{r-2} \big) \big((r-2)\Delta\big)^{1/(r-1)} < ( e(r-1)\Delta )^{1/(r-1)}$, and assuming $\Delta\geq r-1$, the bound in \cref{Main} is slightly better than the bound in \cref{EL}. The difference is most evident for small $r$. For example, if $r=3$ then the bound in \cref{Main} is $\ceil[\big]{2\sqrt{\Delta}\,}$ compared with $\ceil[\big]{\sqrt{e(3\Delta-2)}\,}$ from \cref{EL}.
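To make the comparison concrete, the following snippet evaluates both bounds for a few values of $r$ and $\Delta$; for example, for $r=3$ and $\Delta=100$ it gives 20 colours from \cref{Main} versus 29 from \cref{EL}.

\begin{verbatim}
import math

def bound_main(r, delta):   # Theorem 2
    return math.ceil((r - 1) / (r - 2) * ((r - 2) * delta) ** (1 / (r - 1)))

def bound_el(r, delta):     # Theorem 1
    return math.ceil((math.e * (r * (delta - 1) + 1)) ** (1 / (r - 1)))

for r, delta in [(3, 100), (3, 10000), (4, 100)]:
    print(r, delta, bound_main(r, delta), bound_el(r, delta))
# -> (3, 100): 20 vs 29; (3, 10000): 200 vs 286; (4, 100): 9 vs 11
\end{verbatim}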
Several researchers have communicated to us that, with a little effort, one can conclude the existence of exponentially many colourings using the Lov\'asz Local Lemma (or other methods), although as far as we are aware no general result of this nature is published. One attraction of our proof is that it gives exponentially many colourings for free. Indeed, this stronger conclusion is a key to enabling the simple proof. See \citep{Harris21,AS16a,Pluhar09,Alon85,AB88,Gebauer13,Cherkashin11,Beck78,MRT77} for more results on colouring hypergraphs with given maximum degree or number of edges, and see \citep{Thomassen07a,Thomassen07b,DMS19,KP18,DS17,Harutyunyan} for other theorems showing the existence of exponentially many colourings in various graph settings.
\cref{Main} is a special case of a more general result that we introduce in the following section. Then, in \cref{Examples}, we apply this general result to a variety of colouring problems, including hypergraph colouring, graph colouring, independent transversals, star colouring, nonrepetitive colouring, frugal colouring, Ramsey number lower bounds, and $k$-SAT. \cref{Reflections} concludes by comparing our general result with other techniques including the Lov\'asz Local Lemma and entropy compression.
\section{General Framework}
\label{GeneralFramework}
For a hypergraph $G$ (allowing parallel edges), let $\mathcal{C}_G$ be the set of all colourings $\phi:V(G)\to\mathbb{Z}$. (For concreteness, we assume all colours are integers.)\ For an edge $e$ of $G$, let $\mathcal{C}_e$ be the set of all colourings $\phi:e \to \mathbb{Z}$. An \defn{instance} is a pair $(G,\mathcal{B})$ where $G$ is a hypergraph and $\mathcal{B}=(\mathcal{B}_e\subseteq \mathcal{C}_e: e\in E(G))$. A colouring $\phi\in\mathcal{C}_G$ is $\mathcal{B}$-\defn{bad} if, for some edge $e\in E(G)$, we have that $\phi$ restricted to $e$ is in $\mathcal{B}_e$. Every other colouring in $\mathcal{C}_G$ is \defn{$\mathcal{B}$-good}. For an integer $c\geq 1$, we say $G$ is \defn{$(\mathcal{B},c)$-choosable} if there is a $\mathcal{B}$-good $L$-colouring of $G$ for every $c$-list assignment $L$ of $G$. For a list assignment $L$ of $G$, let $P(G,\mathcal{B},L)$ be the number of $\mathcal{B}$-good $L$-colourings of $G$.
Fix an instance $(G,\mathcal{B})$ and consider an edge $e$ of $G$. A subset $S\subseteq e$ \defn{determines} $\mathcal{B}_e$ if any two colourings in $\mathcal{B}_e$ that agree on $S$ are identical. For every vertex $v$ in $e$, we assume that $\mathcal{B}_e$ is determined by some subset of $e\setminus\{v\}$. (Consider this assumption to be part of the definition of `instance'.)\ Then define the \defn{weight} of $(v,e)$ to be $|e|-1-|S|$, where $S$ is a minimum-sized subset of $e\setminus\{v\}$ that determines $\mathcal{B}_e$.
For each vertex $v$ of $G$, let \defn{$E_k(v)$} be the number of pairs $(v,e)$ with weight $k$.
For example, to model proper colouring in an $r$-uniform hypergraph $G$, for each edge $e$ of $G$, let $\mathcal{B}_e$ be the monochromatic colourings in $\mathcal{C}_e$. Then a colouring is $\mathcal{B}$-good if and only if it is proper. For every edge $e$ and every vertex $v$ in $e$, if $w$ is any vertex in $e\setminus\{v\}$, then $\{w\}$ determines $\mathcal{B}_e$, implying that $(v,e)$ has weight $r-2$.
\begin{thm}
\label{General}
Let $(G,\mathcal{B})$ be an instance. Assume there exist a real number $\beta\geq 1$ and an integer $c\geq 1$ such that for every vertex $v$ of $G$,
\begin{equation}
\label{Key}
c \geq \beta + \sum_{k\geq0} \beta^{-k} E_k(v) .
\end{equation}
Then $G$ is $(\mathcal{B},c)$-choosable. Moreover, for every $c$-list assignment $L$ of $G$,
$$P(G,\mathcal{B},L) \;\geq\; \beta^{|V(G)|}.$$
\end{thm}
Before proving \cref{General} we make a couple of minor observations.
If $\beta>1$ then \cref{General} guarantees exponentially many $\mathcal{B}$-good colourings.
If $\beta=1$ then \cref{General} guarantees at least one $\mathcal{B}$-good colouring.
In most applications $\beta>1$, but on one occasion the case $\beta=1$ is of interest (see \cref{Proper}).
When applying \cref{General} it is not necessary to determine the weight of a pair exactly; it suffices to determine a lower bound on the weight (because of the $\beta^{-k}$ term in \cref{Key}, where $\beta\geq 1$).
\cref{General} is an immediate corollary of the following lemma. If $(G,\mathcal{B})$ is an instance
with $\mathcal{B}=(\mathcal{B}_e:e\in E(G))$, and $H$ is a sub-hypergraph of $G$, then $(H,\mathcal{B})$ refers to the instance $\big(H,(\mathcal{B}_e:e\in E(H))\big)$. Similarly, if $L$ is a list-assignment for $G$, then we consider $L$ (restricted to $V(H)$) to be a list-assignment for $H$.
\begin{lem}
\label{GeneralInduction}
Let $(G,\mathcal{B})$ be an instance. Assume there exist a real number $\beta\geq 1$ and an integer $c\geq 1$ such that $\cref{Key}$ holds for every vertex $v$ of $G$. Then for every $c$-list assignment $L$ of $G$, for every induced sub-hypergraph $H$ of $G$, and for every vertex $v$ of $H$,
$$P(H,\mathcal{B},L) \;\geq\; \beta\, P(H-v,\mathcal{B},L).$$
\end{lem}
\begin{proof}
We proceed by induction on $|V(H)|$.
The base case with $|V(H)|=1$ is trivial.
Let $H$ be an induced sub-hypergraph of $G$, and assume the claim holds for all induced sub-hypergraphs of $G$ with fewer than $|V(H)|$ vertices. Let $v$ be any vertex of $H$. Let $X$ be the set of $\mathcal{B}$-bad $L$-colourings of $H$ that are $\mathcal{B}$-good on $H-v$. Then
\begin{align}
\label{F}
P(H,\mathcal{B},L) \;=\; c \, P(H-v,\mathcal{B},L) \,-\,|X|.
\end{align}
We now find an upper bound for $|X|$. For each $L$-colouring $\phi$ in $X$ there is an edge $e\in E(H)$ containing $v$ such that $\phi\in \mathcal{B}_e$ (if there are several options for $e$, fix a choice arbitrarily). Charge $\phi$ to $(v,e)$. Let $X_k$ be the set of colourings in $X$ that are charged to a pair with weight $k$.
Consider $\phi$ in $X_k$ charged to $(v,e)$. Let $S$ be a minimum-sized subset of $e\setminus\{v\}$ that determines $\mathcal{B}_e$. Let $T:=e\setminus S$. Then $|T|=k+1$ and $v\in T$. Since $\phi$ is $\mathcal{B}$-good on $H-v$, we know that $\phi$ is also $\mathcal{B}$-good on $H-T$. Since $S$ determines $\mathcal{B}_e$, the number of $L$-colourings in $X_k$ charged to $(v,e)$ is at most $P(H-T,\mathcal{B},L)$. By induction,
\begin{align*}
P(H-v,\mathcal{B},L) \;\geq\; \beta^k \, P(H-T,\mathcal{B},L).
\end{align*}
Thus the number of $L$-colourings in $X_k$ charged to $(v,e)$ is at most $\beta^{-k}\,P(H-v,\mathcal{B},L)$.
Hence $|X_k| \,\leq\, E_k(v)\,\beta^{-k}\,P(H-v,\mathcal{B},L)$, and
\begin{align*}
|X|
\;=\; \sum_{k\geq 0} |X_k|
\;\leq\; P(H-v,\mathcal{B},L) \; \sum_{k\geq 0} E_k(v) \, \beta^{-k} .
\end{align*}
By \cref{F},
\begin{align*}
P(H,\mathcal{B},L)
\;\geq \; & c \, P(H-v,\mathcal{B},L) \,-\, P(H-v,\mathcal{B},L)\;
\sum_{k\geq 0} \beta^{-k} E_k(v).
\end{align*}
By \cref{Key}, $P(H,\mathcal{B},L) \,\geq \, \beta\, P(H-v,\mathcal{B},L)$, as desired.
\end{proof}
\section{Examples}
\label{Examples}
In this section, we apply \cref{General} for various types of (hyper)graph colouring problems and for $k$-SAT. In most cases, \cref{General} matches or improves on the best known bound on the number of colours (as a function of maximum degree), and in addition shows that there are exponentially many colourings.
\subsection{Proper Colouring}
\label{Proper}
First we prove \cref{Main}. Let $G$ be an $r$-uniform hypergraph with maximum degree $\Delta$ where $r\geq 3$. For each edge $e$ of $G$, let $\mathcal{B}_e$ be the monochromatic colourings in $\mathcal{C}_e$; then a colouring is $\mathcal{B}$-good if and only if it is proper. Each pair $(v,e)$
has weight $r-2$, and $E_{r-2}(v)\leq\Delta$. Observe that \cref{Key} holds with $\beta := \big( (r-2)\Delta \big)^{1/(r-1)}$ and
$c:= \ceil[\big]{ \big( \frac{r-1}{r-2} \big) \big((r-2)\Delta\big)^{1/(r-1)} } $. \cref{Main} then follows from \cref{General}.
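For completeness, here is the one-line verification that these choices satisfy \cref{Key}: since $\beta^{r-1}=(r-2)\Delta$,
$$\beta + \Delta\,\beta^{-(r-2)} \;=\; \beta\left(1 + \frac{\Delta}{\beta^{r-1}}\right) \;=\; \beta\left(1+\frac{1}{r-2}\right) \;=\; \Big(\frac{r-1}{r-2}\Big)\big((r-2)\Delta\big)^{1/(r-1)} \;\leq\; c.$$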
Now consider proper colouring in a graph with maximum degree $\Delta$ (the case $r=2$ in the above). Then every pair $(v,e)$
has weight 0, and $E_0(v)\leq\Delta$. Thus $c:=\ceil{ \Delta + \beta}$ satisfies \cref{Key}. \cref{General} with $\beta=1$ says that every graph $G$ with maximum degree $\Delta$ is $(\Delta+1)$-choosable. \cref{General} with $\beta\geq 2$ says that for every $(\Delta+\beta)$-list assignment $L$ of $G$ there are at least $\beta^{|V(G)|}$ $L$-colourings. These well-known facts are easily proved by a greedy algorithm. It is interesting that the above general framework includes such statements (the Lov\'asz Local Lemma does not). Note that the Local Action Lemma of \citet{Bernshteyn14} is another general-purpose tool that implies $(\Delta+1)$-colourability; also see \citep{Bernshteyn17}.
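For comparison with the counting approach, the greedy argument itself is a few lines of code. The sketch below is a generic illustration (graph given as an adjacency dictionary, lists of size $\Delta+1$), not tied to any particular implementation.

\begin{verbatim}
def greedy_list_colouring(adj, lists):
    """Properly colour a graph greedily from lists of size Delta+1."""
    colour = {}
    for v in adj:                    # any vertex order works
        used = {colour[u] for u in adj[v] if u in colour}
        # at most deg(v) <= Delta colours are blocked, so one remains
        colour[v] = next(c for c in lists[v] if c not in used)
    return colour

# e.g. a triangle with 3-element lists (Delta = 2):
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
lists = {v: [0, 1, 2] for v in adj}
print(greedy_list_colouring(adj, lists))
\end{verbatim}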
See \citep{Rassmann17,Rassmann19} for results about the number of 2-colourings in random
hypergraphs and about the number of $k$-colourings in random graphs.
\subsection{Star Colouring}
A colouring $\phi$ of a graph $G$ is a \defn{star colouring} if it is proper and every bichromatic subgraph is a star forest; that is, there is no 2-coloured $P_4$ (path on four vertices). The \defn{star chromatic number} $\chi_{\text{st}}(G)$ is the minimum number of colours in a star colouring of $G$. \citet{FRR04} proved (using the Lov\'asz Local Lemma) that $\chi_{\text{st}}(G) \leq O(\Delta^{3/2})$ for every graph $G$ with maximum degree $\Delta$, and that this bound is tight up to a $O(\log\Delta)$ factor. The best known bound is $\chi_{\text{st}}(G) \leq \sqrt{8}\Delta^{3/2}+\Delta$ proved by \citet{EsperetParreau} using entropy compression. Both these methods work for star choosability. We prove the same bound holds with exponentially many colourings.
\begin{thm}
\label{StarColouring}
Every graph $G$ with maximum degree $\Delta$ is star $\ceil{\Delta + \sqrt{8\Delta}(\Delta-1)}$-choosable. Moreover, for every $\ceil{\Delta + \sqrt{8\Delta}(\Delta-1)}$-list assignment $L$, there are at least $\big(\sqrt{2\Delta}(\Delta-1)\big)^{|V(G)|}$ star $L$-colourings of $G$.
\end{thm}
\begin{proof}
Define the following hypergraph $G'$ with $V(G')=V(G)$. Introduce one edge $e=\{v,w\}$ to $G'$ for each edge $vw$ of $G$, where $\mathcal{B}_e$ is the set of $L$-colourings $\phi\in\mathcal{C}_e$ such that $\phi(v)=\phi(w)$, and introduce one edge $e=\{u,v,w,x\}$ to $G'$ for each $P_4$ subgraph $(u,v,w,x)$ of $G$, where $\mathcal{B}_e$ is the set of $L$-colourings $\phi\in\mathcal{C}_e$ such that $\phi(u)=\phi(w)$ and $\phi(v)=\phi(x)$. For any list assignment $L$ of $G$, note that $G$ is star $L$-colourable if and only if $P(G',\mathcal{B},L)\geq 1$. Also, the weight of each 2-element edge is 0, and the weight of each 4-element edge is 1. Thus $E_0(v)\leq\Delta$ and $E_1(v) \leq 2\Delta(\Delta-1)^2$. Since \cref{Key} is satisfied with $\beta:=\sqrt{2\Delta}(\Delta-1)$ and $c:=\ceil{\Delta + \sqrt{8\Delta}(\Delta-1)}$, the result follows from \cref{General}.
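Explicitly, with these choices the right-hand side of \cref{Key} evaluates to
$$\beta + \Delta + \frac{2\Delta(\Delta-1)^2}{\beta} \;=\; \sqrt{2\Delta}(\Delta-1) + \Delta + \sqrt{2\Delta}(\Delta-1) \;=\; \Delta + \sqrt{8\Delta}(\Delta-1) \;\leq\; c.$$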
\end{proof}
\subsection{Nonrepetitive Graph Colouring}\label{ss:nonrep}
Let $\phi$ be a colouring of a graph $G$. A path $(v_1,\dots,v_{2t})$ in $G$ is \defn{repetitively coloured} by $\phi$ if $\phi(v_i)=\phi(v_{t+i})$ for each $i\in\{1,\dots,t\}$. A colouring $\phi$ of $G$ is \defn{nonrepetitive} if no path in $G$ is repetitively coloured by $\phi$. The \defn{nonrepetitive chromatic number} $\pi(G)$ is the minimum number of colours in a nonrepetitive colouring of $G$. The \defn{nonrepetitive choice number} $\pi_{\text{ch}}(G)$ is the minimum integer $c$ such that $G$ has a nonrepetitive $L$-colouring for every $c$-list assignment $L$ of $G$. \citet{AGHR02} proved that $\pi(G)\leq O(\Delta^2)$ for every graph with maximum degree $\Delta$, and that this bound is tight up to a $O(\log\Delta)$ factor. The proof shows the same bound for $\pi_{\text{ch}}$. Several authors subsequently improved the constant in the $O(\Delta^2)$ term:
to $36\Delta^2$ by \citet{Grytczuk07},
to $16\Delta^2$ by \citet{Gryczuk-IJMMS07},
to $(12.2+o(1))\Delta^2$ by \citet{HJ-DM11},
and to $10.4\Delta^2$ by \citet{KSX12}.
All these proofs used the Lov\'asz Local Lemma.
\citet{DJKW16} improved the constant to 1, by showing that for every graph $G$ with maximum degree $\Delta$,
\begin{equation}
\label{DeltaSquared}
\pi(G) \leq \Delta^2 + O(\Delta^{5/3}).
\end{equation}
The proof of \citet{DJKW16} uses entropy compression; see \citep{GMP14,EsperetParreau} for refinements and simplifications to the method. Equation~\cref{DeltaSquared} was subsequently proved using the Local Cut Lemma of \citet{Bernshteyn17} and using cluster-expansion \citep{BFPS11,Aprile14}. Most recently, \citet{Rosenfeld20} proved \cref{DeltaSquared} with exponentially many colourings. His paper inspired the present work. We now show that the result of Rosenfeld follows from our general framework. Note that all of the above results hold in the setting of choosability.
\begin{thm}
For every graph $G$ with maximum degree $\Delta$, if
$$\beta:= (1+2^{1/3} \Delta^{-1/3})(\Delta-1)^2 \quad \text{ and } \quad
c:= \ceil{ \beta + 2^{-2/3} \Delta^{5/3} (1+ 2^{1/3} \Delta^{-1/3} )^2 },$$
then $G$ is nonrepetitively $c$-choosable. Moreover, for every $c$-list assignment $L$ of $G$ there are at least
$\beta^{|V(G)|}$ nonrepetitive $L$-colourings of $G$.
\end{thm}
\begin{proof}
Let $G'$ be the hypergraph with $V(G')=V(G)$, where there is an edge $V(P)$ for each path $P$ in $G$ of even order. Here we consider a path to be a subgraph of $G$, so that a path and its reverse contribute one edge to $G'$. For each edge $e$ of $G'$ corresponding to a path $P$ in $G$ of order $2t$, let $\mathcal{B}_e$ be the set of $L$-colourings $\phi\in\mathcal{C}_e$ such that $P$ is repetitively coloured by $\phi$. Thus $G$ is nonrepetitively $L$-colourable if and only if $P(G',\mathcal{B},L)\geq 1$.
Consider an edge $e$ of $G'$ corresponding to a path $P$ in $G$ on $2t$ vertices. For each vertex $v$ in $P$,
any colouring $\phi\in \mathcal{B}_e$ is uniquely determined by $\phi$ restricted to the $t$ vertices in the half of $P$ not containing $v$. Hence $(v,e)$ has weight $t-1$. Every vertex of $G$ is in at most $t\Delta(\Delta-1)^{2t-2}$ paths on $2t$ vertices. So $E_{t-1}(v) \leq t\Delta(\Delta-1)^{2t-2}$. Equation~\cref{Key} requires
$$c \geq \beta + \sum_{t\geq1} t\Delta(\Delta-1)^{2t-2} \, \beta^{1-t} .$$
Define $\beta:= (1+\epsilon)(\Delta-1)^2$ where $\epsilon>0$ is defined shortly.
Equation~\cref{Key} requires
$$c \geq (1+\epsilon)(\Delta-1)^2 +
\Delta\sum_{t\geq 1} t \, (1+\epsilon)^{-t+1}
= (1+\epsilon)(\Delta-1)^2 + \epsilon^{-2}(1+\epsilon)^2 \Delta .$$
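The sum here is evaluated via the identity $\sum_{t\geq 1} t\,x^{t-1} = (1-x)^{-2}$ for $|x|<1$, applied with $x=(1+\epsilon)^{-1}$:
$$\sum_{t\geq 1} t\,(1+\epsilon)^{-t+1} \;=\; \Big(1-\frac{1}{1+\epsilon}\Big)^{-2} \;=\; \frac{(1+\epsilon)^2}{\epsilon^2}.$$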
Define $\epsilon := 2^{1/3} \Delta^{-1/3}$ (to approximately minimise $(1+\epsilon)(\Delta-1)^2 + \epsilon^{-2}(1+\epsilon)^2 \Delta$). Then \cref{Key} holds with $c$ defined above,
and the result follows from \cref{General}.
\end{proof}
\subsection{Frugal Colouring}
For an integer $k\geq 1$, a colouring $\phi$ of a graph $G$ is \defn{$k$-frugal} if $\phi$ is proper and $\big|\{w\in N_G(v): \phi(w)=i\} \big| \leq k$ for every vertex $v$ of $G$ and for every colour $i$, where $N_G(v)$ is the set of neighbours of $v$ in $G$. A 1-frugal colouring of $G$ corresponds to a proper colouring of $G^2$. \citet{HMR97} proved that for each integer $k\geq 1$ and sufficiently large $\Delta$, every graph with maximum degree $\Delta$ has a $k$-frugal colouring with $\max\{(k+1)\Delta,\frac{e^3}{k}\Delta^{1+1/k}\}$ colours. An example due to Alon shows that this upper bound is within a constant factor of optimal~\citep{HMR97}. In particular, for all $\Delta\geq k\geq 1$, Alon constructed a graph with maximum degree at most $\Delta$ that has no $k$-frugal colouring with $\frac{1}{2k}\Delta^{1+1/k}$ colours. Here we improve the constant in the upper bound without assuming that $\Delta$ is sufficiently large, and with exponentially many colourings.
\begin{thm}\label{Frugal}
For all integers $\Delta>k\ge2$, let
\begin{equation*}
\beta:=\left( (k-1)\Delta \binom{ \Delta-1}{k} \right)^{1/k}
\quad \text{and} \quad
c:= \Delta + \ceil*{ \frac{k\beta}{k-1} } .
\end{equation*}
Then every graph $G$ with maximum degree $\Delta$ has a $k$-frugal $c$-colouring. Moreover, for every
$c$-list-assignment $L$ of $G$, the number of $k$-frugal
$L$-colourings of $G$ is at least $\beta^{|V(G)|}$.
\end{thm}
\begin{proof}
Let $G'$ be the hypergraph with $V(G')=V(G)$, where every edge of $G$ is an edge of $G'$, and $\{w_1,\dots,w_{k+1}\}$ is an edge of $G'$ for every vertex $v$ of $G$ and set $\{w_1,\dots,w_{k+1}\}\subseteq N_G(v)$. In the latter case, we say the edge is \defn{centred} at $v$. For every edge $e=\{v,w\}$ of $G'$, let $\mathcal{B}_e$ be the set of $L$-colourings $\phi\in\mathcal{C}_e$ such that $\phi(v)=\phi(w)$. For every edge $e=\{w_1,\dots,w_{k+1}\}$ of $G'$, let $\mathcal{B}_e$ be the set of $L$-colourings $\phi\in\mathcal{C}_e$ such that $\phi(w_1)=\phi(w_2)=\dots=\phi(w_{k+1})$. Then a colouring of $G$ is $k$-frugal if and only if it is $\mathcal{B}$-good.
For each edge $e=\{v,w\}$ of $G'$, both $(v,e)$ and $(w,e)$ have weight 0. Consider an edge $e=\{w_1,\dots,w_{k+1}\}$ of $G'$ centred at $v$. For each $i\in\{1,\dots,k+1\}$, the pair $(w_i,e)$ has weight $k-1$, since every colouring $\phi\in\mathcal{B}_e$ is determined by the colour $\phi(w_j)$ for any single $j\neq i$.
Consider a vertex $v$ of $G$. Then $E_0(v)\leq\Delta$. Now consider a pair $(v,e)$ with non-zero weight. Then $(v,e)$ has weight $k-1$, and $e=\{w_1,\dots,w_k,v\}$ is centred at some vertex $u$, for some vertices $w_1,\dots,w_k\in N_G(u)\setminus\{v\}$. There are at most $\Delta$ choices for $u$ and at most $\binom{\Delta-1}{k}$ choices for $w_1,\dots,w_k$. Thus $E_{k-1}(v) \leq \Delta \binom{\Delta-1}{k}$.
Hence
\begin{align*}
\beta + \sum_{i\geq0} E_i(v) \, \beta^{-i}
\le
\beta + \Delta + \Delta \binom{\Delta-1}{k} \, \beta^{1-k}
= \Delta + \frac{k\beta}{k-1} \leq c.
\end{align*}
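The middle equality uses the definition of $\beta$: since $\beta^{k}=(k-1)\Delta\binom{\Delta-1}{k}$,
$$\Delta\binom{\Delta-1}{k}\,\beta^{1-k} \;=\; \frac{\beta^{k}}{k-1}\,\beta^{1-k} \;=\; \frac{\beta}{k-1}, \qquad\text{and}\qquad \beta+\frac{\beta}{k-1} \;=\; \frac{k\beta}{k-1}.$$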
The result follows from \cref{General}.
\end{proof}
Since $k (k-1)^{-1+1/k} \to 1$ and $\binom{\Delta-1}{k}^{1/k} \leq \frac{e}{k}(\Delta-1)$,
\cref{Frugal} implies this:
\begin{cor}\label{cy:asyfrugal}
As $\Delta> k\rightarrow\infty$, for every $\ceil{(e+o(1))\Delta^{1+1/k}/k}$-list-assignment $L$ of a graph $G$ with maximum degree $\Delta$, the number of $k$-frugal $L$-colourings of $G$ is at least $\beta^{|V(G)|}$, with $\beta$ as in \cref{Frugal}.
\end{cor}
Note that Alon's example in \citep{HMR97} shows that \cref{cy:asyfrugal} is within a factor of $2e+o(1)$ of optimal.
\subsection{Independent Transversals and Constrained Colourings}
Consider a hypergraph $G$. A set $X\subseteq V(G)$ is \defn{independent} if no edge of $G$ is a subset of $X$. Consider a partition $V_1,\dots,V_n$ of $V(G)$. A \defn{transversal} of $V_1,\dots,V_n$ is a set $X$ such that $|X\cap V_i|=1$ for each $i$. Let $\ell:V(G)\to\{1,\dots,n\}$ be the function where $\ell(v):=i$ for each vertex $v\in V_i$. For $S\subseteq V(G)$, let $\ell(S):=\{\ell(v):v\in S\}$.
An edge $e$ of $G$ is \defn{stretched} by $V_1,\dots,V_n$ if $|\ell(e)|=|e|$.
The following theorem provides a condition that guarantees an independent transversal.
\begin{thm}
\label{ISH}
Fix integers $r\geq 2$ and $t\geq 1$. For an $r$-uniform hypergraph $G$, let $V_1,\dots,V_n$ be a partition of $V(G)$ such that $|V_i| \geq t$ and at most $r^{-r} (r-1)^{r-1} t^{r-1}\,|V_i|$ stretched edges in $G$ intersect $V_i$, for each $i \in\{1,\dots,n\}$. Then there exist at least $(\frac{r-1}{r}t)^n$ independent transversals of $V_1,\dots,V_n$.
\end{thm}
\begin{proof}
Non-stretched edges do not influence whether a transversal is independent, so we may assume that every edge is stretched. We may also assume that $|V_i|=t$, since if $|V_i|>t$ and $v$ is a vertex in $V_i$ with maximum degree, then by removing $v$ and its incident edges we obtain another hypergraph satisfying the assumptions. Let $X$ be the hypergraph with $V(X):=\{1,\dots,n\}$, where for each edge $\{v_1,\dots,v_r\}$ of $G$ there is an edge $e=\{\ell(v_1),\dots,\ell(v_r)\}$ in $X$. By assumption, each vertex $i$ of $X$ has degree at most
$r^{-r} (r-1)^{r-1} t^{r-1}|V_i| = r^{-r} (r-1)^{r-1} t^r$. Let $L$ be the list-assignment of $X$ with $L(i):=V_i$ for each $i\in\{1,\dots,n\}$. For each edge $e$ of $X$ corresponding to edge $\{v_1,\dots,v_r\}$ of $G$, let $\mathcal{B}_e$ be the set consisting of the $L$-colouring $\phi$ of $e$ with $\phi(\ell(v_j))=v_j$ for each $j\in\{1,\dots,r\}$. Thus $\mathcal{B}$-good $L$-colourings of $X$ correspond to independent transversals of $V_1,\dots,V_n$. Since $\mathcal{B}_e$ is determined by $\emptyset$, each pair $(i,e)$
has weight $r-1$. Define $\beta:= \frac{r-1}{r}t$. Then
$$|L(i)| = t = \beta + \frac{(r-1)^{r-1}\, t^r}{r^r\,\beta^{r-1}} \geq \beta + \frac{E_{r-1}(i) }{\beta^{r-1}}.$$
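The middle equality here is the direct computation
$$\frac{(r-1)^{r-1}\,t^{r}}{r^{r}\,\beta^{r-1}} \;=\; \frac{(r-1)^{r-1}\,t^{r}}{r^{r}}\cdot\frac{r^{r-1}}{(r-1)^{r-1}\,t^{r-1}} \;=\; \frac{t}{r} \;=\; t-\beta.$$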
Thus \cref{Key} holds and the result follows from \cref{General}.
\end{proof}
\citet{EGL94} study independent transversals in a particular family of sparse hypergraphs. They define an \defn{$[n,k,r]$-hypergraph} to be an $r$-uniform hypergraph $G$ whose vertex set $V(G)$ is partitioned into $n$ sets $V_1, \dots, V_n$, each with $k$ vertices, such that every edge is stretched by $V_1,\dots,V_n$ and for every $r$-element subset $S$ of $\{1,2,\dots,n\}$ there is exactly one edge $e\in E(G)$ such that $\ell(e)=S$. \citet{EGL94} defined $f_r(k)$ to be the maximum integer $n$ such that every $[n,k,r]$-hypergraph has an independent transversal.
Using the Lov\'asz Local Lemma, they proved that if
\begin{equation}
\label{TheirEqn}
e \left(\binom{n}{r} - \binom{n-r}{r} \right) < k^r,
\end{equation}
then $f_r(k) \geq n$.
Observe that for every $[n,k,r]$-hypergraph $G$ with partition $V_1,\dots,V_n$, for each $i\in\{1,\dots,n\}$, exactly $\binom{n-1}{r-1}$ edges of $G$ intersect $V_i$. Thus \cref{ISH} implies that if
\begin{equation}
\label{OurEqn}
\binom{n-1}{r-1} \leq \frac{(r-1)^{r-1} k^{r}}{r^r},
\end{equation}
then $f_r(k)\geq n$. We now compare these last two results. Consider $r$ to be fixed. As $k$ grows, the largest $n$ satisfying \cref{TheirEqn} or \cref{OurEqn} also grows, so we can think of $n$ being large relative to $r$. Then
\begin{align}
&\hspace{-2em}\frac{(r-1)^{r-1}\left[\binom{n}{r}-\binom{n-r}{r}\right]}{r^r\binom{n-1}{r-1}}\nonumber \\
&=\left(\frac{r-1}{r}\right)^{r-1}\frac{n}{r^2}\left[1-\frac{(n-r)!^2}{n!(n-2r)!}\right]\nonumber\\
&=\left(\frac{r-1}{r}\right)^{r-1}\frac{n}{r^2}\left[1-\prod_{i=0}^{r-1}\frac{n-r-i}{n-i}\right]\nonumber\\
&\ge \left(\frac{r-1}{r}\right)^{r-1}\frac{n}{r^2}\left[1-\left(\frac{n-r}{n}\right)^{r}\,\right]\nonumber\\
&=\left(\frac{r-1}{r}\right)^{r-1}\left[1-\binom{r}{2}\frac{1}{n}+\frac{n}{r^2}\sum_{i=2}^{\lceil r/2\rceil}\left(\binom{r}{2i-1}\left(\frac{r}{n}\right)^{2i-1}-\binom{r}{2i}\left(\frac{r}{n}\right)^{2i}\right)\right]\nonumber\\
&\ge \left(\frac{r-1}{r}\right)^{r-1}\left[1-\frac{r^2}{2n}+\frac{n}{r^2}\sum_{i=2}^{\lceil r/2\rceil}\binom{r}{2i-1}\left(\frac{r}{n}\right)^{2i-1}\left(1-\frac{(r-2i+1)r}{2in}\right)\right]\nonumber\\
&\ge\left(\frac{r-1}{r}\right)^{r-1}\left[1-\frac{r^2}{2n}\right]\label{e:bnd}
\end{align}
if $n\geq{r^2}/{4}$. Also $(1-1/r)^{r-1}>1/e$. Hence, if $n$ is sufficiently large relative to $r$, then \cref{e:bnd} will exceed $1/e$, and \cref{OurEqn} implies \cref{TheirEqn}. In other words, our bound on $f_r(k)$ is better when $k$ is sufficiently large relative to $r$. \citet{Yuster-CPC97,Yuster-DM97} used a different argument to get a better bound in the case of graphs ($r=2$).
\cref{ISH} in the case of graphs says:
\begin{cor}
\label{IS}
Fix an integer $t\geq 1$. For a graph $G$, let $V_1,\dots,V_n$ be a partition of $V(G)$ such that $|V_i| \geq t$ and there are at most $\frac{t}{4}|V_i|$ edges in $G$ with exactly one endpoint in $V_i$, for each $i \in\{1,\dots,n\}$. Then there exist at least $(\frac{t}{2})^n$ independent transversals of $V_1,\dots,V_n$.
\end{cor}
\cref{IS} immediately implies the following result (since the average degree out of $V_i$ is at most the maximum degree).
\begin{cor}
\label{ISbasic}
For a graph $G$ with maximum degree at most $\Delta$, if $V_1,\dots,V_n$ is a partition of $V(G)$ such that $|V_i| \geq 4\Delta$ for each $i \in\{1,\dots,n\}$, then there exist at least $(2\Delta)^n$ independent transversals of $V_1,\dots,V_n$.
\end{cor}
We now compare \cref{IS,ISbasic} with the literature. \citet{RW12} proved the weakening of \cref{IS} with $\frac{t}{4}$ replaced by $\frac{t}{2e}$ and with $(\frac{t}{2})^n$ replaced by 1, and \citet{DEKO20} noted that \cref{IS} holds with $(\frac{t}{2})^n$ replaced by $1$ (using different terminology). Similarly, \citet{Alon94} proved the weakening of \cref{ISbasic} with $4\Delta$ replaced by $2e\Delta$ and with $(2\Delta)^n$ replaced by 1. The proofs of \citet{RW12} and \citet{Alon94} used the Lov\'asz Local Lemma, while the proof of \citet{DEKO20} used the Local Cut Lemma. Using a different method, \citet{Haxell01} proved the strengthening of \cref{ISbasic} with $4\Delta$ replaced by $2\Delta$, but with $(2\Delta)^n$ replaced by 1. The bound here of $2\Delta$ is best possible \citep{BES-DM75,Yuster-DM97}. It is open whether $\frac{t}{4}$ in \cref{IS} can be improved to $\frac{t}{2}$; see \citep{KK20}. See \citep{LS07,Yuster-CPC97,GS20} for more on independent transversals in graphs.
These results are related to the following `constrained colouring' conjecture of \citet{Reed99}:
\begin{conj}[\citep{Reed99}]
\label{ReedConj}
Let $L$ be a $(k+1)$-list assignment of a graph $G$ such that for each vertex $v$ of $G$ and colour $c \in L(v)$, there are at most $k$ neighbours $w\in N_G(v)$ with $c\in L(w)$. Then there exists a proper $L$-colouring of $G$.
\end{conj}
\citet{Haxell01} observed the following connection between constrained colourings and independent transversals. Consider an $f(k)$-list-assignment $L$ of a graph $G$. Let $H$ be the graph with $V(H):=\{(v,c):v\in V(G), c\in L(v)\}$, where $(v,c)(w,c)\in E(H)$ for each edge $vw\in E(G)$ and colour $c\in L(v)\cap L(w)$. Let $H_v:=\{(v,c):c\in L(v)\}$. Then $(H_v:v\in V(G))$ is a partition of $H$ with each $|H_v|\geq f(k)$ such that proper $L$-colourings of $G$ correspond to independent transversals of $(H_v:v\in V(G))$. Now if we assume that for each vertex $v$ and colour $c\in L(v)$ there are at most $k$ neighbours $w\in N_G(v)$ with $c\in L(w)$, then $H$ has maximum degree at most $k$. Hence the above-mentioned result of \citet{Alon94} proves \cref{ReedConj} with $k+1$ replaced by $2ek$ (also proved by \citet{Reed99}), and the above-mentioned result of \citet{Haxell01} proves \cref{ReedConj} with $k+1$ replaced by $2k$. \citet{BH02} disproved \cref{ReedConj}. The best asymptotic result, due to \citet{RS02}, says that for each $\epsilon>0$ there exists $k_0$ such that \cref{ReedConj} holds with $k+1$ replaced by $(1+\epsilon)k$ for all $k\geq k_0$. None of these results conclude that there are exponentially many colourings. \cref{IS} and the above connection by \citet{Haxell01} implies the following result:
\begin{cor}
\label{Constrained}
Fix an integer $t\geq 2$. Let $L$ be a $t$-list assignment of a graph $G$ such that for each vertex $v$ of $G$,
$$ 4\sum_{w\in N_G(v)}\!\! |L(v)\cap L(w) | \,\leq\, t^2.$$
Then there exist at least $(\frac{t}{2})^{|V(G)|}$ proper $L$-colourings of $G$.
\end{cor}
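To see how \cref{IS} gives \cref{Constrained}, note that in the graph $H$ built from $G$ and $L$ as above we have $|H_v|=|L(v)|=t$, and the number of edges of $H$ with exactly one endpoint in $H_v$ is
$$\sum_{c\in L(v)}\big|\{w\in N_G(v): c\in L(w)\}\big| \;=\; \sum_{w\in N_G(v)} |L(v)\cap L(w)|,$$
so the displayed hypothesis is exactly the hypothesis of \cref{IS}, and the $(\frac{t}{2})^{|V(G)|}$ independent transversals are the proper $L$-colourings.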
Taking $t=4k$ we obtain the following result in the direction of \cref{ReedConj}:
\begin{cor}
\label{ConstrainedBasic}
Let $L$ be a $4k$-list assignment of a graph $G$ such that for each vertex $v$ of $G$ and colour $c \in L(v)$, there are at most $k$ neighbours $w\in N_G(v)$ such that $c\in L(w)$. Then there exist at least $(2k)^{|V(G)|}$ proper $L$-colourings of $G$.
\end{cor}
The following stronger result can also be proved using a variant of \cref{General}.
\begin{thm}
\label{LastTheorem}
Let $L$ be a list-assignment of a graph $G$ such that for every vertex $v$ of $G$,
\begin{equation}
\label{LastAssumption}
|L(v)| \; \geq \; 4 \sum_{w\in N_G(v)} \frac{ |L(v) \cap L(w)| }{ |L(w)| }.
\end{equation}
Then there exist at least $\prod_{v\in V(G)} \frac{|L(v)|}{2}$ proper $L$-colourings.
\end{thm}
\begin{proof}
We proceed by induction on $|V(H)|$ with the following hypothesis:
for every induced subgraph $H$ of $G$, and for every vertex $v$ of $H$,
$$P(H,L) \;\geq\; \frac{|L(v)|}{2}\, P(H-v,L).$$
(The proof is very similar to that of \cref{GeneralInduction} except that $\beta$ depends on $v$; in particular, $\beta_v=\frac{|L(v)|}{2}$.)\
The base case with $|V(H)|=1$ is trivial.
Let $H$ be an induced subgraph of $G$, and assume the claim holds for all induced subgraphs of $G$ with less than $|V(H)|$ vertices. Let $v$ be any vertex of $H$.
Let $X$ be the set of improper $L$-colourings of $H$ that are proper on $H-v$. Then
\begin{align}
\label{FF}
P(H,L) \;=\; |L(v)| \; P(H-v,L) \,-\,|X|.
\end{align}
We now find an upper bound for $|X|$. For $w\in N_G(v)$, let $X_w$ be the set of colourings $\phi$ in $X$ such that $\phi(v)=\phi(w)$.
Each $L$-colouring in $X$ is in some $X_w$. Thus
\begin{align*}
|X| \;\leq\; \sum_{w\in N_G(v)}\!\!\! |X_w| \;\leq\; \sum_{w\in N_G(v)} P(H-v-w,L) \; |L(v)\cap L(w)|.
\end{align*}
By induction, $P(H-v,L) \;\geq\; \frac{|L(w)|}{2} \, P(H-v-w,L)$.
Hence
\begin{align*}
\label{XXX}
|X| \leq \sum_{w\in N_G(v)} \!\!\! \frac{2 |L(v)\cap L(w)|}{|L(w)|} \, P(H-v,L) .
\end{align*}
By \cref{FF}
\begin{align*}
P(H,L) \;\geq& \; |L(v)| \; P(H-v,L) \,- \sum_{w\in N_G(v)}\!\!\! \frac{2 |L(v)\cap L(w)|}{|L(w)|} \, P(H-v,L).
\end{align*}
By \cref{LastAssumption}, $P(H,L) \geq \frac{|L(v)|}{2} \,P(H-v,L)$, as desired.
\end{proof}
Note that \cref{LastTheorem} immediately implies \cref{Constrained}, taking $|L(v)|=t$ for each $v$.
\subsection{Ramsey Numbers}
For integers $k,c\geq 2$, let \defn{$R_c(k)$} be the minimum integer $n$ such that every edge $c$-colouring of $K_n$ contains a monochromatic $K_k$. \citet{Ramsey30} and \citet{ES35} independently proved that $R_c(k)$ exists. The best asymptotic lower bound on $R_2(k)$ is due to \citet{Spencer75,Spencer77} who proved that
\begin{equation}
\label{RamseySpencerAsymptotics}
R_2(k)\geq \bigg(\frac{\sqrt{2}}{e}-o(1)\bigg) k\, 2^{k/2}.
\end{equation}
More precisely, \citet{Spencer75,Spencer77} proved that if
\begin{equation}
\label{RamseySpencer}
e\binom{k}{2}\left( \binom{n-2}{k-2} + 1 \right) < 2^{\binom{k}{2}-1},
\end{equation}
then there exists an edge 2-colouring of $K_n$ with no monochromatic $K_k$, implying $R_2(k)>n$. \cref{General} leads to an analogous result with the same asymptotics, but with slightly better lower order terms. For a graph $G$ and integer $k\geq 2$, let \defn{$D_k(G)$} be the maximum, taken over all edges $vw\in E(G)$, of the number of $k$-cliques in $G$ containing $v$ and $w$.
\begin{thm}
\label{RamseyLemma}
Fix integers $k\geq 3$ and $c\geq 2$. Let $m:=\binom{k}{2}-1$. Then for every graph $G$ with
\begin{equation}\label{e:Dbound}
D_k(G) \leq \frac{(m-1)^{m-1}\,c^m}{m^m},
\end{equation}
there exists an edge $c$-colouring of $G$ with no monochromatic $K_k$. In fact, there exist at least $\big( D_k(G) (m-1) \big)^{|E(G)|/m}$ such colourings.
\end{thm}
\begin{proof}
Let $G'$ be the hypergraph with $V(G'):=E(G)$, where $S\subseteq E(G)$ is an edge of $G'$ whenever $S$ is the edge-set of a $K_k$ subgraph in $G$. For each edge $vw$ of $G$, let $L(vw):=\{1,\dots,c\}$. For each edge $S$ of $G'$, let $\mathcal{B}_S$ be the set of monochromatic $L$-colourings of $S$. Thus $\mathcal{B}$-good $L$-colourings of $G'$ correspond to edge $c$-colourings of $G$ with no monochromatic $K_k$. Each pair $(v,e)$
has weight $m-1$, and $E_{m-1}(v)\leq D_k(G)$. Thus \cref{Key} holds if
\begin{equation}
\label{KeyRamsey}
c \geq \beta + D_k(G) \,\beta^{1-m}.
\end{equation}
To minimise the right-hand side of this expression, define $\beta := \big( D_k(G) \, (m-1) \big)^{1/m}$. Then \cref{e:Dbound} implies
\cref{KeyRamsey}, so the result follows from \cref{General}.
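Indeed, with this choice of $\beta$,
$$\beta + D_k(G)\,\beta^{1-m} \;=\; \beta + \frac{\beta^{m}}{m-1}\,\beta^{1-m} \;=\; \frac{m}{m-1}\,\beta \;=\; \frac{m}{m-1}\,\big(D_k(G)\,(m-1)\big)^{1/m},$$
which is at most $c$ precisely when $D_k(G)\leq (m-1)^{m-1}c^{m}/m^{m}$.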
\end{proof}
Applying \cref{RamseyLemma} to a complete graph gives the following corollary.
\begin{cor}
\label{Ramsey}
For every integer $k\geq 3$ and $c\geq 2$, if $m:=\binom{k}{2}-1$ and
$$ \frac{m^m}{(m-1)^{m-1}} \binom{n-2}{k-2} \leq c^m $$
then there exists an edge $c$-colouring of $K_n$ with no monochromatic $K_k$, and $R_c(k)>n$.
\end{cor}
Since $ \frac{m^m}{(m-1)^{m-1}} < em = e\big( \tbinom{k}{2}-1 \big)$, \cref{Ramsey} is slightly stronger than \cref{RamseySpencer}. While this improvement only changes the implicit lower order term in \cref{RamseySpencerAsymptotics}, we consider it to be of interest, since it suggests a new approach for proving lower bounds on $R_c(k)$.
\subsection{$k$-SAT}
The $k$-SAT problem takes as input a Boolean formula $\psi$ in conjunctive normal form, where each clause has exactly $k$ distinct literals, and asks whether there is a satisfying truth assignment for $\psi$. The Lov\'asz Local Lemma proves that if each variable is in at most $\frac{2^k}{ke}$ clauses, then there exists a satisfying truth assignment; see \citep{GMSW09} for a thorough discussion of this topic. The following result (slightly) improves upon this bound (since $\big( \frac{k-1}{k} \big)^{k-1} > \frac{1}{e}$), and moreover, guarantees exponentially many truth assignments.
\begin{thm}
\label{kSAT}
Let $\psi$ be a Boolean formula in conjunctive normal form, with variables $v_1,\dots,v_n$ and clauses $c_1,\dots,c_m$, each with exactly $k$ literals. Assume that each variable is in at most $\Delta := \frac{2^k}{k}
\big( \frac{k-1}{k} \big)^{k-1} $ clauses. Then there exists a satisfying truth assignment for $\psi$. In fact, there are at least $(2-\frac{2}{k})^n$ such truth assignments.
\end{thm}
\begin{proof}
Let $G$ be the hypergraph with $V(G)=\{v_1,\dots,v_n\}$ and $E(G)=\{e_1,\dots,e_m\}$, where edge $e_i$ consists of those variables in clause $c_i$. So $G$ is $k$-uniform. Let $L(v_i)=\{0,1\}$ for each vertex $v_i$. Let $\mathcal{B}_{e_i}$ be the set of $L$-colourings of $e_i$ such that $c_i$ is not satisfied. Satisfying truth assignments for $\psi$ correspond to $\mathcal{B}$-good $L$-colourings of $G$. Each pair $(v,e)$
has weight $k-1$. Thus $E_{k-1}(v)\leq\Delta$ and $E_i(v) = 0$ for all $i\neq k-1$. Then \cref{Key} holds with $\beta := 2-\frac{2}{k}$ and $c:=2$. The result follows from \cref{General}.
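Spelled out, \cref{Key} here reads
$$2 \;\geq\; \Big(2-\frac{2}{k}\Big) + \Delta\,\Big(2-\frac{2}{k}\Big)^{1-k},$$
which rearranges to $\Delta \leq \frac{2}{k}\big(\frac{2(k-1)}{k}\big)^{k-1} = \frac{2^{k}}{k}\big(\frac{k-1}{k}\big)^{k-1}$, exactly the assumed bound.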
\end{proof}
Note that \citet{GST16} proved that if each variable is in at most $(1-o(1))\frac{2^{k+1}}{ke}$ clauses, then there exists a satisfying truth assignment, and that this bound is best possible up to the $o(1)$ term; see \citet{Harris21} for further improvements. These results improve upon the bound in \cref{kSAT} by a factor of 2. However, \cref{kSAT} may still be of interest since it gives exponentially many satisfying assignments and is an immediate corollary of our general framework.
See \citep{CW18} for bounds on the number of satisfying truth assignments in random {$k$}-{SAT} formulas.
\section{Reflection}
\label{Reflections}
We now reflect on \cref{General}, which provides a general framework
for colouring hypergraphs of bounded degree.
First we discuss minimising the number of colours in \cref{General}.
To do so, one
needs to minimise the right hand side of \cref{Key}, which is a
Laurent series $Q(\beta)$ with nonnegative integer coefficients. We
assume that at least one edge has positive weight, since otherwise
$Q(\beta)$ is linear. We also assume that the coefficients in
$Q(\beta)$ grow slowly enough that it and its first two derivatives
converge for all $\beta>R$ for some real number $R$. For example,
when the weight of edges is bounded (which is true in every example in
this paper outside of \cref{ss:nonrep}), we are optimising a Laurent
polynomial, and may take $R=0$. Now, $Q''(\beta)>0$ for all
$\beta>R$, so we expect a unique minimum for $Q(\beta)$ on the
interval $[R,\infty)$, say at $\beta=\beta_0$. Since $Q'(1)\le0$ (or
$R>1$), we must have $\beta_0\ge 1$. Even using a value of
$\beta\ne\beta_0$, one still obtains a non-trivial result from
\cref{General}. In fact, choosing $\beta>\beta_0$ may be desirable
if one wants to find conditions under which there are more
colourings than are guaranteed by taking $\beta=\beta_0$.
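For instance, if the only nonzero coefficients are $E_0$ and $E_w$ for a single weight $w\geq 1$, then $Q(\beta)=\beta+E_0+E_w\,\beta^{-w}$, and solving $Q'(\beta_0)=0$ gives
$$\beta_0=(w\,E_w)^{1/(w+1)} \qquad\text{and}\qquad Q(\beta_0)=E_0+\frac{w+1}{w}\,\beta_0;$$
this recovers, for example, the choices of $\beta$ in \cref{ISH} and \cref{RamseyLemma}.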
Compared with the Lov\'asz Local Lemma, \cref{General} has the advantage of directly proving the existence of exponentially many colourings, and often gives slightly better bounds. The proof of \cref{General} is elementary, and as discussed above, \cref{Key} is often easier to optimise than the General Lov\'asz Local Lemma.
\cref{General} should also be compared with entropy compression, which is a method that arose from the algorithmic proof of the Lov\'asz Local Lemma due to \citet{MoserTardos}. See \citep{BD17,DJKW16,DFMS20,EsperetParreau,GMP20} for examples of the use of entropy compression in the context of graph colouring. We expect that the results in \cref{Examples} can be proved using entropy compression. For example, see \citep[Theorem~12]{GMP14} for a generic graph colouring lemma in a similar spirit to our \cref{General} that is proved using entropy compression. However, we consider the proof of \cref{General} and the proofs of results that apply \cref{General} to be simpler than their entropy compression counterparts, which require non-trivial analytic techniques from enumerative combinatorics. On the other hand, entropy compression has the advantage that it provides an explicit algorithm to compute the desired colouring, often with polynomial expected time complexity.
It is also likely that our results in \cref{Examples} can be proved using the Local Cut Lemma~\citep{Bernshteyn17} or via cluster expansion \citep{BFPS11}. The advantage of \cref{General} is the simplicity and elementary nature of its proof. See \citep{FLP20,APS21} for results connecting the Lov\'asz Local Lemma, entropy compression, and cluster expansion.
Finally, we mention a technical advantage of the Lov\'asz Local Lemma and of entropy compression. In the setting of hypergraph colouring, the Lov\'asz Local Lemma and entropy compression need only bound the number of edges that intersect a given edge, whereas \cref{General} requires a bound on the number of edges that contain a given vertex (because the proof is by induction on the number of vertices).
\subsection*{Acknowledgements} Thanks to Danila Cherkashin, Ewan Davies, Louis Esperet, David Harris, Gwena\'el Joret, Ross Kang, Matthieu Rosenfeld and Lutz Warnke for helpful feedback on an earlier version of this paper.
The Beilinson-Hodge conjecture (${\rm BH}(X,n)$) asserts the surjectivity of the
cycle map
$$
\bar{c}_{n,n}: H^n_M(X,\Q(n)) \xr{} \Hom_{{\rm MHS}}(\Q(-n),H^{n}(X,\Q))
$$
for all integers $n\geq 1$ and every smooth complex algebraic variety $X$.
Informally speaking, the conjecture means that every complex holomorphic $n$-form
with logarithmic poles along the boundary divisor of every compactification
of $X$ and with rational cohomology class comes from a meromorphic form of the
shape
$$
\frac{1}{(2\pi i)^n} \sum_{j} m_j \cdot \frac{df_{j1}}{f_{j1}}\wedge \dots \wedge
\frac{df_{jn}}{f_{jn}}
$$
with $f_{jk}\in \C(X)^*$ and $m_j\in \Q$.
It is well-known that the conjecture holds for $n=1$ (see, for example, \cite[Proposition~2.12]{EV}). For $n\geq 2$, Asakura and Saito provided evidence
for the conjecture by studying the Noether-Lefschetz locus of Beilinson-Hodge
cycles (see \cite{AS1},\cite{AS2},\cite{AS3}). By work of Arapura and Kumar, ${\rm BH}(X,n)$ is known to hold for every $n$ provided that $X$
is a semi-abelian variety or a product of curves \cite{AK}.
In our paper we consider only the case $n=2$, and make two observations.
First, for a smooth and projective variety $X$ the Beilinson-Hodge
conjecture ${\rm BH}(\eta,2)$ for the generic point $\eta$ of $X$ is
equivalent to the injectivity of the cycle map
$$
\frac{H^3_M(X,\Q(2))}{H^1_M(X,\Q(1))\cdot H^2_M(X,\Q(1))}\xr{}
\frac{H^3_{\mc{H}}(X,\Q(2))}{H^1_{\mc{H}}(X,\Q(1))\cdot H^2_{\mc{H}}(X,\Q(1))}
$$
to absolute Hodge cohomology (Proposition \ref{proposition-ex-seq-general}). The left hand side is called the group of
\emph{indecomposable} cycles and has been studied
by M\"uller-Stach \cite{MS}. In general, indecomposable cycles exist, but
the image via the cycle class map is a countable group.
By ${\rm BH}(\eta,2)$ we mean the surjectivity of the cycle class map
$$
H^2_{M}(\C(X),\Q(2))\xr{} \varinjlim_{U\subset X}\Hom_{{\rm MHS}}(\Q(-2),H^{2}(U,\Q)),
$$
where $U$ runs over all open subsets of $X$, and $\C(X)$ denotes the function field.
The second observation is that if $X$ satisfies $H^1(X,\C)=0$ then
${\rm BH}(U,2)$ for all open sets $U\subset X$ is equivalent to ${\rm BH}(\eta,2)$
(Proposition \ref{proposition-reduction-to-generic-point}).
The statement makes perfect sense when $H^1(X,\C)\neq 0$, but we can prove
it only in the case $H^1(X,\C)=0$.
Combining the two observations we obtain the main theorem of the paper.
\begin{thm*}[cf.~Theorem \ref{main-thm}]\label{main-thm-intro}
Let $X$ be smooth and connected. Let $\bar{X}$ be a smooth compactification of $X$.
We denote by ${\rm CH}_0(\bar{X})\otimes_{\Z}\Q$ the Chow group of zero cycles on $\bar{X}$.
If $\deg:{\rm CH}_0(\bar{X})\otimes_{\Z}\Q \xr{} \Q$ is an isomorphism then $BH(X,2)$ holds.
\end{thm*}
For the proof we use a theorem of Bloch and Srinivas \cite{BS} which states
that the indecomposable cycles vanish whenever the assumptions of our
theorem are satisfied.
\subsection*{Acknowledgements}
It is a pleasure to thank H\'el\`ene Esnault for her strong encouragement.
I thank Donu Arapura, Florian Ivorra and Manish Kumar for useful discussions.
\section{Cycle class to absolute Hodge cohomology}
\subsection{Higher Chow groups and absolute Hodge cohomology}
Let $X$ be a smooth algebraic variety over the complex numbers.
Absolute Hodge cohomology was introduced by Beilinson
(\cite{Bei}, cf.~\cite[\textsection2]{JA}). Beilinson constructs for
every complex algebraic variety $X$ an object $\underline{R\Gamma}(X,\Q)$
in the derived category of mixed Hodge structures $D^b(MHS)$ such that
$$
H^i(\underline{R\Gamma}(X,\Q))=H^i(X,\Q) \quad \text{for all $i$.}
$$
Absolute Hodge cohomology $H_{\mc{H}}^{\bullet}(X,\Q(\bullet))$ is defined as follows:
$$
H_{\mc{H}}^{q}(X,\Q(p))=\Hom_{D^b(MHS)}(\Q,\underline{R\Gamma}(X,\Q)(p)[q]),
$$
for all $p,q$.
The natural spectral sequence
$$
E_2^{ij}={\rm Ext}^{i}_{MHS}(\Q,H^{j}(X,\Q)(p))\Rightarrow H^{i+j}_{\mc{H}}(X,\Q(p)),
$$
and vanishing of ${\rm Ext}^i$ for $i>1$, induces short exact sequences
\begin{equation}\label{ssabsolute Hodge}
0\xr{} {\rm Ext}^1(\Q,H^{q-1}(X,\Q)(p)) \xr{} H^q_{\mc{H}}(X,\Q(p)) \xr{}
\Hom(\Q,H^q(X,\Q)(p)) \xr{} 0.
\end{equation}
Note that $\Hom$s and
${\rm Ext}$s are taken in the category of mixed Hodge structures.
If $X$ is smooth and proper then we have a comparison isomorphism
with Deligne cohomology
$$
H^q_{\mc{H}}(X,\Q(p)) \cong H^q_{\mc{D}}(X,\Q(p)),
$$
provided that $q\leq 2p$ \cite[\textsection2.7]{JA}.
\subsection{Cycle class map}
Let $DM_{gm,\Q}$ be Voevodsky's triangulated category of motivic complexes
with rational coefficients over $\C$ (\cite{Vt},\cite{V}). Denoting by
$Sm/\C$ the category of smooth complex algebraic varieties, there is a functor
$$
Sm/\C \xr{} DM_{gm,\Q}, \quad X\mapsto M_{gm}(X).
$$
Motivic cohomology is defined by
$$
H_{M}^{q}(X,\Q(p))=\Hom_{DM_{gm}}(M_{gm}(X),\Q(p)[q]),
$$
for $X$ smooth and $p\geq 0, q\in \Z$. There is a comparison isomorphism \cite{Vh} with Bloch's higher Chow groups
$$
H_{M}^{q}(X,\Q(p))\cong {\rm CH}^{p}(X,2p-q)\otimes \Q.
$$
By Levine \cite{Le} and Huber (\cite{Hu1},\cite{Hu2}) we have realizations
\begin{equation}\label{equation-realization}
r_{\mc{H}}:DM_{gm}\xr{} D^b(MHS)
\end{equation}
at our disposal, such that $\underline{R\Gamma}(X,\Q)$ is the dual of $r_{\mc{H}}(M_{gm}(X))$ and
$r_{\mc{H}}(\Q(1))=\Q(1)$.
The realizations are triangulated $\otimes$-functors and therefore induce
cycle maps
\begin{equation}\label{cycle-map}
c_{p,2p-q}: H^{q}_M(X,\Q(p))\xr{} H^{q}_{\mc{H}}(X,\Q(p))
\end{equation}
which are compatible with the localization sequence. An explicit construction
of a cycle map by using currents is presented in \cite{K-L-abeljacobi}.
Composition of $c_{p,2p-q}$ with the projection from \eqref{ssabsolute Hodge} yields
\begin{equation}\label{bar-cycle-map}
\bar{c}_{p,2p-q}: H^{q}_M(X,\Q(p))\xr{} \Hom_{MHS}(\Q,H^{q}(X,\Q)(p)).
\end{equation}
\subsection{Beilinson-Hodge conjecture}
Let $X$ be a smooth algebraic variety over $\C$, and $n\geq 0$ an integer.
\begin{conjecture}[Beilinson-Hodge conjecture]
BH(X,n): The cycle map $\bar{c}_{n,n}$ \eqref{bar-cycle-map} is surjective.
\end{conjecture}
\begin{remark}\label{remark-n=1}
If $X$ is smooth then
$$
c_{1,1}:H_M^1(X,\Q(1))\xr{} H_{\mc{H}}^1(X,\Q(1))
$$
is an isomorphism \cite[Proposition~2.12]{EV}. In particular, BH(X,1) holds.
\end{remark}
\section{Beilinson-Hodge conjectures for the generic point}
\subsection{Coniveau spectral sequences}
The main technical tool of our paper is the coniveau spectral sequence
for motivic and absolute Hodge cohomology. The existence and construction
of the coniveau spectral sequence are well-known and follow from the yoga of exact couples
as in \cite{BO}. Since we could not find a reference for the case
of absolute Hodge cohomology, we explain the construction
in this section.
\subsubsection{} In the following, $?$ will stand for $M$ or $\mc{H}$.
For $p\geq 0$ we denote by $X^{(p)}$ the set of codimension $p$ points of $X$.
For a point $x\in X$ (not necessarily closed) we define
$$
H^{q}_{?}(x,\Q(p)):=\varinjlim_{x\in U\subset X} H^q_{?}(U,\Q(p)),
$$
where $U$ runs over all open neighborhoods of $x$. For all $n\geq 0$, the coniveau
spectral sequence reads:
\begin{equation}\label{equation-coniveau-spectral-seq}
E_{1}^{p,q}=\bigoplus_{x\in X^{(p)}, p\leq n} H^{q-p}_{?}(x,\Q(n-p))\Rightarrow H^{p+q}_?(X,\Q(n)).
\end{equation}
The terms $E_1^{p,q}$ with $p>n$ are zero.
\subsubsection{}
In order to construct the coniveau spectral sequence we use the
category of finite correspondences $Cor_\C$ \cite[Lecture~1]{V}.
There is an obvious functor
$$
Sm/\C\xr{} Cor_\C, \quad X\mapsto [X].
$$
We denote by $\mc{H}^b(Cor_\C)$ the homotopy category of bounded complexes \cite[\textsection2.1]{Vt}.
By construction \cite[Definition~2.1.1]{Vt} there is a triangulated
functor
\begin{equation}\label{equation-functor-Cor-to-DM}
\mc{H}^b(Cor_\C) \xr{} DM_{gm,\Q}.
\end{equation}
\begin{definition}
Let $X$ be smooth and $Y\subset X$ a closed set. We define $c_{Y}X$
to be the complex
$$
[X\backslash Y]\xr{\jmath} \underset{\deg=0}{[X]}
$$
in $\mc{H}^b(Cor_\C)$. The map $\jmath$ is the open immersion.
\end{definition}
Let $Y\subset X$ be a closed subset. For an open subset $U$ of $X$
and a closed subset $Y'$ of $U$ such that $Y\cap U\subset Y'$
we get a morphism of complexes
\begin{equation}\label{equation-c-maps}
c_{Y'}U\xr{} c_{Y}X.
\end{equation}
\begin{lemma}\label{lemma-distingueshed-triangles}
Let $X$ be smooth.
Let $Y_1,Y_2$ be closed subsets of $X$ with $Y_2\subset Y_1$.
\begin{enumerate}
\item
The morphisms
$$
c_{Y_1\backslash Y_2}(X\backslash Y_2) \xr{} c_{Y_1}X \xr{} c_{Y_2}X \xr{+1} c_{Y_1\backslash Y_2}(X\backslash Y_2) [1]
$$
induced by \eqref{equation-c-maps} and
\begin{align*}
c_{Y_2}X\xr{} c_{Y_1\backslash Y_2}(X\backslash Y_2)[1] \\
\xymatrix
{
[X]
&
\\
[X\backslash Y_2]\ar[u]\ar[r]^{-{\rm id}}
&
[X\backslash Y_2]
\\
&
[X\backslash Y_1]\ar[u]^{-{\rm incl}}
}
\end{align*}
form a distinguished triangle in $\mc{H}^b(Cor_\C)$.
\item If $Y_2'\subset Y'_1$ are closed subsets of $X$ such that $Y_i\subset Y'_i$, for $i=1,2$, then the morphisms from \eqref{equation-c-maps} induce
a morphism of distinguished triangles
$$
\xymatrix
{
c_{Y_1\backslash Y_2}(X\backslash Y_2) \ar[r] &
c_{Y_1}X \ar[r] &
c_{Y_2}X \ar[r]^{+1}&
\\
c_{Y'_1\backslash Y'_2}(X\backslash Y'_2) \ar[r]\ar[u] &
c_{Y'_1}X \ar[r]\ar[u] &
c_{Y'_2}X \ar[r]^{+1}\ar[u]
&
}
$$
\end{enumerate}
\begin{proof}
For (1). It suffices to show that the given triangle is isomorphic to the triangle built from the cone of $c_{Y_1}X \xr{} c_{Y_2}X$. There is an obvious isomorphism in $\mc{H}^b(Cor_\C):$
\begin{align*}
{\rm cone}(c_{Y_1}X \xr{} c_{Y_2}X)[-1]\xr{} c_{Y_1\backslash Y_2}(X\backslash Y_2),\\
\xymatrix
{
[X]
&
&
\\
[X] \oplus [X\backslash Y_2] \ar[u]\ar[rr]^{-{\rm pr}_2}
&
&
[X\backslash Y_2]
\\
[X\backslash Y_1]\ar[u]^{({\rm incl},-{\rm incl})}\ar[rr]^{=}
&
&
[X\backslash Y_1],\ar[u]
}
\end{align*}
rendering commutative the diagram
$$
\xymatrix
{
c_{Y_2}X[-1]\ar[r] \ar[d]&
{\rm cone}(c_{Y_1}X \xr{} c_{Y_2}X)[-1]\ar[r] \ar[d]&
c_{Y_1}X \ar[r] \ar[d]&
c_{Y_2}X \ar[d]
\\
c_{Y_2}X[-1]\ar[r]&
c_{Y_1\backslash Y_2}(X\backslash Y_2) \ar[r] &
c_{Y_1}X \ar[r] &
c_{Y_2}X.
}
$$
For (2). Straightforward.
\end{proof}
\end{lemma}
\begin{definition}
Let $X$ be smooth and $Y\subset X$ a closed subset.
For all $n\geq 0$ and $q\in \Z$ we define
\begin{align*}
H^q_{Y,M}(X,\Q(n))&:= \Hom_{DM_{gm}}(c_YX,\Q(n)[q])\\
H^q_{Y,\mc{H}}(X,\Q(n))&:= \Hom_{D^b(MHS)}(r_{\mc{H}}(c_YX),\Q(n)[q]).
\end{align*}
We implicitly used the functor \eqref{equation-functor-Cor-to-DM}.
\end{definition}
From \eqref{equation-c-maps} we obtain a map
$$
H^*_{Y,?}(X,\Q(n)) \xr{} H^*_{Y',?}(U,\Q(n))
$$
if $U$ is an open subset of $X$ and $Y\cap U\subset Y'$.
\subsubsection{}
For $p\geq 0$ we denote by $Z^p=Z^p(X)$ the set of closed
subsets of $X$ of codimension $\geq p$, ordered by inclusion. Let $Z^p/Z^{p+1}$ denote the ordered set of pairs
$(Z,Z')\in Z^p\times Z^{p+1}$ such that $Z\supset Z'$, with the ordering
$$
(Z,Z')\geq (Z_1,Z'_1) \quad \text{if $Z\supset Z_1$ and $Z'\supset Z'_1$.}
$$
We can form for all $n\geq 0$ and $p\in \Z$:
\begin{align*}
H^*_{Z^p,?}(X,\Q(n))&:=\begin{cases} \varinjlim_{Z\in Z^p} H^*_{Z,?}(X,\Q(n)) &\text{if $p\geq 0$,}\\
H^*_?(X,\Q(n)) &\text{if $p\leq 0$.}\end{cases}\\
H^*_{Z^p/Z^{p+1},?}(X,\Q(n))&:=\begin{cases} \varinjlim_{(Z,Z')\in Z^p/Z^{p+1}}
H^*_{Z\backslash Z',?}(X\backslash Z',\Q(n)) &\text{if $p\geq 0$,}\\
0 &\text{if $p<0$.}\end{cases}
\end{align*}
In view of Lemma \ref{lemma-distingueshed-triangles}, we obtain for every
$(Z,Z')\in Z^p/Z^{p+1}$ a long exact sequence
\begin{equation}\label{equation-long-exact-sequence-cohomology-supports}
H^*_{Z',?}(X,\Q(n))\xr{} H^*_{Z,?}(X,\Q(n)) \xr{}
H^*_{Z\backslash Z',?}(X\backslash Z',\Q(n))\xr{+1},
\end{equation}
and we can take the limit to get a long exact sequence
\begin{equation}\label{equation-triangle}
H^*_{Z^{p+1},?}(X,\Q(n))\xr{} H^*_{Z^p,?}(X,\Q(n)) \xr{}
H^*_{Z^p/Z^{p+1},?}(X,\Q(n))\xr{+1}.
\end{equation}
This also holds for $p<0$ for trivial reasons. We form an exact couple as
follows
\begin{align}
D&:=\bigoplus_{p\in \Z} H^*_{Z^p,?}(X,\Q(n)), \label{align-exact-couple} \\
E&:= \bigoplus_{p\geq 0} H^*_{Z^p/Z^{p+1},?}(X,\Q(n)), \nonumber
\end{align}
and the exact triangle induced by \eqref{equation-triangle}:
$$
\xymatrix
{
D \ar[rr]
&
&
D \ar[dl]
\\
&
E. \ar[ul]
&
}
$$
Setting
\begin{equation*}
E^{p,q}_{1}:=H^{p+q}_{Z^p/Z^{p+1},?}(X,\Q(n)),
\end{equation*}
the exact couple yields a spectral sequence
\begin{equation}\label{equation-coniveau-spectral-seq-support}
E^{p,q}_{1}\Rightarrow H^{p+q}_?(X,\Q(n)),
\end{equation}
for all $n\geq 0$, such that
$$
E^{p,q}_{\infty}= \frac{N^pH^{p+q}_?(X,\Q(n))}{N^{p+1}H^{p+q}_?(X,\Q(n))},
$$
with
$$
N^{p}H^{i}_?(X,\Q(n))={\rm image}(H^i_{Z^p,?}(X,\Q(n)) \xr{} H^i_{?}(X,\Q(n))).
$$
\begin{lemma} \label{lemma-purity}
Let $X$ be smooth and $n\geq 0$.
\begin{enumerate}
\item If $p\leq n$ then
$$
H^{q+p}_{Z^p/Z^{p+1},?}(X,\Q(n))\cong \bigoplus_{x\in X^{(p)}} H^{q-p}_{?}(x,\Q(n-p)) , \quad \text{for all $q\in \Z$.}
$$
\item If $p>n$ then
$$
H^{q}_{Z^p/Z^{p+1},?}(X,\Q(n))=0,\quad \text{for all $q\in \Z$.}
$$
\end{enumerate}
\begin{proof}
The set
$$
S=\{(Z,Z')\in Z^p/Z^{p+1}\mid \text{$Z\backslash Z'$ is smooth of pure codimension $=p$}\}$$
is a cofinal subset of $Z^p/Z^{p+1}$; thus
$$
H^{q+p}_{Z^p/Z^{p+1},?}(X,\Q(n))=\varinjlim_{(Z,Z')\in S} H^{q+p}_{Z\backslash Z',?}(X\backslash Z',\Q(n)).
$$
From the Gysin triangle \cite[Proposition~3.5.4]{Vt} we obtain for all
$(Z,Z')\in S$ a natural isomorphism
$$
c_{Z\backslash Z'} (X\backslash Z') \cong M_{gm}(Z\backslash Z')(p)[2p],
$$
in $DM_{gm}$.
For (1). By using cancellation we obtain
$$
H^{q+p}_{Z\backslash Z',?}(X\backslash Z',\Q(n))=H^{q-p}_{?}(Z\backslash Z',\Q(n-p))
$$
for all $(Z,Z')\in S$, and the restriction maps
$$
H^{q-p}_{?}(Z\backslash Z',\Q(n-p)) \xr{} \bigoplus_{x\in X^{(p)}} H^{q-p}_?(x,\Q(n-p))
$$
induce the desired isomorphism.
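Unwinding definitions in the motivic case, the first identification is the chain
$$H^{q+p}_{Z\backslash Z',M}(X\backslash Z',\Q(n)) = \Hom_{DM_{gm}}\big(M_{gm}(Z\backslash Z')(p)[2p],\Q(n)[q+p]\big) = \Hom_{DM_{gm}}\big(M_{gm}(Z\backslash Z'),\Q(n-p)[q-p]\big),$$
where the second equality is the cancellation theorem; the case $?=\mc{H}$ is treated in the same way, applying $r_{\mc{H}}$.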
For (2). Suppose $p>n$. We claim that
\begin{equation}\label{equation-H*Z}
H^{*}_{Z,?}(X,\Q(n))=0
\end{equation}
for all $Z\in Z^p(X)$. In view of the long exact sequence
\eqref{equation-long-exact-sequence-cohomology-supports} this will prove (2).
By definition the vanishing of $H^{*}_{Z,?}(X,\Q(n))$ follows if the restriction map
$$
H^*_?(X,\Q(n))\xr{} H^*_?(X\backslash Z,\Q(n))
$$
is an isomorphism. Set $U:=X\backslash Z$. For $?=M$ we can use
the comparison isomorphism with higher Chow groups. It is
sufficient to prove that the restriction induces an isomorphism
of complexes
\begin{equation}\label{equation-restriction-X-to-U}
Z^n(X,\bullet)\xr{\cong} Z^n(U,\bullet),
\end{equation}
where $Z^n(?,\bullet)$ denotes Bloch's cycle complex.
Since $X\backslash U$ has codimension $>n$, the map \eqref{equation-restriction-X-to-U}
is injective. For the surjectivity, let $A\in Z^n(U,m)$ be the class of an
irreducible subvariety of $U\times \Delta^m$. By definition $A$ has codimension
$n$ and meets all faces $U\times \Delta^{i}$ properly. Let $\bar{A}$ be
the closure of $A$ in $X\times \Delta^{m}$. Since
$$
\bar{A}\cap (X\times \Delta^{i})\subset (A\cap (U\times \Delta^{i}))\cup ((X\backslash U)\times \Delta^{i}),
$$
and $(X\backslash U)\times \Delta^{i}$ has codimension $>n$ in
$X\times \Delta^{i}$, we conclude that $\bar{A}\in Z^n(X,m)$.
For $?=\mc{H}$.
In view of \eqref{ssabsolute Hodge}, we need to prove that the restriction induces
isomorphisms
\begin{align}
\label{align-Hom} \Hom_{MHS}(\Q(-n),H^q(X,\Q)) &\xr{\cong} \Hom_{MHS}(\Q(-n),H^q(U,\Q)), \\
\label{align-Ext} {\rm Ext}^1_{MHS}(\Q(-n),H^q(X,\Q)) &\xr{\cong} {\rm Ext}^1_{MHS}(\Q(-n),H^q(U,\Q)),
\end{align}
for all $q$. In order to prove \eqref{align-Hom} and \eqref{align-Ext} we
use the exact sequence
\begin{multline}\label{multline-exact-seq-HS}
0\xr{} \frac{H^q_{X\backslash U}(X,\Q)}{{\rm im}(H^{q-1}(U,\Q))} \xr{} H^q(X,\Q) \xr{} H^q(U,\Q)\xr{} \\
\ker\left(H^{q+1}_{X\backslash U}(X,\Q)\xr{} H^{q+1}(X,\Q)\right)\xr{} 0.
\end{multline}
Note that $\frac{H^q_{X\backslash U}(X,\Q)}{{\rm im}(H^{q-1}(U,\Q))}$
and $\ker(H^{q+1}_{X\backslash U}(X,\Q)\xr{} H^{q+1}(X,\Q))$ are Hodge structures of weight $\geq 2p$. If $E$ is any mixed Hodge structure of weight $\geq 2p$ then
$$
\Hom(\Q(-n),E)=0, \quad {\rm Ext}^1(\Q(-n),E)=0,
$$
because $p>n$. Therefore \eqref{multline-exact-seq-HS} implies the statement.
\end{proof}
\end{lemma}
\begin{proposition} \label{proposition-coniveau-spectral-seq}
Let $X$ be smooth and $?=M$ or $?=\mc{H}$. Let $n\geq 0$ be an integer.
\begin{enumerate}
\item There is a spectral sequence
$$
E_{1,?}^{p,q}=\bigoplus_{x\in X^{(p)}, p\leq n} H^{q-p}_{?}(x,\Q(n-p))\Rightarrow H_?^{p+q}(X,\Q(n))
$$
such that
$$
E^{p,q}_{\infty,?}= \frac{N^pH^{p+q}_?(X,\Q(n))}{N^{p+1}H^{p+q}_?(X,\Q(n))},
$$
with
$$
N^{p}H^{*}_?(X,\Q(n))=\bigcup_{\substack{U\subset X \\ {\rm codim}(X\backslash U)\geq p}}
\ker(H^*_?(X,\Q(n)) \xr{} H^*_{?}(U,\Q(n))),
$$
where $U$ runs over all open subsets with ${\rm codim} (X\backslash U)\geq p$.
\item The cycle map induces a morphism of spectral sequences
$$
[E_{1,M}^{p,q}\Rightarrow H_M^{p+q}(X,\Q(n))] \xr{} [E_{1,\mc{H}}^{p,q}\Rightarrow H_{\mc{H}}^{p+q}(X,\Q(n))].
$$
\end{enumerate}
\begin{proof}
For (1). The statement follows from the spectral sequence \eqref{equation-coniveau-spectral-seq-support} and Lemma \ref{lemma-purity}.
For (2). The realization $r_{\mc{H}}$ \eqref{equation-realization} induces
a morphism of the exact couples \eqref{align-exact-couple}.
\end{proof}
\end{proposition}
\subsection{$E_1$ complexes of the coniveau spectral sequence}
Let $X$ be smooth and connected; we denote by $\eta$ the generic point of $X$.
The cycle map induces a morphism between the
$E^{\bullet,2}_1$ complexes of the coniveau spectral sequence
(Proposition \ref{proposition-coniveau-spectral-seq}) for $n=2$:
\begin{equation}\label{diagram-Gersten}
\xymatrix
{
E^{\bullet,2}_{1,M}:
H^2_M(\eta,\Q(2)) \ar[r]\ar[d]
&
\oplus_{x\in X^{(1)}} H^1_M(x,\Q(1))\ar[r]\ar[d]
&
\oplus_{x\in X^{(2)}} \Q \ar[d]
\\
E^{\bullet,2}_{1,\mc{H}}:
H^2_{\mc{H}}(\eta,\Q(2)) \ar[r]\ar[d]
&
\oplus_{x\in X^{(1)}} H^1_{\mc{H}}(x,\Q(1))\ar[r]\ar[d]
&
\oplus_{x\in X^{(2)}} \Q \ar[d]
\\
\Hom(\Q,H^2(\eta,\Q)(2)) \ar[r]
&
\oplus_{x\in X^{(1)}} \Hom(\Q,H^1(x,\Q)(1))\ar[r]
&
\oplus_{x\in X^{(2)}} \Q
}
\end{equation}
We call the complex in the first line $G_{M}(X,2)$,
the complex in the second line $G_{\mc{H}}(X,2)$, and the complex in the
third line $G_{HS}(X,2)$. The complex $G_{HS}(X,2)$ is induced
by $G_{\mc{H}}(X,2)$ via \eqref{ssabsolute Hodge}.
For $G_M(X,2)$ the group $H^2_M(\eta,\Q(2))$
is the component in degree $=0$, and the grading is defined similarly for
$G_{\mc{H}}(X,2)$ and $G_{HS}(X,2)$. Via Gersten-Quillen resolution we have
\begin{equation}\label{equation-H1K2}
H^1(G_M(X,2))=H^1(X,\mc{K}_2)\otimes_{\Z} \Q,
\end{equation}
where $\mc{K}_2$ is Quillen's K-theory Zariski sheaf associated to the presheaf
$U\mapsto K_2(\mathcal{O}_X(U))$.
\begin{proposition}\label{proposition-cohomology-Gersten}
Let $X$ be smooth and connected.
\begin{itemize}
\item[(i)] There is a natural isomorphism $H^1(G_M(X,2))\xr{\cong} H^3_M(X,\Q(2)).$
\item[(ii)] There is a natural injective map
$
H^1(G_{\mc{H}}(X,2))\xr{} H^3_{\mc{H}}(X,\Q(2)).
$
We call the image $H^3_{{\mc{H}},{\rm alg}}(X,\Q(2))$.
\item[(iii)] There is a natural isomorphism
$$
H^1(G_{HS}(X,2))\xr{} H^3_{\mc{H},{\rm alg}}(X,\Q(2))/\left(H^1_{\mc{H}}(\C,\Q(1))\cdot H^2_{\mc{H}}(X,\Q(1))\right).
$$
\item[(iv)] The above maps form a commutative diagram
$$
\xymatrix
{
H^1(G_M(X,2))\ar[r]^{\cong} \ar[d]
&
H^3_M(X,\Q(2))\ar@{>>}[d]^{c_{2,1}}
\\
H^1(G_{\mc{H}}(X,2))\ar[r]^{\cong} \ar[d]
&
H^3_{{\mc{H}},{\rm alg}}(X,\Q(2))\ar[d]
\\
H^1(G_{HS}(X,2))\ar[r]^-{\cong}
&
H^3_{{\mc{H}},{\rm alg}}(X,\Q(2))/H^1_{{\mc{H}}}(\C,\Q(1))\cdot H^2_{{\mc{H}}}(X,\Q(1)),
}
$$
and
$$
c_{2,1}:H^3_M(X,\Q(2))\xr{} H^3_{{\mc{H}},{\rm alg}}(X,\Q(2))
$$
is surjective.
\end{itemize}
\end{proposition}
\begin{proof}
Statement (i) is proved in \cite{MS}.
\emph{Proof of (i) and (ii).} We use the coniveau spectral sequence
(Proposition \ref{proposition-coniveau-spectral-seq})
\begin{equation}
\label{equation-spectral-sequence}
E_{1}^{p,q}=\bigoplus_{x\in X^{(p)}} H^{q-p}_{?}(x,\Q(n-p)) \Rightarrow H^{p+q}_{?}(X,\Q(n)),
\end{equation}
where $?$ is $M$ or $\mc{H}$, and $0\leq p\leq n$. We have
$$
G_{?}(X,2)=E^{\bullet,2}_1
$$
for $n=2$.
We get $E_{2}^{1,2}=E_{\infty}^{1,2}$ and $E_{\infty}^{2,1}=0=E^{3,0}_{\infty}$
for obvious reasons. Therefore we obtain an exact sequence
\begin{equation}\label{equation-short-exact-sequence-H32}
0\xr{} E^{1,2}_{\infty}\xr{} H^3_{?}(X,\Q(2))\xr{} E^{0,3}_{\infty} \xr{} 0,
\end{equation}
with
$$
E^{1,2}_{\infty}=\ker(H^3_{?}(X,\Q(2))\xr{} H^3_{?}(\eta,\Q(2))).
$$
For $?=M$ we have $H^3_{M}(\eta,\Q(2))=0$; for $?=\mc{H}$ we define
$$
H^3_{{\mc{H}},{\rm alg}}(X,\Q(2)):=\ker(H^3_{\mc{H}}(X,\Q(2))\xr{} H^3_{\mc{H}}(\eta,\Q(2)))=N^1H^3_{\mc{H}}(X,\Q(2)).
$$
For (iii).
If $X$ is smooth then it is not difficult to see that
$$
{\rm Pic}(X)\otimes \Q = H^2_M(X,\Q(1)) \cong H^2_{\mc{H}}(X,\Q(1)).
$$
It follows that via the isomorphism
$H^1(G_{\mc{H}}(X,2))\cong H^3_{\mc{H},{\rm alg}}(X,\Q(2))$ the subgroup
$H^1_{\mc{H}}(\C,\Q(1))\cdot H^2_{\mc{H}}(X,\Q(1))$ of $H^3_{\mc{H},{\rm alg}}(X,\Q(2))$ corresponds to the image
of $\oplus_{x\in X^{(1)}}\C^*\otimes_{\Z}\Q$ in $H^1(G_{\mc{H}}(X,2))$.
For every point $x\in X^{(1)}$ we have an exact sequence
$$
0\xr{} \C^*\otimes_{\Z} \Q \xr{} H^1_{\mc{H}}(x,\Q(1))\xr{} \Hom_{MHS}(\Q,H^1(x,\Q)(1))\xr{} 0,
$$
and therefore
$$
\ker(H^1(G_{\mc{H}}(X,2))\xr{} H^1(G_{HS}(X,2)))={\rm im}(\oplus_{x\in X^{(1)}}\C^*\otimes_{\Z}\Q \xr{} H^1(G_{\mc{H}}(X,2))).
$$
This implies the claim.
Statement (iv) is obvious.
\end{proof}
\begin{remark}
For a smooth projective variety $X$ we know that
$$
H^1(G_{HS}(X,2))\xr{\cong} H^3_{{\mc{H}},{\rm alg}}(X,\Q(2))/\left(H^1_{\mc{H}}(\C,\Q(1))\cdot H^2_{\mc{H}}(X,\Q(1))\right)
$$
is a countable group \cite{MS}.
This is a consequence of the fact that
deformations $a'$ of a class $a\in H^3_M(X,\Q(2))$ have the same image via
$c_{2,1}$ modulo the group $H^1_{\mc{H}}(\C,\Q(1))\cdot H^2_{\mc{H}}(X,\Q(1))$.
There exist examples of K3-surfaces $X$ such that $H^1(G_{HS}(X,2))\neq 0$ \cite{MS}.
\end{remark}
\begin{definition}\label{definition-decomposable}
Let $X$ be smooth, connected and projective.
We denote by
$$
{\rm image}(H^1_M(\C,\Q(1))\cdot H^2_M(X,\Q(1)))=:H^3_M(X,\Q(2))_{{\rm dec}}\subset H^3_M(X,\Q(2))
$$
the subgroup of decomposable cycles. In the same way we define $H^3_{\mc{H}}(X,\Q(2))_{{\rm dec}}$.
\end{definition}
Note that
\begin{align} \label{align-shouldbealemma1}
\C^*\otimes_{\Z}\Q&=H^1_M(\C,\Q(1))\cong H^1_{\mc{H}}(\C,\Q(1)), \\
{\rm Pic}(X)\otimes_{\Z}\Q&=H^2_M(X,\Q(1))\cong H^2_{\mc{H}}(X,\Q(1)). \label{align-shouldbealemma2}
\end{align}
\begin{lemma}\label{lemma-injective-dec}
If $X$ is smooth, projective, and $H^1(X)=0$, then the maps
\begin{align*}
H^1_{M}(\C,\Q(1))\otimes_{\Q} H^2_{M}(X,\Q(1))&\xr{} H^3_M(X,\Q(2))\\
H^1_{\mc{H}}(\C,\Q(1))\otimes_{\Q} H^2_{\mc{H}}(X,\Q(1))&\xr{} H^3_{\mc{H}}(X,\Q(2))
\end{align*}
are injective. In particular,
$$
H^3_M(X,\Q(2))_{{\rm dec}} \xr{} H^3_{\mc{H}}(X,\Q(2))_{{\rm dec}}
$$
is an isomorphism.
\end{lemma}
\begin{proof}
By using the cycle map it is sufficient to prove the statement
for absolute Hodge cohomology. The assumption $H^1(X)=0$ implies
$$
H^2_{\mc{H}}(X,\Q(1)) \cong \Hom(\Q(-1),H^2(X,\Q)).
$$
The pure Hodge structure $H^2(X,\Q)$ is polarizable and therefore
$$
\Hom(\Q(-1),H^2(X,\Q))\otimes \Q(-1)\subset H^2(X,\Q)
$$
is a direct summand which we call ${\rm Hg}^{1,1}$. We get
\begin{multline*}
({\rm Hg}^{1,1}\otimes_{\Q} \C)/(2\pi i)^2\cdot {\rm Hg}_{\Q}^{1,1}={\rm Ext}^1(\Q(-2),{\rm Hg}^{1,1}) \\ \subset {\rm Ext}^1(\Q(-2),H^2(X,\Q)) \subset H^3_{\mc{H}}(X,\Q(2))
\end{multline*}
and clearly $H^1_{\mc{H}}(\C,\Q(1))\otimes_{\Q} H^2_{\mc{H}}(X,\Q(1))$ is
mapping isomorphically onto
$
({\rm Hg}^{1,1}\otimes_{\Q} \C)/(2\pi i)^2\cdot {\rm Hg}_{\Q}^{1,1}.
$
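Here we used the standard description of extension groups in ${\rm MHS}$: since ${\rm Hg}^{1,1}(2)$ is pure of type $(-1,-1)$, the Hodge filtration satisfies $F^0({\rm Hg}^{1,1}(2)\otimes_{\Q}\C)=0$, whence
$${\rm Ext}^1(\Q(-2),{\rm Hg}^{1,1}) \;=\; {\rm Ext}^1(\Q,{\rm Hg}^{1,1}(2)) \;=\; \frac{{\rm Hg}^{1,1}(2)\otimes_{\Q}\C}{{\rm Hg}^{1,1}(2)_{\Q}} \;=\; ({\rm Hg}^{1,1}\otimes_{\Q} \C)/(2\pi i)^2\cdot {\rm Hg}_{\Q}^{1,1}.$$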
Because of \eqref{align-shouldbealemma1} and \eqref{align-shouldbealemma2} we
conclude that
$$
H^3_M(X,\Q(2))_{{\rm dec}} \xr{} H^3_{\mc{H}}(X,\Q(2))_{{\rm dec}}
$$
is an isomorphism.
\end{proof}
\begin{proposition}\label{proposition-Gersten-H0}
Let $X$ be smooth and connected. Restriction to the generic point
yields the following equalities:
\begin{align}
H^2_M(X,\Q(2))&\xr{\cong} H^0(G_{M}(X,2)) \label{align-1-Gersten-H0} \\
H^2_{\mc{H}}(X,\Q(2))&\xr{\cong} H^0(G_{\mc{H}}(X,2)). \label{align-2-Gersten-H0}
\end{align}
\begin{proof}
We use the coniveau spectral sequence (Proposition \ref{proposition-coniveau-spectral-seq}) for $n=2$. We have
$$
E^{0,2}_{2,?}=H^0(G_?(X,2))
$$
for $?=M$ and $?=\mc{H}$. Note that $E^{2,q}_{1,?}=0$ for $q\neq 2$. Thus
$E^{0,2}_{2,?}=E^{0,2}_{\infty,?}$ and $E^{2,0}_{\infty,?}=0$. Moreover,
$E^{1,1}_{1,?}=0$, because $H^0_?(U,\Q(1))=0$ for every $U$. It follows
that $E^{1,1}_{\infty,?}=0$ and
$$
H^2_{?}(X,\Q(2))=E^{0,2}_{\infty,?}=E^{0,2}_{2,?}.
$$
\end{proof}
\end{proposition}
\begin{lemma}\label{lemma-stupid-reduction-genpoint}
Let $X$ be smooth and connected. We denote by $\eta$ the generic point of $X$.
If
$$
H^2_M(\eta,\Q(2))\xr{} H^2_{\mc{H}}(\eta,\Q(2))
$$
is surjective then
$$
H^2_M(X,\Q(2))\xr{} H^2_{\mc{H}}(X,\Q(2))
$$
is surjective.
\begin{proof}
We use Proposition \ref{proposition-Gersten-H0} and need to prove that
$$
H^0(G_{M}(X,2))\xr{} H^0(G_{\mc{H}}(X,2))
$$
is surjective.
Since $H^1_M(x,\Q(1))=H^1_{\mc{H}}(x,\Q(1))$ for every point $x\in X$ of codimension
$=1$ this follows immediately from diagram \eqref{diagram-Gersten}.
\end{proof}
\end{lemma}
\begin{proposition}\label{proposition-HS-Gersten-H0}
If $X$ is smooth, connected and projective then
\begin{equation*}
H^0(G_{HS}(X,2))=0.
\end{equation*}
\begin{proof}
Let $\eta\in X$ be the generic point.
For
$$a\in H^0(G_{HS}(X,2))\subset \Hom(\Q(-2),H^2(\eta,\Q))$$
we can find an
effective divisor $D$ such that $a$ is induced by a cohomology class
$a'\in \Hom(\Q(-2),H^2(X\backslash D,\Q))$. Let $S$ be a closed subset of
$X$ of codimension $\geq 2$, such that $D\backslash S$ is smooth. Denoting
$X':=X\backslash S$, we claim that $a'\mid_{X'\backslash D}$ maps to zero
in $H^1(D\backslash S,\Q)(-1)$ via the boundary map of the localization
sequence for singular cohomology. Indeed, the map
$$
\Hom(\Q(-1), H^1(D\backslash S,\Q))\xr{} \bigoplus_{x\in X^{(1)}}\Hom(\Q(-1), H^1(x,\Q))
$$
is injective and therefore the claim follows from $a\in H^0(G_{HS}(X,2))$.
Now $a'\mid_{X'\backslash D}$ defines an extension of Hodge structures
$$
0\xr{} {\rm image}(\Q(-1)^{\pi_0(D\backslash S)})\xr{} E \xr{} \Q(-2)\xr{} 0,
$$
with $E\subset H^2(X',\Q)$.
We note that $H^2(X',\Q)=H^2(X,\Q)$ is a pure Hodge structure of weight $=2$,
and therefore the same holds for $E$. Thus the extension is trivial and
$a'\mid_{X'\backslash D}$ lifts to $\Hom(\Q(-2),H^2(X',\Q))=0$.
This proves that $a'\mid_{X'\backslash D}=0$ and implies $a=0$.
\end{proof}
\end{proposition}
\subsection{An exact sequence for projective varieties with vanishing $H^1$}
\begin{lemma}\label{lemma-reglobalisation}
Let $X$ be smooth, projective and connected.
Suppose that $H^1(X)=0$.
We denote by $\eta$ the
generic point of $X$.
There is an exact sequence
$$
H^2_M(\eta,\Q(2))\xr{} H^2_{\mc{H}}(\eta,\Q(2)) \xr{} H^3_M(X,\Q(2)) \xr{} H^3_{{\mc{H}}}(X,\Q(2)).
$$
\begin{proof}
Via Proposition \ref{proposition-cohomology-Gersten} we identify
$$
H^3_M(X,\Q(2))\cong H^1(G_M(X,2)), \quad H^1(G_{\mc{H}}(X,2))\subset H^3_{{\mc{H}}}(X,\Q(2))
$$
and need to show that there is an exact sequence
$$
H^2_M(\eta,\Q(2))\xr{} H^2_{\mc{H}}(\eta,\Q(2)) \xr{} H^1(G_M(X,2)) \xr{} H^1(G_{\mc{H}}(X,2)).
$$
We work with diagram \eqref{diagram-Gersten}. The map
$$
H^2_{\mc{H}}(\eta,\Q(2))\xr{} H^1(G_M(X,2))
$$
is defined by using $E^{1,2}_{1,M}=E^{1,2}_{1,\mc{H}}$.
The assumptions on $X$
imply that $H^2_{\mc{H}}(X,\Q(2))=0$ and therefore $H^0(G_{\mc{H}}(X,2))=0$ by
Proposition \ref{proposition-Gersten-H0}. The rest of the proof
involves only diagram chasing.
\end{proof}
\end{lemma}
\subsection{Main theorem}
\begin{thm}\label{main-thm}
Let $X$ be smooth and connected. Let $\bar{X}$ be a smooth compactification of $X$.
We denote by ${\rm CH}_0(\bar{X})\otimes_{\Z}\Q$ the Chow group of zero cycles on $\bar{X}$.
If $\deg:{\rm CH}_0(\bar{X})\otimes_{\Z}\Q \xr{} \Q$ is an isomorphism then $BH(X,2)$ holds.
\end{thm}
\begin{proof}
In view of Lemma \ref{lemma-stupid-reduction-genpoint}
it is sufficient to show that
$
\bar{c}_{2,2}:H^2_M(\eta,\Q(2))\xr{} H^2_{\mc{H}}(\eta,\Q(2))
$
is surjective, where $\eta$ is the generic point on $X$.
Now, choose a smooth projective
model $Y$ of $\eta$. Since $\bar{X}$ and $Y$ are birational we conclude that
${\rm CH}_0(Y)\otimes_{\Z}\Q\cong \Q$. It follows that $H^1(Y)=0$.
We claim that
$$
H^3_M(Y,\Q(2))_{{\rm dec}} = H^3_M(Y,\Q(2)).
$$
This implies the theorem by using Lemma \ref{lemma-reglobalisation}, because
$$
H^3_M(Y,\Q(2))_{{\rm dec}} \subset H^3_{\mc{H}}(Y,\Q(2))_{{\rm dec}}
$$
by Lemma \ref{lemma-injective-dec}.
In view of Proposition \ref{proposition-cohomology-Gersten} and \eqref{equation-H1K2}
it is sufficient to prove that the cokernel of
$$
\C^*\otimes_{\Z} {\rm Pic}(Y)\xr{} H^1(Y,\mc{K}_2)
$$
is torsion. This is proved in \cite[Theorem~3(i)]{BS}.
\end{proof}
\subsection{An exact sequence for projective varieties}
\begin{proposition}\label{proposition-ex-seq-general}
Let $X$ be smooth, projective and connected.
We denote by $\eta$ the generic point of $X$.
There is an exact sequence
\begin{multline*}
H^2_M(\eta,\Q(2))\xr{} \Hom(\Q(-2),H^2(\eta,\Q)) \xr{} H^3_M(X,\Q(2))/H^3_M(X,\Q(2))_{{\rm dec}} \\ \xr{} H^3_{{\mc{H}}}(X,\Q(2))/H^3_{{\mc{H}}}(X,\Q(2))_{{\rm dec}}.
\end{multline*}
\begin{proof}
Via Proposition \ref{proposition-cohomology-Gersten} we identify
\begin{align*}
H^1(G_M(X,2)) &\cong H^3_M(X,\Q(2)),\\
H^1(G_{HS}(X,2))&\subset H^3_{\mc{H}}(X,\Q(2))/H^3_{\mc{H}}(X,\Q(2))_{{\rm dec}}.
\end{align*}
We work with diagram \eqref{diagram-Gersten}. For the map
$$
\Hom(\Q(-2),H^2(\eta,\Q)) \xr{} H^3_M(X,\Q(2))/H^3_M(X,\Q(2))_{{\rm dec}}
$$
we observe that
$$
0\xr{} \bigoplus_{x\in X^{(1)}} \C^*\otimes \Q \xr{} \bigoplus_{x\in X^{(1)}} H^1_M(x,\Q(1)) \xr{} \bigoplus_{x\in X^{(1)}} \Hom(\Q,H^1(x,\Q)(1)) \xr{} 0
$$
is exact and
$$
H^3_M(X,\Q(2))_{{\rm dec}}={\rm im}(\bigoplus_{x\in X^{(1)}} \C^*\otimes \Q \xr{} H^3_M(X,\Q(2))).
$$
The assumptions on $X$
imply $H^0(G_{HS}(X,2))=0$ by
Proposition \ref{proposition-HS-Gersten-H0}.
The rest of the proof
involves only diagram chasing.
\end{proof}
\end{proposition}
\begin{remark}
Proposition \ref{proposition-ex-seq-general} has also been proved by de Jeu and Lewis in \cite[Corollary~4.14]{JL}, and more generally with integral coefficients in \cite[Corollary~6.5]{JL}.
\end{remark}
\begin{proposition}\label{proposition-reduction-to-generic-point}
Let $X$ be smooth, projective and connected. Suppose that $H^1(X,\Q)=0$.
The following statements are equivalent.
\begin{enumerate}
\item $BH(U,2)$ holds for all open subsets $U$ of $X$.
\item $BH(\eta,2)$ holds for the generic point $\eta$ of $X$.
\end{enumerate}
\begin{proof}
Only $(2)\Rightarrow (1)$ is interesting. Proposition \ref{proposition-ex-seq-general}
implies that
$$H^3_M(X,\Q(2))/H^3_M(X,\Q(2))_{{\rm dec}} \xr{} H^3_{{\mc{H}}}(X,\Q(2))/H^3_{{\mc{H}}}(X,\Q(2))_{{\rm dec}}$$
is injective. From Lemma \ref{lemma-injective-dec}, we see that
$$
H^3_M(X,\Q(2))_{{\rm dec}} \xr{} H^3_{{\mc{H}}}(X,\Q(2))_{{\rm dec}}
$$
is an isomorphism. Thus
$$
H^3_M(X,\Q(2)) \xr{} H^3_{{\mc{H}}}(X,\Q(2))
$$
is injective. It follows from Lemma \ref{lemma-reglobalisation} that
$$
H^2_M(\eta,\Q(2)) \xr{} H^2_{{\mc{H}}}(\eta,\Q(2))
$$
is surjective. Lemma \ref{lemma-stupid-reduction-genpoint} applied to $U$
implies the claim.
\end{proof}
\end{proposition}
\begin{question}
Suppose $X$ is smooth, connected, projective, but $H^1(X)\neq 0$.
We denote by $\eta$ the generic point of $X$.
What is the relation between $BH(U,2)$, for all $U$ open in $X$,
and the surjectivity of
$$
H^2_M(\eta,\Q(2))\xr{} \Hom(\Q(-2),H^2(\eta,\Q))?
$$
\end{question}
The following survey is by no means the first one ever conducted on this topic. For instance, in 1997, some 48 participants of a UMBC quantum mechanics workshop were asked by Tegmark \cite{Tegmark} about their favored interpretation of quantum mechanics. Although the Copenhagen interpretation won, obtaining 13 votes (only topped by the "None of the above/undecided" category that received 18 votes), the many-worlds interpretation followed closely with 8 votes. According to Tegmark, this "indicated a rather striking shift in opinion compared to the old days when the Copenhagen interpretation reigned supreme."\\
Most recently, Schlosshauer et al.\cite{Zeilinger} published a poll conducted among the 35 participants of the conference "Quantum Physics and the Nature of Reality" at the International Academy Traunkirchen, Austria, organized by A.~Zeilinger.
This latter study serves as a model for the present work. Questionnaire and methodology are identical except for one change: to the question "What is your favorite interpretation of quantum mechanics" (question 12), the answer "shut up and calculate" has been included, as already suggested by Schlosshauer et al. The polled group is similar in affiliation to theirs, but probably at an earlier stage of their careers. It will be interesting to compare the results and give a potentially different snapshot of opinions on the foundations of quantum mechanics.
The occasion for this survey was given in early 2013 by a conference on the "Philosophy of Quantum Mechanics" that took place in a remote hut in the Black Forest, Germany.
Of the 21 participants of this conference, 18 turned in their completed questionnaires. Of these, 12 stated their affiliation as physics, 5 as mathematics and 2 as philosophy (multiple or no answers were possible). Most of them were late master's or early PhD students.
Naturally, such a small sample is not representative of all physicists or people concerned with quantum mechanics. Hence, this study is merely of informative character, but may nonetheless be interesting. The questionnaire and methodology have been intentionally chosen almost identical to those of Schlosshauer et al. \cite{Zeilinger} in order to allow comparison, without taking the small size of the sample all too seriously.
This report is organized as follows. In Sec. \ref{sec:results} the results of the poll are discussed.
In the subsequent Sec. \ref{sec:corr}, correlations among the answers are shown.
Discussion and summary of the results are given in Sec. \ref{sec:dis} and \ref{sec:sum}.
For the sake of completeness, in Appendix \ref{sec:corrtab} the whole correlation table is added.
\section{\label{sec:results}Results}
In this section, the results of the poll are to be discussed. The questionnaires were handed out at the end of the conference and collected soon afterwards. Multiple answers were possible and sometimes no answer was checked. No additional explanation concerning the meaning of questions or answers was given.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-1}
\end{center}
Quite contrary to the polls of Schlosshauer et al.\cite{Zeilinger} and Tegmark \cite{Tegmark}, 3 people favored the de Broglie-Bohm \cite{deBroglie,Bohm1,Bohm2} and no one the Everett interpretation (see Ref. \cite{Tegmark} and references therein) of quantum mechanics (compare question 12). The comparatively high support for hidden determinism can thus be understood, because the interpretation of Bohm might have been considered well compatible with this answer by some participants.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-2}
\end{center}
The trend of answers to this question resembles that of the corresponding one of Schlosshauer et al.\cite{Zeilinger}. There is a virtual draw between affirmative and negative answers. As could be understood from some write-ins, the vagueness of the words "properties" and "measurement" contributed to this draw.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-3}
\end{center}
The result concerning Einstein's view of quantum mechanics is clearly negative. Although 17\% asserted in the first question that the randomness of individual quantum events is only apparent, all participants either found Einstein's view of quantum mechanics wrong or at least remained hesitant. Important for this outcome is also the fact that it was never specified what Einstein's view actually is.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-4}
\end{center}
Concerning Bohr's view of quantum mechanics, the great majority of people maintained a rather reserved attitude, and only a few were willing to support or reject it. However, nobody felt confident enough to call Bohr's view correct per se.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-5}
\end{center}
Obviously, most participants regard the measurement problem as solved, or at least as not posing a serious threat to the general theory. On some questionnaires, though, the words "will be solved in another way" were underlined or otherwise highlighted, making clear that this problem may still be anything but settled.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-6}
\end{center}
Most participants found that the implication of Bell's inequalities is "some notion of nonlocality". Since the word "realism" always led to lively discussions at the conference, the small support for the first answer can be understood: many people may have wanted to evade the complex of problems connected to this notion.
The participants unanimously agreed that the implication is \textit{not} that unperformed measurements have no results.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-7}
\end{center}
In regard to quantum information, most participants again remained conservative and said that they have to wait and see. This is quite contrary to the study of Schlosshauer et al. \cite{Zeilinger}, where almost three fourths of the respondents called it a breath of fresh air for quantum foundations. Obviously, the answers to this question depend strongly on the community being polled.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-8}
\end{center}
There were many write-ins like "No idea." or "Define 'working' and 'useful'!". Given this indeterminacy, a good fifth does not believe that we will ever have such a thing as a quantum computer, while the rest of the answers peak at "in 10 to 25 years", showing a general optimism regarding the progress in this field.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-9}
\end{center}
The votes are scattered approximately uniformly over the possible answers. The purely statistical interpretation received considerably more votes than in the poll of Schlosshauer et al. \cite{Zeilinger}.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-10}
\end{center}
The conference participants attribute a special role to the observer. One third see the observer as a complex (quantum) system, and two thirds consider the observer either important for the application of the formalism or even as fulfilling a distinguished physical role. Probably, the much discussed (quantum) mind-body problem, which was also hotly debated at the conference, played a part in this question.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-11}
\end{center}
A third of the respondents wait for a new, deeper theory than quantum mechanics, and another third are convinced that reconstructions of quantum theory can, in principle, give useful insights, but do not relieve one of the need to find a proper interpretation.
\begin{center}
\includegraphics[width=.8\linewidth]{./Question-12}
\end{center}
This is a somewhat surprising result. The otherwise dominant Copenhagen interpretation loses ground to the de Broglie-Bohm interpretation \cite{deBroglie,Bohm1,Bohm2}. This is partly the influence of decided minorities in small populations, since the participants of the conference were anything but representative of the whole physics community. Not surprisingly, the outcome differs markedly from the distributions observed by Tegmark \cite{Tegmark} or Schlosshauer et al. \cite{Zeilinger}.
Interestingly, the "Shut up and calculate." interpretation was the other big winner of this question.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-13}
\end{center}
The result shows a skewed distribution with a trend towards having changed the favorite interpretation several times. A third of the participants do not even have a preferred interpretation. It is striking, however, that the answer "I have no preferred interpretation." did not receive the same number of votes as in Question 12, but at least there is a strong correlation (compare Sec. \ref{sec:corr} and especially Fig. \ref{fig:strongcor}). Also from the correlations of Fig. \ref{fig:strongcor}, one can deduce that the people who stated that they had switched interpretation several times were likely to also state that the interpretation one actually adopts depends a lot on personal philosophical prejudice.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-14}
\end{center}
Clearly, most of the participants think that the actual choice of interpretation is a matter of philosophical prejudice. Due to this high support, this answer could also be found quite often in the correlation table of Appendix \ref{sec:corrtab}. It seems that supporters of many different interpretations had to accept that, in the absence of experimental evidence, the choice of interpretation is to a high degree a product of personal preference, where reasons like philosophical beliefs or mathematical esthetics in the formulation of the theory play a decisive role.
The same trend was also observed by Schlosshauer et al. \cite{Zeilinger}.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-15}
\end{center}
More than half of the participants deemed the superposition of macroscopically distinct states possible in principle. A good third even think that such superpositions will be realized experimentally. This optimistic view is in agreement with the poll of Schlosshauer et al. \cite{Zeilinger}.
\begin{center}
\includegraphics[width=.7\linewidth]{./Question-16}
\end{center}
At least the good spirits did not drop at the end of this conference, since there is a strong belief that there will be conferences on this topic even 50 years from now. Moreover, there already seem to be willing organizers (11\%)\dots
\section{\label{sec:corr}Correlations}
It is quite enlightening to look for correlations in the answers. The criteria are chosen as in Schlosshauer et al. \cite{Zeilinger}. To suppress noise, any answer A to be correlated has to be checked by at least 4 participants. An answer B is called correlated to A if the fraction of votes that B received from A voters exceeds not only a threshold value $T$, but also the total fraction of votes for B by some gap value $G$, i.e., $\#(A\cap B)/\#A\geq T$ and $\#(A\cap B)/\#A\geq (1+G) \cdot \#B/18$. The pair $(T,G)$ can be $(100\%,20\%)$ (strong correlation), $(75\%,15\%)$ (medium correlation) or $(50\%,10\%)$ (weak correlation).
Strong correlations alone are shown in Fig. \ref{fig:strongcor}, and together with medium correlations in Fig. \ref{fig:medcor}. There are too many weak correlations to present in a diagram, so they are listed in Appendix \ref{sec:corrtab} for reference.
Note that due to the nature of the conditional probability, the above definition of correlation is not symmetric. As in the study of Schlosshauer et al. \cite{Zeilinger}, surprisingly few mutual correlations appear.
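Stated in code, the criterion reads as follows. The Python sketch below assumes that each answer is represented by the set of IDs of the respondents who checked it; the example sets are made up for illustration and do not correspond to the actual questionnaire data.
\begin{verbatim}
N = 18  # number of returned questionnaires

def correlated(A, B, T, G):
    """True if answer B is correlated to answer A
    at threshold T and gap G."""
    if len(A) < 4:                 # noise suppression
        return False
    frac = len(A & B) / len(A)     # fraction of A voters who chose B
    return frac >= T and frac >= (1 + G) * len(B) / N

# Illustrative (made-up) respondent sets:
A = {1, 2, 3, 4, 5}
B = {2, 3, 4, 5, 9}
print(correlated(A, B, T=0.75, G=0.15))  # medium correlation: True
\end{verbatim}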
\begin{figure}[htp]
\begin{center}
\includegraphics[width=.9\linewidth]{./Korrelation2}
\end{center}
\caption{Strong correlations between answers are shown. All answers have been checked by at least 4 participants. An arrow pointing from an answer A to answer B means that B is strongly correlated to A. For the definition of correlation, see Sec. \ref{sec:corr}.}
\label{fig:strongcor}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=\linewidth]{./Korrelation8}
\end{center}
\caption{Strong and medium correlations between answers are shown. All answers have been checked by at least 4 participants. A fat (red) arrow pointing from an answer A to answer B means that B is strongly correlated to A, a regular (blue) arrow means medium correlation. For the definition of correlation, see Sec. \ref{sec:corr}. Arrows always originate from centers of the sides of rectangles.}
\label{fig:medcor}
\end{figure}
\section{\label{sec:dis}Discussion}
In this section, a short discussion of the findings of this poll shall be given. There were some questions that found large support, most prominently the question whether the choice of interpretation is a matter of philosophical prejudice, where 78\% of the answers were strongly affirmative.
Also, there seems to be much confidence in experimental progress, because in question 15 a clear majority is convinced that the superposition of macroscopically distinct states is possible in principle (56\%), and 39\% think that it will even be realized experimentally.
The opinion that in 50 years there will still be conferences devoted to quantum foundations shows that this problem is taken seriously by many people. Of course, there is always some bias when such a question is posed at a conference that deals explicitly with this problem.
Einstein's and Bohr's views found no unrestricted support, as nobody was willing to answer that either view is correct without limitation.
As became clear from the comparison with the study of Schlosshauer et al. \cite{Zeilinger}, the views and opinions concerning the foundations of quantum mechanics depend on the population being polled; nevertheless, many parallels could be drawn.
\section{\label{sec:sum}Summary}
In summary, another small snapshot of the views on the foundations of quantum theory was presented. While the details of such snapshots depend on the respondents who share their views, it was shown that there is still much controversy about many questions that lie at the very foundations of quantum mechanics. Despite a general optimism towards experimental progress and theoretical effort, many people believe that there will still be conferences on this matter in 50 years. All the more, it is important to stay open to discussion, given this multitude of convictions and interpretations.
\begin{acknowledgments}
The author wishes to thank J.~Kleiner for the invitation and the rest of the conference attendees for many interesting discussions and their willingness to participate in this poll on the foundations of quantum mechanics. Additionally, the kind support of the ``Studienstiftung des deutschen Volkes'' (German National Merit Foundation) is gratefully acknowledged.
\end{acknowledgments}
\section{Introduction}
Quantum error-correcting codes play an important role in quantum
computing and quantum communication. Just as in classical coding theory, one central theme in quantum error-correction is the construction
of quantum codes that have good parameters. In \cite{CRSS98}, Calderbank \emph{et al.} presented an effective method for constructing good
quantum codes, using mathematical techniques that made it possible to obtain quantum codes
from classical codes over $\mathbb{F}_{2}$ or $\mathbb{F}_{4}$. Rains \cite{R99}, Ashikhmin and Knill \cite{AK01} then generalized their results to the nonbinary cases. In particular, one can construct quantum codes via classical codes with Euclidean or Hermitian self-orthogonality properties.
Let $q$ be a prime power. A $q$-ary quantum code is just a vector subspace of the Hilbert space $(\mathbb{C}^{q})^{\bigotimes n}\cong\mathbb{C}^{q^{n}}$, where $\mathbb{C}$ is the field of complex numbers and $n$ is called the length of the quantum code. We use $((n, K, d))_{q}$ or $[[n, k, d]]_{q}$ to denote a $q$-ary quantum code of length
$n$, dimension $K$ and minimum distance $d$, where $k=\log_{q}K$. An $[[n, k, d]]_{q}$ quantum code can detect up to $d - 1$ quantum errors and correct up
to $\lfloor\frac{d-1}{2}\rfloor$ quantum errors. Thus for fixed $n$ and $k$, it is desirable to construct $[[n, k, d]]_{q}$-quantum codes with minimum distance $d$ as large as possible. However, similar to
the classical Singleton bound, the parameters of an $[[n, k, d]]_{q}$ quantum code have to satisfy the quantum Singleton bound:
\begin{lemma} (\cite{R99,AK01,KKKS06} Quantum Singleton Bound)\label{lem1.1}
For any $[[n, k, d]]_{q}$ quantum code, we have
\[2d \leq n - k + 2.\]
\end{lemma}
A quantum code achieving this quantum Singleton bound is called a \emph{quantum maximum-distance-separable
(MDS) code}. Just as in the classical case, it is desirable to find more constructions of quantum MDS codes.
In 2001, Ashikhmin and Knill \cite{AK01} gave the following useful theorem
for constructing quantum stabilizer codes from classical codes.
\begin{theorem} (Hermitian Construction)\label{thm1.2}
If there exists an $[n, k, d]_{q^{2}}$-linear code $C$ with $C^{\perp_{H}} \subseteq C$, where $C^{\perp_{H}}$ is the Hermitian dual code of $C$, then there exists an $[[n, 2k-n, \geq d]]_{q}$-quantum code.
\end{theorem}
Note that the Hermitian dual code of an MDS code is still an MDS code. So we replace the code $C$ by its Hermitian dual $C^{\perp_{H}}$ in Theorem \ref{thm1.2} and obtain the following corollary for the quantum MDS codes.
\begin{corollary} (Hermitian Construction for Quantum MDS Codes)\label{cor1.3}
If there exists an $[n, k, n-k+1]_{q^{2}}$-MDS code $C$ with $C \subseteq C^{\perp_{H}}$, then there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
Given a quantum MDS code, we can obtain a new quantum code with smaller length and minimum distance by the following lemma.
\begin{lemma} (\cite{GR15} Propagation Rule)\label{lem1.4}
If there exists an $[[n, n-2d+2, d]]_{q}$-quantum MDS code, then there exists an $[[n-1, n-2d+3, d-1]]_{q}$-quantum MDS code.
\end{lemma}
In the past decade, a lot of research has been done on the construction of quantum MDS codes, and several new families of quantum MDS codes have been found by employing different methods. If the classical MDS conjecture is true, then there are no $q$-ary quantum MDS codes of length $n$ exceeding $q^{2}+1$, except when $q$ is even and $d=4$ or $d=q^{2}$, in which case $n \leq q^{2}+2$ (see \cite{KKKS06}). Quantum MDS codes of length up to $q + 1$ have been constructed for all possible dimensions through classical Euclidean self-orthogonal codes (see \cite{RGB04, GBR04, JX12}). Owing to the constraint of Euclidean self-orthogonality, the minimum distance of these quantum MDS codes is less than or equal to $\frac{q}{2}+1$. Thus, Hermitian self-orthogonal codes have been applied to construct quantum MDS codes with larger minimum distance. Some quantum MDS codes of length $n$ with specific values $n=q^{2}+1, q^{2}, \frac{q^{2}+1}{2}$ and minimum distance $d >q/2+1$ have been obtained (see \cite{GBR04, G11, KZ12}). Due to their elegant algebraic structures, constacyclic codes, pseudo-cyclic codes and generalized Reed-Muller codes have also been used to construct some quantum MDS codes of length $n$ with $q+1 < n \leq q^{2}+1$ and relatively large minimum distance (see \cite{KZ12, AKS07, KZL14, WZ15, ZG15, ZC14, CLZ15, LMG16, SK05}). In \cite{LXW08}, Li \emph{et al.} first presented a unified framework for constructing quantum MDS codes by employing classical generalized Reed-Solomon (GRS) codes. Jin \emph{et al.} \cite{JLLX10}, Jin and Xing \cite{JX12,JX14} generalized and developed the method in \cite{LXW08}, and constructed several new families of quantum MDS codes with flexible parameters. Since then, GRS codes have been widely applied to constructing quantum MDS codes with minimum distance larger than $\frac{q}{2}+1$ (see \cite{JKW17,ZG17,SYZ17,FF18}).
In this paper, we will construct some new quantum MDS codes with relatively large minimum distance through classical Hermitian self-orthogonal GRS codes. The key point of constructing Hermitian self-orthogonal GRS codes is to find suitable evaluation points $a_{1}, a_{2}, \ldots, a_{n} \in \mathbb{F}_{q^{2}}$, such that a certain system
of homogeneous equations over $\mathbb{F}_{q^{2}}$ related to these evaluation points has solutions over $\mathbb{F}_{q}^{*}$ (see Lemma \ref{lem2.1} and Remark \ref{rem2.2}). In \cite{JX14}, Jin and Xing first chose a class of multiplicative subgroups of $\mathbb{F}^{*}_{q^{2}}$ as the evaluation points to construct Hermitian self-orthogonal GRS codes. In \cite{ZG17, SYZ17}, the authors generalized the method of \cite{JX14} and considered some multiplicative subgroups of $\mathbb{F}^{*}_{q^{2}}$ and their cosets as the evaluation points. In the present paper, we consider some multiplicative subgroups of $\mathbb{F}^{*}_{q^{2}}$ and their cosets with more general parameters. Moreover, we add the zero element to them so that we can provide more constructions of new quantum MDS codes with longer lengths. Consequently, some known results can be easily derived from ours by the propagation rule of Lemma \ref{lem1.4}. More precisely, we provide some $[[n, n - 2k, k + 1]]_{q}$-quantum MDS codes with the
following parameters:
\begin{description}
\item[\textnormal{(i)}] $n=1+r\frac{q^{2}-1}{s}$, and $1 \leq k \leq r\frac{q-1}{s}$, where $s \mid (q-1)$ and $1 \leq r \leq s$ (See Theorem \ref{thm3.2});
\item[\textnormal{(ii)}] $n=1+r\frac{q^{2}-1}{2s+1}$, and $1 \leq k \leq (s+1)\frac{q+1}{2s+1}-1$, where $q > 2$, $(2s+1) \mid (q+1)$ and $1 \leq r \leq 2s+1$ (See Theorem \ref{thm4.3} (i));
\item[\textnormal{(iii)}] $n=1+(2t+1)\frac{q^{2}-1}{2s+1}$, and $1 \leq k \leq (s+t+1)\frac{q+1}{2s+1}-1$, where $q > 2$, $(2s+1) \mid (q+1)$ and $0 \leq t \leq s-1$ (See Theorem \ref{thm4.3} (ii));
\item[\textnormal{(iv)}] $n=1+r\frac{q^{2}-1}{2s}$, and $1 \leq k \leq (s+1)\frac{q+1}{2s}-1$, where $2s \mid (q+1)$ and $2 \leq r \leq 2s$ (See Theorem \ref{thm5.3} (i));
\item[\textnormal{(v)}] $n=1+(2t+2)\frac{q^{2}-1}{2s}$, and $1 \leq k \leq (s+t+1)\frac{q+1}{2s}-1$, where $2s \mid (q+1)$ and $0 \leq t \leq s-2$ (See Theorem \ref{thm5.3} (ii));
\item[\textnormal{(vi)}] $n=(2t+1)\frac{q^{2}-1}{2s}$, and $1 \leq k \leq (s+t)\frac{q+1}{2s}-2$, where $2s \mid (q+1)$ and $1 \leq t \leq s-1$ (See Theorem \ref{thm6.3}).
\end{description}
We make some remarks as follows:
\begin{enumerate}
\item The minimum distances of quantum MDS codes of cases (i)-(vi) can be larger than or equal to $\frac{q}{2}+1$ (for case (i), we let $\frac{r}{s}\geq \frac{q}{2(q-1)}$);
\item Applying the propagation rule (see Lemma \ref{lem1.4}) for cases (i), (iv) and (v), we obtain the results presented in \cite[Theorem 4.12]{SYZ17}, \cite[Theorem 4.2]{ZG17} and \cite[Theorem 4.8]{SYZ17}, respectively;
\item The case (ii) extends the result of \cite[Theorem 3.2 (i)]{JKW17} where a stricter condition $\textnormal{gcd}(r,q) = 1$ is required;
\item When $r=2t+1$ (resp. $r=2t+2$) and $t >0$, the codes from case (iii) (resp. (v)) have the same length but larger minimum distance than that of case (ii) (resp. (iv));
\item When $t \geq 2$, the quantum MDS codes from case (vi) have larger minimum distance than that of \cite[Theorem 4.2]{ZG17}.
\end{enumerate}
We list some examples of $[[n, n-2k, k+1]]_{q}$-quantum MDS codes from our constructions as follows.
\begin{description}
\item[(i)] $5 \mid (q-1)$, $n=1+\frac{4}{5}(q^{2}-1)$, $1 \leq k \leq \frac{4}{5}(q-1)$;
\item[(ii)] $5 \mid (q+1)$, $n=1+\frac{2}{5}(q^{2}-1)$, $1 \leq k \leq \frac{3}{5}(q+1)-1$;
\item[(iii)] $7 \mid (q+1)$, $n=1+\frac{5}{7}(q^{2}-1)$, $1 \leq k \leq \frac{6}{7}(q+1)-1$;
\item[(iv)] $4 \mid (q+1)$, $n=1+\frac{3}{4}(q^{2}-1)$, $1 \leq k \leq \frac{3}{4}(q+1)-1$;
\item[(v)] $6 \mid (q+1)$, $n=1+\frac{2}{3}(q^{2}-1)$, $1 \leq k \leq \frac{5}{6}(q+1)-1$;
\item[(vi)] $8 \mid (q+1)$, $n=\frac{7}{8}(q^{2}-1)$, $1 \leq k \leq \frac{7}{8}(q+1)-2$.
\end{description}
To the best of our knowledge, all the above quantum MDS codes are new.
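For concreteness, the following Python sketch evaluates three of the example families above at small admissible prime powers; the printed triples are the $[[n, n-2k, k+1]]_{q}$ parameters at the largest admissible $k$. The chosen values of $q$ are ours and serve only as an illustration.
\begin{verbatim}
def params(n, k):                # the triple [[n, n-2k, k+1]]_q
    return (n, n - 2 * k, k + 1)

q = 11                           # case (i): 5 | (q - 1)
n = 1 + 4 * (q * q - 1) // 5
print(params(n, 4 * (q - 1) // 5))       # (97, 81, 9)

q = 7                            # case (iv): 4 | (q + 1)
n = 1 + 3 * (q * q - 1) // 4
print(params(n, 3 * (q + 1) // 4 - 1))   # (37, 27, 6)

q = 7                            # case (vi): 8 | (q + 1)
n = 7 * (q * q - 1) // 8
print(params(n, 7 * (q + 1) // 8 - 2))   # (42, 32, 6)
\end{verbatim}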
The rest of this paper is organized as follows. In Section 2, we recall some basic results about Hermitian self-orthogonality and generalized Reed-Solomon codes. In Sections 3, 4, 5 and 6, we present six new classes of quantum MDS codes from generalized Reed-Solomon codes. We conclude this paper in Section 7.
\section{Preliminaries}
\label{sec:1}
In this section, we briefly review some basic results about Hermitian self-orthogonality and
generalized Reed-Solomon (GRS for short) codes. In addition, some technical lemmas for our constructions are also presented.
Let $q$ be a prime power. Let $\mathbb{F}_{q}$ be the finite field with $q$ elements and $\mathbb{F}_{q}^{*}$ be the multiplicative
group of nonzero elements of $\mathbb{F}_{q}$. A $q$-ary $[n, k, d]$-linear code is just a vector subspace of $\mathbb{F}_{q}^{n}$ with dimension $k$ and minimum Hamming distance $d$, and $n$ is called the length of the code. It is well known that $n$, $k$ and $d$ have to satisfy the Singleton bound: $d \leq n-k+1$. A code achieving the Singleton bound is called a \emph{maximum distance separable} (MDS) code.
Throughout this paper, we denote the all zero vector by $\textbf{0}$. For a vector $\textbf{c}=(c_{1}, \ldots, c_{n}) \in \mathbb{F}_{q^{2}}^{n}$, we denote by $\textbf{c}^{i}$ the vector $(c_{1}^{i}, \ldots, c_{n}^{i})$. And $0^{0}$ is set to be 1. For any two vectors $\textbf{x}=(x_{1}, \ldots, x_{n}) \in \mathbb{F}^{n}_{q^{2}}$ and $\textbf{y}=(y_{1}, \ldots, y_{n}) \in \mathbb{F}^{n}_{q^{2}}$, the usual Euclidean product of $\textbf{x}$ and $\textbf{y}$ is defined as $\langle \textbf{x} , \textbf{y} \rangle\triangleq\sum_{i=1}^{n}x_{i}y_{i}$. For a linear code $C$ of length $n$ over $\mathbb{F}_{q^{2}}$,
the Euclidean dual code of $C$ is defined as
\[C^{\perp} := \{\textbf{x} \in \mathbb{F}_{q^{2}}^{n} : \langle \textbf{x}, \textbf{c} \rangle =0 ,\textnormal{ for all } \textbf{c} \in C \},\]
and the Hermitian dual code of $C$ is defined as
\[C^{\perp_{H}} := \{\textbf{x} \in \mathbb{F}_{q^{2}}^{n} : \langle \textbf{x}, \textbf{c}^{q} \rangle =0 ,\textnormal{ for all } \textbf{c} \in C \}.\]
The code $C$ is called Hermitian self-orthogonal if $C \subseteq C^{\perp_{H}}$.
It is easy to show that $C^{\perp_{H}} = (C^{(q)})^{\perp}$, where $C^{(q)} = \{\textbf{c}^{q} : \textbf{c} \in C \}$. For a matrix $A=(a_{ij})$ over $\mathbb{F}_{q^{2}}$, we denote by $A^{(q)}$ the matrix $(a_{ij}^{q}).$ Let $C$ be a linear code over $\mathbb{F}_{q^{2}}$ with a generator matrix $G$, then $G^{(q)}$ is a generator matrix of $C^{(q)}$ hence a parity-check matrix of $C^{\perp_{H}}$.
Choose $n$ distinct elements $a_{1}, \ldots, a_{n}$ of $\mathbb{F}_{q^{2}}$ and $n$ nonzero elements $v_{1}, \ldots, v_{n}$ of $\mathbb{F}_{q^{2}}^{*}$. Put $\textbf{a}= (a_{1}, \ldots, a_{n})$ and $\textbf{v}=(v_{1}, \ldots, v_{n})$. Then the generalized
Reed-Solomon code over $\mathbb{F}_{q^{2}}$ associated to $\textbf{a}$ and $\textbf{v}$ is defined as follows.
\begin{eqnarray*}
GRS_{k}(\textbf{a}, \textbf{v}) &\triangleq& \{(v_{1}f(a_{1}), \ldots, v_{n}f(a_{n})) \\
&& : f(x) \in \mathbb{F}_{q^{2}}[x], \textnormal{ and deg}(f(x)) \leq k-1 \}.
\end{eqnarray*}
It is well known that the code $GRS_{k}(\textbf{a}, \textbf{v})$ is a $q^{2}$-ary $[n, k, n - k + 1]$-MDS code.
A generator matrix of $GRS_{k}(\textbf{a}, \textbf{v})$ is given by
\begin{equation*}
G_{k}(\textbf{a}, \textbf{v})=\left(
\begin{array}{cccc}
v_{1} & v_{2} & \cdots & v_{n} \\
v_{1}a_{1} & v_{2}a_{2} & \cdots & v_{n}a_{n} \\
\vdots & \vdots & \ddots & \vdots \\
v_{1}a_{1}^{k-1} & v_{2}a_{2}^{k-1} & \cdots & v_{n}a_{n}^{k-1} \\
\end{array}
\right).
\end{equation*}
From the above discussion, we can easily obtain the following useful lemma, which was also given in \cite{ZG17,SYZ17,JX14}.
\begin{lemma} (\cite{ZG17,SYZ17, JX14})\label{lem2.1}
Let $a_{1}, \ldots, a_{n}$ be $n$ pairwise distinct elements of $\mathbb{F}_{q^{2}}$ and let $v_{1}, \ldots, v_{n}$ be $n$ nonzero elements of $\mathbb{F}_{q^{2}}^{*}$. Put $\textbf{a}= (a_{1}, \ldots, a_{n})$ and $\textbf{v}=(v_{1}, \ldots, v_{n})$. Then the GRS code $GRS_{k}(\textbf{a}, \textbf{v})$ is Hermitian self-orthogonal if and only if $\langle \textbf{a}^{qi+j}, \textbf{v}^{q+1} \rangle = 0$, for all $0 \leq i, j \leq k-1$.
\end{lemma}
\begin{remark}\label{rem2.2}
If we set $\textbf{u}=(u_{1},u_{2},\ldots, u_{n}):=\textbf{v}^{q+1}$, then $\textbf{u} \in (\mathbb{F}^{*}_{q})^{n}$. Thus from Lemma \ref{lem2.1}, to construct a Hermitian self-orthogonal MDS code, it is sufficient to make sure that the system of homogeneous equations $\langle \textbf{a}^{qi+j}, \textbf{u} \rangle = 0$ (for all $0 \leq i, j \leq k-1$) over $\mathbb{F}_{q^{2}}$ has a solution $\textbf{u} \in (\mathbb{F}^{*}_{q})^{n}$.
\end{remark}
Before giving our constructions, we need two technical lemmas. The first lemma provides a sufficient condition under which a certain system of homogeneous equations over $\mathbb{F}_{q^{2}}$ has solutions over $\mathbb{F}^{*}_{q}$.
\begin{lemma}\label{lem2.3}
Suppose $r >0$. Let $A$ be an $r \times (r+1)$ matrix over $\mathbb{F}_{q^{2}}$ satisfying the following two properties: 1) any $r$ columns of $A$ are linearly independent; 2) $A^{(q)}$ is row equivalent to $A$. Then the following system of homogeneous equations
$A\textbf{u}^{T}=\textbf{0}^{T}$
has a solution $\textbf{u}=(u_{0}, u_{1}, \ldots, u_{r}) \in (\mathbb{F}^{*}_{q})^{r+1}$.
\end{lemma}
\begin{proof}
From Property 1), the rank of $A$ is equal to $r$. By Property 2) and \cite[Theorem 2.2]{JX14}, the system of homogeneous equations
$A\textbf{u}^{T}=\textbf{0}^{T}$ has a nonzero solution $\textbf{u} \in (\mathbb{F}_{q})^{r+1}$. Let $C$ be the linear code over $\mathbb{F}_{q^{2}}$ with generator matrix $A$. Then $C$ is an $[r+1, r, 2]$-MDS code from Property 1) and $\textbf{u}$ is a nonzero codeword of $C^{\perp}$. Note that $C^{\perp}$ is an $[r+1, 1, r+1]$-MDS code, thus $\textbf{u} \in (\mathbb{F}^{*}_{q^{2}})^{r+1}$, hence $\textbf{u} \in (\mathbb{F}^{*}_{q})^{r+1}$. The lemma is proved.
\end{proof}
The second lemma is given as follows.
\begin{lemma}\label{lem2.4}
\begin{description}
\item[(i)] Suppose $(2s+1) \mid (q+1)$ and $m= \frac{q^{2}-1}{2s+1}$. Let $1 \leq k \leq (s+1+t)\frac{q+1}{2s+1}-1$, where $0\leq t \leq s-1$. Then for any $0 \leq i, j \leq k-1$, $m \mid (qi+j)$ if and only if $qi+j \in \{0, (s-t+1)m, (s-t+2)m, \ldots, (s+t)m\}$.
\item[(ii)] Suppose $2s \mid (q+1)$ and $m= \frac{q^{2}-1}{2s}$. Let $1 \leq k \leq (s+1+t)\frac{q+1}{2s}-1$, where $0 \leq t \leq s-2$. Then for any $0 \leq i, j \leq k-1$, $m \mid (qi+j)$ if and only if $qi+j \in \{0, (s-t)m, (s-t+1)m, \ldots, (s+t)m\}$.
\end{description}
\end{lemma}
\begin{proof}
We only need to prove Part (i), since the proof of Part (ii) is completely similar. According to the conditions, it is easy to see that $k \leq q-1$. Hence, for any $0 \leq i, j \leq k-1$, we have $qi+j < (q+1)k \leq q^{2}-1$. Suppose $(i, j) \neq (0,0)$. If $qi+j=\ell m=\ell\frac{q^{2}-1}{2s+1}$, then $0< \ell < 2s+1$. Note that
\[ qi+j=\ell\frac{q^{2}-1}{2s+1}=q\left(\frac{\ell(q+1)}{2s+1}-1\right)+\left(q-\frac{\ell(q+1)}{2s+1}\right).\]
Thus
\[i=\frac{\ell(q+1)}{2s+1}-1, j=q-\frac{\ell(q+1)}{2s+1}.\]
If $\ell \geq s+1+t$, then
\[i=\frac{\ell(q+1)}{2s+1}-1 \geq (s+1+t)\frac{q+1}{2s+1}-1 \geq k,\]
which contradicts the assumption that $i \leq k-1$;
If $\ell \leq s-t$, then
\[j=q-\frac{\ell(q+1)}{2s+1}\geq (s+1+t)\frac{q+1}{2s+1}-1 \geq k,\]
which contradicts the assumption that $j \leq k-1$.
Thus $s-t+1 \leq \ell \leq s+t$.
The conclusion follows.
\end{proof}
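For small parameters, Lemma \ref{lem2.4} can also be confirmed by direct enumeration. The following Python sketch checks part (i) for the illustrative choice $q=13$, $2s+1=7$, $t=1$ (so $m=24$ and the largest admissible $k$ is $9$); the parameter values are ours and serve only as a sanity check.
\begin{verbatim}
q, s, t = 13, 3, 1
m = (q * q - 1) // (2 * s + 1)                # m = 24
k = (s + 1 + t) * (q + 1) // (2 * s + 1) - 1  # k = 9
hits = sorted({q * i + j
               for i in range(k) for j in range(k)
               if (q * i + j) % m == 0})
expected = [0] + [mu * m for mu in range(s - t + 1, s + t + 1)]
assert hits == expected                       # [0, 72, 96]
\end{verbatim}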
\section{Quantum MDS codes of length $n=1+r\frac{q^{2}-1}{s}$, where $s \mid (q-1)$}
In this section, we construct a class of quantum MDS codes of length $n=1+r\frac{q^{2}-1}{s}$, where $s \mid (q-1)$ and $1 \leq r \leq s$. We first prove the following lemma.
\begin{lemma}\label{lem3.1}
Let $x_{1}, \ldots, x_{r}$ be $r$ pairwise distinct nonzero elements of $\mathbb{F}_{q}$.
Then the system of equations
\begin{equation}\label{eq1}
\left\{
\begin{aligned}
u_{0}+ u_{1}+\cdots +u_{r} = 0 \\
x_{1}u_{1}+ x_{2}u_{2}+\cdots +x_{r}u_{r} = 0 \\
\vdots ~~~~~~~~~~~~~~& \\
x_{1}^{r-1}u_{1}+ x_{2}^{r-1}u_{2}+\cdots +x_{r}^{r-1}u_{r}= 0
\end{aligned}\right.
\end{equation}
has a solution $\textbf{u}\triangleq(u_{0}, u_{1},\ldots, u_{r}) \in (\mathbb{F}_{q}^{*})^{r+1}$.
\end{lemma}
\begin{proof}
Let
\[A=\left(
\begin{array}{ccccc}
1 & 1 & 1 & \cdots & 1\\
0 & x_{1} & x_{2} & \cdots & x_{r} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & x_{1}^{r-1} & x_{2}^{r-1} & \cdots & x_{r}^{r-1}\\
\end{array}
\right)
.\]
Then the system (\ref{eq1}) of equations is equivalent to the following equation
\begin{equation*}
A\textbf{u}^{T}=\textbf{0}^{T}.
\end{equation*}
Note that any $r$ columns of $A$ form a Vandermonde matrix, which is invertible. Thus any $r$ columns of $A$ are linearly independent. Since $x_{1}, \ldots, x_{r} \in \mathbb{F}_{q}$, $A^{(q)}=A$. The conclusion then follows from Lemma \ref{lem2.3}.
\end{proof}
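Lemma \ref{lem3.1} guarantees existence only; for illustration, the following Python sketch finds, by brute force, a solution of the system (\ref{eq1}) over $\mathbb{F}_{5}$ with $r=2$ and $(x_{1},x_{2})=(1,2)$. This instance is chosen by us purely as an example.
\begin{verbatim}
from itertools import product

q, r = 5, 2
xs = [1, 2]          # distinct nonzero elements of GF(5)

def solves(u):       # u = (u_0, u_1, ..., u_r)
    if sum(u) % q != 0:
        return False
    return all(sum(x ** e * ui for x, ui in zip(xs, u[1:])) % q == 0
               for e in range(1, r))

sols = [u for u in product(range(1, q), repeat=r + 1) if solves(u)]
print(sols[0])       # (1, 3, 1): 1+3+1 = 5 and 1*3 + 2*1 = 5 in GF(5)
\end{verbatim}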
\vskip 1mm
Set $m = \frac{q^{2}-1}{s}$. Let $\theta \in \mathbb{F}_{q^{2}}$ be a primitive $m$-th root of unity, and let $\langle\theta \rangle$ be the cyclic subgroup of $\mathbb{F}_{q^{2}}^{*}$ generated by $\theta$. Let $\beta_{1}, \ldots , \beta_{r} \in \mathbb{F}_{q^{2}}^{*}$
such that $\{\beta_{i} \langle \theta \rangle\}^{r}_{i
=1}$ represent distinct cosets of $\mathbb{F}_{q^{2}}^{*}/\langle \theta \rangle$. Put
\[\textbf{a}=(0, \beta_{1}, \beta_{1}\theta, \ldots, \beta_{1}\theta^{m-1}, \ldots , \beta_{r}, \beta_{r}\theta, \ldots, \beta_{r}\theta^{m-1}) \in \mathbb{F}_{q^{2}}^{n}.\]
Set
\[\textbf{v}=(v_{0},\underbrace{ v_{1},\ldots,v_{1}}_{m\textnormal{ times}},\ldots,\underbrace{v_{r},\ldots,v_{r}}_{m\textnormal{ times}}),\]
where $v_{0}, v_{1},\ldots, v_{r} \in \mathbb{F}_{q^{2}}^{*}$.
Then
\begin{equation}\label{eq2}
\langle\textbf{a}^{0}, \textbf{v}^{q+1}\rangle = v_{0}^{q+1}+ (v_{1}^{q+1}+\cdots+v_{r}^{q+1})m.
\end{equation}
And for any $(i,j)\neq (0,0)$, we have
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle
=\sum_{\ell=1}^{r}\beta_{\ell}^{qi+j}v_{\ell}^{q+1}\sum_{\nu=0}^{m-1}\theta^{\nu(qi+j)},\]
thus
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=0, \textnormal{ when }m \nmid (qi+j),\]
and
\begin{equation}\label{eq3}
\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=m\sum\limits_{\ell=1}^{r}\beta_{\ell}^{qi+j}v_{\ell}^{q+1}, \textnormal{ when }m \mid (qi+j).
\end{equation}
Now, our first construction is given as follows.
\begin{theorem}\label{thm3.2}
Let $q$ be a prime power. Suppose $s \mid (q-1)$ and $1 \leq r \leq s$. Put $n=1+r\frac{q^{2}-1}{s}$. Then for any $1 \leq k \leq r\frac{q-1}{s}$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{theorem}
\begin{proof}
Keep the notations as above. Let $x_{\ell}=\beta_{\ell}^{m}$, for $\ell = 1, \ldots, r$. Then $x_{1}, \ldots, x_{r}$ are pairwise distinct. Indeed, if $x_{\ell} = x_{\ell'}$ for some $1 \leq \ell \neq \ell' \leq r$, then $(\frac{\beta_{\ell}}{\beta_{\ell'}})^{\frac{q^{2}-1}{s}}=1$ hence $\frac{\beta_{\ell}}{\beta_{\ell'}} \in \langle \theta \rangle$. This is impossible since $\beta_{\ell}$ and $\beta_{\ell'}$ lie in two distinct cosets of $\mathbb{F}_{q^{2}}^{*}/\langle \theta \rangle$. Note that $(q+1) \mid m$, so $x_{\ell} \in \mathbb{F}_{q}$. Then, according to Lemma \ref{lem3.1}, there exists a vector $\textbf{u}=(u_{0}, u_{1}, \ldots, u_{r}) \in (\mathbb{F}_{q}^{*})^{r+1}$ which is a solution of the system (\ref{eq1}) of equations.
For $i=1, 2, \ldots, r$, we let $v_{i} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{i}^{q+1}=u_{i}$ and let $v_{0} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{0}^{q+1}=u_{0}m$. Then from Eq. (\ref{eq2}),
\begin{eqnarray*}
\langle\textbf{a}^{0}, \textbf{v}^{q+1}\rangle &=& v_{0}^{q+1}+ (v_{1}^{q+1}+\cdots+v_{r}^{q+1})m \\
&=& u_{0}m+(u_{1}+\cdots+u_{r})m=0.
\end{eqnarray*}
Since $1 \leq k \leq r\frac{q-1}{s}$, $qi+j \leq (q+1)(k-1)<r\frac{q^{2}-1}{s}=rm$. Thus, for any $0 \leq i, j\leq k-1$, $m \mid (qi+j)$ only if $qi+j=\mu m$ for some $0 \leq \mu \leq r-1$. Thus by Eq. (\ref{eq3}), when $qi+j=\mu m$ ($1 \leq \mu \leq r-1$), we have
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle= m\sum_{\ell=1}^{r}\beta_{\ell}^{\mu m}v_{\ell}^{q+1}=m\sum_{\ell=1}^{r}x_{\ell}^{\mu}u_{\ell}=0.\]
In summary,
\[\langle \textbf{a}^{qi+j}, \textbf{v}^{q+1} \rangle =0,\textnormal{ for all } 0 \leq i, j \leq k-1.\]
By Lemma \ref{lem2.1}, $GRS_{k}(\textbf{a}, \textbf{v})$ is a Hermitian self-orthogonal MDS code with parameters $[n, k, n-k+1]$. The conclusion then follows from Corollary \ref{cor1.3}.
\end{proof}
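The construction can be verified numerically in small cases. The following Python sketch checks the orthogonality relations of Lemma \ref{lem2.1} for the smallest instance of Theorem \ref{thm3.2}, namely $q=3$, $s=2$, $r=1$ (so $m=4$, $n=5$, $k=1$). The model $\mathbb{F}_{9}=\mathbb{F}_{3}[i]/(i^{2}+1)$, the generator $\omega=1+i$ and the concrete choice of $\textbf{v}$ are ours and serve only as an illustration.
\begin{verbatim}
P = 3                        # GF(9) = GF(3)[i]/(i^2+1),
                             # an element a+b*i is stored as (a, b)
def mul(x, y):
    a, b = x; c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def power(x, n):             # with 0^0 = 1, as in the paper
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

q, k = 3, 1
w = (1, 1)                   # a primitive element of GF(9)
theta = mul(w, w)            # a primitive 4th root of unity
beta = w                     # beta lies outside <theta>
a = [(0, 0)] + [mul(beta, power(theta, j)) for j in range(4)]
assert len(set(a)) == 5      # the evaluation points are distinct
# v realizes u_0 = 1, u_1 = 2 from Lemma 3.1:
# v_0^{q+1} = u_0 * m = 1 and v_1^{q+1} = u_1 = 2 in GF(3).
v = [theta, w, w, w, w]
for i in range(k):
    for j in range(k):
        t = (0, 0)
        for x, y in zip(a, v):
            p = mul(power(x, q * i + j), power(y, q + 1))
            t = ((t[0] + p[0]) % P, (t[1] + p[1]) % P)
        assert t == (0, 0)   # criterion of Lemma 2.1 holds
\end{verbatim}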
\begin{remark}
When $\frac{r}{s}>\frac{q}{2(q-1)}$, the quantum codes constructed in Theorem \ref{thm3.2} can have minimum distance up to $r\frac{q-1}{s}+1 >\frac{q}{2}+1.$
\end{remark}
Applying the propagation rule (see Lemma \ref{lem1.4}) to Theorem \ref{thm3.2}, we immediately obtain the following corollary, which is one of the main results in \cite{SYZ17}.
\begin{corollary} (\cite[Theorem 4.12]{SYZ17})\label{cor3.4}
Let $q$ be a prime power. Let $s \mid (q-1)$ and $1 \leq r \leq s$. Put $n=r\frac{q^{2}-1}{s}$. Then for any $1 \leq k \leq r\frac{q-1}{s}-1$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
On the other hand, taking $r=s$ in Theorem \ref{thm3.2}, we obtain the following known result.
\begin{corollary} (\cite{GBR04})\label{cor3.5}
Let $q$ be a prime power. Then for any $1 \leq k \leq q-1$, there exists a $[[q^{2}, q^{2}-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
In the following example, a new family of quantum MDS codes is given by Theorem \ref{thm3.2}.
\begin{example}
Let $(r, s) = (4, 5)$ in Theorem \ref{thm3.2}. Then when $5 \mid (q-1)$, there exists an $[[1+\frac{4}{5}(q^{2}-1), 1+\frac{4}{5}(q^{2}-1)-2k, k+1]]_{q}$ quantum MDS code for any $1 \leq k \leq \frac{4}{5}(q-1)$.
\end{example}
\section{Quantum MDS codes of length $n=1+r\frac{q^{2}-1}{2s+1}$, where $(2s+1) \mid (q+1)$}
In this section, we construct quantum MDS codes of length $n=1+r\frac{q^{2}-1}{2s+1}$, where $(2s+1) \mid (q+1)$. If $r=2s+1$, then $n=q^{2}$. The $q$-ary quantum MDS codes of length $q^{2}$ have already been constructed in \cite{GBR04} (see also Corollary \ref{cor3.5}). To simplify the following discussion, we assume that $1 \leq r < 2s+1$. Set $m=\frac{q^{2}-1}{2s+1}$. Before giving our construction, we need the following lemmas.
\begin{lemma}\label{lem4.1}
Suppose that $q > 2$ and $r \geq 1$. Then there exist $u_{0}, u_{1}, \ldots, u_{r} \in \mathbb{F}_{q}^{*}$ such that
\[\sum_{i=0}^{r}u_{i}=0.\]
\end{lemma}
\begin{proof}
We prove this lemma by induction on $r$. If $r=1$, the claim is trivial. For $r \geq 2$, the induction hypothesis yields $u_{0}, \ldots, u_{r-2}, u \in \mathbb{F}_{q}^{*}$ with $u_{0}+\cdots+u_{r-2}+u=0$. Since $q > 2$, we may take $u_{r-1} \in \mathbb{F}_{q}^{*}\backslash\{u\}$ and set $u_{r}=u-u_{r-1} \neq 0$. The desired conclusion follows.
\end{proof}
\begin{lemma}\label{lem4.2}
Suppose $(2s+1) \mid (q+1)$ and $m=\frac{q^{2}-1}{2s+1}$. Let $\omega$ be a primitive element of $\mathbb{F}_{q^{2}}$ and $r=2t+1$, where $0 \leq t \leq s-1$. Then the following system of equations
\begin{equation}\label{eq4}
\left\{
\begin{aligned}
\sum_{\ell=0}^{r}u_{\ell} & =0 \\
\sum_{\ell=1}^{r}\omega^{\ell \mu m}u_{\ell} &=0,\textnormal{ for }\mu=s-t+1, \ldots, s+t,
\end{aligned}\right.
\end{equation}
has a solution $\textbf{u}\triangleq(u_{0}, u_{1}, \ldots, u_{r}) \in (\mathbb{F}_{q}^{*})^{r+1}.$
\end{lemma}
\begin{proof}
Let $\alpha=\omega^{m}$ be a primitive $(2s+1)$-th root of unity and let $a= s-t+1$. It is easy to verify that the elements $\alpha^{a+\nu}$, $0 \leq \nu \leq r-2$, are pairwise distinct and different from $1$. Let
\[A=\left(
\begin{array}{ccccc}
1& 1 & 1 & \cdots & 1 \\
0 & \alpha^{a} & \alpha^{2a} & \cdots & \alpha^{ra} \\
0 & \alpha^{a+1} & \alpha^{2(a+1)} & \cdots & \alpha^{r(a+1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & \alpha^{a+r-2} & \alpha^{2(a+r-2)} & \cdots & \alpha^{r(a+r-2)}\\
\end{array}
\right)\]
be an $r \times (r+1)$ matrix over $\mathbb{F}_{q^{2}}$. Then the system (\ref{eq4}) of equations is equivalent to the following equation
\begin{equation*}
A\textbf{u}^{T}=\textbf{0}^{T}.
\end{equation*}
For any $1 \leq i \leq r+1$, let $A_{i}$ be the $r\times r$ matrix obtained from $A$ by deleting the $i$-th column.
Then
\[\det(A_{1})=(\alpha^{(r-1)a+\frac{(r-1)(r-2)}{2}})\det (B_{1}) \neq 0,\]
where \[B_{1}=\left(
\begin{array}{ccccc}
1 & 1 & 1 & \cdots & 1 \\
1 & \alpha^{a} & \alpha^{2a} & \cdots & \alpha^{(r-1)a} \\
\vdots & \vdots & \vdots &\ddots & \vdots \\
1 & \alpha^{a+r-2} & \alpha^{2(a+r-2)} & \cdots & \alpha^{(r-1)(a+r-2)} \\
\end{array}
\right),\]
and for $2 \leq i \leq r+1$
\[ \det(A_{i}) = b_{i} \det (B_{i})\neq 0,\]
where $b_{i}=\alpha^{a}\cdots\alpha^{(i-1)a}\alpha^{(i+1)a}\cdots\alpha^{ra}$ and
\[B_{i}=\begin{pmatrix}
\begin{smallmatrix}
1 & \cdots & 1 & 1 & \cdots & 1 \\
\alpha & \cdots & \alpha^{i-1} & \alpha^{i+1} & \cdots & \alpha^{r} \\
\alpha^{2} & \cdots & \alpha^{2(i-1)} & \alpha^{2(i+1)} & \cdots & \alpha^{2r} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\alpha^{r-2} & \cdots & \alpha^{(i-1)(r-2)} & \alpha^{(i+1)(r-2)} & \cdots & \alpha^{r(r-2)}
\end{smallmatrix}
\end{pmatrix}.\]
Hence any $r$ columns of $A$ are linearly independent.
On the other hand, since $(2s+1) \mid (q+1)$, we have
\[\alpha^{i(a+j)q}=\alpha^{-i(s-t+1+j)}=\alpha^{i(s+t-j)}=\alpha^{i(a+r-2-j)},\]
for any $1 \leq i \leq r$ and $0 \leq j \leq r-2$.
Thus $A$ is row equivalent to $A^{(q)}$. The conclusion then follows from Lemma \ref{lem2.3}.
\end{proof}
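The key exponent identity in the last step is elementary and can be spot-checked numerically. The following Python lines verify it for the illustrative choice $q=13$, $s=3$, $t=1$ (so that $2s+1=7$ divides $q+1$); the parameter values are ours.
\begin{verbatim}
q, s, t = 13, 3, 1
r, a, n = 2 * t + 1, s - t + 1, 2 * s + 1
for j in range(r - 1):
    # q*(a+j) = a+r-2-j modulo 2s+1, hence alpha^{i(a+j)q}
    # equals alpha^{i(a+r-2-j)} for every i
    assert (q * (a + j)) % n == (a + r - 2 - j) % n
\end{verbatim}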
Let $\omega$ be a primitive element of $\mathbb{F}_{q^{2}}$ and $\theta=\omega^{2s+1}$ be a primitive $m$-th root of unity ($m=\frac{q^{2}-1}{2s+1}$). It is easy to verify that
\[\omega^{i_{1}}\theta^{j_{1}} \neq \omega^{i_{2}}\theta^{j_{2}} \]
for any $1 \leq i_{1}, i_{2} \leq r$ and $0 \leq j_{1}, j_{2} \leq m-1$ with $(i_{1}, j_{1}) \neq (i_{2}, j_{2}).$
Put
\[\textbf{a}=(0, \omega, \omega\theta, \ldots, \omega\theta^{m-1}, \ldots , \omega^{r}, \omega^{r}\theta, \ldots, \omega^{r}\theta^{m-1}) \in \mathbb{F}_{q^{2}}^{n}.\]
Set
\[\textbf{v}=(v_{0},\underbrace{ v_{1},\ldots,v_{1}}_{m\textnormal{ times}},\ldots,\underbrace{v_{r},\ldots,v_{r}}_{m\textnormal{ times}}),\]
where $v_{0}, v_{1},\ldots, v_{r} \in \mathbb{F}_{q^{2}}^{*}$.
Similar to the discussion before Theorem \ref{thm3.2}, we have
\begin{equation}\label{eq5}
\langle\textbf{a}^{0}, \textbf{v}^{q+1}\rangle = v_{0}^{q+1}+ (v_{1}^{q+1}+\cdots+v_{r}^{q+1})m.
\end{equation}
For any $(i,j)\neq (0,0)$,
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=0, \textnormal{ when }m \nmid (qi+j),\]
and
\begin{equation}\label{eq6}
\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=m\sum\limits_{\ell=1}^{r}\omega^{\ell(qi+j)}v_{\ell}^{q+1}, \textnormal{ when }m \mid (qi+j).
\end{equation}
Now, we present our second construction as follows.
\begin{theorem}\label{thm4.3}
Let $q > 2$ be a prime power, $(2s+1) \mid (q+1)$ and $1 \leq r < 2s+1$. Put $n=1+r\frac{q^{2}-1}{2s+1}$.
\begin{description}
\item[(i)] For any $1 \leq k \leq (s+1)\frac{q+1}{2s+1}-1$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\item[(ii)] If $r=2t+1$, where $0 \leq t \leq s-1$, then for any $1 \leq k \leq (s+1+t)\frac{q+1}{2s+1}-1$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{description}
\end{theorem}
\begin{proof}
Keep the notations as above.
\vskip 1mm
(i): Suppose $1 \leq k \leq (s+1)\frac{q+1}{2s+1}-1$.
By Lemma \ref{lem4.1}, there exist $u_{0}, u_{1}, \ldots, u_{r} \in \mathbb{F}_{q}^{*}$ such that
\[\sum_{i=0}^{r}u_{i}=0.\]
For $i=1, 2, \ldots, r$, let $v_{i} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{i}^{q+1}=u_{i}$ and let $v_{0} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{0}^{q+1}=u_{0}m$.
Then by Eq. (\ref{eq5}),
\[\langle\textbf{a}^{0}, \textbf{v}^{q+1}\rangle = u_{0}m+(u_{1}+\cdots+u_{r})m=0.\]
Taking $t=0$ in Lemma \ref{lem2.4} (i), we obtain that $m \mid (qi+j)$ if and only if $(i,j)=(0,0)$. Thus from the above discussion,
\[\langle \textbf{a}^{qi+j}, \textbf{v}^{q+1} \rangle =0,\textnormal{ for all } 0 \leq i, j \leq k-1.\]
By Lemma \ref{lem2.1}, $GRS_{k}(\textbf{a}, \textbf{v})$ is a Hermitian self-orthogonal MDS code with parameters $[n, k, n-k+1]$. Part (i) then follows from Corollary \ref{cor1.3}.
\vskip 2mm
(ii): Suppose $r=2t+1$, where $0 \leq t \leq s-1$ and $1 \leq k \leq (s+t+1)\frac{q+1}{2s+1}-1$. By Lemma \ref{lem4.2}, there exist $u_{0}, u_{1}, \ldots, u_{r} \in \mathbb{F}_{q}^{*}$ which satisfy the system (\ref{eq4}) of equations. For $i=1, 2, \ldots, r$, let $v_{i} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{i}^{q+1}=u_{i}$ and let $v_{0} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{0}^{q+1}=u_{0}m$.
Then by Eq. (\ref{eq6}),
\[\langle\textbf{a}^{0}, \textbf{v}^{q+1}\rangle = u_{0}m+(u_{1}+\cdots+u_{r})m=0.\]
By Lemma \ref{lem2.4} (i), $m \mid (qi+j)$ if and only if $qi+j \in \{0, (s-t+1)m, (s-t+2)m, \ldots, (s+t)m \}$. Thus by Eq. (\ref{eq6}), when $qi+j=\mu m$ ($s-t+1 \leq \mu \leq s+t$), we have
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle= m\sum_{\ell=1}^{r}\omega^{\ell\mu m}v_{\ell}^{q+1}=m\sum_{\ell=1}^{r}\omega^{\ell\mu m}u_{\ell}=0.\]
Hence
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=0,\]
for all $ 0 \leq i, j \leq k-1.$ By Lemma \ref{lem2.1}, $GRS_{k}(\textbf{a}, \textbf{v})$ is a Hermitian self-orthogonal MDS code with parameters $[n, k, n-k+1]$. Part (ii) then also follows from Corollary \ref{cor1.3}.
The proof of the theorem is complete.
\end{proof}
\begin{remark}\label{rem4.4}
\begin{description}
\item[i)] The minimum distance of the quantum codes constructed in Theorem \ref{thm4.3} can be larger than $\frac{q}{2}+1$.
\item[ii)] Part (i) of Theorem \ref{thm4.3} extends the result of \cite[Theorem 3.2 (i)]{JKW17} where a stricter condition $\textnormal{gcd}(r,q) = 1$ is required.
\item[iii)] When $r=2t+1$ and $1 \leq t \leq s-1$, the quantum codes from Part (ii) of Theorem \ref{thm4.3} have larger minimum distance than that of Part (i).
\end{description}
\end{remark}
Shi \emph{et al.} \cite[Theorem 4.2]{SYZ17} constructed a family of quantum MDS codes of length $n=r\frac{q^{2}-1}{2s+1}$, where $r=2t+2$ is even. For odd $r=2t+1$, applying the propagation rule (see Lemma \ref{lem1.4}) to Theorem \ref{thm4.3} (ii), we can immediately obtain the following result.
\begin{corollary}\label{cor4.5}
Let $q > 2$ be a prime power, $(2s+1) \mid (q+1)$ and $0 \leq t \leq s-1$. Put $n=(2t+1)\frac{q^{2}-1}{2s+1}$. Then for any $1 \leq k \leq (s+1+t)\frac{q+1}{2s+1}-2$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
\begin{remark}\label{rem4.6}
Jin \emph{et al.} \cite[Theorem 3.2 (ii)]{JKW17} constructed a family of $q$-ary quantum MDS codes with parameters $[[r\frac{q^{2}-1}{2s+1}, r\frac{q^{2}-1}{2s+1}-2k, k+1]]$, for any $k \leq (s+1)\frac{q+1}{2s+1}-1$, where $(2s+1) \mid (q+1)$ and $\gcd(r, q) >1$. If $t \geq 1$ and $2s+1 \neq q+1$, then $(s+1+t)\frac{q+1}{2s+1}-1 \geq (s+1)\frac{q+1}{2s+1}$ and hence the quantum codes of Corollary \ref{cor4.5} have larger minimum distance.
\end{remark}
\begin{example}In this example, we give some new quantum MDS codes from Theorem \ref{thm4.3}.
\begin{description}
\item[(i)] Let $(r, s)=(2,2)$ in Theorem \ref{thm4.3} (i). Then, when $5 \mid (q+1)$, there exists a $[[1+\frac{2}{5}(q^{2}-1), 1+\frac{2}{5}(q^{2}-1)-2k, k+1]]_{q}$ quantum MDS code for any $1 \leq k \leq \frac{3}{5}(q+1)-1$;
\item[(ii)] Let $(r,s)=(5, 3)$ in Theorem \ref{thm4.3} (ii). Then, when $7 \mid (q+1)$, there exists a $[[1+\frac{5}{7}(q^{2}-1), 1+\frac{5}{7}(q^{2}-1)-2k, k+1]]_{q}$ quantum MDS code for any $1 \leq k \leq \frac{6}{7}(q+1)-1$.
\end{description}
\end{example}
\section{Quantum MDS codes of length $n=1+r\frac{q^{2}-1}{2s}$, where $2s \mid (q+1)$}
In this section, we construct quantum MDS codes of length $n=1+r\frac{q^{2}-1}{2s}$, where $1 \leq r \leq 2s$ and $2s \mid (q+1)$. If $r=2s$, then $n=q^{2}$; If $r=s=1$, then $n=\frac{q^{2}+1}{2}$. The $q$-ary quantum MDS codes of lengths $q^{2}$ and $\frac{q^{2}+1}{2}$ have been already constructed in \cite{GBR04} and \cite{KZ12}, respectively. To simplify the following discussion, we assume that $r < 2s$ and $s > 1$. In this section, we denote $m:=\frac{q^{2}-1}{2s}$. We first provide two technical lemmas as follows.
\begin{lemma}\label{lem5.1}
Suppose that $q$ is odd and $r \geq 2$. Then the following system of equations
\begin{equation}\label{eq7}
\left\{
\begin{aligned}
\sum_{i=0}^{r}u_{i} & =0 \\ \sum_{i=1}^{r}(-1)^{i}u_{i} & =0
\end{aligned}\right.
\end{equation}
has a solution $\textbf{u}\triangleq(u_{0}, u_{1}, \ldots, u_{r}) \in (\mathbb{F}_{q}^{*})^{r+1}$.
\end{lemma}
\begin{proof}
Note that the system (\ref{eq7}) of equations is equivalent to
\[u_{0}+\sum_{i=1, i\textnormal{ odd}}^{r}(2u_{i})=u_{0}+\sum_{j=2, j \textnormal{ even}}^{r}(2u_{j})=0.\]
The conclusion then follows from Lemma \ref{lem4.1}.
\end{proof}
According to Lemma \ref{lem2.3}, we can prove the following lemma similarly to Lemma \ref{lem4.2}; hence, we omit the details of the proof.
\begin{lemma}\label{lem5.2}
Suppose $2s \mid (q+1)$. Let $\omega$ be a primitive element of $\mathbb{F}_{q^{2}}$ and $r=2t+2$, where $0 \leq t \leq s-2$. Then the following system of equations
\begin{equation*}\label{7}
\left\{
\begin{aligned}
\sum_{\ell=0}^{r}u_{\ell} &=0 \\
\sum_{\ell=1}^{r}\omega^{\ell \mu m}u_{\ell} &=0,~~\mu=s-t, s-t+1, \ldots, s+t,
\end{aligned}\right.
\end{equation*}
has a solution $\textbf{u}\triangleq(u_{0}, u_{1}, \ldots, u_{r}) \in (\mathbb{F}_{q}^{*})^{r+1}.$
\end{lemma}
Let $\omega$ be a primitive element of $\mathbb{F}_{q^{2}}$ and $\theta=\omega^{2s}$ be a primitive $m$-th root of unity.
Put
\[\textbf{a}=(0, \omega, \omega\theta, \ldots, \omega\theta^{m-1}, \ldots , \omega^{r}, \omega^{r}\theta, \ldots, \omega^{r}\theta^{m-1}) \in \mathbb{F}_{q^{2}}^{n}.\]
Now, we give our third construction as follows.
\begin{theorem}\label{thm5.3}
Let $q$ be a prime power, $2s \mid (q+1)$ and $2 \leq r < 2s$. Put $n=1+r\frac{q^{2}-1}{2s}$.
\begin{description}
\item[(i)] For any $1 \leq k \leq (s+1)\frac{q+1}{2s}-1$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\item[(ii)] If $r=2t+2$, where $0 \leq t \leq s-2$, then for any $1 \leq k \leq (s+t+1)\frac{q+1}{2s}-1$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{description}
\end{theorem}
\begin{proof}
By employing Lemmas \ref{lem2.1}, \ref{lem4.2} and \ref{lem5.2}, the theorem can be proved similarly as Theorem \ref{thm4.3}. We omit the details.
\end{proof}
\begin{remark}\label{rem5.4}
\begin{description}
\item[i)] The minimum distance of quantum codes constructed in Theorem \ref{thm5.3} can be larger than $\frac{q}{2}+1$.
\item[ii)] When $r=2t+2$ and $1 \leq t \leq s-2$, the quantum codes from Part (ii) of Theorem \ref{thm5.3} have larger minimum distance than that of Part (i).
\end{description}
\end{remark}
Applying the propagation rule (see Lemma \ref{lem1.4}) to Theorem \ref{thm5.3} (i) and (ii), we immediately obtain the following corollaries, which were given in \cite{ZG17} and \cite{SYZ17}, respectively.
\begin{corollary} (\cite[Theorem 4.2]{ZG17})\label{cor5.5}
Let $q$ be a prime power, $2s\mid (q+1)$ and $2 \leq r < 2s$. Put $n=r\frac{q^{2}-1}{2s}$. Then for any $1 \leq k \leq (s+1)\frac{q+1}{2s}-2$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
\begin{corollary} (\cite[Theorem 4.8]{SYZ17})\label{cor5.6}
Let $q$ be a prime power, $2s\mid (q+1)$ and $0 \leq t \leq s-2$. Put $n=(2t+2)\frac{q^{2}-1}{2s}$. Then for any $1 \leq k \leq (s+t+1)\frac{q+1}{2s}-2$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
\begin{example}In this example, we give some new quantum MDS codes from Theorem \ref{thm5.3}.
\begin{description}
\item[(i)] Let $(r, s)=(3,2)$ in Theorem \ref{thm5.3} (i). Then, when $4 \mid (q+1)$, there exists a $[[1+\frac{3}{4}(q^{2}-1), 1+\frac{3}{4}(q^{2}-1)-2k, k+1]]_{q}$ quantum MDS code for any $1 \leq k \leq \frac{3}{4}(q+1)-1$;
\item[(ii)] Let $(r,s)=(4, 3)$ in Theorem \ref{thm5.3} (ii). Then, when $6 \mid (q+1)$, there exists a $[[1+\frac{2}{3}(q^{2}-1), 1+\frac{2}{3}(q^{2}-1)-2k, k+1]]_{q}$ quantum MDS code for any $1 \leq k \leq \frac{5}{6}(q+1)-1$.
\end{description}
\end{example}
\section{Quantum MDS codes of length $n=(2t+1)\frac{q^{2}-1}{2s}$, where $2s \mid (q+1)$}
Suppose $2s \mid (q+1)$ and $0 \leq t \leq s-1$. In \cite[Theorem 4.8]{SYZ17}, Shi \emph{et al.} constructed a family of quantum MDS codes of length $(2t+2)\frac{q^{2}-1}{2s}$ (see Corollary \ref{cor5.6}). In this section, we construct a family of quantum MDS codes of length $(2t+1)\frac{q^{2}-1}{2s}$. Before giving our construction, we need the following lemmas.
\begin{lemma}\label{lem6.1}
Suppose that $3 \leq \tau < q+1$. Let $M$ be a $(\tau-2) \times \tau$ matrix over $\mathbb{F}_{q^{2}}$ and satisfy the following two properties: 1) $M$ and $M^{(q)}$ are row equivalent; 2) any $\tau-2$ columns of $M$ are linearly independent. Then the following equation
\begin{equation*}\label{9}
M\textbf{x}^{T}=\textbf{0}^{T}
\end{equation*}
has a solution $\textbf{x}=(x_{1}, x_{2}, \ldots, x_{\tau}) \in (\mathbb{F}_{q}^{*})^{\tau}$.
\end{lemma}
\begin{proof}
Let $M_{1}$ (resp. $M_{\tau}$) be the $(\tau-2)\times (\tau-1)$ matrix obtained from $M$ by deleting the first (resp. the last) column. From the conditions, we obtain that $M_{1}$ and $M_{\tau}$ satisfy the properties in Lemma \ref{lem2.3} (with $r=\tau-2$). Thus the following two equations
\begin{equation*}\label{10}
M_{1}\textbf{u}^{T}=\textbf{0}^{T},~ M_{\tau}\textbf{v}^{T}=\textbf{0}^{T}
\end{equation*}
have nonzero solutions $\textbf{u}=(u_{2}, u_{3}, \ldots, u_{\tau}) \in (\mathbb{F}_{q}^{*})^{\tau-1}$ and $\textbf{v}=(v_{1}, v_{2}, \ldots, v_{\tau-1}) \in (\mathbb{F}_{q}^{*})^{\tau-1},$ respectively. Since $\tau < q+1$, we may choose an element $\alpha \in \mathbb{F}_{q}^{*}\backslash \{\frac{u_{2}}{v_{2}},\ldots, \frac{u_{\tau-1}}{v_{\tau-1}}\}$. Let $\textbf{x}=(0, \textbf{u})-\alpha(\textbf{v},0)$, then $\textbf{x} \in (\mathbb{F}_{q}^{*})^{\tau}$ and
\[M\textbf{x}^{T}=\left(
\begin{array}{c}
0 \\
M_{1}\textbf{u}^{T} \\
\end{array}
\right)
+\left(
\begin{array}{c}
M_{\tau}\textbf{v}^{T} \\
0 \\
\end{array}
\right)=\textbf{0}^{T}.\]
The lemma is proved.
\end{proof}
\begin{lemma}\label{lem6.2}
Suppose $2s \mid (q+1)$ and $m=\frac{q^{2}-1}{2s}$. Let $\omega$ be a primitive element of $\mathbb{F}_{q^{2}}$ and $r=2t+1$, where $1 \leq t \leq s-1$.
Then the following system of equations
\begin{equation}\label{eq8}
\sum_{\ell=1}^{r}\omega^{\ell (\mu m-q-1)}u_{\ell} =0,\textnormal{ for }\mu=s-t+1, \ldots, s+t-1,
\end{equation}
has a solution $\textbf{u}\triangleq(u_{1}, u_{2}, \ldots, u_{r}) \in (\mathbb{F}_{q}^{*})^{r}.$
\end{lemma}
\begin{proof}
Denote $\alpha=\omega^{m}$, $\eta=\omega^{-q-1}$ and $a=s-t+1$. Then $\alpha^{2s}=1$ and $\eta \in \mathbb{F}_{q}$. Let
\[M=\left(
\begin{array}{cccc}
\alpha^{a}\eta & \alpha^{2a}\eta^{2} & \cdots & \alpha^{ra}\eta^{r} \\
\alpha^{a+1}\eta & \alpha^{2(a+1)}\eta^{2} & \cdots & \alpha^{r(a+1)}\eta^{r} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha^{a+r-3}\eta & \alpha^{2(a+r-3)}\eta^{2} & \cdots & \alpha^{r(a+r-3)}\eta^{r}\\
\end{array}
\right)\]
be an $(r-2) \times r$ matrix over $\mathbb{F}_{q^{2}}$.
Then the system (\ref{eq8}) of equations is equivalent to the following equation
\begin{equation}\label{eq9}
M\textbf{u}^{T}=\textbf{0}^{T}.
\end{equation}
Since $2s \mid (q+1)$, we have
\begin{eqnarray*}
(\alpha^{i(a+j)}\eta^{i})^{q} &=& \alpha^{qi(a+j)}\eta^{qi}=\alpha^{-i(s-t+1+j)}\eta^{i} \\
&=& \alpha^{i(s+t-1-j)}\eta^{i}=\alpha^{i(a+r-3-j)}\eta^{i},
\end{eqnarray*}
for any $1 \leq i \leq r$ and $0 \leq j \leq r-3$.
Thus $M$ is row equivalent to $M^{(q)}$. Let $M_{ij}$ $(1 \leq i \neq j \leq r)$ be the $(r-2)\times(r-2)$ matrix obtained from $M$ by deleting the $i$-th and $j$-th columns. It is not hard to verify that $\det(M_{ij}) \neq 0$. Thus by Lemma \ref{lem6.1}, Eq. (\ref{eq9}) has a solution $\textbf{u} \in (\mathbb{F}_{q}^{*})^{r}$.
The lemma is proved.
\end{proof}
Set $m=\frac{q^{2}-1}{2s}$. Let $\omega$ be a primitive element of $\mathbb{F}_{q^{2}}$ and $\theta=\omega^{2s}$ be a primitive $m$-th root of unity. Put
\[\textbf{a}=(\omega, \omega\theta, \ldots, \omega\theta^{m-1}, \ldots , \omega^{r}, \omega^{r}\theta, \ldots, \omega^{r}\theta^{m-1}) \in \mathbb{F}_{q^{2}}^{n}.\]
Set
\[\textbf{v}=( v_{1},v_{1}\theta, \ldots,v_{1}\theta^{m-1},\ldots, v_{r}, v_{r}\theta,\ldots,v_{r}\theta^{m-1}),\]
where $v_{1}, \ldots, v_{r} \in \mathbb{F}_{q^{2}}^{*}$.
Then
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle
=\sum_{\ell=1}^{r}\omega^{\ell(qi+j)}v_{\ell}^{q+1}\sum_{\nu=0}^{m-1}\theta^{\nu(qi+j+q+1)}.\]
Thus
\[\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=0, \textnormal{ when } m \nmid (qi+j+q+1),\]
and
\begin{equation}\label{eq10}
\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle=m\sum_{\ell=1}^{r}\omega^{\ell(qi+j)}v_{\ell}^{q+1}, \textnormal{ when }m \mid (qi+j+q+1).
\end{equation}
Now, we give our last construction as follows.
\begin{theorem}\label{thm6.3}
Let $q$ be a prime power. Suppose $2s \mid (q+1)$ and $r=2t+1$, where $1 \leq t \leq s-1$. Put $n=r\frac{q^{2}-1}{2s}$.
Then for any $1 \leq k \leq (s+t)\frac{q+1}{2s}-2$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{theorem}
\begin{proof}
Keep the notations as above. By Lemma \ref{lem6.2}, there exist $u_{1}, \ldots, u_{r} \in \mathbb{F}_{q}^{*}$ such that
\[\sum_{\ell=1}^{r}\omega^{\ell (\mu m-q-1)}u_{\ell} =0,\]
for all $s-t+1 \leq \mu \leq s+t-1$.
For $1 \leq i \leq r$, we let $v_{i} \in \mathbb{F}_{q^{2}}^{*}$ such that $v_{i}^{q+1}=u_{i}$. Note that $qi+j+q+1=q(i+1)+(j+1)$. We can prove, similarly to Lemma \ref{lem2.4} (ii), that $m \mid (qi+j+q+1)$ if and only if $qi+j+q+1 \in \{(s-t+1)m, (s-t+2)m, \ldots, (s+t-1)m \}$. Hence from Eq. (\ref{eq10}), when $qi+j+q+1=\mu m$ ($s-t+1 \leq \mu \leq s+t-1$), we have
\begin{eqnarray*}
\langle\textbf{a}^{qi+j}, \textbf{v}^{q+1}\rangle &=& m\sum_{\ell=1}^{r}\omega^{\ell(\mu m-q-1)}v_{\ell}^{q+1} \\
&=& m\sum_{\ell=1}^{r}\omega^{\ell(\mu m-q-1)}u_{\ell}=0.
\end{eqnarray*}
Thus
\[\langle \textbf{a}^{qi+j}, \textbf{v}^{q+1} \rangle =0,\textnormal{ for all } 0 \leq i, j \leq k-1.\]
By Lemma \ref{lem2.1}, $GRS_{k}(\textbf{a}, \textbf{v})$ is a Hermitian self-orthogonal MDS code with parameters $[n, k, n-k+1]$. Theorem \ref{thm6.3} then follows from Corollary \ref{cor1.3}.
\end{proof}
According to Theorem \ref{thm6.3} and Corollary \ref{cor5.6}, we obtain the following corollary.
\begin{corollary}\label{cor6.4}
Let $q$ be a prime power. Suppose $2s \mid (q+1)$ and $3 \leq r \leq 2s$. Put $n=r\frac{q^{2}-1}{2s}$.
Then for any $1 \leq k \leq (s+\lceil\frac{r-1}{2}\rceil)\frac{q+1}{2s}-2$, there exists an $[[n, n-2k, k+1]]_{q}$-quantum MDS code.
\end{corollary}
\begin{remark}\label{rem6.5}
Zhang and Ge \cite[Theorem 4.2]{ZG17} (see also Corollary \ref{cor5.5}) constructed a family of $q$-ary quantum MDS codes with parameters $[[r\frac{(q^{2}-1)}{2s},r\frac{(q^{2}-1)}{2s}-2k, k+1]]$, $k \leq (s+1)\frac{q+1}{2s}-2$, where $2s \mid (q+1)$. If $r \geq 4$, then $(s+\lceil\frac{r-1}{2}\rceil)\frac{q+1}{2s}-1 > (s+1)\frac{q+1}{2s}-1$, and hence the quantum codes of Corollary \ref{cor6.4} have larger minimum distance.
\end{remark}
In the following example, a new family of quantum MDS codes is given from Theorem \ref{thm6.3}.
\begin{example}
Let $(r, s)=(7,4)$ in Theorem \ref{thm6.3}. Then, when $8 \mid (q+1)$, there exists a $[[\frac{7}{8}(q^{2}-1), \frac{7}{8}(q^{2}-1)-2k, k+1]]_{q}$ quantum MDS code for any $1 \leq k \leq \frac{7}{8}(q+1)-2$.
\end{example}
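For instance, taking $q=7$ (the smallest prime power with $8 \mid (q+1)$), the example yields $[[42, 42-2k, k+1]]_{7}$ quantum MDS codes for any $1 \leq k \leq 5$, i.e., with minimum distance up to $6>\frac{q}{2}+1$.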
\section{Conclusion}
In this paper, we have constructed six new classes of $q$-ary quantum MDS codes by using Hermitian self-orthogonal GRS codes. Most of our quantum MDS codes have minimum distance larger than $\frac{q}{2}+1$. Some quantum MDS codes presented in \cite{ZG17} and \cite{SYZ17} can be easily derived from ours via the propagation rule. We also generalize and improve some results in \cite{ZG17}, \cite{SYZ17}, and \cite{JKW17}.
\section*{Acknowledgment}
This work was supported in part by the 973 Program of
China under Grant 2013CB834204, in part by the National Natural Science
Foundation of China under Grant 61571243 and Grant 61771273, in part by
the Nankai Zhide Foundation, and in part by the Fundamental Research Funds
for the Central Universities of China.
\section{Introduction}
Graphene, a single-atom-thick honeycomb sheet of carbon atoms\cite{sc04Novoselov,nature05Novoselov}, possesses unusual material characteristics as compared to other 2D electron systems. Owing to their high mobilities, electrons in Graphene are orders of magnitude faster than in silicon; Graphene conducts heat much more efficiently than diamond and conducts current orders of magnitude better than copper. Among other unique properties, Graphene is transparent and impermeable to most gases and liquids, including helium\cite{nature20Sun}. It is harder than diamond and, at the same time, more elastic than carbon fiber.
Unique electronic properties of Graphene\cite{nature05Kim,prl05Gusynin,iop06Wunsch,prl06Altland,prl06Cheianov,prl06Aleiner,beenakker3,beenakker1,beenakker2,prl07Kim,prb07Mariani,prl08Tse,prb08Bena,prb09Bena,rmp09Castroneto,nature09Kim,prb10Bena,pr10Vozmediano,prb10Virosztek,prl11Stauber,rmp11goerbig,prb11Vaishnav,rmp11Dassarma,rmp12Kotov,IOP12Gordon,np12Nandkishore,prb13Bena,prb13Lawlor,pr16Amorim,prb18Rusin,nature19Dutreix,arxiv20Agarwal,arxiv20Sedrakyan,prb19maiti,nature20Sun} stem from the fact that it is single-atom-thick.
It supports carriers with Dirac-like dispersion.
When doping is low, the Fermi level is located in the close vicinity of the $K$ and $K'$ points of the Brillouin zone. Quasiparticles in Graphene possess chiral properties related to the two-sublattice structure of the honeycomb lattice: the lattice unit cell contains two sites (atoms), which leads to a ``pseudospin" degree of freedom.
One of the most prominent effects in regular 2D electron systems
is the interaction-induced zero-bias anomaly in the tunnel density of states (DOS). For small
impurity concentration, this anomaly can be traced to the fact\cite{prb97Glazman}
that impurities are dressed with
Friedel oscillations of the electron density\cite{52Friedel}, which fall off as $1/r^2$ with distance, $r$,
from the impurity. Modification of the wave functions due to scattering of electrons from the
dressed impurities gives rise to a singular correction to the self-energy. Upon the advent
of Graphene, calculations similar to those for the 2D gas\cite{prl06Cheianov,iop06Wunsch,prb07Mariani} indicated
that the zero-bias anomaly in Graphene is absent. The
underlying reason for this absence is that, with the underlying matrix Hamiltonian being of the spin-orbit type, the backscattering of electrons is forbidden\cite{prb99Raikh}. As a result, the Friedel oscillations in Graphene fall off
faster than in the 2D gas, as $1/r^3$.
Since Refs. \onlinecite{prl06Cheianov,prb07Mariani,iop06Wunsch}, the Friedel oscillations in Graphene have been studied
in the tiniest detail, both analytically, within the continuum approximation, and numerically, within the tight-binding approximation. The results are summarized in the review~\cite{physique16Bena}.
Application of a magnetic field turns the spectrum of Graphene into a ladder of non-equidistant Landau
levels. The corresponding perturbation of the electron density around an impurity can be cast into a
sum over these levels\cite{physique16Bena,prb18Rusin}. Still, at elevated temperatures, the
discreteness of the Landau levels does not manifest itself, and the behavior of the Friedel
oscillations with distance becomes quite a nontrivial issue.
A natural expectation is that a weak, non-quantizing magnetic field modifies the Friedel
oscillations in Graphene in the same way as in the 2D electron gas\cite{prl07Sedrakyan}.
By causing the curving of the semiclassical trajectories, the field gives rise to the
position-dependent magnetic phase, and, thus, breaks the periodicity of the oscillations.
Still, the decay law of the oscillations remains the same as in a zero field.
In fact, such intuitive reasoning, when applied to Graphene, is wrong. It is not only the phase but also the {\em magnitude} of the Friedel oscillations that exhibits a crucial dependence on the magnetic field.
In the present paper we consider this question systematically
and find the field-dependent form of the Friedel oscillations.
We shed light on the nature of the magnetic field modification.
Our key finding is that the potential oscillates anomalously.
Namely, it
{\em does not fall off} with distance, $r$, in a
parametrically large interval.
This omnipresent effect plays a central role in a variety
of quantum many-body contexts in Graphene. The polarization operator (PO) is an essential quantity for the evaluation of interaction effects using Feynman diagrams. The non-decaying part of the PO dramatically changes the power counting in the integrands of the Feynman diagrams. With the new power counting, the magnetic effect may give rise to a new zero-bias anomaly in the DOS of Graphene, and may modify the quasi-particle lifetime and the thermodynamics of Dirac electrons in the Fermi-liquid regime. It may also induce a new temperature dependence of the dc/ac conductivities\cite{unpublished}.
The obtained non-decaying Friedel oscillations open an avenue for controlled studies of magnetotransport. They also manifest themselves in field-related thermodynamic properties of Graphene and materials with a pseudo magnetic field such as randomly strained Graphene, stacked and twisted Dirac materials, and the properties of the wormholes in them\cite{npb10Herrero,sym20Capozziello,npb20Garcia}.
The Hamiltonian that incorporates the $B$ field in Landau gauge reads $H=H_{B}+\hat{u} V_{\text{imp}}(r)$,
\begin{eqnarray}
\label{Hamiltonian}
\hat{H}_{B}= v_F \left[ (p_x-eBy) \hat{\Sigma}_x+p_y \hat{\Sigma}_y \right],
\end{eqnarray}
where $V_{\text{imp}}(r)$ is the short-ranged impurity potential,
$v_F$ is the Fermi velocity and $\hat{\Sigma}_{x,y}=\hat{\sigma}_{x,y} \otimes \hat{\tau}_z$. One can define $\hat{\Sigma}_{z}=\hat{\sigma}_{z} \otimes \hat{\tau}_0$, together with $\hat{\Sigma}_{x,y}$, to form a su(2)-algebra. The Pauli matrices $\hat\sigma_{x,y,z}$ act in the space of $A$ and $B$ sublattices of the honeycomb lattice, $\hat{\tau}_z$ is the Pauli matrix distinguishing between two Dirac points ( $K$ and $K'$ ) of the Graphene dispersion relation and $\hat{\tau}_0$ is the identity matrix. We consider the simplest case of
the diagonal disorder $\hat{u}=u\hat{I}$, where $u$ is a scalar.
The uniform field breaks the chiral symmetry near each Dirac point individually, since $\mathbf{p}\cdot \mathbf{\hat\Sigma}$, with $\hat\Sigma=(\hat\Sigma_x, \hat\Sigma_y)$,
does not commute with the Hamiltonian.
It is this non-commutativity,
specific for Graphene and other Dirac materials\cite{aip14Balatsky}, that causes the
observable modification of the Friedel oscillations in a weak magnetic field, $B$. Quantitatively,
the criterion of a weak field is that
the magnetic length, $l=(\frac{\hbar c}{eB})^{1/2}$, is much larger than the de Broglie wavelength, $k_F^{-1}$. Below we show that a weak field modifies the screened Coulomb potential\cite{prb93Yue,prb01Zala,prb05Adamov} $V(r)$ to
\begin{figure}
\includegraphics[scale=0.5]{fo.pdf}
\centering
\caption{(Color online) Electrostatic potential, $V_H(r)$, plotted vs. the dimensionless distance $k_Fr$ from the impurity [big (red) circle] in the presence of a weak magnetic field, $B$, in the range $1\ll k_F r\ll k^2_F l^2$. The figure is obtained from Eq.~(\ref{2}) using a typical value of $p_0/k_F=0.1$. The potential $V_H(r)$ is measured in units of $W_0= {k^3_F g V(2k_F)}/{2\pi^2 v_F}$. The amplitude of the oscillations first decays as $1/r^3$ and then converges to a constant $\propto B^2$. The inset illustrates the classical trajectory of 2D electrons between $0$ and $ \mathbf{r}$ in the presence of a weak magnetic field. $L$ is the length of the arc, $r$ is simply $|\mathbf{r}|$, and $\theta(r)$, the angle of the arc, is approximately given by $r/k_F l^2$. }
\label{FO}
\end{figure}
\begin{eqnarray}
\label{2}
V_H(r)=\frac{g{V}( 2k_F) }{2\pi^2v_F r^2}
\Biggl[\frac{1}{r}\cos\Bigl(2k_Fr-\frac{p^3_0 r^3}{12}\Bigr)
\nonumber\\
+\frac{r^2}{2k_F l^4}\sin\Bigl(2k_Fr-\frac{p^3_0 r^3}{12}\Bigr)
\Biggr].
\end{eqnarray}
Here $V(2k_F)$ is the $2k_F$ component of the interaction, while the impurity potential is treated in the Born approximation with $g=u\int d^2r V_{\text{imp}}(r)$. There are two competing terms. One can see that when the magnetic phase satisfies $p^3_0 r^3 \ll 1$, with $p_0^{-1}=(k_Fl)^{4/3}/k_F$, the potential decays as a power law, $\sim 1/r^3$. When $1 \ll p^3_0 r^3 \ll k_F r$, the potential oscillates anomalously, with a {\em constant} amplitude. This persistent effect comes from the diagonal part $\hat{u}=u\hat{I}$ of the impurity potential, while other non-magnetic impurity potentials do not contribute to $V_H(r)$ in the leading order in impurity scattering (for details, see Appendices~\ref{rs} and \ref{disorder}).
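The interplay of the two terms is easy to check numerically. The following minimal numpy sketch (an illustration with our own variable names, written in units where $k_F=1$, so that $(k_Fl^4)^{-1}=p_0^3$) evaluates $V_H(r)/W_0$ from Eq.~(\ref{2}) and locates the crossover between the decaying and persistent regimes:
\begin{verbatim}
import numpy as np

p0 = 0.1                                  # assumed p0/kF, as in Fig. 1
r = np.linspace(2.0, 30.0, 2000)          # kF*r, inside 1 << kF*r << (kF*l)^2
phase = 2.0 * r - p0**3 * r**3 / 12.0
V = np.cos(phase) / r**3 + 0.5 * p0**3 * np.sin(phase)   # V_H / W_0

amp_decay = 1.0 / r**3                    # amplitude of the 1/r^3 term
amp_const = 0.5 * p0**3                   # field-induced constant amplitude
r_cross = (2.0 / p0**3) ** (1.0 / 3.0)    # the two amplitudes match here
print(r_cross)                            # ~ 12.6 for p0 = 0.1
\end{verbatim}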
The paper is organized as follows. In Sec.~\ref{sp}, we present a qualitative derivation of the persistent Friedel oscillations, which follows from the semi-classical magnetic phase accumulated by the electron propagator. In Sec.~\ref{PO}, we present a thorough calculation of the polarization operator in the presence of a weak magnetic field and derive the Friedel oscillations of the electron density. The implications for interaction effects are discussed in Sec.~\ref{in}. Concluding remarks are given in Sec.~\ref{cm}.
\section{ Qualitative discussion} \label{sp}
In this section, we give a qualitative argument for the new features of the Friedel oscillations in Eq.~(\ref{2}). Both modifications, namely the phase $p_0^3 r^3$ of the oscillations and the persistent part, can be understood semiclassically.
The electron propagator, $G_{s,s'}(0,\mathbf{r})$, where $s=\pm$ labels the $A/B$ sublattices, accumulates a phase, $k_{s,s'}L$, when electrons propagate along the semi-classical arc. Here $L$ is the length of the arc shown in the inset of Fig.~\ref{FO} and $k_{s,s'}$ is an effective momentum.
The diagonal component of the Dirac propagator, $G_{s,s}(0,\mathbf{r})$, can be understood as a propagator of the 2D electron gas with an effective Fermi energy $E^s_F=E_F[1-s(2k^2_Fl^2)^{-1}]$ and an effective cyclotron frequency $\omega_0=v_F (k_Fl^2)^{-1}$.
This yields an effective momentum $k_{s,s}=k_F[1-s(2k^2_Fl^2)^{-1}]$ for the diagonal propagators, while for $s\neq s'$, $k_{s,s'}=k_F$. For details on the derivation of the effective momenta, see Appendix~\ref{effective}.
The phase $p_0^3 r^3$ in the oscillations is due to the curving of the path. Semiclassically, the propagator acquires a magnetic phase $k_F (L-r)$ because of the curving of the trajectory. Since $L= k_Fl^2 \theta$, $r=2k_Fl^2\sin(\theta/2)$ and $\theta(r)\simeq r/k_F l^2$, the magnetic phase $k_F (L-r)$ becomes equal to $p_0^3 r^3/24$. The PO involves a product of two propagators, and thus the magnetic phase in the PO is doubled. This is exactly the magnetic phase of the Friedel oscillations.
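For completeness, the small-angle expansion behind this estimate, using only the relations above, reads
\[
L-r=k_Fl^2\Bigl[\theta-2\sin\frac{\theta}{2}\Bigr]\approx k_Fl^2 \frac{\theta^3}{24}=\frac{r^3}{24 k_F^2l^4},\qquad
k_F(L-r)=\frac{r^3}{24 k_Fl^4}=\frac{p_0^3 r^3}{24},
\]
where we used $\theta\simeq r/k_Fl^2$ and $p_0^3=(k_Fl^4)^{-1}$; doubling this phase gives the $p_0^3r^3/12$ appearing in Eq.~(\ref{2}).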
The persistent part of the Friedel oscillations originates from the deviation of $k_{s,s'}$ from $k_F$. The effective momentum implies spin-dependent magnetic phases, $\sim(k_{s,s}-k_F)r=-sr (2k_F l^2)^{-1}$, of the diagonal propagators. Although the phase is spin-dependent, it does not depend on the valleys. The phase can then be expressed compactly as $- \theta(r)\hat\Sigma_z /2$. The PO involves a trace of two propagators. Using the fact that the Pauli matrices are traceless and $\textbf{tr}\hat\Sigma_z^2\neq 0$, the leading magnetic contribution to the PO is quadratic in $\theta(r)$, namely $\theta(r)^2 \propto r^2/k_F^2 l^4$. This is the new amplitude of the second term in Eq.~(\ref{2}).
Importantly, the anomalous effect in Eq.~(\ref{2}) persists even at high temperatures, $T\sim T_0 \equiv v_Fp_0$, which is much higher than the cyclotron energy. The temperature scale, $T_0$ can be derived qualitatively. Quasi-classically, the electron propagator accumulates a dynamical phase $k_F L$ along the arc (see the inset in Fig.~\ref{FO}). The condition $k_F (L-r)\sim 1$ can be cast in the form $r\sim p_0^{-1}$, since $k_F (L-r)\propto (p_0 r)^3$. Then, the spatial scale $p_0^{-1}$ translates into the energy scale $v_F p_0$. This, in turn, sets the temperature scale, $T_0$.
\section{The Polarization operator} \label{PO}
In this section, we show the emergence of persistent Friedel oscillations by calculating the PO rigorously in the momentum space. We start from the summation over Landau levels for PO and then develop a low energy effective theory around the Fermi level. We show how the spin-dependent magnetic phase of the gauge-invariant electron propagator manifests itself in the matrix elements of PO. Using the obtained form of the PO, we derive the Friedel oscillations and observe their persistent behavior. Finally, we discuss the smearing of anomalies in the PO under a weak magnetic field.
\subsection{Summation over Landau levels}
We start from a general expression for the PO in the momentum space
\begin{equation}
\label{Pi}
\Pi(k,\omega)=\sum^\infty_{n,n'=0}\sum_{s,s'=\pm}
\frac{n_F(s'\omega_{n'})-n_F(s\omega_n)}{\omega-(s\omega_n-s'\omega_{n'})}\Big\vert M_{s,s',k}^{n,n'}\Big\vert^2,
\end{equation}
where $n_F(\omega)$ is the Fermi-Dirac distribution,
while the frequencies, $\omega_n$, are given by
$\omega_n=\left(2n\right)^{1/2}v_F/l$. The quantities $M_{s,s',k}^{n,n'}$
are the matrix elements of $\exp\left(i{\bf k r}\right)$ between the
states $\langle s,n\vert$ and $\vert s',n'\rangle$. Since the wave functions
are the vectors consisting
of the oscillator states $n$ and $n+1$, the square of the matrix element
can be expressed via
the generalized Laguerre polynomials, $L^n_m$, as follows
\begin{eqnarray}
\label{M}
\Big\vert M_{s,s',k}^{n,n'}\Big\vert^2=(-1)^{n'-n}\frac{e^{-x} }{ \pi l^2}
\Bigl[L^{n-n'}_{n'-1} (x) L^{n'-n}_{n-1}(x)\nonumber\\
+ L^{n-n'}_{n'} (x) L^{n'-n}_{n}(x) +2ss'\left(\frac{n}{n'}\right)^{1/2} L^{n-n'}_{n'-1} (x) L^{n'-n}_{n}(x) \Bigr].
\nonumber\\
\end{eqnarray}
where $x=k^2l^2/2$.
The summation in Eq. (\ref{Pi}) is performed over two valleys and two spins.
However, the main contribution comes from the states near the Fermi level, $E_F$, which we
assume to be positive. This allows us to set $s=s'=1$ in Eq. (\ref{M}).
The condition that the magnetic field is weak can be cast in the form, $N_F\gg 1$, where
$N_F= k^2_Fl^2/2 $ is the number of Landau levels with energies between $\epsilon=0$
and $\epsilon=E_F$.
To perform the summation over $n$ and $n'$ it is convenient to use the following
integral representation of the Laguerre polynomials
\begin{equation}
\label{representation}
\!L_m^n(x)\!=\!\frac{1}{2\pi} \int\limits_0^{2\pi} \frac{d\theta}{(1-e^{i\theta})^{n+1}} \exp \left\{\frac{xe^{i\theta}}{e^{i\theta}-1}-im\theta\right\}.
\end{equation}
In the vicinity of the Kohn anomaly, $k\approx 2k_F$, we have $x\gg 1$.
Under this condition, the major contribution to the integral
Eq. (\ref{representation}) comes from the vicinity of $\theta =\pi$.
Substituting $\theta=\pi+\psi$ into the integrand and expanding with respect
to $\psi$ yields
the following integral representation for
the square of the matrix element (for details, see Appendix~\ref{matrix})
\begin{widetext}
\begin{eqnarray}
\label{simp_M}
&&\Big\vert M_{s,s,k}^{n,n'}\Big\vert^2=\frac{1}{4\pi^3 l^2}\int \frac{d\psi d\psi'}{4}
\exp{\Big\{\frac{ix}{48}\left(\psi^3+\psi'^3 \right)\Big\}}
\nonumber \Biggl[ \sum_{\nu=\pm}
\exp \Big\{i\left( \frac{x}{4}- \frac{n+n'+\nu}{2} \right)(\psi+\psi') \Big\}\\
&-2&\exp \Big\{i\left( \frac{x}{4}- \frac{n+n'}{2}\right)(\psi+\psi') +i\frac{\psi-\psi'}{2} \Big\} \Biggr].
\end{eqnarray}
\end{widetext}
Here the spin-dependent magnetic phase, a signature of chiral symmetry breaking, manifests itself in the matrix element as small, but non-negligible, phases proportional to $\nu(\psi+\psi')$, with $\nu=\pm $. The negative sign in the second line is the result of the Berry phase $\pi$, which is specific to Dirac electrons. These two features are responsible for the main result of the paper.
Since the main contribution to the sum in Eq. (\ref{Pi}) comes from $n$ and $n'$ close to $N_F$,
it is convenient to introduce the new variables
$m=N_F-n$, $m'=-N_F+n'$. Then the summation in Eq. (\ref{Pi})
can be performed with the help of the following identity
\begin{eqnarray}
\label{identity}
&& \sum_{m,m'=-\infty}^{+\infty} \frac{ n_F(\epsilon_F+\frac{\sqrt{2}v_F}{2l}m' )-n_F( \epsilon_F-\frac{\sqrt{2}v_F}{2l}m)}{m'+m } \nonumber\\
&& \times \cos \left[ (m'-m) \alpha+\beta \right]
=-\frac{ \pi^2 T \cos \beta }{\omega_0 \sinh (2\pi |\alpha|T/\omega_0)},
\end{eqnarray}
where $\omega_0= v_F (k_F l^2)^{-1}$ is the effective cyclotron frequency and $\alpha$, $\beta$ are real numbers. When applying the above identity to the summation in Eq.~(\ref{Pi}), we set
$\alpha=y$ with $y\equiv 2^{-1}\left(\psi+\psi'\right)$ and $\beta=0$. As we will see in the next section, the integration over $y$ defines a characteristic scale, $y\sim (k_F l)^{-2/3} $. This scale for $y$ implies that the temperature damping term $A(T)\equiv T \left[\omega_0 \sinh (2\pi y T/\omega_0)\right]^{-1}$ is essentially temperature independent at $T\ll T_0$, namely $ A(T)\approx (2\pi)^{-1}(k_F l)^{2/3}$. At $T\gg T_0$, the damping factor is important, as it becomes exponential: $ A(T) \approx 2T \omega^{-1}_0 \exp{\left(- \pi T/T_0\right)}$. In the following, we work in the low-temperature limit, $T\ll T_0$. The effect of persistent oscillations survives up to $T\sim T_0$, while at higher temperatures the Friedel oscillations are washed out.
\subsection{The form of the PO} Equipped with Eqs.~(\ref{simp_M}) and (\ref{identity}), one can write the static PO, $\Pi( k)\equiv \Pi(k,0)$, as a single integral with respect to the variable $y$ (for details, see Appendix~\ref{integral}). To separate the two different effects in $\Pi( k)$, we present $\Pi( k)$ as a sum of two terms, $\Pi_{1}( k)+\Pi_{2}( k)$, where $\Pi_{1}(k)$ and $\Pi_{2}( k)$ are given by
\begin{eqnarray}
\label{Pi_one}
&&\Pi_{1}(k)=-\frac{1 }{ 4 \pi^{3/2}v_F l } \int\limits_a^{\infty}\frac{dy}{y^{3/2}} \\
&\times&\Biggl\{ \sum_{\nu=\pm 1}\cos \Biggl[ \left( k_F\delta k l^2-\nu\right) y+\frac{k^2_Fl^2y^3}{12} +\frac{\pi}{4}\Biggr] \nonumber\\
&-& 2\cos \Biggl[ k_F \delta k l^2 y+\frac{k^2_Fl^2y^3}{12} +\frac{\pi}{4} \nonumber\Biggr] \Biggr\},
\end{eqnarray}
and
\begin{eqnarray}
\label{Pi_second}
&&\Pi_{2}(k)=\frac{1 }{ 2 v_F l\pi^{3/2} } \int\limits_a^{\infty}\frac{dy}{y^{3/2}} \Biggl(\frac{1}{k^2_Fl^2y}\Biggr)\nonumber \\
&\times& \sin \Bigl( k_F\delta k l^2 y+\frac{k^2_Fl^2y^3}{12}+\frac{\pi}{4}\Bigr) .
\end{eqnarray}
Here $\delta k=k-2k_F$ is the momentum measured from $2k_F$. The finite low-$y$ cutoff, $a$, of the order of the lattice spacing,
does not affect the form of the Friedel oscillations. We will see that $\Pi_1$ and $\Pi_2$ are responsible
for the two distinct contributions to the Friedel oscillations
in Eq.~(\ref{2}).
We now derive the PO in real space. We start with the contribution Eq.~(\ref{Pi_one}).
Transformation to the real space
is accomplished by the following radial integral
\begin{eqnarray}
\Pi(r)= \int_0^{\infty} kdk (2\pi )^{-1} J_0 (k r) \Pi(k)\nonumber \\ \simeq k_F\int_{-\infty}^{\infty} d\delta k (2\pi )^{-1} J_0 \big((2k_F+\delta k) r\big) \Pi(2k_F+\delta k),
\end{eqnarray}
where $J_0(x)$ is the zeroth-order Bessel function and we have used the fact that $\delta k \ll k_F$. In the domain $k_Fr\gg 1$, we can replace the Bessel function by its large-$x$ asymptote $J_0(x)\approx \left(2/\pi x \right)^{1/2}\cos \left(x-\frac{\pi}{4}\right)$. The integration over $k$ sets $y=r (k_Fl^2)^{-1}$. Then the summation over $\nu$ in Eq.~(\ref{Pi_one}) yields
\begin{eqnarray}
\label{simplified}
\Pi_{1}(r)&=&- \frac{k_F }{ 2 v_F \pi^{2} r^2 } \sin \Bigl( 2k_Fr -\frac{p_0^3r^3}{12}\Bigr)\nonumber\\
&\times& \Biggl[\cos \Bigl( \frac{r}{k_Fl^2}\Bigr) -1\Biggr].
\end{eqnarray}
The effect of the weak magnetic field is not negligible if the magnetic phase $p_0^3r^3\sim 1$. From here, the characteristic scale for $y$ is $ (p_0k_Fl^2)^{-1}= (k_F l)^{-2/3}$.
Since we consider distances $ r \ll k_F l^2$, i.e., much smaller than the
Larmor radius, the magnitude of the $2k_Fr$ oscillations given by the $\sin$-function in the equation above can be further simplified: $k_F (2 v_F \pi^{2} r^2 )^{-1} (1-\cos (r/k_Fl^2) )\approx (4 v_F \pi^{2} k_F l^4 )^{-1} $.
The result for $\Pi_1$ describes the contribution to the oscillations of the electron density
which do not decay with distance in the domain $k_F^{-1}\ll r \ll k_Fl^2$. It reproduces the second term in Eq. (\ref{2}).
Evaluation of the contribution $\Pi_{2}(r)$
to the PO defined by Eq. (\ref{Pi_second}) involves the same
steps as evaluation of $\Pi_{1}(r)$. The result reads
\begin{eqnarray}
\label{14}
\Pi_{2}(r)&\approx&\frac{1 }{2\pi^2v_Fr^3} {\cos\Bigl(2k_Fr-\frac{p^3_0 r^3}{12}\Bigr)}.
\end{eqnarray}
It reproduces the first term in Eq. (\ref{2}). The decay $1/r^3$ is specific for Graphene,
while the phase is the same as in 2D electron gas.
The real-space static PO, $\Pi(r)$, determines the Hartree potential $V_H(r)$ via the modulation of the electron density $\delta n(\mathbf{r})$ around the impurity. Within the Born approximation, $\delta n(\mathbf{r})=g\Pi(r)$. Since the density modulation originates from the backscattering of fermions, $\delta n(\mathbf{r})$ determines the Hartree potential as $V_H(r)= V(2k_F) \delta n(\mathbf{r})$ (for the derivation, see Appendix~\ref{Hartree}). As such, the Hartree potential $V_H(r)$ equals $gV(2k_F)\Pi(r)$.
\begin{figure}
\centering
\includegraphics[scale=0.5]{merge.pdf}
\caption{(Color online)
The asymptote of the derivative $\Pi'_1(k)$ is plotted versus the dimensionless $x=\delta k/p_0$. The red curve depicts $-{2v_F}{\sqrt{k_F/p_0}} \Pi'_{\text{1}}(k)/C=[Ai(x)Bi(x)]''/C$ versus $x$. Here $C$ is a positive constant, approximately equal to $0.12$. The blue curve is
$1/(\sqrt{|x|})^5$, converging to $\Pi'_1$ when $x\gg 1$. The inset depicts $\Pi_2''(k)$. The black curve is $-{v_F}\sqrt{p_0k_F} \Pi''_{2}(k)=Ai(x)Bi(x)$, demonstrating the smearing of the anomaly.
The green curve represents $D\Theta(x)/\sqrt{|x|}$, namely the Kohn anomaly of the PO in zero field. $D$ is a positive constant, approximately equal to $0.16$.}
\label{plotpi1}
\end{figure}
\subsection{ Smeared anomalies} \label{sa}
The spin-dependent magnetic phase, $\theta(r)\hat\Sigma_z /2$, leads to a new term, $\Pi_1(k)$, in the PO, while the curving of the path smears the existing anomaly in $\Pi_2(k)$. We start from the momentum-space representation of the PO given in terms of the product of the Airy functions\cite{1953Bateman}.
Then we differentiate Eq. (\ref{Pi_second}) twice with respect to $\delta k$ and obtain
\begin{eqnarray} \Pi''_{2}(k)=- ( v_F \sqrt{p_0 k_F})^{-1}F ( {\delta k}/{p_0} ), \end{eqnarray} where $F(z)\equiv\text{Ai}(z)\text{Bi}(z)$. This represents the smearing of the Kohn anomaly of the PO by the weak field. The inset of Fig.~\ref{plotpi1} illustrates how the anomaly gets smeared.
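A quick visual check of this smearing (a minimal scipy sketch, illustration only) is to plot $F(z)=\text{Ai}(z)\text{Bi}(z)$ against the zero-field anomaly $D\Theta(z)/\sqrt{|z|}$ from the inset of Fig.~\ref{plotpi1}:
\begin{verbatim}
import numpy as np
from scipy.special import airy
import matplotlib.pyplot as plt

z = np.linspace(-8.0, 4.0, 600)        # z = delta_k / p0
Ai, _, Bi, _ = airy(z)
F = Ai * Bi                            # equals -v_F*sqrt(p0*kF)*Pi_2''(k)
D = 0.16                               # constant from the figure caption
step = np.where(z > 0, D / np.sqrt(np.abs(z) + 1e-12), 0.0)  # zero field
plt.plot(z, F, label="Ai(z)Bi(z)")
plt.plot(z, step, "--", label="zero-field anomaly")
plt.legend(); plt.show()
\end{verbatim}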
Importantly, $\Pi_1'(k)$ can also be obtained
as (for details, see Appendix~\ref{momentum})
\begin{eqnarray} \Pi_1'( k)= (2 v_F )^{-1}\sqrt{p_0/k_F} F''\Bigl( {\delta k}/{p_0}\Bigr). \end{eqnarray} This term emerges only in the presence of the magnetic field. In the limit $\delta k \gg p_0$, $\Pi_1'(k)$ converges to zero as
$\propto B^2 (\delta k)^{-5/2}$. This asymptote, plotted in Fig.~\ref{plotpi1}, has the same origin as the persistent oscillations. To better understand the effect, one can Fourier transform $\Pi_{\text{1}}(k)$ using this asymptote. From power counting, it is straightforward to see the emergence of a non-decaying oscillating function, $\sim B^2\sin(2k_Fr)$.
\begin{figure}
\centering
\includegraphics[scale=0.22]{Capture.png}
\caption{The leading contributions to the lifetime of quasi-particles. Solid lines represent the Feynman propagators. The wavy lines represent the electron-electron interactions. (a) Fock diagram. (b) Hartree diagram.}
\label{diagram}
\end{figure}
\section{ Implications for interaction effects}\label{in} Graphene is a 2D Fermi liquid when $E_F>0$. This implies that the scattering rate of a quasi-particle near the Fermi surface obeys $\Gamma(\omega) \propto \omega^2/E_F \log(E_F/\omega)$.
In perturbation theory in interaction parameter, two leading Feynman diagrams contributing to the electron lifetime are shown in Fig.~(\ref{diagram}). We evaluate these diagrams in the presence of the weak magnetic field using the obtained form of the electron propagator.
The computation leads to an unexpected result\cite{unpublished}: the quasiparticle lifetime acquires a magnetic correction that is singular in $\omega$,
$
\Gamma(\omega;B)-\Gamma(\omega;0)\propto \omega_0^2/E_F \log\Big(\omega/E_F\Big).
$
Here $\Gamma(\omega;B)$ is the scattering rate of quasi-particle with frequency $\omega$ in the presence of the magnetic field, $B$, $\omega_0=v_F (k_F l^2)^{-1}$.
The expression is valid when $E_F>\omega>\{\omega_0, T\}$. The magnetic correction above is more singular in frequency $\omega$ than the non-magnetic part. This singularity originates from the spin-dependent magnetic phase, $\theta(r)\hat\Sigma_z$, in electron propagators. Interestingly, the field-dependent interaction corrections to various observables in Graphene, including zero-bias anomaly in the density of states and ac/dc conductivities, also exhibit more singular behavior in either $\omega$ or temperature, $T$\cite{unpublished}.
\section{ Concluding remarks}\label{cm} In this paper, we demonstrated that a weak magnetic field manifests itself in the Friedel oscillations in two ways: it (i) modifies the phase of the oscillations and (ii) makes the magnitude of the oscillations
non-decaying in a parametrically large interval.
The origin of the modification of the phase in oscillations, $\sim p^3_0 r^3$, can be traced to the curving of the classical trajectory of an electron in a weak magnetic field (see inset in Fig.~\ref{FO}).
The trajectory is curved even at $r$ much smaller than the Larmor radius, leading to the magnetic phase\cite{prl07Sedrakyan} $\sim (p_0r)^3$. This effect by itself leads to remarkable high-temperature interaction effects in the 2DEG\cite{prb08Sedrakyan,prl08Sedrakyan,prl08Raikh}.
Graphene also supports a modification of the magnitude of the Friedel oscillations by a weak field. The origin of this effect is an emergent spin-dependent phase in the electron propagators, $\sim\exp (-i\hat\Sigma_z \theta(r)/2)$. This effect manifests itself in persistent Friedel oscillations and leads to non-trivial magnetic corrections to many-body characteristics of Graphene. The transport and thermodynamic properties of monolayer Dirac materials, randomly strained Graphene, and stacked and twisted Dirac materials will also be anomalously sensitive to this magnetic phase, even at temperatures $T\sim T_0$ that are much higher than the cyclotron energy\cite{prl07Sedrakyan}.
Technically, to develop the theory of interaction effects in Dirac materials in the presence of the weak field, one can use the obtained form of the PO and/or Friedel oscillations in the Feynman diagrams (in the momentum or real space representations). However, a word of caution is in order here. Since the field dependence also enters the fermion Green's functions, in the Feynman diagrams, one should also consider modified propagators on the same footing along with Friedel oscillations.
The explicit form of the Feynman propagators in the weak B-field is derived in Appendix~\ref{rs}.
Experimentally, Friedel oscillations can be observed with the scanning tunneling microscope (STM), which images 2D surfaces at the atomic level\cite{science97sprunger,nature19Dutreix,prb13Jelena}. In STM, data are determined by backscattering processes along the energy contours.
Experimental tests of these oscillations
would include examining the temperature
dependence of the Friedel oscillations through an
extended range of temperatures $0 \lesssim T \lesssim T_0$, determining the persistent range of oscillations, $p_0^{-1}\lesssim r\lesssim k_F l^2$, and investigating the effect of their $B$-dependence.
\section{ Acknowledgements} The research was supported by startup funds from the University of Massachusetts, Amherst (K.W. and T.A.S.), and by the Department of Energy, Office of Basic Energy Sciences, Grant No. DE-FG02-06ER46313 (M.E.R.).
\input{math_commands.tex}
\usepackage{amsmath}
\usepackage[dvipsnames]{xcolor} %
\usepackage{hyperref}
\usepackage{url}
\usepackage{lineno}
\usepackage{booktabs} %
\usepackage{pgfplots, amsthm, graphicx, subcaption, dsfont, tikz, tikz-qtree, tabularx, amssymb,enumerate, verbatim, enumitem,makecell,multirow,bm}%
\usepackage[colorinlistoftodos, textwidth=30mm,color=blue!10]{todonotes}%
\usepackage{float}
\usepackage[bottom]{footmisc} %
\usepackage{footnote}
\usepackage{pdfpages}
\usepackage[noabbrev, capitalise]{cleveref} %
\pgfplotsset{compat=1.11} %
\usepgfplotslibrary{fillbetween}
\usepgfplotslibrary{groupplots}%
\usepackage{authblk}
\newtheorem*{my_pro_setting}{Problem Setting}
\newtheorem{property}{Property}
\title{Distance-Ratio-Based Formulation for Metric Learning}
\author[1,2]{Hyeongji Kim\thanks{[email protected]}}
\author[2]{Pekka Parviainen}
\author[1,2]{Ketil Malde}
\affil[1]{Institute of Marine Research, Bergen, Norway}
\affil[2]{Department of Informatics, University of Bergen, Norway}
\begin{document}
\maketitle
\begin{abstract}
In metric learning, the goal is %
to learn an embedding so that data points with the same class are close to each other and data points with different classes are far apart. %
We propose a distance-ratio-based (DR) formulation for metric learning. %
Like softmax-based formulation for metric learning, it models \(p(y=c|x')\), which is a probability that a query point \(x'\) belongs to a class \(c\). %
The DR formulation has two useful properties. %
First, the corresponding loss %
is not affected by scale changes of an embedding. Second, it outputs the optimal (maximum or minimum) %
classification confidence scores %
at the points that represent classes. %
To demonstrate the effectiveness of our formulation, %
we conduct few-shot classification experiments using softmax-based and DR formulations on CUB and \emph{mini}-ImageNet datasets. The results show that DR formulation generally enables faster and more stable metric learning than the softmax-based formulation. As a result, using DR formulation achieves improved or comparable generalization performances. %
\end{abstract}
\section{Introduction}
Modeling %
probability \(p(y=c|x')\), which is a probability that a query point \(x'\) belongs to a class \(c\), plays an important role in discriminative models. Standard neural network based classifiers use the softmax activation function to estimate this probability. %
When \(l_c (x')\) is the logit (pre-softmax) value from the network for class \(c\) and the point \(x'\) and \(\mathcal{Y}\) is a set of classes, the softmax function models \(p(y=c|x')\) as: %
\begin{flalign}\label{eq:std_softmax}
\hat{p}(y=c|x')=\frac{\exp(l_c (x'))}{\sum\limits_{y\in \mathcal{Y}}{\exp(l_y (x'))}},
\end{flalign}
where \(\hat{p}(y=c|x')\) is an estimation of the probability \(p(y=c|x')\).
Standard classifiers work well on classifying %
classes with enough training examples. However, we often encounter %
few-shot classification tasks in which we need to classify points from unseen classes with only a few available examples per class. In such cases, standard classifiers may not perform %
well \citep{vinyals2016matching}. %
Moreover, standard classifiers do not %
model similarity %
between different data points on the logit layer.
Metric learning methods %
learn pseudo metrics such that points with the same classes are close, and points with different classes are far apart on the learned embedding spaces.
As a result,
metric learning models can work well on classifying classes with a few examples \citep{chen2019closer}, and they can be used to find similar data points for each query point \citep{musgrave2020metric}.
Several metric learning models \citep{goldberger2004neighbourhood, snell2017prototypical, allen2019infinite} use softmax-based formulation to model \(p(y=c|x')\) by replacing logits \(l_c (x')\) in Equation (\ref{eq:std_softmax}) with %
negative squared distances between data points on embedding spaces. %
We found that \emph{1) softmax-based models %
can be affected by scaling of the embedding space}, which can weaken the training process. %
Moreover, \emph{2) they %
do not have the maximum (or minimum) confidence scores\footnote{A confidence score (value) is an estimated probability of \(p(y=c|x')\) using a model.} at the points representing classes.} %
It implies that when the softmax-based formulation is used for metric learning, %
query points do not directly converge to (approach) points representing the same class on the embedding space, and query points do not directly diverge (move far apart) from points representing different classes. As a result, metric learning with %
the softmax-based formulation can be unstable.%
To overcome these limitations, we propose an alternative formulation named \emph{distance-ratio-based (DR) formulation} to estimate \(p(y=c|x')\) in metric learning models. Unlike the softmax-based formulation, \emph{1) DR formulation is not affected by scaling of the embedding space.} %
Moreover, \emph{2) it has the maximum confidence score \(1\) at the points representing the same class as the query points and the minimum confidence score \(0\) %
at the points representing the different classes.} Hence, when we use DR formulation for metric learning, query points can directly approach the corresponding points and directly diverge from points that represent different classes.
We analyzed the metric learning %
process with both formulations %
on few-shot learning tasks. Our experimental results show that our formulation is less affected by scale changes and is more stable. %
As a result, our formulation enables faster training (when Conv4 backbone was used) or comparable training speed (when ResNet18 backbone was used).
\subsection{Problem Settings}
\begin{my_pro_setting}\label{setting1}
Let \(\mathcal{X} \subset \mathbb{R}^{d_{I}}\) be %
an input space, \(\mathcal{Z}=\mathbb{R}^{d_{F}}\) be an unnormalized embedding space, and \(\mathcal{Y}\) be a set of possible classes. The set \(\mathcal{Y}\) also includes classes that are unseen during training. From a joint distribution \(\mathcal{D}\), data %
points \(x\in\mathcal{X}\) and corresponding classes \(c\in\mathcal{Y}\) %
are sampled. %
An embedding function %
\(f_{\theta}: \mathcal{X}\rightarrow \mathcal{Z}\) %
extracts features (embedding vectors) from inputs where \(\theta\) represents learnable parameters. %
We consider the Euclidean distance \(d(\cdot,\cdot)\) on the embedding space \(\mathcal{Z}\). %
\end{my_pro_setting}
In this paper, we only cover unnormalized embedding space \(\mathbb{R}^{d_{F}}\). %
One might be interested in using normalized embedding space \(\mathbb{S}^{\left(d_{F}-1\right)}=\left\{ z\in \mathbb{R}^{d_{F}}| \left\|z\right\|=1 \right\}\). %
For normalized embedding space, %
one can still use Euclidean distance or angular distance (arc length) as both are proper distances. %
To compare softmax-based and distance-ratio-based formulation that estimate \(p(y=c|x')\) for metric learning, in this work, we use prototypical network \citep{snell2017prototypical} for explanation and experiments. %
We do this because the prototypical network is one of the simplest metric learning models.
\subsection{Prototypical Network}
Prototypical network \citep{snell2017prototypical} was devised to solve few-shot classification problems, which require recognizing classes unseen during training %
based on only a few labeled points (support points). It is learned by episode training \citep{vinyals2016matching}, whose training batch (called an episode)
consists of a set of support points and a set of query points.
Support points act as guidelines that represent classes. Query points act as evaluations of a model %
to update the model (embedding function \(f_{\theta}\)) %
in a training phase and to measure few-shot classification performances in a testing phase. %
Using embedding vectors from %
support points, prototypical network calculates a \emph{prototype} \(\mathbf{p}_{c}\) to represent a class \(c\).
A prototype \(\mathbf{p}_{c}\) is defined as: %
\begin{flalign}
\mathbf{p}_{c}=\frac{1}{|S_{c}|}\sum\limits_{(x_i,y_i)\in S_c}{f_{\theta}(x_i)}, \nonumber
\end{flalign}
where \(S_{c}\) is a set of support points with class \(c\). (When \(|S_{c}|\) is fixed with \(K=|S_{c}|\), a few-shot learning task %
is called a \(K\)-shot learning task.)%
We can use the Euclidean distance with a prototype \(\mathbf{p}_{c}\) on the embedding space to estimate how close a query point \(x'\) is %
to a class \(c\). We denote the distance as \(d_{x',c}\). Mathematically, \(d_{x',c}\) is:
\begin{flalign}\label{eq:dist_proto}
d_{x',c}=d(f_{\theta}(x'),\mathbf{p}_{c})
\end{flalign}
Using this distance \(d_{x',c}\), prototypical network estimates the probability \(p(y=c|x')\). %
We will explain later about the softmax-based formulation %
and our formulation for this estimation. Based on the estimated probability, training loss \(L\) is defined as %
the average classification loss (cross-entropy) of query points. The loss \(L\) can be written as:
\begin{flalign}\label{eq:L_proto}
L=-\frac{1}{|Q|}\sum_{(x',y')\in Q}{\log(\hat{p}(y=c|x'))},
\end{flalign}
where \(Q\) is a set of query points in an episode and \(\hat{p}(y=c|x')\) is an estimation of the probability \(p(y=c|x')\).
Based on the training loss \(L\), we can update %
the embedding function \(f_{\theta}\).
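As a concrete illustration, the episode quantities above can be computed in a few lines. The following minimal numpy sketch (with made-up shapes, and random embeddings standing in for \(f_{\theta}\); not the actual training code) computes the prototypes and the distances \(d_{x',c}\):
\begin{verbatim}
import numpy as np

N, K, Q_pc, d_F = 5, 5, 16, 64             # ways, shots, queries/class, dim
z_support = np.random.randn(N, K, d_F)     # f_theta(x) for support points
z_query = np.random.randn(N * Q_pc, d_F)   # f_theta(x') for query points

prototypes = z_support.mean(axis=1)        # p_c: class-wise averages, (N, d_F)
diff = z_query[:, None, :] - prototypes[None, :, :]
d = np.sqrt((diff ** 2).sum(axis=-1))      # d_{x',c}, shape (N*Q_pc, N)
\end{verbatim}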
\subsection{%
Metric Learning with Softmax-Based Formulation
}
In the original prototypical network \citep{snell2017prototypical}, the softmax-based formulation was used to model the probability \(p(y=c|x')\). The softmax-based formulation is defined by the softmax (in Equation (\ref{eq:std_softmax})) over negative squared distance \(-d_{x',c}^2\). Thus, the formulation can be written as:
\begin{flalign}\label{eq:dist_softmax}
\hat{p}(y=c|x')=\frac{\exp(-d_{x',c}^2)}{\sum\limits_{y\in {\mathcal{Y}}_E}{\exp(-d_{x',y}^2)}},
\end{flalign}
where \(\mathcal{Y}_E\) is a subset of \(\mathcal{Y}\) that represents a set of possible classes within an episode. (When \(|{\mathcal{Y}}_E|\) is fixed with \(N=|{\mathcal{Y}}_E|\), a few-shot learning task is called an \(N\)-way learning task.)\\
We denote the value in Equation (\ref{eq:dist_softmax}) as \(\sigma_c (x')\). When we use \(\sigma_c (x')\) to estimate the probability \(p(y=c|x')\), we denote the corresponding loss (defined in Equation (\ref{eq:L_proto})) as \(L_{S}\).
\newcolumntype{M}[1]{>{\arraybackslash}m{#1}}
\newcolumntype{\Mc}[1]{>{\centering\arraybackslash}m{#1}}
The softmax-based formulation can be obtained by estimating a class-conditional distribution \(p(x'|y=c)\) with a Gaussian distribution. Based on this, in Appendix \ref{sec:represent_mean}, we explain %
why an average point is an appropriate point to represent a class when we use the softmax-based formulation.
\subsubsection{Analysis of Softmax-Based Formulation}
To analyze the formulation in Equation (\ref{eq:dist_softmax}), %
let us consider a toy example. %
In this example, there are only two classes \(c_1\) and \(c_2\) and corresponding prototypes \(\mathbf{p}_{c_1}\) and \(\mathbf{p}_{c_2}\). %
Let us consider a query point \(x'\) that has distances %
\(d_{x',c_1}\) and %
\(d_{x',c_2}\) as in the two cases in Table \ref{tab:toy_ex_cases}. When we compare case (a) with its 2-times-scaled version (case (b)), we can check that the loss \(L_{S}\) is much smaller for the scaled case (\(6.1442\times 10^{-6}\)). %
In other words,
\emph{simply scaling an embedding can change the confidence scores %
and thus the corresponding training loss}. It implies that
an embedding can be scaled to reduce the training loss. Thus, using softmax-based models may weaken a training process by allowing unnecessary model updates that do not change the relative locations of data points.
To inspect the locations that maximize or minimize confidence scores, in Figure \ref{fig:p_red_x'},
we visualized the estimated probability \(\hat{p}(y=\textcolor{red}{red}|x')\) using three prototypes \(\textcolor{red}{\mathbf{p}_{red}}\), \(\textcolor{Green}{\mathbf{p}_{green}}\), and \(\textcolor{blue}{\mathbf{p}_{blue}}\).
For the %
softmax-based model in Figure \ref{fig:p_red_x'_b} \citep{goldberger2004neighbourhood, snell2017prototypical, allen2019infinite}, %
\emph{the maximum confidence value is not even at the prototype} \(\textcolor{red}{\mathbf{p}_{red}}\). %
It implies %
that when we train an embedding with training loss \(L_S\), query points with the red class do not converge directly to the prototype %
\(\textcolor{red}{\mathbf{p}_{red}}\). %
In Figure %
\ref{fig:p_red_x'_b}, the prototypes %
with different classes \(\textcolor{Green}{\mathbf{p}_{green}}\) %
and \(\textcolor{blue}{\mathbf{p}_{blue}}\) %
are not the points that minimize confidence values. It means that query points with the red class do not directly diverge (get far apart) from the prototypes \(\textcolor{Green}{\mathbf{p}_{green}}\) %
and \(\textcolor{blue}{\mathbf{p}_{blue}}\). %
As prototypes do not provide direct guidelines for query points, metric learning with softmax-based formulation can be unstable.
\begin{table*}[hbp]%
\caption{A toy example with different \(d_{x',c_1}\) and \(d_{x',c_2}\), and the corresponding values. We assumed the query point \(x'\) has class \(c_1\) to calculate the losses. In this table, we set \(\rho=2\) for DR formulation. \(\delta\) is defined in Section \ref{sec:DR_form}. }
\label{tab:toy_ex_cases}
\centering
\begin{tabular}{M{1.5cm}|\Mc{1.5cm}\Mc{1.5cm}|\Mc{1.75cm}\Mc{1.75cm}|\Mc{2.cm}\Mc{2.cm}}
\toprule
\scriptsize Cases & \scriptsize \(d_{x',c_1}\)& \scriptsize \(d_{x',c_2}\)& \scriptsize \(\sigma_{c_1}(x')\)& \scriptsize \(\delta_{c_1}(x')\)& \scriptsize \(L_{s}\)& \scriptsize \(L_{DR}\) \\ \midrule
\scriptsize {Case (a)}&\scriptsize {\(1\)} &\scriptsize {\(2 \)} & \scriptsize \( 0.95257\) & \scriptsize \( 0.80000\) & \scriptsize \( 0.048587 \) & \scriptsize \( 0.22314\) \\ \hline
\scriptsize {Case (b)}&\scriptsize {\(2\)} &\scriptsize \( 4\)
&\scriptsize \(0.99999\) & \scriptsize \( 0.80000 \)& \scriptsize \( 6.1442\times 10^{-6}
\) & \scriptsize \( 0.22314 \) \\ %
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}[tbp]
\centering
\vspace{.05in}
\begin{subfigure}[b]{0.27\textwidth}
\includegraphics[width=1.0\linewidth,height=0.75\linewidth]{prob_red_softmax_sq.png}%
\caption{\(\sigma_{\textcolor{red}{red}} (x')\)
}\label{fig:p_red_x'_b}%
\end{subfigure}
\begin{subfigure}[b]{0.27\textwidth}
\includegraphics[width=1.0\linewidth,height=0.75\linewidth]{prob_red_DR_p_1.png}%
\caption{\(\delta_{\textcolor{red}{red}} (x')\) %
with \(\rho=1\)}\label{fig:p_red_x'_c}%
\end{subfigure}
\begin{subfigure}[b]{0.27\textwidth}
\includegraphics[width=1.0\linewidth,height=0.75\linewidth]{prob_red_DR_p_2.png}%
\caption{\(\delta_{\textcolor{red}{red}} (x')\) %
with \(\rho=2\)}\label{fig:p_red_x'_d}%
\end{subfigure}
\caption{Visualization of estimations of \(p(y=\textcolor{red}{red}|x')\) based on softmax-based formulation and distance-ratio-based formulation. %
}
\label{fig:p_red_x'}
\end{figure*}
\section{Metric Learning with Distance-Ratio-based Formulation%
}\label{sec:DR_form}
To handle %
the limitations of softmax-based formulation in metric learning, %
we propose an alternative form called \emph{distance-ratio-based (DR) formulation} for %
estimating probability \(p(y=c|x')\). When we use distance \(d_{x',c}\) as in Equation (\ref{eq:dist_proto}), %
DR formulation is defined as:
\begin{flalign}\label{eq:DR_form}
\hat{p}(y=c|x')=\frac{ \frac{1}{d_{x',c}^\rho} }{\sum\limits_{y\in {\mathcal{Y}_E}}{ \frac{1}{d_{x',y}^\rho} }}=\frac{ {d_{x',c}}^{-\rho} }{\sum\limits_{y\in {\mathcal{Y}_E}}{ {d_{x',y}}^{-\rho} }},
\end{flalign}
where %
\(\rho>0\) is a learnable parameter. %
When \(d_{x',c}=0\) %
and \(d_{x',c'}>0\) %
for all \(c'\neq c\), we define \(\hat{p}(y=c|x')\) as \(1\) and \(\hat{p}(y=c'|x')\) as \(0\). As this formulation uses ratios of distances for classification, we call it the \emph{distance-ratio-based formulation}. (One can check that Equation (\ref{eq:DR_form}) can be obtained by replacing the negative squared distance \(-d_{x',c}^2\) in Equation (\ref{eq:dist_softmax})
with \(-\rho \ln(d_{x',c})\).)
Let us denote the value in Equation (\ref{eq:DR_form}) as \(\delta_c (x')\). %
Then, when we use DR formulation to estimate the probability \(p(y=c|x')\), we denote the corresponding training loss (defined in Equation (\ref{eq:L_proto})) as \(L_{DR}\).
Based on the training loss \(L_{DR}\), we can update %
the embedding function \(f_{\theta}\) and also the learnable parameter \(\rho\).
\subsection{Analysis of Distance-Ratio-Based Formulation}
To analyze our formulation, let us consider what happens when we change the scale of the embedding space. When we scale the embedding with a scale parameter \(\alpha>0\), the corresponding estimation of the probability \(p(y=c|x')\) with DR formulation is:
\begin{flalign}\label{eq:DR_form_scale}
\hat{p}(y=c|x')=&\frac{ \frac{1}{\left(\alpha d_{x',c}\right)^\rho} }{\sum\limits_{y\in {\mathcal{Y}_E}}{ \frac{1}{\left(\alpha d_{x',y}\right)^\rho} }}=\frac{ \frac{1}{\alpha^\rho d_{x',c}^\rho} }{\sum\limits_{y\in {\mathcal{Y}_E}}{ \frac{1}{\alpha^\rho d_{x',y}^\rho} }}\nonumber\\
=&\frac{ \frac{1}{\alpha^\rho} \frac{1}{d_{x',c}^\rho} }{ \frac{1}{\alpha^\rho}\sum\limits_{y\in {\mathcal{Y}_E}}{ \frac{1}{d_{x',y}^\rho} }}=\frac{ \frac{1}{d_{x',c}^\rho} }{\sum\limits_{y\in {\mathcal{Y}_E}}{ \frac{1}{d_{x',y}^\rho} }}
\end{flalign}
Equation (\ref{eq:DR_form_scale}) shows that when we use our formulation, \emph{scaling an embedding has no effect on the confidence scores %
and thus the training loss}. %
(This property can also be checked from the cases (a) and (b) %
in Table \ref{tab:toy_ex_cases}.)
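Both the scale dependence of the softmax-based formulation and the scale invariance of the DR formulation can be verified numerically. The following minimal numpy sketch (our own helper functions, not part of the training code) reproduces the toy example of Table~\ref{tab:toy_ex_cases}:
\begin{verbatim}
import numpy as np

def softmax_conf(d):            # sigma_{c_1}(x'), softmax-based formulation
    e = np.exp(-d ** 2)
    return e[0] / e.sum()

def dr_conf(d, rho=2.0):        # delta_{c_1}(x'), DR formulation
    w = d ** (-rho)
    return w[0] / w.sum()

for d in [np.array([1.0, 2.0]), np.array([2.0, 4.0])]:  # cases (a), (b)
    s, dr = softmax_conf(d), dr_conf(d)
    print(s, -np.log(s), dr, -np.log(dr))
# softmax: 0.95257 vs 0.99999...; DR: 0.80000 in both cases
\end{verbatim}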
In addition to the scale invariance property,
\(\delta_c (x')\) has another property: it attains \emph{optimal confidence scores %
at the prototypes.%
} In detail,
if we assume \(d(\mathbf{p}_{c},\mathbf{p}_{c'})>0\) %
for all \(c'\in \mathcal{Y}_{E}\) with \(c'\neq c\), then the following two equations hold:
\begin{flalign}
\lim_{x'\rightarrow\mathbf{p}_{c}}{\delta_{c}(x')}=1 \label{eq:sparse_1}\\
\lim_{x'\rightarrow\mathbf{p}_{c'}}{\delta_{c}(x')}=0 \label{eq:sparse_0}
\end{flalign}
This property can be checked from Figures \ref{fig:p_red_x'_c} %
and \ref{fig:p_red_x'_d} %
that visualize the estimated probability \(\hat{p}(y=\textcolor{red}{red}|x')\) using %
DR formulation. %
Proof of the property is in Appendix \ref{sec:sparsity_proof}. %
As prototypes %
provide optimal guidelines for query points, %
when we use DR formulation, query points can easily get close to the prototypes %
with their corresponding classes (\(\mathbf{p}_{c}\)) and get far away from the prototypes %
with different classes (\(\mathbf{p}_{c'}\)). Hence, metric learning with DR formulation can be stable.
\section{Experiments}
\subsection{Experiment Settings}
In our experiments, we wanted to investigate the effectiveness of the distance-ratio-based (DR) formulation compared to the softmax-based formulation. %
For that, we trained prototypical networks \citep{snell2017prototypical} based on two formulations for each experiment: %
softmax-based (ProtoNet\_S) and DR formulation (ProtoNet\_{DR}).
We implement 1-shot and 5-shot learning tasks with five classes for each episode (5-way). %
Details of the settings are described in the following paragraphs. %
Codes for our experiments are available in \url{https://github.com/hjk92g/DR_Formulation_ML}.
\paragraph{Dataset} %
We conduct experiments using two common benchmarks for few-shot classification tasks: CUB (200-2011) \citep{wah2011caltech} and \emph{mini}-ImageNet dataset \citep{vinyals2016matching}. The CUB dataset has 200 classes and 11,788 images. %
We use the same 100 training, 50 validation, and 50 test classes split as \citet{chen2019closer}. %
The \emph{mini}-ImageNet dataset is a subset of the ImageNet dataset \citep{deng2009imagenet} suggested by \citet{vinyals2016matching}. It has 100 classes and 600 images per class. %
We use the same 64 training, 16 validation, and 20 test classes split as %
\citet{ravi2016optimization, chen2019closer}. For both datasets, we %
apply data augmentation to the training data. The applied augmentation includes random crop, left-right flip, and color jitter.
\paragraph{Backbone (Architecture)}
We use two different backbones as embedding functions \(f_{\theta}\) for each experiment: Conv4 \citep{snell2017prototypical} and ResNet18 \citep{he2016deep}. The Conv4 consists of four convolutional blocks. Each block is composed of a 64-filter \(3\times 3\) convolution, batch normalization, a ReLU activation function, and a \(2\times 2\) max-pooling layer. It takes \(84\times84\) sized color images and outputs 1600 dimensional embedding vectors. %
The ResNet18 backbone is the same as in \citet{he2016deep}. It contains convolutions, batch normalizations, ReLU activation functions like the Conv4, but it also has skip connections. %
It takes \(224\times224\) sized color images and outputs 512 dimensional embedding vectors.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_training_accuracies.png} %
\caption{Data: CUB, Backbone: Conv4}\label{fig:1-shot_training_a}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_ResNet18_training_accuracies.png}
\caption{Data: CUB, Backbone: ResNet18}\label{fig:1-shot_training_b}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_training_accuracies.png} %
\caption{Data: \emph{mini}-ImageNet, Backbone: Conv4}\label{fig:1-shot_training_c}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_ResNet18_training_accuracies.png}
\caption{Data: \emph{mini}-ImageNet, Backbone: ResNet18}\label{fig:1-shot_training_d}
\end{subfigure}
\caption{Training and validation accuracy curves for two different backbones on 1-shot learning tasks.}
\label{fig:1-shot_training}
\end{figure*}
\begin{figure*}[t]
\centering
\vspace{.05in}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_5s_training_accuracies.png} %
\caption{Data: CUB, Backbone: Conv4}\label{fig:5-shot_training_a}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_ResNet18_5s_training_accuracies.png}
\caption{Data: CUB, Backbone: ResNet18}\label{fig:5-shot_training_b}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_5s_training_accuracies.png} %
\caption{Data: \emph{mini}-ImageNet, Backbone: Conv4}\label{fig:5-shot_training_c}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_ResNet18_5s_training_accuracies.png}
\caption{Data: \emph{mini}-ImageNet, Backbone: ResNet18}\label{fig:5-shot_training_d}
\end{subfigure}
\caption{Training and validation accuracy curves for two different backbones on 5-shot learning tasks. %
}
\label{fig:5-shot_training}
\end{figure*}
\paragraph{Optimization}
Backbones are trained from random weights. %
We use the Adam optimizer \citep{kingma2014adam} with a learning rate of \(10^{-3}\). To investigate %
the training steps, we save training information and validation accuracy every 100 training episodes, and we call each of these steps a checkpoint. %
For 1-shot classification tasks, we train the embedding for 60,000 episodes (600 checkpoints). For 5-shot classification tasks, we train the embedding for 50,000 episodes (500 checkpoints). %
Based on the validation accuracies at each checkpoint, we select the best model among the %
checkpoints.
To implement our DR formulation, we modify %
the implementation of the standard softmax-based prototypical network \citep{snell2017prototypical} by replacing the negative squared distance \(-d_{x',c}^2\) %
in Equation (\ref{eq:dist_softmax}) by \(-\rho \ln(d_{x',c})\). %
For numerical stability, we add a small positive value \(10^{-10}\) before taking the square root in the calculation of the Euclidean distance \(d_{x',c}\). %
For the DR formulation, we use \(\ln(\rho)\in\mathbb{R}\) to model \(\rho=\exp(\ln(\rho))\). We set the initial parameter for \(\ln(\rho)\) as \(2.0\). %
Starting from this initial value, \(\ln(\rho)\) is trained in all experiments.
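Concretely, this modification is a one-line change of the logits. A minimal PyTorch-style sketch (our own function names; not the reference implementation) reads:
\begin{verbatim}
import torch

def logits_softmax(sq_dists):          # -d^2: softmax-based formulation
    return -sq_dists

def logits_dr(sq_dists, log_rho):      # -rho*ln(d): DR formulation
    d = torch.sqrt(sq_dists + 1e-10)   # small eps for numerical stability
    return -torch.exp(log_rho) * torch.log(d)

log_rho = torch.nn.Parameter(torch.tensor(2.0))  # ln(rho), initialized at 2.0
\end{verbatim}
Feeding either set of logits to a standard cross-entropy loss recovers \(L_{S}\) or \(L_{DR}\), respectively.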
To analyze the local training steps, for each checkpoint (every 100 episodes %
of training), we checked the positions of the episode points (both support and query points) on the embedding space just before and right after the weight updates. We denote the matrix that represents the original positions of the episode points as \(X_{origin}\) and the corresponding matrix with updated weights as \(X_{new}\). We assume these matrices are mean-centered; %
when they are not, we center them so that the average point is located at zero. %
Then, we model the matrix \(X_{new}\) as a modification of \(\alpha^* X_{origin}\) for an unknown scale parameter \(\alpha^*\). Based on this model of \(X_{new}\), %
we calculated a score called \emph{norm ratio} \(\phi\) that measures the relative effect of scaling (\(0\le \phi\le 1\)). It is defined as:
\begin{flalign}\label{eq:norm_ratio_main}
\phi = \frac{\left \| X_{new}-\hat{\alpha^*} X_{origin}\right \|_F}{\left \| X_{new}-X_{origin} \right \|_F},
\end{flalign}
where \(\left \| \cdot \right \|_F\) is the Frobenius norm and \(\hat{\alpha^*}\) is the scaling parameter estimated by minimizing the numerator of Equation (\ref{eq:norm_ratio_main}). %
The norm ratio \(\phi\) is close to \(0\) when the changes are mostly due to scaling, and close to \(1\) when the magnitude of the embedding is unchanged. %
A detailed explanation of the norm ratio is in Appendix \ref{sec:proposed_norm_ratio}.
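Since the numerator of Equation (\ref{eq:norm_ratio_main}) is quadratic in \(\alpha\), the minimizer has the closed form \(\hat{\alpha^*} = \langle X_{new}, X_{origin}\rangle_F / \left\|X_{origin}\right\|_F^2\). A minimal sketch of the computation (assuming \texttt{x\_origin} and \texttt{x\_new} are NumPy arrays of episode-point positions; names are ours):
\begin{verbatim}
import numpy as np

def norm_ratio(x_origin, x_new):
    # mean-center, as assumed in the text
    x_origin = x_origin - x_origin.mean(axis=0, keepdims=True)
    x_new = x_new - x_new.mean(axis=0, keepdims=True)
    # least-squares scale: <X_new, X_origin>_F / ||X_origin||_F^2
    alpha = (x_new * x_origin).sum() / (x_origin * x_origin).sum()
    num = np.linalg.norm(x_new - alpha * x_origin)  # scaling removed
    den = np.linalg.norm(x_new - x_origin)          # raw change
    return num / den
\end{verbatim}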
\begin{figure*}[t]
\centering%
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_norm_ratio.png} %
\caption{Data: CUB, Backbone: Conv4}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_ResNet18_norm_ratio.png} %
\caption{Data: CUB, Backbone: ResNet18%
}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_norm_ratio.png} %
\caption{Data: \emph{mini}-ImageNet, Backbone: Conv4}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_ResNet18_norm_ratio.png} %
\caption{Data: \emph{mini}-ImageNet, Backbone: ResNet18%
}
\end{subfigure}
\caption{Norm ratio \(\phi\) curves for two different datasets and backbones on 1-shot learning tasks. Note that the ranges of the y-axis are smaller in (b) and (d) than (a) and (c).}
\label{fig:1-shot_norm_ratio}
\end{figure*}
\begin{figure*}[t]
\centering
\vspace{.05in}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_5s_norm_ratio.png} %
\caption{Data: CUB, Backbone: Conv4}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{CUB_ResNet18_5s_norm_ratio.png} %
\caption{Data: CUB, Backbone: ResNet18%
}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_5s_norm_ratio.png} %
\caption{Data: \emph{mini}-ImageNet, Backbone: Conv4}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}%
\includegraphics[width=1.0\linewidth,height=0.625\linewidth]{miniImg_ResNet18_5s_norm_ratio.png} %
\caption{Data: \emph{mini}-ImageNet, Backbone: ResNet18%
}
\end{subfigure}
\caption{Norm ratio \(\phi\) curves for two different datasets and backbones on 5-shot learning tasks. Note that the ranges of the y-axis are smaller in (b) and (d) than (a) and (c).}
\label{fig:5-shot_norm_ratio}
\end{figure*}
In addition to the norm ratio \(\phi\), to analyze the location changes of query points relative to the positions of the prototypes, for each checkpoint %
we also calculate other proposed measures: the \emph{con-alpha ratio} \(\frac{\psi_{con}}{\hat{\alpha^*}}\), the \emph{div-alpha ratio} \(\frac{\psi_{div}}{\hat{\alpha^*}}\), and the \emph{con-div ratio} \(\frac{\psi_{con}}{\psi_{div}}\).
We use the same estimated scale parameter \(\hat{\alpha^*}\) as defined in the previous paragraph and Appendix \ref{sec:proposed_norm_ratio}. The value \(\psi_{con}\) measures the degree of convergence of query points toward the prototypes of the same class. The con-alpha ratio \(\frac{\psi_{con}}{\hat{\alpha^*}}\) is the corresponding value %
after adjusting for scale changes. It %
is smaller than \(1\) when query points move closer to the prototypes %
of the same class once the scale changes are factored out. %
The value \(\psi_{div}\) measures the degree of divergence (separation) of query points from the prototypes of different classes. The div-alpha ratio \(\frac{\psi_{div}}{\hat{\alpha^*}}\) is the corresponding value %
after adjusting for scale changes. It %
is larger than \(1\) when query points move farther from the prototypes %
of different classes once the scaling is factored out. The con-div ratio \(\frac{\psi_{con}}{\psi_{div}}\) is defined as the
con-alpha ratio \(\frac{\psi_{con}}{\hat{\alpha^*}}\) divided by the div-alpha ratio \(\frac{\psi_{div}}{\hat{\alpha^*}}\).
It measures the relative degree of intended convergence of query points compared to their divergence from the prototypes of different classes. Detailed explanations of these measures are in Appendix \ref{sec:proposed_ratios_other_measures}.
\subsection{Experiment Results}
Using the Conv4 backbone (Figures \ref{fig:1-shot_training_a}, \ref{fig:1-shot_training_c}, \ref{fig:5-shot_training_a}, and \ref{fig:5-shot_training_c}), we observe that %
the DR formulation trains faster than the softmax-based prototypical networks \citep{snell2017prototypical}. With the ResNet18 backbone, the differences are smaller in 1-shot learning tasks (Figures \ref{fig:1-shot_training_b} and \ref{fig:1-shot_training_d}) or %
reversed in 5-shot tasks (Figures \ref{fig:5-shot_training_b} and \ref{fig:5-shot_training_d}).
Figures \ref{fig:1-shot_norm_ratio} and \ref{fig:5-shot_norm_ratio} visualize the changes of norm ratio \(\phi\). Tables \ref{tab:CUB_Conv4_proposed_ratios} to \ref{tab:miniImg_ResNet18_proposed_ratios_5s} (in Appendix \ref{sec:proposed_ratios_results}) report geometric means and statistical test results on norm ratio values. %
In all the experiments with the Conv4 backbone, %
the norm ratios \(\phi\) were significantly smaller
with the softmax-based formulation than with the DR formulation. In other words, the softmax-based formulation produced more scale changes in the embedding. This indicates that, with the Conv4 backbone, the softmax-based formulation is more prone to weakening the training process through unnecessary scale changes. %
With the ResNet18 backbone, norm ratio values were very close to \(1\) (geometric mean at least \(0.99705\)) for both formulations.
\begin{table*}%
\caption{Few-shot classification accuracies (\%) for CUB and \emph{mini}-ImageNet datasets. Each cell reports mean accuracy based on 600 random test episodes and the corresponding 95\% confidence interval from a single trained model. }%
\label{tab:FSL}
\centering
\begin{tabular}{M{2.25cm}|M{2.25cm}|\Mc{2.0cm}\Mc{2.0cm}|\Mc{2.0cm}\Mc{2.0cm}}
\toprule
\multirow{2}{*}{ Backbone }&\multirow{2}{*}{ Method }& \multicolumn{2}{c}{\scriptsize CUB}& \multicolumn{2}{c}{\scriptsize \emph{mini}-ImageNet} \\ \cline{3-6}
\scriptsize { }&\scriptsize { }&\scriptsize 1-shot &\scriptsize 5-shot & \scriptsize 1-shot &\scriptsize 5-shot \\ \midrule
\multirow{2}{*}{ Conv4}& %
\scriptsize {ProtoNet\_S}&\scriptsize {\(50.46 \pm 0.88\)} &\scriptsize \(76.39 \pm 0.64\)
&\scriptsize \(44.42 \pm 0.84\) & \scriptsize \(64.24 \pm 0.72\)\\ \cline{2-6}
{}&\scriptsize { ProtoNet\_{DR}}&\scriptsize {\(57.13 \pm 0.95\)} &\scriptsize \(76.50 \pm 0.66\)
&\scriptsize \( 48.71 \pm 0.78\) & \scriptsize \( 65.90\pm 0.69\)\\ \hline
\multirow{2}{*}{ \makecell{ResNet18} }&
\scriptsize {ProtoNet\_S}&\scriptsize {\(72.99 \pm 0.88 \)} &\scriptsize \(86.64 \pm 0.51\)
&\scriptsize \(54.16 \pm 0.82\) & \scriptsize \(73.68 \pm 0.65\)\\ \cline{2-6}
{}&\scriptsize { ProtoNet\_{DR}}&\scriptsize {\(73.33 \pm 0.90\)} &\scriptsize \(86.63 \pm 0.49\)
&\scriptsize \( 54.86 \pm 0.86\) & \scriptsize \( 72.93\pm 0.64\)\\
\bottomrule
\end{tabular}
\end{table*}
Tables \ref{tab:CUB_Conv4_proposed_ratios} to \ref{tab:miniImg_ResNet18_proposed_ratios_5s} (in Appendix \ref{sec:proposed_ratios_results}) also report geometric means, proportions of properly learned cases (\(\frac{\psi_{con}}{\hat{\alpha^*}}<1\), \(\frac{\psi_{div}}{\hat{\alpha^*}}>1\), \(\frac{\psi_{con}}{\psi_{div}}<1\)), and statistical test results on con-alpha ratio \(\frac{\psi_{con}}{\hat{\alpha^*}}\), div-alpha ratio \(\frac{\psi_{div}}{\hat{\alpha^*}}\), and con-div ratio \(\frac{\psi_{con}}{\psi_{div}}\). %
Considering the con-alpha ratio \(\frac{\psi_{con}}{\hat{\alpha^*}}\) for 1-shot learning tasks, the %
properly converged cases (\(\frac{\psi_{con}}{\hat{\alpha^*}}<1\)) were significantly more frequent with the DR formulation. This means that in 1-shot training, our DR formulation model is more stable in decreasing the distances between query points and the prototypes of the corresponding classes. Considering the div-alpha ratio \(\frac{\psi_{div}}{\hat{\alpha^*}}\), in all the experiments the properly diverged cases (\(\frac{\psi_{div}}{\hat{\alpha^*}}>1\)) were significantly more frequent with the DR formulation.
This indicates that the DR-formulation-based model is more stable in increasing the distances between query points and the prototypes of different classes.
Table \ref{tab:FSL} reports few-shot classification accuracies on test episodes. First, we consider the results when the Conv4 backbone was used for training. Except for the 5-shot classification task on the CUB dataset, which resulted in comparable accuracies (difference was \(0.11\%\)), the test accuracies were higher with the DR formulation. The accuracy differences ranged from \(1.66\%\) to \(6.67\%\).
Now, we consider the results with the ResNet18 backbone in Table \ref{tab:FSL}. %
First, the accuracy gaps were reduced. \citet{chen2019closer} also observed this phenomenon when they compared accuracy gaps with different backbones using several few-shot learning models.
While the differences were small in the 1-shot classification tasks (\(0.34\%\) and \(0.70\%\)), the DR formulation achieved higher accuracies. For the 5-shot classification task on the \emph{mini}-ImageNet dataset, the DR formulation achieved \(0.75\%\) lower accuracy.
\section{Discussion}
In this work, we address the limitations of the softmax-based formulation for metric learning by proposing a distance-ratio-based (DR) formulation. The DR formulation focuses on updating relative positions in the embedding, ignoring the scale of the embedding space. It also enables stable training by making each representing point an optimal position. %
Our experiments show that the DR formulation generally results in faster training and improved or comparable generalization performance.
When distance \(d_{x',c}\) is a distance between a query point \(x'\) and the nearest support point with class \(c\), distance ratio \(\frac{d_{x',c_1}}{d_{x',c_2}}\) for two different classes \(c_1\) and \(c_2\) has been utilized in previous literature. %
By setting \(c_1\) as the nearest class and \(c_2\) as the second nearest class from a query point \(x'\), \citet{junior2017nearest} have used the distance ratio named {\it nearest neighbor distance ratio (NNDR)} for handling open-set classification problems \citep{geng2020recent}.
Independently, \citet{jiang2018trust} have utilized the inverse value \(\frac{d_{x',c_2}}{d_{x',c_1}}\) %
to define the {\it trust score}, an alternative to the confidence value of the %
standard softmax classifiers. Unlike these works, which use distance ratios directly without modeling confidence values, %
our distance-ratio-based (DR) formulation models the probability \(p(y=c|x')\) using distance ratios.
To output %
confidence scores that are exactly \(0\) or \(1\) on some regions,
sparsemax \citep{martins2016softmax}, sparsegenlin, and sparsehourglass \citep{laha2018controllable}
were proposed as alternatives to the softmax formulation in non-metric-learning settings. Unlike the DR formulation, which has %
a confidence score of \(0\) or \(1\) only at (countably many) \emph{%
representing points}, these formulations have \emph{regions} (uncountable sets of points) %
that output a confidence score of \(0\) or \(1\). Such a property is inappropriate for metric learning %
because confidence scores can be \(1\) even for %
non-representing points, so query points need not converge close to the corresponding representing points.
Recent works \citep{wang2017normface, liu2017sphereface, wang2018additive, wang2018cosface, deng2019arcface} proposed modifications of the standard softmax classifier for metric learning. These modified softmax classifiers showed competitive performance on metric learning benchmarks \citep{musgrave2020metric}. %
Unlike traditional metric learning models, which represent classes by data points or by prototypes obtained from data points, these models represent classes by learnable parameter vectors. They %
use cosine similarity %
on the normalized embedding space \(\mathbb{S}^{\left(d_{F}-1\right)}\). %
Since \(\left\|u - v\right\|^2 = 2 - 2\cos(u, v)\) for unit vectors, this is equivalent to using the softmax-based formulation with (squared) Euclidean distance. %
While the normalization removes the scale dependency of the softmax-based formulation, it still lacks the second property of the DR formulation.
Thus, %
a representing parameter vector may not be the vector that maximizes the confidence value. %
To handle this issue, the DR formulation can be used as an alternative %
by using Euclidean or angular distance on the normalized embedding space (see the example in Appendix \ref{sec:normalized_ex}; a sketch follows below).%
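For instance, a hypothetical sketch of DR logits on the unit sphere with learnable class vectors (the names and the angular-distance choice are illustrative assumptions, not a fixed design from the paper):
\begin{verbatim}
import torch
import torch.nn.functional as F

def dr_logits_normalized(z, w, log_rho):
    # z: query embeddings, w: learnable class vectors
    z = F.normalize(z, dim=-1)
    w = F.normalize(w, dim=-1)
    cos = (z @ w.t()).clamp(-1 + 1e-7, 1 - 1e-7)
    d = torch.acos(cos)            # angular distance in [0, pi]
    return -log_rho.exp() * torch.log(d + 1e-10)
\end{verbatim}
Here each class vector itself attains confidence \(1\), restoring the second property of the DR formulation.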
In addition to supervised metric learning, cosine similarity on a normalized representation space is also used in %
contrastive self-supervised learning \citep{chen2020simple} and in supervised contrastive learning \citep{khosla2020supervised}. The DR formulation can also be applied to these models for possible performance improvements.
While the DR formulation resulted in faster or comparable training speed in most experiments, in Figures \ref{fig:5-shot_training_b} and \ref{fig:5-shot_training_d} we observe slightly slower training in 5-shot learning with the ResNet18 backbone. We speculate that the reason is %
that, in the DR formulation, the average point is not an optimal point to represent a class (explained in Appendix \ref{sec:represent_mean_DR}), unlike in the softmax-based formulation (explained in Appendix \ref{sec:represent_mean_softmax}). To investigate this, in Appendix \ref{sec:1-NN_5s}, %
we conduct 5-shot learning experiments with nearest-neighbor-based models instead of using an average point to represent a class. %
Our experiments also showed that the scale changes of the softmax-based prototypical networks decrease dramatically when ResNet18 is used as the backbone. %
One possible reason for this phenomenon is %
the skip connections in the residual modules \citep{he2016deep} together with the fact that the scale of the input layer is fixed; further investigation is required to draw a conclusion.
\section{Introduction}
In this paper we continue to study
the model of (2 + 1)-dimensional massless Dirac
fermions interacting with a random static non-Abelian gauge
potential.
The Hamiltonian (or rather a generating
functional for the Green's functions) of this model is given by
\begin{eqnarray}
\mbox{i}\hat H - \epsilon_n\hat I =
\int \mbox{d}^2x\left(R^+_{\alpha}, L^+_{\alpha}\right)\left(
\begin{array}{cc}
(\partial_x - \mbox{i}\partial_y)\delta_{\alpha\beta} -
\mbox{i}A^+_{\alpha\beta}(x,y) & - \epsilon_n \\
- \epsilon_n & (\partial_x + \mbox{i}\partial_y) -
\mbox{i}A^-_{\alpha\beta}(x,y)
\end{array}\right)
\left(R_{\beta}, L_{\beta}\right)
\end{eqnarray}
where the random fields $A^a_{\alpha\beta}(x,y)~$($ a = \pm$)
have a Gaussian distribution:
\begin{equation}
\langle A^a_{\alpha\beta}(\vec r_1)A^b_{\gamma\eta}(\vec r_2)\rangle =
{\cal A} \delta(\vec r_1 - \vec
r_2)\delta_{a, -b}\delta_{\alpha\eta}\delta_{\beta\gamma}
\end{equation}
The fermionic fields $R_{\alpha}, \: L_{\alpha}$ represent respectively the
right and the left moving components of the spinor field, and $\alpha$
takes the values $1,...,N$. This model was introduced in
Ref. \cite{ners}
to describe normal
excitations in two
dimensional non-$s$-wave superconductors with disorder. In this
context $N$ denotes
the number of nodes of the order parameter on the Fermi surface.
Since the disorder is time-independent we consider
the Fourier components of the fermionic fields with
different frequencies separately. This reduces the dimensionality of
the problem, making it two dimensional. The model (1) is exactly
solvable in the subspace $\epsilon = 0$, where it is
critical. The
presence of the superconducting order parameter fixes the chemical
potential and thus ensures that
the disorder is diagonal in
chirality. This is essential for the criticality at $\epsilon
= 0$. At the critical point, one can apply the
methods of conformal field theory and calculate scaling dimensions of
the fields and of their
multi-point correlation functions. This gives us a rare opportunity
to obtain nonperturbative results for a
non-trivial random theory.
We hope that a study of this exact solution will give an insight into
general properties of random systems.
The averaging over the disorder can be done either
through the replica trick
\cite{tsv} or using the supersymmetric approach
\cite{wen},\cite{gade}. In this paper we shall
mostly use the replica approach with which
we are more familiar.
To demonstrate that the two approaches
are equivalent we shall (i) compare the conformal
dimensions of the primary fields and (ii)
demonstrate that both representations give the same
conformal blocks for the four-point correlation function.
As in the standard localization theory (see for example
\cite{efetov}), one can integrate out the fast degrees of freedom and
derive an effective action for the slow ones in the form of a
sigma model. This program was carried out in Ref.\cite{ners}. The
resulting sigma model has the following action:
\begin{eqnarray}
S = S_0 + M\epsilon_n \int\mbox{d}^2x \mbox{Tr}(Q + Q^+)
\end{eqnarray}
where the $S_0$ action contains the Wess-Zumino term:
\begin{eqnarray}
S_0 &&= \frac{Nr}{2}\int \mbox{d}^2x (\partial_{\mu}\Phi)^2 + NW[SU(r);
g]
\nonumber\\\vspace{.5cm}
W[SU(r); g] &&= \frac{1}{16\pi}\int \mbox{d}^2x\left[
\mbox{Tr}(\partial_{\mu}g^+\partial_{\mu}g) +
\frac{2}{3}\int_0^{\infty}\mbox{d}\xi
\epsilon_{abc}
\mbox{Tr}(g^+\partial_{a}g g^+\partial_b g g^+\partial_c g)\right] \label{wzw}
\end{eqnarray}
where $Q$ is
the $r\times r$ matrix $Q = g\exp[\mbox{i}\sqrt{4\pi}\Phi]$, and where $g$
belongs to the SU(r) group. $\Phi$ is a real scalar field defined
on the circle with circumference $\sqrt\pi$.
The quantity $M$ is the energy
scale introduced by the disorder: $M \sim \exp[ - 2\pi/N {\cal A}]$;
it marks the crossover from the bare density of states (DOS)
$\rho(\epsilon) \sim |\epsilon|$ to the renormalized
DOS $\rho(\epsilon) \sim |\epsilon|^{\nu}$ ($\nu = (2N^2 - 1)^{-1}$).
$M$ serves as the ultraviolet cut-off for the sigma
model (\ref{wzw}).
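For orientation, the exponent $\nu$ follows from a standard scaling sketch
(using the dimension of $Q$ derived in Section II): the total scaling
dimension of $Q$ is $1/N^2$, so the perturbation $M\epsilon_n\int\mbox{d}^2x\,
\mbox{Tr}(Q + Q^+)$ implies $\epsilon \sim \xi^{-(2 - 1/N^2)}$ for the
correlation length $\xi$, while $\rho \sim \xi^{-1/N^2}$, and therefore
\begin{equation}
\rho(\epsilon) \sim |\epsilon|^{\frac{1/N^2}{2 - 1/N^2}} =
|\epsilon|^{\frac{1}{2N^2 - 1}} = |\epsilon|^{\nu}
\end{equation}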
It is well known that in conventional
localization theories the {\it average} DOS is not affected by the
disorder (its higher moments are, however).
This is not the case for the model (1), where the DOS is directly
proportional to the order parameter. As it was shown in \cite{ners},
the local DOS is given by
\begin{equation}
\rho(\epsilon, x) = \frac{M}{r}\mbox{Tr}[Q(x) +
Q^+(x)] \label{rho}
\end{equation}
This means that even the average DOS is strongly renormalized.
At $\epsilon$ = 0 the sigma model (\ref{wzw}) is critical and
$\langle \rho(0, x)\rangle = 0$, as it might be expected in a critical
theory in two dimensions \cite{remark}. In three dimensions a finite DOS
emerges at $\epsilon = 0$
\cite{gorkov}. In our previous publications we interpreted this
effect as a manifestation of
violation of some continuous symmetry present in the
theory (\cite{ners}, \cite{tsv}). However, the meaning of this
symmetry has remained obscure. In this paper we identify the
operators which generate this symmetry and derive their algebra.
The paper is organized as follows. In Section II we discuss general
properties of the model at criticality and derive the expression for
the conformal dimensions of its primary fields. In Section III we derive
the expression for the four-point correlation function of local DOS.
It turns out that this correlation function contains logarithmic
singularities and therefore a fusion of local DOS generates
operators with unusual
properties - the so-called logarithmic operators.
In Section IV we develop a general theory of such operators and
demonstrate that the appearance of such operators always implies the
presence of some continuous
symmetry. It may well be that this
symmetry is present in all critical models with disorder.
In Section V we show how these general results hold
for the model (4). In Section VI
we study correlation functions in the model
deformed by the logarithmic operators away from
criticality. It
turns out that even in the case of a marginal deformation the
correlation functions have logarithmic corrections. In Section VII
we derive the log-normal distribution function of the local DOS.
The paper contains a conclusion and an appendix
where the conformal blocks of replica and supersymmetric
theories are compared.
\section{Conformal dimensions}
The WZNW model has been
well studied at finite $r$. There is an extensive
literature on the subject, but we particularly recommend
the original publication by Knizhnik and Zamolodchikov\cite{knizh}.
These authors derived explicit expressions for
the four-point correlation functions of primary fields which we are going to
exploit. In our calculational procedure
we follow a general principle: when calculating
any $n$-point correlation function $F_{r}(1, 2, ..., n)$,
$r$ is treated
as an arbitrary number at all intermediate steps
of the calculation until the final expression is
obtained. We define the replica limit as follows:
\begin{equation}
F(1,2, ... n) = \lim_{r \rightarrow 0}\frac{2N}{r}F_{r}(1, 2, ...n)
\label{eq:def}
\end{equation}
The reason for the introduction of the extra factor $2N$ will be
discussed later.
Let us study the correlation functions of the $Q, Q^+$-fields. The problem
of indices is simplified
by the fact that the $Q_{pq}$ matrices are slow parts of the
operators
\begin{equation}
Q_{pq} \sim \sum_{\alpha = 1}^N R^+_{\alpha,p}L_{\alpha,q}
\end{equation}
From this fact one can derive a simple recipe for the index structure of
$n-$point correlation functions: it is the same as for the
$n$-point function of the
$\sum_{\alpha}R^+_{\alpha,p}L_{\alpha,q}$-fields
in the theory of
massless free fermions. The simplest example
is the 2-point function \cite{tsv}, \cite{wen}:
\begin{eqnarray}
\langle Q_{p_1q_1}(z,\bar z)Q^+_{q_2p_2}(0,0)\rangle =
\delta_{q_1q_2}\delta_{p_1p_2}\frac{1}{(M|z|)^{2/N^2}} \label{eq:corr}
\end{eqnarray}
where
$1/2N^2$ is the conformal dimension of the composite
operator $Q$ given by the sum of the dimensions of the bosonic
exponent $\exp[\mbox{i}\sqrt{4\pi}\Phi]$ and of the operator
field $g_{pr}$ from the
fundamental representation of the SU(r) group:
\begin{equation}
\Delta = \lim_{r \rightarrow 0}\left[\frac{1}{2rN} + \frac{(r - 1/r)}{2(N +
r)}\right] = \frac{1}{2N^2} \label{eq:dimen}
\end{equation}
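The limit is finite because the $1/r$ poles cancel; indeed, expanding the
second term around $r = 0$,
\begin{equation}
\frac{r - 1/r}{2(N + r)} = - \frac{1}{2rN} + \frac{1}{2N^2} + O(r),
\end{equation}
so that the sum of the two terms tends to $1/2N^2$.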
In the replica limit we get from Eq.(\ref{eq:corr})
\begin{equation}
G(z, \bar z) \equiv \lim_{r\rightarrow 0}(2N/r)
\langle\mbox{Tr}[Q(z, \bar z)]
\mbox{Tr}[Q^+(0,0)]\rangle = (M|z|)^{- 2/N^2} \label{eq:corr1}
\end{equation}
All other operators are generated by fusion of the fundamental
fields Tr$Q$ and Tr$Q^+$.
The corresponding primary fields are composite fields of
bosonic exponents and Wess-Zumino tensors belonging to irreducible
representations of the SU(r) group. These representations are
classified by Young tableaus which can be represented by a string
of numbers $f_1 > f_2 > ... > f_{r} \geq 0$. Only representations
with $f_l \leq N - 1$ are generated \cite{knizh}.
The corresponding
conformal dimensions are given by the expressions
\begin{eqnarray}
\Delta_f &&= \frac{C_f}{N + r} + \frac{f^2}{2rN}\nonumber\\
C_f &&= \frac{1}{2}\sum_l[f_l^2 + (r + 1 - 2l)f_l] - \frac{f^2}{2r}
\end{eqnarray}
where $f = \sum_lf_l$. In the replica limit we get
\begin{eqnarray}
\Delta_f = \frac{f^2}{2N^2} + \frac{1}{2N}\sum_l[f_l^2 - (2l - 1)f_l]
\label{dims}
\end{eqnarray}
This expression coincides with the conformal dimensions obtained by
the supersymmetric approach after rows and columns in the Young
tableau are interchanged. For instance,
for the representation $(\overbrace{1,1, ... 1}^m, 0,..)$ we reproduce
the expression obtained in \cite{wen} (see Eq. (4.48b) there) for the
$(m, 0, ...0)$ representation:
\begin{equation}
\Delta_m = \frac{m}{2N}\left[1 - \frac{(N - 1)m}{N}\right] \label{deltas}
\end{equation}
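As a check, substituting $f_l = 1$ for $l \leq m$ (so that $f = m$ and
$\sum_l (2l - 1)f_l = m^2$) into Eq.(\ref{dims}) gives
\begin{equation}
\Delta_m = \frac{m^2}{2N^2} + \frac{m - m^2}{2N} = \frac{m}{2N}\left[1 -
\frac{(N - 1)m}{N}\right]
\end{equation}
in agreement with Eq.(\ref{deltas}).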
\section{Four-point correlation function of the order parameter
fields}
Let us now study the four point correlation function of the $Q,~ Q^+$
fields. The index structure is the same for all $r$\cite{knizh}:
\begin{eqnarray}
\langle Q_{p_1q_1}(z_1,\bar z_1)Q^+_{q_2p_2}(z_2,\bar z_2)&&Q_{p_3q_3}
(z_3,\bar z_3)Q^+_{q_4p_4}(z_4, \bar z_4)\rangle \nonumber\\
&&= M^{-4/N^2}\left[\frac{|z_{13}z_{24}|}{|z_{12}z_{14}z_{23}z_{34}|}
\right]^{2/N^2}(W
+ \tilde W),\\
\tilde W &&=
[\delta_{p_1p_2}\delta_{p_3p_4}\delta_{q_1q_2}\delta_{q_3q_4}
W_{11}(x,\bar
x) + \delta_{p_1p_3}\delta_{p_2p_4}\delta_{q_1q_3}\delta_{q_2q_4}
W_{22}(x,\bar x)],\\
W &&= [\delta_{p_1p_2}\delta_{p_3p_4}\delta_{q_1q_3}\delta_{q_2q_4}
W_{12}(x,\bar x) + \delta_{p_1p_3}\delta_{p_2p_4}\delta_{q_1q_2}
\delta_{q_3q_4}W_{21}(x,\bar x)]
\end{eqnarray}
where
\begin{equation}
x = \frac{z_{12}z_{34}}{z_{13}z_{24}}, \: \bar x = \frac{\bar z_{12}\bar
z_{34}}{\bar z_{13}\bar z_{24}} \label{z}
\end{equation}
Here the functions $W_{AB}(x,\bar x), \: (A,B = 1,2)$ satisfy
linear differential equations (the Knizhnik-Zamolodchikov equations)
which we shall discuss later in detail.
Now note
that in our theory we shall deal only with correlation functions of
Tr$Q$, Tr$Q^+$ (for simplicity we do not consider transport phenomena
which would require the introduction of advanced and retarded correlation
functions).
Since all correlation functions must be
proportional to $r$, only the $W$ term (that is the term with
all indices equal) survives
in the replica limit,
the $\tilde W$ term being proportional to $r^2$.
Therefore we have
\begin{eqnarray}
(2N/r)\langle\mbox{Tr}
&&Q(1)\mbox{Tr}Q^+(2)\mbox{Tr}Q(3)\mbox{Tr} Q^+(4)\rangle
\nonumber\\
=&& 2N M^{-4/N^2}\left[\frac{|z_{13}z_{24}|}{|z_{12}z_{14}z_{23}z_{34}|}\right]^{2/N^2}[W_{12}(x,\bar
x) + W_{21}(x,\bar x)] \label{four1}
\end{eqnarray}
The functions $W_{AB}(x, \bar x) = U^{pq}W^{(p)}_A(x)W^{(q)}_B(\bar x)$
are composed of linearly independent solutions of the
Knizhnik-Zamolodchikov equation, denoted $W^{(p)}_A(x)$ and $W^{(q)}_B(\bar x)$
(conformal blocks). In the replica limit these equations
have the following form:
\begin{eqnarray}
Nx\frac{d W_1}{dx} = - W_2, ~~~~~~~~~~~~~
N(1 - x)\frac{d W_2}{dx} = W_1
\end{eqnarray}
Thus for the function $W_1$ we get the following hypergeometric
equation:
\begin{equation}
N^2\frac{d}{dx}\left(x\frac{d W_1}{dx}\right) + \frac{W_1}{1 - x} = 0
\label{equ}
\end{equation}
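Multiplying Eq.(\ref{equ}) by $(1 - x)/N^2$ brings it to the standard
hypergeometric form
\begin{equation}
x(1 - x)W_1'' + (1 - x)W_1' + \frac{1}{N^2}W_1 = 0,
\end{equation}
i.e. $c = 1$, $a + b = 0$, $ab = - 1/N^2$, so that $a = 1/N$, $b = - 1/N$.
Since the third parameter $c = 1$ is an integer, the exponents of the two
solutions at $x = 0$ coincide, and the second solution necessarily contains
$\ln x$.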
Here we encounter a problem.
A hypergeometric equation always has two linearly independent
solutions, normally expressed in terms of powers and hypergeometric functions.
Usually there are three sets of solutions defined in the
vicinity of $x =
0$, 1 and $\infty$ respectively. These pairs of solutions are related
to each other via simple transformation rules (see, for example
\cite{abram}). Eq.(\ref{equ}) is an exception: in the vicinity of
$x = 0$ one of the solutions contains a
logarithmic singularity and cannot be expressed in terms of
hypergeometric functions (this second solution was
overlooked in the previous publication of one of the authors \cite{tsv}):
\begin{eqnarray}
W_1^{(0)}(x) &&= F(1/N, - 1/N, 1, x), \nonumber\\
\: W_1^{(1)}(x) &&= \ln x
W_1^{(0)}(x) + H_1(x)\nonumber\\
NW_2^{(0)}(x) &&= x F(1 + 1/N, 1
- 1/N, 2, x), \nonumber\\
NW_2^{(1)}(x) &&= \ln x W_2^{(0)}(x) - N^2 + H_2(x) \label{zero}
\end{eqnarray}
where $H_{1,2}(x)$ are functions that are regular at $x = 0$:
\begin{eqnarray}
H_1(x) = \sum_{n = 1}^{\infty}&&\frac{x^n(1/N)_n(-1/N)_n}{(n!)^2}
[\psi(1/N + n) - \psi(1/N) + \nonumber\\
&&+ \psi(- 1/N + n) - \psi(- 1/N) - 2\psi(n + 1) + 2\psi(1)] \nonumber\\
H_2(x) = x\sum_{n =
0}^{\infty}&&\frac{x^n(1 + 1/N)_n(1 - 1/N)_n}{n!(n+1)!}
[\psi(1 + 1/N + n) - \psi(1 + 1/N) + \nonumber\\
&&+ \psi(1 - 1/N + n) - \psi(1 - 1/N)
- \psi(n + 1) + \psi(1) - \psi(n + 2) + \psi(2)]
\end{eqnarray}
where $(a)_n = \Gamma(a + n)/\Gamma(a)$.
Only in the vicinity of $x = \infty$ are the solutions still
hypergeometric functions (for $N \neq 2$).
At $|x| \ll 1$ we have
\begin{eqnarray}
W_1^{(0)}(x) = 1 + O(x), ~~~~~&&~~~~~ W_1^{(1)}(x) = \ln x [1 + O(x)],
\nonumber\\
NW_2^{(0)}(x) = x + O(x^2), ~~~~~&&~~~~~ NW_2^{(1)}(x) = - N^2 + x\ln x[1 +
O(x)]
\end{eqnarray}
Now we have to choose the matrix $U^{pq}$ in such a way that the
resulting expression for the four-point correlation function is
a uniquely defined function in the complex plane of $x$. It also must be
invariant under the permutation of points 1 and 3 (2 and 4) which
means the invariance under $x \rightarrow 1 - x, \: \bar x \rightarrow
1 - \bar x$ (crossing symmetry). These properties are achieved when
\begin{eqnarray}
U^{(01)} = U^{(10)}, \: U^{(11)} = 0, \: U^{(00)} = hU^{(01)}
\end{eqnarray}
To find $h$, we first note that the solutions to the Knizhnik-Zamolodchikov
equations obey the monodromy
properties
\begin{eqnarray}
W_1^{(0)}(1-x) = a_i W_2^{(i)}(x)\nonumber\\
W_2^{(0)}(1-x) = b_i W_1^{(i)}(x)
\end{eqnarray}
where
\begin{eqnarray}
a_0 &&= a_1 [\psi(1/N) + \psi(-1/N) - \psi(2) - \psi(1)]\nonumber\\
a_1 &&= b_1 = \frac{N}{\Gamma(1/N) \Gamma(-1/N)}\nonumber\\
b_0 &&= a_1 [\psi(1/N) + \psi(-1/N) - 2\psi(1)]
\end{eqnarray}
Using the crossing symmetry, we then find that
\begin{equation}
h = \frac{a_0 b_1 + a_1 b_0}{a_1 b_1} = 1/2
\end{equation}
Thus we get
\begin{eqnarray}
G(1,2,3,4)
=&&
- \frac{1}{2N}
\left[\frac{|z_{13}z_{24}|M^{-2}}{|z_{12}z_{14}z_{23}z_{34}|}\right]^{2/N^2}
\nonumber\\
&&\times[W_1^{(0)}(x)W_2^{(1)}(\bar
x) + W_1^{(1)}(x)W_2^{(0)}(\bar
x) + \frac{1}{2}W_1^{(0)}(x)W_2^{(0)}(\bar
x) + (x \rightarrow \bar x)] \label{four}
\end{eqnarray}
Here we choose $U^{(01)} = - 1/4N^2$ for normalization.
In order to derive the operator algebra of the model we consider
various limits of this formula. In the limit $z_{43} = \epsilon
\rightarrow 0$ we get
\begin{equation}
\langle [QQ^+(3)]Q(1)Q^+(2)\rangle = \frac{1}{|\epsilon
z_{12}|^{2/N^2}}\left[1 - \frac{1}{N^2}\left(\frac{\epsilon
z_{12}}{z_{13}z_{23}} +
c.c\right)\ln\left(|\epsilon||\frac{z_{12}}{z_{13}z_{23}}|\right) + ...\right]
\label{three}
\end{equation}
From now on we shall use $Q$ without subscripts instead of Tr$Q$
assuming that the replica limit has been taken. We shall also put $M =
1$.
The three-point correlation function (\ref{three}) is very unusual
from the conformal field theory point of view because it
contains logarithms. Therefore we pause to consider general properties
of logarithmic operators.
\section{General properties of logarithmic operators}
So far correlation functions with logarithms at criticality
have been obtained in the WZNW model on the supergroup $GL(1,1)$
\cite{rs}, in the $C = -2$ model
\cite{gurarie} and in gravitationally dressed CFT \cite{bk}.
It was first pointed out by Gurarie in Ref. \cite{gurarie}
that the appearance of logarithms in correlation functions is
due to the presence of special operators, whose operator product
expansions (OPE's) display
logarithmic short-distance singularities. These logarithmic
operators have conformal dimensions degenerate with those of the
usual primary operators, and it is this degeneracy that is at the
origin of the logarithms (cf. our discussion of the degenerate
hypergeometric equation in the previous Section). As a result of this
degeneracy one can
no longer completely diagonalize the Virasoro operator $L_0$, and the
new operators together with the standard ones form the basis of the
Jordan-cell for $L_0$. In order to get a better insight in the
situation, we shall consider the simplest example, which was mentioned
in \cite{bk}, namely, the Liouville model with the action
\begin{equation}
S = \frac{1}{8\pi} \int \mbox{d}^2\xi \sqrt{g(\xi)}\left[
\partial_{\mu}\phi(\xi) \partial^{\mu}\phi(\xi) + Q R^{(2)}(\xi)
\phi(\xi)
\right]
\label{Liou}
\end{equation}
($R^{(2)}$ is the Riemann curvature on a two-dimensional surface - the
world sheet) with the stress-energy tensor
$$
T = -\frac{1}{2} \partial_{z} \phi \partial_{z} \phi +
\frac{Q}{2} \partial_{z}^2 \phi
$$
and the central charge
\begin{equation}
{\cal C} = 1 + 3 Q^2 \label{central}
\end{equation}
The primary field
$\exp(\alpha \phi)$ has a dimension
\begin{equation}
\Delta_{\alpha} =
\alpha(Q-\alpha)/2 \label{Eq}
\end{equation}
This means that there are two operators
with the same dimension $\Delta_{\alpha}$, namely
$V_{\pm} = \exp( \alpha_{\pm} \phi)$, where
$$
\alpha_{\pm} = \frac{Q}{2} \pm \frac{1}{2}\sqrt{Q^2 -
8\Delta_{\alpha}}
$$
If $Q^2 = 8\Delta_{\alpha}$,
i.e. when $\alpha = Q/2$, there is a degeneracy $\alpha_{+} =
\alpha_{-}$ and instead of two exponential primary fields we have
only one exponent $C = \exp(\frac{1}{2}Q\phi)$ and the new operator
$ D = \phi \exp(\frac{1}{2}Q\phi)$ with the same dimension
$\Delta = Q^2/8$. The latter field is sometimes called
the puncture operator. It was discussed in \cite{polch} in the context of the
Liouville gravity, where the action (\ref{Liou}) describes the
gravitational (Liouville) sector of a (non)critical string in the
conformal gauge.
It is easy to get the OPE of the stress-energy
tensor $T$ with these fields. After simple calculations we find
\begin{eqnarray}
T(z) C(0) &&= {\Delta \over z^2} C(0)+ {1\over z} \partial_z C(0) +
...
\nonumber\\
T(z) D(0) &&= {\Delta \over z^2} D(0)+{1\over z^2}
C(0)+{1\over z} \partial_z D(0) + ...
\label{JOPE}
\end{eqnarray}
where the dimension of the fields $C$ and $D$ is $\Delta = Q^2/8$
and the normalization of the field $D$ was defined as
$ D =(2/Q)\phi \exp(\frac{1}{2}Q \phi)$.
It is easy to see indeed that there is a mixing between $C$ and $D$
and that the Virasoro operator $L_0$ which is defined through the
Laurent expansion $T(z) = \sum_{n} L_{n} z^{-n-2}$ is not diagonal
\begin{eqnarray}
L_{0}|C> = \Delta |C>, ~~~~~~ L_{0}|D> = \Delta |D> + |C> \label{example}
\end{eqnarray}
Let us also note that usually one can think about
factorization of the primary
field into the product of chiral left and right operators using the
decomposition $\phi(z, \bar{z}) = \phi_L (z) + \phi_R (\bar{z})$,
leading to $\exp[\alpha \phi] = \exp[\alpha \phi_L (z)] \times
\exp[\alpha \phi_R (\bar{z})]$. For a logarithmic operator
we have
$$
\phi ~\exp[\alpha \phi] = \phi_L (z) \exp[\alpha \phi_L (z)] \times
\exp[\alpha \phi_R (\bar{z})] + \phi_R (\bar{z})
\exp[\alpha \phi_L (z)] \times \exp[\alpha \phi_R (\bar{z})]
$$
and thus the logarithmic operator is the sum of left and right
operators each of which can be factorized.
This simple example illustrates a quite general property of all
theories with logarithmic operators, and the OPE (\ref{JOPE}) is valid
in all these theories. One can also obtain some general information
about two- and three-point correlation functions with operators $C$
and $D$ starting from the four-point correlation function \cite{gurarie}
\begin{equation}
\langle A(z_1)B(z_2) A(z_3) B(z_4)\rangle =
\frac{1}{(z_1-z_3)^{2\Delta_{A}}(z_2-z_4)^{2\Delta_{B}}}
[x(1-x)]^{\Delta_C - \Delta_{A} - \Delta_{B}} F(x)
\end{equation}
where $x$ is defined by Eq.(\ref{z}), and where we have extracted
the factor $[x(1-x)]^{\Delta_C - \Delta_{A} - \Delta_{B}}$ to
make $F(x)$ finite at $x \rightarrow 0$ or
$x \rightarrow 1$ in the ordinary case. In the case of logarithmic
operators one will get $F(x) = (d + c \ln x + o(x))$ at small $x$.
To reproduce the logarithmic singularity at $x=0$ after the fusion
of $A(z_1)$ and $B(z_2)$ one has to postulate the following OPE
(we restrict ourselves to the chiral sector):
\begin{equation}
A(z_1)~ B(z_2) = (z_1-z_2)^{\Delta_C - \Delta_A -\Delta_B}
\left[D + C \ln(z_1-z_2) +..\right]
\label{AB}
\end{equation}
Taking the limit $z_1 \rightarrow z_2$ one immediately gets
from the four-point correlation function the following three-point
correlation functions:
\begin{eqnarray}
\langle C(z_1) A(z_3) B(z_4)\rangle &&=
\frac{c}{z_{13}^{\Delta_{A}+\Delta_C-\Delta_B}
z_{14}^{\Delta_{B}+\Delta_C-\Delta_{A}}z_{34}^{\Delta_{A} +
\Delta_{B} - \Delta_C}}~ \nonumber \\
\langle D(z_1) A(z_3) B(z_4)\rangle &&=
\frac{1}{z_{13}^{\Delta_{A}+\Delta_C-\Delta_B}
z_{14}^{\Delta_{B}+\Delta_C-\Delta_{A}}z_{34}^{\Delta_{A} +
\Delta_{B} - \Delta_C}}~\left(c \ln \frac{z_3-z_4}{(z_1-z_3)(z_1 - z_4)}
+ d \right)
\label{CAB}
\end{eqnarray}
Now let us consider the $ A(z_3)$ and $ B(z_4)$ fusion which
after insertion of (\ref{AB}) into (\ref{CAB}) will lead to the
following two-point correlation functions:
\begin{eqnarray}
\langle C(x) D(y)\rangle &&=
\langle C(y) D(x) \rangle = \frac{c}{(x-y)^{2\Delta_C }}\nonumber \\
\langle D(x) D(y)\rangle &&=
\frac{1}{(x-y)^{2\Delta_C}} \left(-2c\ln(x-y) + d\right)
\nonumber \\
\langle C(x) C(y)\rangle &&= 0
\label{CC}
\end{eqnarray}
The first equation imposes a strong constraint on the
dimensions $\Delta_C$ of logarithmic
operators, namely that the dimension $\Delta_C$ must be {\it an integer}.
To prove this, let us note that
$\langle C(x)D(y)\rangle =\langle C(y)D(x)\rangle $,
i.e.
the correlation function is invariant under the permutation of
$x$ and $y$; hence $(-1)^{2\Delta_C} = 1$, so $\Delta_C = n$ for some integer $n$.
In the case of a noninteger $\Delta_C$ one must have the
structure constant $c = 0$, and only the operator $D$ survives in the
OPE (\ref{AB});
in this case, however, it is an ordinary, nonlogarithmic operator.
This new result about dimensions of logarithmic
operators means that for any
logarithmic operator we have a hidden continuous symmetry. This
symmetry is generated by the conserved
holomorphic (or antiholomorphic) current $C(z)$.
This current is a symmetric
tensor of rank $\Delta_{C}$, which is
a usual vector current if $\Delta_{C} = 1$. In the next Section we
shall demonstrate the existence of such a conserved vector
current in the model with disorder that we are considering.
Let us note that we have
also proved that there is no central extension in the corresponding
current algebra, or in other words there is no anomalous Schwinger
term in the current-current commutator. This is the direct
consequence of the triviality of
the correlation function $\langle C(z)C(0)\rangle = 0$.
\section{Operator product expansions in the model (4)}
In this Section we demonstrate how the general theory just discussed
applies to the model (4). For this end we study OPE's in this theory
after the replica
limit has been taken. In doing so we assume that the replica limit can be
described by some quantum field theory. Here we encounter a
certain ambiguity, namely, that we can define the correlation
functions with an arbitrary prefactor. It turns out that this
prefactor is fixed by the requirement of
self-consistency of OPE's. The latter is achieved when one
uses the definition
(\ref{eq:def}). This explains the necessity of the factor
$2N$ in Eq.(\ref{eq:def}).
We suggest
the following OPE:
\begin{eqnarray}
Q(z)Q^+(0)
= |z|^{-2/N^2}\left\{I - z\left[D(0) +
C(0)\ln|z|^2\right] - \bar z\left[\bar D(0) +
\bar C(0)\ln|z|^2\right] ...\right\} \label{ope}
\end{eqnarray}
where $D, C$ and $\bar D, \bar C$
are some new operators whose correlation functions are to be found.
Notice that in the conventional
WZNW theory this operator expansion would
contain the unit operator and the operator in the adjoint
representation. However, the latter one has the conformal dimensions
which vanish in the replica limit:
\begin{equation}
\Delta_{ad} = \bar\Delta_{ad} = \frac{c_v}{c_v + N} = \frac{r}{r +
N} \rightarrow 0
\end{equation}
Therefore we have here a situation described in the previous Section:
the conformal dimensions of descendants of the unity operator become
degenerate with the dimensions of
descendants of some other primary field (the adjoint operator)
which gives rise to logarithms.
Substituting (\ref{ope}) into Eq.(\ref{three}) we get
\begin{eqnarray}
\langle Q(1)Q^+(2)C(3)\rangle &&=
\frac{1}{2N^2}|z_{12}|^{-2/N^2}\frac{z_{12}}{z_{13}z_{23}}\nonumber\\
\langle Q(1)Q^+(2)D(3)\rangle &&=
N^{-2}|z_{12}|^{-2/N^2}\frac{z_{12}}{z_{13}z_{23}}
\ln|\frac{z_{12}}{z_{13}z_{23}}|
\label{qc}
\end{eqnarray}
Setting $z_{12} = \epsilon$ in these equations and using the OPE
(\ref{ope}) we get the following set of two-point correlation
functions:
\begin{eqnarray}
\langle D(1)C(2)\rangle &&= - \frac{1}{2N^2{z_{12}}^2}, \nonumber\\
\langle C(1)C(2)\rangle &&= 0, \nonumber\\
\langle D(1)D(2)\rangle &&= \frac{2 \ln|z_{12}|}{N^2{z_{12}}^2} \label{cddd}
\end{eqnarray}
There are similar expressions for $\bar C, \: \bar D$-operators with
$z$ being substituted for $\bar z$. Correlators of $C, \: D$ and $\bar C,
\: \bar D$ are equal to zero.
Setting $z_{31} = \epsilon$ in Eqs.(\ref{qc}) we deduce the following
OPE:
\begin{eqnarray}
C(z)Q(0) &&= \frac{1}{2N^2z}Q(0) + ... , \nonumber\\
D(z, \bar z)Q(0) &&= -
\frac{1}{N^2z}\ln|z|Q(0) + ... \label{cdq}
\end{eqnarray}
with the same equations for $Q^+$, except for a change of sign.
These OPE's and the fact that
$C(z)$ does not depend on $\bar z$ ($\bar\partial\langle
C(z)D(0)\rangle
= 0$), enable us to
identify $C, \bar C$ as
generators of a continuous symmetry. It is this symmetry
which is associated with the order parameter $\rho$.
Conformal field theories are characterized by their symmetry
group and a number ${\cal C}$
called `conformal charge'. Formally ${\cal C}$ is a coefficient in the pair
correlation function of stress-energy tensor operators.
A physical meaning of ${\cal C}$ becomes clear when we recall that a
theory with an integer conformal charge
${\cal C} = k$ is equivalent to the theory with $k$ species of free
bosonic
fields.
Thus ${\cal C}$ in unitary theories counts an effective number of degrees of
freedom. The central charge of our theory
is the
sum of central charges
of the free bosonic field (${\cal C} = 1$) and the
WZNW model on the SU(r) group:
\begin{equation}
{\cal C} = 1 + \frac{N(r^2 - 1)}{N + r} = \frac{r}{N} + O(r^2)
\end{equation}
Thus the resulting
central charge vanishes, as it must be; however,
according to the definition of the replica limit
(\ref{eq:def})
the physical correlation function
of the stress-energy tensors remains finite:
\begin{equation}
\langle T(z)T(0)\rangle = \lim_{r\rightarrow 0}\frac{2N{\cal C}_r}{r}
\frac{1}{2z^4} =
\frac{1}{z^4} \label{c}
\end{equation}
Superficially this looks like the effective central charge ${\cal C}_{eff} =
2$. However, ${\cal C}_{eff}$ does not appear in the fusion rules of the
stress-energy tensor components inside correlation functions with
matter fields, where we have
\begin{equation}
T(z)T(\xi) = \frac{2}{(z - \xi)^2}T(\xi) + \frac{1}{z -
\xi}\partial_{\xi}T(\xi) + ...
\end{equation}
As we have mentioned above, the numerical coefficient in (\ref{c})
is fixed by the
self-consistency requirements of OPE.
Applying twice the OPE (\ref{ope}) to the four-point correlation
function and using the Ward identities for $Q$ fields,
we get the following set of identities:
\begin{eqnarray}
\langle T(z)C(1)D(2) \rangle &&= \sum_{j=1,2}\left\{ \frac{1}{(z - z_j)^2} +
\frac{1}{z - z_j} \partial_j \right\} \langle C(1)D(2) \rangle \nonumber\\
\langle \bar T(z)C(1)D(2) \rangle &&= \sum_{j=1,2} \frac{1}{\bar z - \bar z_j}
\bar \partial_j \langle C(1)D(2) \rangle = 0 \nonumber\\
\langle T(z)D(1)D(2) \rangle &&= \sum_{j=1,2}\left\{ \frac{1}{(z - z_j)^2} +
\frac{1}{z - z_j} \partial_j \right\} \langle D(1)D(2) \rangle + \sum_{j=1,2}
\frac{1}{z - z_j} \langle C(1)D(2) \rangle \nonumber\\
\langle \bar T(z)D(1)D(2) \rangle &&= \sum_{j=1,2} \left\{ \frac{1}{\bar
z -
\bar
z_j} \bar \partial_j \right\} \langle D(1)D(2) \rangle + \sum_{j=1,2}
\frac {1}{(\bar z - \bar z_j)^2} \langle C(1)D(2) \rangle
\end{eqnarray}
We can then substitute the two-point correlation functions to get
\begin{eqnarray}
\langle T(z)C(1)D(2)\rangle &&= - \frac{1}{2N^2}\frac{1}{(z - z_1)^2(z -
z_2)^2}\nonumber\\
\langle T(z)D(1)D(2)\rangle &&= \frac{2}{N^2}\frac{1}{(z - z_1)^2(z -
z_2)^2}(\ln|z_{12}| - 1/4)\nonumber\\
\langle T(z)\bar D(1)\bar D(2)\rangle &&= -
\frac{1}{2N^2}\frac{z^2_{12}}{(z -
z_1)^2(z -
z_2)^2\bar z_{12}^2}
\end{eqnarray}
Taking into account Eq.(\ref{c}) and Eqs.(\ref{cddd})
we conclude that these expressions
are compatible with the following OPE:
\begin{eqnarray}
C(z)D(\xi, \bar\xi) &&= -
\frac{1}{2N^2}\left[\frac{1}{(z - \xi)^2} + T(\xi) + ...\right]\label{CD}\\
D(z,\bar z)D(\xi, \bar\xi) &&= \frac{2}{N^2}\left[\frac{\ln|z - \xi|}{(z
- \xi)^2} +
(\ln|z - \xi| - 1/4)T(\xi) - \frac{(\bar z - \bar\xi)^2}{4(z
- \xi)^2}\bar T(\bar\xi) + ...\right] \label{DD}
\end{eqnarray}
and
\begin{eqnarray}
T(z)C(\xi) &&= \frac{C(\xi)}{(z - \xi)^2} +
\frac{\partial_{\xi}C(\xi)}{(z - \xi)} + ...\\
T(z)D(\xi, \bar\xi) &&= \frac{D(\xi, \bar\xi)}{(z - \xi)^2} + \frac{C(\xi)}{(z
- \xi)^2} +
\frac{\partial_{\xi}D(\xi, \bar\xi)}{(z - \xi)} + ...\\
\bar T(\bar z)D(\xi, \bar\xi) &&= \frac{C(\xi)}{(\bar z - \bar\xi)^2} +
\frac{\partial_{\bar\xi}D(\xi, \bar\xi)}{(\bar z - \bar\xi)} + ...\\
\end{eqnarray}
From the OPE's (\ref{cdq}, \ref{CD}), the Ward identity for the
stress-energy tensor and primary fields and the
Knizhnik-Zamolodchikov
equation
(\cite{knizh}) we derive the following Ward identity:
\begin{eqnarray}
&&\langle C(z_1)D(z_2, \bar z_2)Q(\xi_1, \bar\xi_1)...Q^+(\xi_{2N},
\bar\xi_{2N})\rangle \nonumber\\
&&=
\frac{1}{2N^2}\sum_j \frac{\sigma_j}{z_1 - \xi_j}\langle
C(z_2)Q(\xi_1, \bar\xi_1)...Q^+(\xi_{2N}, \bar\xi_{2N})\rangle -
\frac{1}{2N^2z_{12}^2}\langle
Q(\xi_1, \bar\xi_1)...Q^+(\xi_{2N}, \bar\xi_{2N})\rangle \label{wardc}
\end{eqnarray}
where $\sigma = 1$ for $Q$ and $-1$ for $Q^+$. Notice that the
operator $D$ does not appear on the right-hand side of this identity.
This Ward identity is an important one since it, together with Eq.(\ref{CD})
establishes an isomorphism between the representations of
the Virasoro algebra and the algebra of the conserved current
$C$.
Now let us study the fusion of $Q$ with itself. To this end it is
more convenient to rewrite the four-point correlation function
(\ref{four}) in terms of the solutions regular at $x \rightarrow
\infty$. For Eq.(\ref{equ}) these solutions are ($N \neq 2$)
\begin{eqnarray}
\tilde W_1^{(0)}(x) &&= (-x)^{-1/N}F(1/N,1/N,1 + 2/N; 1/x), \nonumber\\
\tilde
W_2^{(0)}(x)
&&= (-x)^{-1/N}F(1 + 1/N,1/N,1 + 2/N;1/x)\nonumber\\
\tilde W_1^{(1)}(x) &&= (-x)^{1/N}F(- 1/N,- 1/N,1 - 2/N; 1/x), \nonumber\\
\tilde W_2^{(1)}(x)
&&= - (-x)^{1/N}F(1 - 1/N,- 1/N,1 - 2/N; 1/x)
\end{eqnarray}
These solutions have extremely simple monodromy properties:
\begin{eqnarray}
\tilde W_1^{(0)}(1 - x) &&= \tilde W_2^{(0)}(x), \:
\tilde W_2^{(0)}(1 -
x) = \tilde W_1^{(0)}(x) \nonumber\\
\tilde W_1^{(1)}(1 - x) &&= - \tilde
W_2^{(1)}(x), \:
\tilde W_2^{(1)}(1 - x) = - \tilde W_1^{(1)}(x)
\end{eqnarray}
The crossing invariant form of the correlation function is
\begin{equation}
W(x, \bar x) = \alpha[\tilde W_1^{(0)}(x)\tilde W_2^{(0)}(\bar x) - k^2\tilde
W_1^{(1)}(x)\tilde W_2^{(1)}(\bar x) + (x \rightarrow \bar x)] \label{infty}
\end{equation}
where
\[
k = \frac{\Gamma(1 + 2/N)\Gamma^2(- 1/N)}{\Gamma(1 -
2/N)\Gamma^2(1/N)}
\]
The coefficient $\alpha$, whose numerical value we do not provide,
should be chosen to match Eq.(\ref{infty}) to
the correlation function (\ref{zero})
regular at $x = 0$.
Let us now
consider the limit $z_{31} = \epsilon \rightarrow 0$, in which we have
\begin{equation}
G(1, 2;1 + 0, 2 +0) = 2\alpha|\epsilon|^{-4/N^2}[|z/\epsilon|^{4(N -
2)/N^2} -
k^2|z/\epsilon|^{- 4(N + 2)/N^2}] + ...
\end{equation}
This expansion is valid only for $N \neq 2$. In this case it
corresponds to the standard operator product expansion:
\begin{equation}
Q(1)Q(2) = C_1^{1/2}|z_{12}|^{-4\Delta + 2\Delta_A}O_A(2) +
C_2^{1/2}|z_{12}|^{-4\Delta + 2\Delta_S}O_S(2) + ...
\end{equation}
where $C_1, ~C_2$ are numerical coefficients and $O_A$ and $O_S$ are
operators
from the antisymmetric and the
symmetric representations whose Young tableaux are $(1,1,0, ...)$ and
$(2,0,...)$ respectively. Their conformal dimensions are given by
Eq.(\ref{dims}):
\begin{eqnarray}
\Delta_A = \frac{2 - N}{N^2}, \: \Delta_S = \frac{2 + N}{N^2} \label{eq:dim}
\end{eqnarray}
which reproduces the result obtained in the previous publications \cite{tsv}
and \cite{wen}.
At $N = 2$ the dimension of the antisymmetric operator vanishes. Now
we have a situation where there are three operators with zero
conformal dimension - the unity, the adjoint operator and the operator
in the antisymmetric representation. This situation will be discussed
elsewhere.
\section{Deformation by the logarithmic operators}
The conventional WZNW model remains an integrable theory even if one
changes the coefficient in front of the
Tr$(\partial_{\mu}g^+\partial_{\mu}g)$-term in the action
(4). According to \cite{knizh}, such perturbation is equivalent to the
$J_{-1}\bar
J_{-1}\Phi^{ab}$-operator (recall that $\Phi^{ab}$ is the primary
field in the adjoint representation). The
corresponding beta
function is
\begin{equation}
\beta(\gamma) = \frac{2c_v}{c_v + N}\gamma
\end{equation}
where $\gamma$ is the deviation of the coupling constant from its
critical value.
In our case $c_v = r \rightarrow 0$ and the beta function apparently
vanishes. This means that the perturbation becomes marginal and
we have to reconsider the terms of higher order in $\gamma$.
Despite the fact that $\Phi^{ab}$ does not appear now in the
OPE, its descendants, that is, the logarithmic
operators $D, ~\bar D$, do appear.
We suggest that the
change in the coupling constant of the WZNW model (4) is
associated with the perturbation by the marginal operator
$\gamma\bar{D}D$. We warn the reader not to confuse this perturbation with
a change of the disorder strength ${\cal A}$, which is truly
irrelevant, leading to a
change of the cut-off $M$. One physical mechanism of a marginal deformation
away from criticality in the model (4) was described in
\cite{ners} (see Chapter 7). We conjecture that this
deformation is generated by the $\gamma\bar{D}D$-perturbation.
In the case of a deformation $\gamma \int \mbox{d}^2 z O(z, \bar{z})$
caused by a usual marginal
operator $O$ one has two possibilities depending on the operator
product expansion $$O(z, \bar{z}) O(0) = f \frac{O(0)}{|z|^2} + ...$$
The first one is when $f =0$, i.e. the OPE of $O(z, \bar{z}) O(0)$
does not
contain the operator $O$ itself.
In this case this operator is truly marginal
and one has the continuous family of
conformal field theories parametrized by the deformation parameter
(coupling constant). The
anomalous dimensions $\Delta$ depend on this parameter. In the
model (1) this situation is realized when one introduces an Abelian
disorder (see \cite{wen}).
In the opposite case, when $f \neq 0$, i.e. the OPE of $O(z) O(0)$
contains $O$ itself, there is a renormalization group (RG)
flow of the coupling constant
$$ \frac{\mbox{d}\gamma}{\mbox{d} \ln \Lambda} = f \gamma^2 +..$$
which means that the theory actually depends on the scale $\Lambda$.
Let us now study the same problem in a case where the theory is
deformed
by the operator $\bar{D}D$, which is truly marginal, because
the OPE of $D(z) D(0)$ does not contain the operator $D$ itself
(see Eq.(\ref{DD})).
In this case we shall calculate the correlation function
\begin{eqnarray}
G(z; \gamma) &&=
\langle A(z) B(0) \exp(\gamma \int \mbox{d}^2 x \bar{D}D(x))\rangle
\nonumber \\
&&= \sum_{n} \frac{\gamma^n}{n!}\int \langle A(z) B(0)~
\bar{D}D(x_1)\cdots\bar{D}D(x_n)\rangle \mbox{d}^2 x_1\cdots\mbox{d}^2 x_n
\end{eqnarray}
where $A$ and $B$ are some operators ($Q$ and $Q^{+}$, for example)
with the correlation function
$$
G(z; 0) = \langle A(z) B(0)\rangle
$$
Using the OPE
\begin{equation}
\bar{D}D(x)~ A(y) = a \frac{\ln^2 |x-y|^2}{|x-y|^2} A(y),
{}~~~
\bar{D}D(x)~ B(y) = b \frac{\ln^2 |x-y|^2}{|x-y|^2} B(y)
\end{equation}
one can find the first-order (in $\gamma$) correction to the
correlation function:
\begin{eqnarray}
\gamma \int \langle &&A(z) B(0)~
\bar{D}D(x )\rangle \mbox{d}^2 x = \\ a&& \gamma
\langle A(z) B(0)\rangle \int \mbox{d}^2 x \frac{\ln^2
|x-z|^2}{|x-z|^2}
~ + b \gamma
\langle A(z) B(0)\rangle \int \mbox{d}^2 x \frac{\ln^2
|x|^2}{|x|^2} \nonumber
\end{eqnarray}
where in both integrals we integrate over $x$ between $0$ and $z$.
Then it is easy to find the following logarithmic correction:
$$
\gamma \frac{ (a +b)}{3}~ \ln^3 |z|^2 G(z; 0)
$$
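The elementary integral behind this result is (with $1/M$ as the
short-distance cutoff, and up to the normalization convention for
$\mbox{d}^2x$, which can be absorbed into $a$ and $b$)
\begin{equation}
\int \mbox{d}^2 x \frac{\ln^2 |x|^2}{|x|^2} = 2\pi\int_{1/M}^{|z|}
\frac{\mbox{d}r}{r}\, 4\ln^2 r = \frac{\pi}{3}\left[\ln^3 |z|^2 -
\ln^3 M^{-2}\right],
\end{equation}
whose leading term at large $|z|$ is cubic in the logarithm.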
Now one can consider the next order corrections and sum all of them
using the same methods as in the case of conventional marginal
operators (see \cite{Pokrovskii}). The result is
\begin{equation}
G(z; \gamma) = G(z; 0) \exp\left(\gamma\frac{a+b}{3}\ln^3 |z|^2\right)
\label{green}
\end{equation}
which is different from the case of a conventional marginal operator when
one has the first power of log in the exponent and not the third.
The first power in the exponent introduces the power
factor
\begin{equation}
\exp\left((a+b) \gamma \ln |z|^2\right) = |z|^{2(a+b)\gamma}
\end{equation}
corresponding to the change in the anomalous dimension
$\Delta_{A}(\gamma) = \Delta_{A}(0) - (a+b) \gamma$, and the
behaviour of the deformed correlation function is still power-like.
There are no logarithmic corrections after all. This is no longer true
with the logarithmic operator, where the correlation function
cannot be written as a power at all.
Thus we see
that the correlation functions for operators which have non-trivial
OPE with the logarithmic operator $D$ (like our primary
fields $Q$, for example) will have logarithmic corrections in the
deformed theory - even in the absence of the RG flow.
Let $A = Q$ and $B = Q^+$, then, according to Eq.(\ref{cdq}), $a = b
= N^{-4}$. At $\gamma < 0$ the correlation function (\ref{green}) decays faster
than
any power. At $\gamma > 0$ it increases faster than any power. In this
case the approximation leading to Eq.(\ref{green}) breaks down when
the correlation function begins to increase, that is at
\begin{equation}
|z| \sim M^{-1}\exp[\frac{N}{4\sqrt\gamma}]
\end{equation}
We speculate that for $\gamma > 0$ the symmetry is broken and a finite
density of states at $\epsilon = 0$ is formed. This probably explains
the finite DOS obtained numerically in disordered d-wave
superconductors by Wheatley \cite{joe}.
\section{Probability distribution of local DOS}
Now we shall calculate the distribution function of local densities
of states. We can do it for a system of a finite size $L$. From
Eq.(\ref{rho}) we know that in the case of zero frequency we have
\begin{equation}
\langle \rho^n(x)\rangle = M^n\langle [\mbox{Tr}(Q + Q^+)]^n\rangle
\sim L^{- 2\Delta_n} \label{cor}
\end{equation}
The latter equality is valid to leading
order in $1/L$; $\Delta_n$, given by Eq.(\ref{deltas}),
is the smallest conformal dimension
in the operator
product of $n$ operators Tr$(Q +
Q^+)$. For $N > 2$ the $\Delta_n$ are negative for $n > 1$.
Let us imagine now that the result (\ref{cor}) comes from a local
distribution function of $\rho$:
\begin{equation}
\langle \rho^n(x)\rangle = \int_0^{\infty}P(\rho)\rho^n \mbox{d}\rho =
A_n\exp\{2\ln L[\frac{(N - 1)n^2}{2N^2} - \frac{n}{2N}]\}
\end{equation}
where $A_n$ may contain powers of $\ln L$. The distribution function
which
reproduces this result is the famous log-normal distribution which
is considered as a
characteristic feature of disordered systems \cite{lerner},
\cite{raz}, \cite{falko}:
\begin{eqnarray}
P(\rho) &&= D(\rho)\exp\left[ - \frac{1}{\ln L^{\eta}}\ln^2(\rho
L^{\zeta})\right], \label{lognorm}\\
\zeta &&= \frac{1}{N}(3 - 2/N), \: \eta = \frac{4(N - 1)}{N^2}
\end{eqnarray}
where $D(\rho)$ is a smooth function of $\ln\rho$ which we cannot
determine.
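As a consistency check (a saddle-point sketch, treating $D(\rho)$ as a
constant to leading order): substituting $t = \ln(\rho L^{\zeta})$ into
$\int_0^{\infty}P(\rho)\rho^n\mbox{d}\rho$ gives the exponent
$(n + 1)t - t^2/(\eta\ln L)$, which is maximal at
$t_* = (n + 1)\eta\ln L/2$, so that
\begin{equation}
\langle\rho^n\rangle \sim \exp\left\{\ln L\left[\frac{(n + 1)^2\eta}{4} -
(n + 1)\zeta\right]\right\}
\end{equation}
Matching the $n^2$ and $n$ terms to $2\ln L[(N - 1)n^2/2N^2 - n/2N]$
requires $\eta/4 = (N - 1)/N^2$ and $\eta/2 - \zeta = - 1/N$, which
reproduces the quoted values of $\eta$ and $\zeta$ (the $n$-independent
piece is absorbed into $A_n$).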
From the fact that the frequency scales as $L^{- 2 + 2\Delta_1}$ we
can conjecture that the distribution function of $\rho(\omega)$ is
given by
\begin{eqnarray}
P(\rho_{\omega}) =
D(\rho_{\omega})\exp\left[ - \frac{1}{\ln (1/\omega^{\gamma})}
\ln^2(\rho_{\omega}
\omega^{- \beta})\right], \nonumber\\
\beta = \frac{3N - 2}{2N^2 - 1}, \gamma = \frac{4(N - 1)}{2N^2 - 1}
\end{eqnarray}
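These exponents follow from the finite-size ones by the substitution
$\ln L = \frac{N^2}{2N^2 - 1}\ln(1/\omega)$, which expresses the scaling
$\omega \sim L^{-2 + 2\Delta_1}$ with $2\Delta_1 = 1/N^2$; indeed,
\begin{equation}
\beta = \frac{N^2}{2N^2 - 1}\zeta = \frac{3N - 2}{2N^2 - 1}, \:
\gamma = \frac{N^2}{2N^2 - 1}\eta = \frac{4(N - 1)}{2N^2 - 1}
\end{equation}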
The authors of Ref.\cite{wen} have discussed this distribution without
writing it down explicitly.
It is worth remarking that the log-normal distribution also appears
in the Liouville theory (\ref{Liou}). Indeed, according to Eq.(\ref{Eq}), the
conformal dimension of the operator $\exp(n\alpha\phi)$ is given by
$\frac{1}{2}n\alpha(Q - n\alpha)$, i.e. it has the same quadratic
$n$-dependence as $\Delta_n$ (\ref{deltas}). Repeating the previous arguments
we obtain for the vertex
operator $V = \exp(\alpha\phi)$ the log-normal distribution
(\ref{lognorm}) with $\eta = 4\alpha^2, ~ \zeta = \alpha Q(3 - 2
\alpha Q)$. Since the logarithmic operators appear in this theory only
when $Q = 2\alpha$, we conclude that their presence is not directly
related to multifractality of the target space.
\section{Conclusions}
In this paper we have demonstrated that in the general class of
nonunitary critical
models a new phenomenon takes place - the emergence of
logarithmic operators associated with a special hidden continuous symmetry.
The presence of this symmetry is intimately related to the fact that
the order parameter of our model - the local density of states at
$\epsilon = 0$ - does
not acquire a non-zero average. Since the above-mentioned
features appear also in quantum gravity \cite{bk}, we
anticipate a connection between quantum gravity and critical models with
disorder \cite{future}. Our expectations are supported by
the recently discovered similarities between the
conventional localization theory and the Liouville theory
\cite{falko}, \cite{khm}.
The physical meaning of the hidden symmetry in
models with disorder, as well as in 2d gravity, remains obscure. It
is clear, however, that this symmetry
should routinely appear
in critical non-unitary theories where the Hamiltonian
cannot be diagonalized (in our paper
this fact is expressed in Eq.(\ref{example})).
It follows also from our work that, at least for the model in
question, the replica approach is equivalent to the supersymmetric
one. This is a pleasant fact.
As we have said above, Eqs.(\ref{CD}) and (\ref{wardc}) enable one in
principle to
reformulate the theory in terms of representations of the current
algebra of the conserved current $C(z)$. With this task being accomplished
one can abandon replicas and treat the theory
axiomatically as it is customary, for instance, in the theory of
the standard WZNW model. At present this remains the
biggest challenge.
\acknowledgments
The authors are grateful to B. Altshuler, J. Chalker, K. Efetov, V. Fal'ko,
A. Nersesyan, D. Khmelnitskii, N. Mavromatos,
B. Muzykantskii, R. Stinchcombe and J. Wheater for valuable and inspirational
discussions. Our special thanks are to I. Lerner, who acquainted us
with the results obtained in the conventional localization theory.
One of us (I.I.K) would like also to thank
A. Bilal and V. Gurarie for numerous interesting discussions about
the logarithmic operators.
{\bf Appendix}
In this Appendix we write down the relationship between conformal
blocks in the replica and the supersymmetric (SUSY) representations. According
to Ref.\cite{wen}, the correlation functions in SUSY representation
are products of correlation functions of the Gaussian model and the SU$_k$(N)
WZNW theory with $k = - 2N$. According to Ref.\cite{knizh}, the
conformal blocks of the latter model are
\begin{eqnarray}
W_1^{(0)}(x) &&= (1 - x)^{-1}F(1/N, - 1/N, 2; x)\nonumber\\
W_2^{(0)}(x) &&= - \frac{1}{2N}x(1 - x)^{-1}F(1 + 1/N, 1 - 1/N, 3; x)
\end{eqnarray}
The second solution contains logarithms. The relationship between the two
representations is established by the identity
\begin{eqnarray}
x(1 - 1/N^2)F(1 + 1/N, 1 - 1/N, 3; x) + 2(1 - x)F(1/N, - 1/N, 2; x)
\nonumber \\ =
(1 - xN^2)F(1/N, - 1/N, 1;x)
\end{eqnarray}
using which one can write the expression for the four-point
function (\ref{four1}) in terms of conformal blocks of either the replica
or the supersymmetric model.
\section{Introduction}
The following questions are motivated by applications involving flash memory.
Let $S_n$ be the \df{symmetric group} of permutations
$\pi=[\pi(1),\ldots,\pi(n)]$ of $[n]:=\{1,\ldots,n\}$, with composition
defined by $(\pi\rho)(i)=\pi(\rho(i))$. For $2\leq k \leq n$ let
$$\tau_k:=\bigl[k,\;1,2,\ldots,k-1,\;k+1,\ldots,n\bigr]\in S_n$$
be the permutation that jumps element $k$ to position $1$ while shifting
elements $1,2,\dots,k-1$ right by one place. Let $\S_n$ be the \df{directed
Cayley graph} of $S_n$ with generators $\tau_2,\ldots,\tau_n$, i.e.\ the
directed graph with vertex set $S_n$ and a directed edge, labelled $\tau_i$,
from $\pi$ to $\pi\tau_i$ for each $\pi\in S_n$ and each $i=2,\ldots,n$.
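For concreteness, these conventions are easily encoded; the following Python
sketch (ours, purely illustrative, with ad-hoc helper names) fixes the
representation used in the computational asides below.
\begin{verbatim}
# Permutations pi of [n] as Python lists p with p[i-1] = pi(i).

def tau(k, n):
    # The generator tau_k = [k, 1, 2, ..., k-1, k+1, ..., n].
    return [k] + list(range(1, k)) + list(range(k + 1, n + 1))

def compose(p, r):
    # Composition convention (p r)(i) = p(r(i)).
    return [p[r[i] - 1] for i in range(len(p))]

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p, start=1):
        q[v - 1] = i
    return q

assert tau(2, 3) == [2, 1, 3]                   # tau_2 jumps 2 to position 1
assert tau(9, 9) == [9, 1, 2, 3, 4, 5, 6, 7, 8]
\end{verbatim}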
We are concerned with self-avoiding directed cycles (henceforth referred to
simply as \df{cycles} except where explicitly stated otherwise) in $\S_n$. (A
cycle is self-avoiding if it visits each vertex at most once). In
applications to flash memory, a permutation represents the relative ranking
of charges stored in $n$ cells. Applying $\tau_i$ corresponds to the
operation of increasing the $i$th charge to make it the largest, and a cycle
is a schedule for visiting a set of distinct charge rankings via such
operations. Schemes of this kind were originally proposed in \cite{jiang}.
One is interested in maximizing the length of such a cycle, since this
maximizes the information that can be stored. It is known that $\S_n$ has a
directed \df{Hamiltonian} cycle, i.e.\ one that includes \emph{every}
permutation exactly once; see
e.g.~\cite{johnson,holroyd-ruskey-williams,jiang,knuth-4-2}. However, for the
application it is desirable that the cycle should not contain any two
permutations that are within a certain fixed distance $r$ of each other, with
respect to some metric $d$ on $S_n$. The motivation is to avoid errors
arising from one permutation being mistaken for another
\cite{jiang,mazumdar}. The problem of maximizing cycle length for given $r,d$
combines notions of Gray codes~\cite{savage} and error-detecting/correcting
codes~\cite{baylis}, and is sometimes known as a snake-in-the-box problem.
(This term has its origins in the study of analogous questions involving
binary strings as opposed to permutations; see e.g.~\cite{snake}).
The main result of this article is that, in the case that has received most
attention (described immediately below)
there is a cycle that is \df{perfect},
i.e.\ that has the maximum size even among arbitrary sets of
permutations satisfying the distance constraint.
More precisely, our focus is the following case considered in
\cite{yehezkeally-schwartz,horovitz-etzion,zhang-ge}. Let $r=1$ and let $d$
be the \df{Kendall tau} metric \cite{kendall}, which is defined by setting
$d(\pi,\sigma)$ to be the inversion number of $\pi^{-1}\sigma$, i.e.\ the
minimum number of elementary transpositions needed to get from $\pi$ to
$\sigma$. (The $i$th elementary transposition swaps the permutation elements
in positions $i$ and $i+1$, where $1\leq i\leq n-1$). Thus, the cycle is not
allowed to contain any two permutations that are related by a single
elementary transposition. The primary object of interest is the maximum
possible length $M_n$ of such a directed cycle in $\S_n$.
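Using the helpers sketched above, the metric and the distance constraint are a
few lines of illustrative code (ours):
\begin{verbatim}
# Kendall tau distance: inversion number of pi^{-1} sigma.

def inversions(p):
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if p[i] > p[j])

def kendall(p, s):
    return inversions(compose(inverse(p), s))

n = 5
p = [3, 1, 4, 2, 5]
# Permutations differing by one elementary transposition are at
# distance 1, hence forbidden from appearing together in a cycle:
assert kendall(p, compose(p, tau(2, n))) == 1
\end{verbatim}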
It is easy to see that $M_n\leq n!/2$. Indeed, any set of permutations
satisfying the above distance constraint includes at most one from the pair
$\{\pi,\pi\tau_2\}$ for every $\pi$, but these pairs partition $S_n$. To
get a long cycle, an obvious approach is to restrict to the \df{alternating
group} $A_n$ of all even permutations. Since an elementary transposition
changes the parity of a permutation, this guarantees that the distance
condition is satisfied. The generator $\tau_k$ lies in $A_n$ if and only if
$k$ is odd. Therefore, if $n$ is odd, this approach reduces to the problem of
finding a maximum directed cycle in the directed Cayley graph $\A_n$ of $A_n$
with generators $\tau_3,\tau_5,\ldots,\tau_n$. Yehezkeally and Schwartz
\cite{yehezkeally-schwartz} conjectured that for odd $n$ the maximum cycle
length $M_n$ is attained by a cycle of this type; our result will imply this.
(For even $n$ this approach is less useful, since without using $\tau_n$ we
can access only permutations that fix $n$.) As in
\cite{yehezkeally-schwartz,horovitz-etzion,zhang-ge}, we focus mainly on odd
$n$.
For small odd $n$, it is not too difficult to find cycles in $\A_n$ with
length reasonably close to the upper bound $n!/2$, by ad-hoc methods. Finding
systematic approaches that work for all $n$ is more challenging. Moreover,
getting all the way to $n!/2$ apparently involves a fundamental obstacle, but
we will show how it can be overcome.
Specifically, it is obvious that $M_3=3!/2=3$. For general
odd $n\geq 5$, Yehezkeally and Schwartz
\cite{yehezkeally-schwartz} proved the inductive bound
$M_n\geq n(n-2)M_{n-2}$, leading to
$M_n=\Omega(n!/\sqrt n)$ asymptotically. They also showed by
computer search that $M_5=5!/2-3=57$. Horovitz and Etzion
\cite{horovitz-etzion} improved the inductive bound to
$M_n\geq (n^2-n-1)M_{n-2}$, giving $M_n=\Omega(n!)$. They
also proposed an approach for constructing a longer cycle
of length $n!/2 - n + 2$ $(=(1-o(1))\,n!/2)$, and showed by
computer search that it works for $n=7$ and $n=9$. They conjectured
that this bound is optimal for all odd $n$. Zhang and Ge
\cite{zhang-ge} proved that the scheme of
\cite{horovitz-etzion} works for all odd $n$, establishing
$M_n\geq n!/2-n+2$, and proposed another scheme aimed at
improving the bound by $2$ to $n!/2 - n+4$. Zhang and Ge
proved that their scheme works for $n=7$, disproving the
conjecture of \cite{horovitz-etzion} in this case, but were
unable to prove it for general odd $n$.
The obvious central question here is whether there exists a perfect cycle,
i.e.\ one of length $n!/2$, for any odd $n>3$. As mentioned above, Horovitz
and Etzion \cite{horovitz-etzion} conjectured a negative answer for all such
$n$, while the authors of \cite{zhang-ge,yehezkeally-schwartz} also speculate
that the answer is negative. We prove a \emph{positive} answer for $n\neq 5$.
\begin{thm}\label{main}
For all odd $n\geq 7$, there exists a directed Hamiltonian
cycle of the directed Cayley graph $\A_n$ of the
alternating group $A_n$ with generators
$\tau_3,\tau_5,\ldots,\tau_n$. Thus, $M_n=n!/2$.
\end{thm}
Besides being the first of optimal length, our cycle has a somewhat simpler
structure than those in \cite{horovitz-etzion,zhang-ge}. It may in
principle be described via an explicit rule that specifies which generator
should immediately follow each permutation $\pi$, as a function of $\pi$.
(See \cite{holroyd-ruskey-williams,williams} for other Hamiltonian cycles of
Cayley graphs that can be described in this way). While the improvement from
$n!/2-n+2$ to $n!/2$ is in itself unlikely to be important for applications,
our methods are quite general, and it is hoped that they will prove useful
for related problems.
We briefly discuss even $n$. Clearly, one approach is to simply leave the
last element of the permutation fixed, and use a cycle in $\A_{n-1}$, which
gives $M_n\geq M_{n-1}$ for even $n$. Horovitz and Etzion
\cite{horovitz-etzion} asked for a proof or disproof that this is optimal. In
fact, we expect that one can do much better. We believe that $M_n\geq
(1-o(1)) n!/2$ asymptotically as $n\to\infty$ (an $n$-fold improvement over
$(n-1)!/2$), and perhaps even $M_n\geq n!/2-O(n^2)$. We will outline a
possible approach to showing bounds of this sort, although it appears that a
full proof for general even $n$ would be rather messy. When $n=6$ we use this
approach to show $M_6\geq 315=6!/2-45$, improving the bound $M_6\geq 57$ of
\cite{horovitz-etzion} by more than a factor of $5$.
Hamiltonian cycles of Cayley graphs have been extensively studied, although
general results are relatively few. See
e.g.~\cite{pak-radoicic,curran-gallian,witte-gallian,knuth-4-2} for surveys.
In particular, it is unknown whether every \emph{undirected} Cayley graph is
Hamiltonian. Our key construction (described in the next section) appears to
be novel in the context of this literature also.
Central to our proof are techniques having their origins in change ringing
(English-style church bell ringing). Change ringing is also concerned with
self-avoiding cycles in Cayley graphs of permutations groups (with a
permutation representing an order in which bells are rung), and change
ringers discovered key aspects of group theory considerably before
mathematicians did -- see~e.g.\ \cite{white,thompson,griffiths,tintin}. As we
shall see, the fact that $\A_5$ has no Hamiltonian cycle (so that we have the
strict inequality $M_5<5!/2$) follows from a theorem of Rankin
\cite{rankin,swan} that was originally motivated by change ringing.
\section{Breaking the parity barrier}
\label{parity}
In this section we explain the key obstruction that frustrated the previous
attempts at a Hamiltonian cycle of $\A_n$ in
\cite{yehezkeally-schwartz,horovitz-etzion,zhang-ge}. We then explain how it
can be overcome. We will then use these ideas to prove \cref{main} in
\cref{hypergraphs,cycle}.
By a \df{cycle cover} of a directed Cayley graph we mean a
set of self-avoiding directed cycles whose vertex sets
partition the vertex set of the graph. A cycle or a cycle
cover can be specified in several equivalent ways: we can
list the vertices or edges encountered by a cycle in order,
or we can specify a starting vertex of a cycle and list the
generators it uses in order, or we can specify which
generator immediately follows each vertex -- i.e.\ the
label of the unique outgoing edge that belongs to the cycle
or cycle cover. It will be useful to switch between these
alternative viewpoints.
A standard approach to constructing a Hamiltonian cycle is to start with a
cycle cover, and then successively make local modifications that unite
several cycles into one, until we have a single cycle. (See
\cite{rapaport-strasser,pak-radoicic,holroyd-ruskey-williams,williams,griffiths,
yehezkeally-schwartz,horovitz-etzion,zhang-ge,curran-gallian,
compton-williamson,witte-gallian} for examples.) However, in $\A_n$ and many
other natural cases, there is a serious obstacle involving parity, as we
explain next.
The \df{order} $\order(g)$ of a group element $g$ is the smallest $t\geq 1$
such that $g^t=\id$, where $\id$ is the identity. In our case, let
$\tau_k,\tau_\ell$ be two distinct generators of $\A_n$, and observe that
their ratio $\rho:=\tau_\ell\tau_k^{-1}$ is simply the permutation that jumps
element $\ell$ to position $k$ while shifting the intervening elements by
$1$. For example, when $n=9$ we have $\tau_9=[912345678]$ and
$\tau_7^{-1}=[234567189]$, so $\tau_9\tau_7^{-1}=[123456978]$ (element $9$
jumps first to position $1$ and then back to position $7$). In general, the
ratio $\rho$ has order $q:=|k-\ell|+1$, which is odd. In the example, $q=3$.
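Using the helpers sketched earlier, the worked example and the order claim can
be checked directly (illustrative code, ours):
\begin{verbatim}
def order(p):
    identity = list(range(1, len(p) + 1))
    q, t = p, 1
    while q != identity:
        q, t = compose(q, p), t + 1
    return t

n = 9
rho = compose(tau(9, n), inverse(tau(7, n)))
assert rho == [1, 2, 3, 4, 5, 6, 9, 7, 8]   # element 9 jumps to position 7
assert order(rho) == 3                      # q = |9 - 7| + 1, odd
\end{verbatim}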
The fact that $\order(\rho)=q$ corresponds to the fact that in the Cayley
graph $\A_n$, starting from any vertex, there is a cycle of length $2q$
consisting of directed edges oriented in \emph{alternating} directions and
with alternating labels $\tau_\ell$ and $\tau_k$. Consider one such
alternating cycle $Q$, and suppose that we have a cycle cover that includes
all $q$ of the $\tau_k$-edges of $Q$. Consequently, it includes none of the
$\tau_\ell$-edges of $Q$ (since it must include only one outgoing edge from
each vertex). An example is the cycle cover that uses the outgoing
$\tau_k$-edge from every vertex of $\A_n$. Then we may modify the cycle cover as
follows: delete all the $\tau_k$-edges of $Q$, and
add all the $\tau_\ell$-edges of $Q$. This results in a new
cycle cover, because each vertex of the graph still has exactly one incoming
edge and one outgoing edge present.
Suppose moreover that all the $\tau_k$-edges of $Q$ lay in distinct
cycles in the original cycle cover. Then the effect of the modification is
precisely to unite these $q$ cycles into one new cycle (having the same
vertices). The new cycle alternately traverses the new $\tau_\ell$-edges and
the remaining parts of the $q$ original cycles. All other cycles of the
cycle cover are unaffected. See \cref{qset3} (left) for the case
$(k,\ell)=(n-2,n)$ (with $q=3$), and \cref{qset3} (right) for the
permutations at the vertices of the alternating cycle $Q$.
A modification of the above type reduces the total number of cycles in the
cycle cover by $q-1$, and therefore, since $q$ is odd, it does not change the
\emph{parity of the total number of cycles}. Less obviously, it turns out
that this parity is preserved by such a modification even if we relax the
assumption that the $q$ deleted edges lie in distinct cycles. (See
\cite{rankin} or \cite{swan} for proofs.)
This is a problem, because many cycle covers that one might
naturally start with have an \emph{even} number of cycles. This holds in
particular for the cycle cover that uses a single generator $\tau_k$
everywhere (for $n\geq 5$), and also for the one that arises in an obvious
inductive approach to proving \cref{main} (comprising
$|A_n|/|A_{n-2}|=n(n-1)$ cycles each of length $|A_{n-2}|$). Thus we can
(apparently) never get to a Hamiltonian cycle (i.e.\ a cycle cover of one
cycle) by this method.
\begin{figure}
\centering
\begin{minipage}{.5\textwidth}
\centering
\begin{tikzpicture}[scale=.75]
\path[use as bounding box] (-3.1,-3.3) rectangle (3.1,2.6);
\foreach \i in {0,...,5}
{ \draw (60*\i:1) node[circle,fill,inner sep=1.7pt] (\i){};};
\draw[very thick,darkgreen] (120:1) node[circle, draw,inner sep=3pt]{};
\draw (120:1) node[circle,fill=darkgreen,inner sep=1.7pt]{};
\draw[very thick,midarrow,blue,dotted] (0) to node[above right=-4pt and -2pt] {$\tau_{n-2}$} (1);
\draw[very thick,midarrow,blue,dotted] (2) to node[above left=-4pt and -2pt] {$\tau_{n-2}$} (3);
\draw[very thick,midarrow,blue,dotted] (4) to node[below] {$\tau_{n-2}$} (5);
\draw[ultra thick,midarrow] (0) to node[below right=-4pt and -2pt] {$\tau_n$} (5);
\draw[ultra thick,midarrow] (2) to node[above] {$\tau_n$} (1);
\draw[ultra thick,midarrow] (4) to node[below left=-4pt and -2pt] {$\tau_n$} (3);
\draw[very thick,midarrow,blue] (1) to[bend left=135,looseness=10] (0);
\draw[very thick,midarrow,blue] (3) to[bend left=135,looseness=10] (2);
\draw[very thick,midarrow,blue] (5) to[bend left=135,looseness=10] (4);
\end{tikzpicture}
\end{minipage
\begin{minipage}{.4\textwidth}\centering
\centering
\begin{tikzpicture}
\boldmath
\newcommand{\bdot}{\boldsymbol\cdot}
\matrix (T) [matrix of math nodes, nodes in empty cells,
row 1/.style={nodes={text=darkgreen}},
row 7/.style={nodes={text=darkgreen}},
font=\fontsize{13}{13}\bfseries\selectfont,column sep={12pt,between origins},row
sep={14pt,between origins}] {
\bdot&\bdot&\bdot&\bdot&\bdot&\bdot & a & b & c \\
c & \bdot&\bdot&\bdot&\bdot&\bdot&\bdot & a & b \\
\bdot&\bdot&\bdot&\bdot&\bdot&\bdot & c & a & b \\
b & \bdot&\bdot&\bdot&\bdot&\bdot&\bdot & c & a \\
\bdot&\bdot&\bdot&\bdot&\bdot&\bdot & b & c & a \\
a & \bdot&\bdot&\bdot&\bdot&\bdot&\bdot & b & c \\
\bdot&\bdot&\bdot&\bdot&\bdot&\bdot & a & b & c \\
};
\draw (T-1-9.center) ..controls(T-1-9.south)and(T-2-1.north).. (T-2-1.center);
\draw[blue,very thick,dotted] (T-2-1.center)..controls(T-2-1.south)and(T-3-7.north)..(T-3-7.center);
\draw (T-3-9.center)..controls(T-3-9.south)and(T-4-1.north)..(T-4-1.center);
\draw[blue,very thick,dotted] (T-4-1.center)..controls(T-4-1.south)and(T-5-7.north)..(T-5-7.center);
\draw (T-5-9.center)..controls(T-5-9.south)and(T-6-1.north)..(T-6-1.center);
\draw[blue,very thick,dotted] (T-6-1.center)..controls(T-6-1.south)and(T-7-7.north)..(T-7-7.center);
\end{tikzpicture}
\end{minipage}
\caption{\emph{Left:} linking $3$ cycles by replacing generator $\tau_{n-2}$ with
generator $\tau_n$ in $3$ places. We start with the $3$ thin blue cycles,
each of which comprises
a dotted edge labeled with generator $\tau_{n-2}$, and a curved arc that
represents the remaining part of the cycle. We delete the dotted edges
and replace them with the thick solid black edges (labelled $\tau_n$),
to obtain one (solid) cycle, containing the same vertices as the original $3$ cycles.
\emph{Right:} the permutations at the six vertices
that are marked with solid discs in the left picture.
The permutation at the (green) circled vertex is $[\ldots\ldots,a,b,c]$, where
$a,b,c\in[n]$,
and the permutations are listed in clockwise order around the inner hexagon
starting and finishing there.
The ellipsis $\cdots\cdots$ represents a sequence of $n-3$ distinct elements of
$[n]$, the same sequence everywhere it occurs.
A solid black curve indicates that the ratio between the
two successive permutations is
$\tau_n$ (so that an element jumps from position $n$ to $1$),
while a dotted blue curve indicates $\tau_{n-2}^{-1}$ (with a jump from $1$ to $n-2$).
} \label{qset3}
\end{figure}
The above ideas in fact lead to the following rigorous condition for
non-existence of directed Hamiltonian cycles. The result was proved by
Rankin \cite{rankin}, based on an 1886 proof by
Thompson~\cite{thompson} of a special case arising in
change ringing; Swan \cite{swan} later gave a simpler
version of the proof.
\begin{thm}\label{rankin}
Consider the directed Cayley graph $\G$ of a finite group with two generators
$a,b$. If $\order(ab^{-1})$ is odd and $|\G|/\order(a)$ is even, then $\G$ has
no directed Hamiltonian cycle.
\end{thm}
An immediate consequence is that $\A_5$ has no directed Hamiltonian cycle
(confirming the computer search result of \cite{horovitz-etzion}): taking
$a=\tau_5$ and $b=\tau_3$ we have $\order(ab^{-1})=3$, odd, while
$|A_5|/\order(\tau_5)=60/5=12$ is even. Since $\order(\tau_k)=k$, the same
argument shows that $\A_n$ has no directed Hamiltonian cycle using only two
generators for odd $n\geq 5$.
\begin{figure}
\centering
\begin{minipage}{.55\textwidth}\centering
\begin{tikzpicture}[scale=1.3]
\path[use as bounding box] (-2.3,-2.3) rectangle (2.3,2.3);
\foreach \i in {0,...,11}
{ \draw (-15+30*\i:1) node[circle,fill,inner sep=1.5pt] (\i){};};
\draw[very thick,darkgreen] (45:1) node[circle, draw,inner sep=3pt]{};
\draw (45:1) node[circle,fill=darkgreen,inner sep=1.7pt]{};
\draw[very thick,midarrow,blue,dotted] (0) to node[right] {$\tau_{n-2}$} (1);
\draw[very thick,midarrow,red,dotted] (2) to node[above right=-1pt and -4pt] {$\tau_{n-4}$} (3);
\draw[very thick,midarrow,red,dotted] (4) to node[above left=-1pt and -4pt] {$\tau_{n-4}$} (5);
\draw[very thick,midarrow,blue,dotted] (6) to node[left] {$\tau_{n-2}$} (7);
\draw[very thick,midarrow,red,dotted] (8) to node[below left=-1pt and -4pt] {$\tau_{n-4}$} (9);
\draw[very thick,midarrow,red,dotted] (10) to node[below right=-1pt and -4pt] {$\tau_{n-4}$} (11);
\draw[ultra thick,midarrow] (0) to node[above left=-4pt and -1pt] {$\tau_n$} (11);
\draw[ultra thick,midarrow] (2) to node[below left=-4pt and -1pt] {$\tau_n$} (1);
\draw[ultra thick,midarrow] (4) to node[below=1pt] {$\tau_n$} (3);
\draw[ultra thick,midarrow] (6) to node[below right=-4pt and -1pt] {$\tau_n$} (5);
\draw[ultra thick,midarrow] (8) to node[above right=-4pt and -1pt] {$\tau_n$} (7);
\draw[ultra thick,midarrow] (10) to node[above=1pt] {$\tau_n$} (9);
\draw[very thick,midarrow,blue] (1) to[bend left=135,looseness=10] (0);
\draw[very thick,midarrow,red] (3) to[bend left=135,looseness=10] (2);
\draw[very thick,midarrow,red] (5) to[bend left=135,looseness=10] (4);
\draw[very thick,midarrow,blue] (7) to[bend left=135,looseness=10] (6);
\draw[very thick,midarrow,red] (9) to[bend left=135,looseness=10] (8);
\draw[very thick,midarrow,red] (11) to[bend left=135,looseness=10] (10);
\end{tikzpicture}
\end{minipage
\begin{minipage}{.4\textwidth}\centering
\begin{tikzpicture}
\boldmath
\newcommand{\bdot}{\boldsymbol\cdot}
\matrix (T) [matrix of math nodes,nodes in empty cells,
font=\fontsize{13}{13}\selectfont,
row 1/.style={nodes={text=darkgreen}},
row 13/.style={nodes={text=darkgreen}},
column sep={13pt,between origins},row
sep={13pt,between origins}] {
\bdot&\bdot&\bdot&\bdot& a & b & c & d & e \\
e & \bdot&\bdot&\bdot&\bdot & a & b & c & d \\
\bdot&\bdot&\bdot&\bdot & a & b & e & c & d \\
d & \bdot&\bdot&\bdot&\bdot & a & b & e & c \\
\bdot&\bdot&\bdot&\bdot & d & a & b & e & c \\
c & \bdot&\bdot&\bdot&\bdot & d & a & b & e \\
\bdot&\bdot&\bdot&\bdot & c & d & a & b & e \\
e & \bdot&\bdot&\bdot&\bdot & c & d & a & b \\
\bdot&\bdot&\bdot&\bdot & c & d & e & a & b \\
b & \bdot&\bdot&\bdot&\bdot & c & d & e & a \\
\bdot&\bdot&\bdot&\bdot & b & c & d & e & a \\
a & \bdot&\bdot&\bdot&\bdot & b & c & d & e \\
\bdot&\bdot&\bdot&\bdot & a & b & c & d & e \\
};
\draw (T-1-9.center) ..controls(T-1-9.south)and(T-2-1.north).. (T-2-1.center);
\draw[blue,very thick,dotted] (T-2-1.center)..controls(T-2-1.south)and(T-3-7.north)..(T-3-7.center);
\draw (T-3-9.center)..controls(T-3-9.south)and(T-4-1.north)..(T-4-1.center);
\draw[red,very thick,dotted] (T-4-1.center)..controls(T-4-1.south)and(T-5-5.north)..(T-5-5.center);
\draw (T-5-9.center)..controls(T-5-9.south)and(T-6-1.north)..(T-6-1.center);
\draw[red,very thick,dotted] (T-6-1.center)..controls(T-6-1.south)and(T-7-5.north)..(T-7-5.center);
\draw (T-7-9.center)..controls(T-7-9.south)and(T-8-1.north)..(T-8-1.center);
\draw[blue,very thick,dotted] (T-8-1.center)..controls(T-8-1.south)and(T-9-7.north)..(T-9-7.center);
\draw (T-9-9.center)..controls(T-9-9.south)and(T-10-1.north)..(T-10-1.center);
\draw[red,very thick,dotted] (T-10-1.center)..controls(T-10-1.south)and(T-11-5.north)..(T-11-5.center);
\draw (T-11-9.center)..controls(T-11-9.south)and(T-12-1.north)..(T-12-1.center);
\draw[red,very thick,dotted] (T-12-1.center)..controls(T-12-1.south)and(T-13-5.north)..(T-13-5.center);
\end{tikzpicture}
\end{minipage}
\caption{The key construction. \emph{Left:} replacing a suitable combination of
generators $\tau_{n-2}$ and $\tau_{n-4}$ with $\tau_n$ links $6$ cycles into one,
breaking the parity barrier. We start with the $2$ blue and $4$ red thin cycles, and
replace the dotted edges with the thick black solid edges to obtain the solid
cycle.
\emph{Right:} the permutations appearing
at the vertices marked with solid discs, listed in clockwise order
starting and ending at the
circled vertex, which is $[{.}\,{.}\,{.}\,{.}\, ,a,b,c,d,e]$.
The ellipsis ${\cdot}\,{\cdot}\,{\cdot}\,{\cdot}$
represents the same sequence everywhere it occurs.} \label{qset6}
\end{figure}
To break the parity barrier, we must use at least three generators in a
fundamental way. The problem with the previous approach was that
$\order(\tau_\ell\tau_k^{-1})$ is odd: we need an analogous relation
involving composition of an \emph{even} number of ratios of two generators.
In terms of the graph $\A_n$, we need a cycle of length a multiple of $4$
whose edges are oriented in alternating directions. It is clear that such a
thing must exist for all odd $n\geq 7$, because the ratios
$\tau_k\tau_\ell^{-1}$ generate the alternating group on the $n-2$ elements
$\{3,\ldots,n\}$, which contains elements of even order. We will use the
example:
\begin{equation}
\order\bigl(\zeta\bigr)=2,\quad\text{where }
\zeta:= \tau_n\tau_{n-2}^{-1}\tau_n\tau_{n-4}^{-1}\tau_n\tau_{n-4}^{-1}.
\label{order2}
\end{equation}
It is a routine matter to check \eqref{order2}: the ratio
$\tau_n\tau_{n-s}^{-1}$ is the permutation that jumps an element from
position $n$ to $n-s$ (while fixing $1,\ldots,n-s-1$
and shifting $n-s,\ldots,n-1$ right one place), so to compute the
composition $\zeta$ of three such ratios we need only keep track of the last
$5$ elements. \cref{qset6} (right) shows the explicit computation: starting
from an arbitrary permutation $\pi=[\ldots,a,b,c,d,e]\in A_n$, the successive
compositions
$\pi,\pi\tau_n,\pi\tau_n\tau_{n-2}^{-1},\pi\tau_n\tau_{n-2}^{-1}\tau_n,\ldots,
\pi\zeta^2=\pi$ are listed -- the ellipsis
${\cdot}\,{\cdot}\,{\cdot}\,{\cdot}$ represents the same sequence everywhere
it occurs. This explicit listing of the relevant permutations will be useful
later.
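Alternatively, \eqref{order2} can be verified numerically; a sketch (ours, for
$n=9$, accumulating the product left to right as in a walk on $\A_n$, using
the earlier helpers):
\begin{verbatim}
n = 9
identity = list(range(1, n + 1))
zeta = identity
for g in [tau(n, n), inverse(tau(n - 2, n)), tau(n, n),
          inverse(tau(n - 4, n)), tau(n, n), inverse(tau(n - 4, n))]:
    zeta = compose(zeta, g)
assert zeta == [1, 2, 3, 4, 7, 8, 5, 6, 9]  # row 7: [...,c,d,a,b,e]
assert zeta != identity
assert compose(zeta, zeta) == identity      # order(zeta) = 2
\end{verbatim}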
We can use the above observation to link $6$ cycles into one, as shown in
\cref{qset6} (left). Let $Q'$ be a length-$12$ cycle in $\A_n$ with edges in
alternating orientations that corresponds to the identity \eqref{order2}.
That is to say, every alternate edge in $Q'$ has label $\tau_n$, and is
oriented in the same direction around $Q'$. The other $6$ edges are oriented
in the opposite direction, and have successive labels
$\tau_{n-2},\tau_{n-4},\tau_{n-4},\tau_{n-2},\tau_{n-4},\tau_{n-4}$.
Suppose that we start
with a cycle cover in which the two $\tau_{n-2}$-edges and the four
$\tau_{n-4}$-edges of $Q'$ all lie in distinct cycles. Then we can delete
these $6$ edges and replace them with the six $\tau_n$-edges of $Q'$. This
results in a new cycle cover in which these $6$ cycles have been united into
one, thus reducing the number of cycles by $5$ and changing its parity. See \cref{qset6} (left) -- the old cycles are in thin red and blue, while the new cycle is shown by
solid lines and arcs.
We will prove \cref{main} by induction. The inductive step will use one
instance of the above $6$-fold linkage to break the parity barrier, together
with many instances of the simpler $3$-fold linkage described earlier with
$(k,\ell)=(n-2,n)$. The base case $n=7$ will use the $6$-fold linkage in the
reverse direction (replacing six $\tau_n$-edges with
$\tau_{n-2},\tau_{n-4},\ldots$), together with the cases
$(k,\ell)=(7,5),(7,3)$ of the earlier linkage.
\section{Hypergraph spanning}
\label{hypergraphs}
The other main ingredient for our proof is a systematic way
of organizing the various linkages. For this the language
of hypergraphs will be convenient. Similar hypergraph
constructions were used in \cite{horovitz-etzion,zhang-ge}.
A \df{hypergraph} $(V,H)$ consists of a vertex set $V$ and
a set $H$ of nonempty subsets of $V$, which are called
\df{hyperedges}. A hyperedge of size $r$ is called an
$r$-hyperedge.
The \df{incidence graph} of a hypergraph $(V,H)$ is the bipartite graph with
vertex set $V\cup H$, and with an edge between $v\in V$ and $h\in H$ if
$v\in h$. A \df{component} of a hypergraph is a component of its incidence
graph, and a hypergraph is \df{connected} if it has one component. We say
that a hypergraph is \df{acyclic} if its incidence graph is acyclic. Note
that this is a rather strong condition: for example, if two distinct hyperedges
$h$ and $h'$ share two distinct vertices $v$ and $v'$ then the hypergraph is
not acyclic. (Several non-equivalent notions of acyclicity for hypergraphs
have been considered -- the notion we use here is sometimes called
Berge-acyclicity -- see e.g.~\cite{fagin}).
We are interested in hypergraphs of a particular kind that are related to the
linkages considered in the previous section. Let $[n]^{(k)}$ be the set of
all $n!/(n-k)!$ ordered $k$-tuples of distinct elements of $[n]$.
If $t=(a,b,c)\in [n]^{(3)}$ is a triple, define the \df{triangle}
$\T(t)=\T(a,b,c):=\{(a,b),(b,c),(c,a)\}\subset[n]^{(2)}$ of pairs that
respect the cyclic order. (Note that $\T(a,b,c)=\T(c,a,b)\neq \T(c,b,a)$.)
In our application to Hamiltonian cycles, $\T(a,b,c)$ will encode precisely the linkage
of $3$ cycles shown in \cref{qset3}. The following fact and its proof are
illustrated in \cref{cactus9}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{hypergraph.eps}
\caption{The hypergraph of \cref{cactus}, when $n=9$. The vertices are all the
ordered pairs $(a,b)=ab\in[n]^{(2)}$, and the hyperedges are triangles of the form $\{ab,bc,ca\}$.
Hyperedges are colored according to the step of the induction at which they are added.
In the last step from $n=8$ to $n=9$, all the white hyperedges are added,
i.e.\ those incident to vertices that contain $9$.
}
\label{cactus9}
\end{figure}
\begin{prop}\label{cactus}
Let $n\geq 3$. There exists an acyclic hypergraph with vertex set
$[n]^{(2)}$, with all hyperedges being triangles $\T(t)$ for $t\in
[n]^{(3)}$, and with exactly two components: one containing precisely the $3$
vertices of $\T(3,2,1)$, and the other containing all other vertices.
\end{prop}
\begin{proof}
We give an explicit inductive construction. When $n=3$ we simply take as
hyperedges the two triangles $\T(3,2,1)$ and $\T(1,2,3)$.
Now let $n\geq 4$, and assume that $({[n-1]}^{(2)},H)$ is a hypergraph
satisfying the given conditions for $n-1$. Consider the larger hypergraph
$([n]^{(2)},H)$ with the same set of hyperedges, and note that its components
are precisely: (i) $\T(3,2,1)$; (ii) an acyclic component which we denote $K$
that contains all vertices of $[n-1]^{(2)}\setminus\T(3,2,1)$; and (iii) the
$2n-2$ isolated vertices $\{(i,n),(n,i):i\in[n-1]\}$.
We will add some further hyperedges to $([n]^{(2)},H)$. For $i\in[n-1]$,
write $i^+$ for the integer in $[n-1]$ that satisfies $i^+\equiv (i+1)
\bmod{(n-1)}$, and define
\begin{align*}
D:=\bigl\{&\T(i,i^+,n):i\in [n-1]\bigr\}\\
=\bigl\{&\T(1,2,n),\T(2,3,n),\ldots,\T(n-2,n-1,n),\;\T(n-1,1,n)\bigr\}.
\end{align*}
Any element $\T(i,i^+,n)$ of $D$ has $3$ vertices. One of them, $(i,i^+)$, lies in
$K$, while the others, $(i^+,n)$ and $(n,i)$, are isolated vertices of $([n]^{(2)},H)$.
Moreover, each isolated vertex of $([n]^{(2)},H)$ appears in exactly one
hyperedge in $D$. Therefore, $([n]^{(2)},H\cup D)$ has all the claimed
properties.
\end{proof}
We remark that the above hypergraph admits a simple (non-inductive)
description -- it consists of all $\T(a,b,c)$ such that $\max\{a,b\}<c$ and
$b\equiv (a+1) \bmod (c-1)$.
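This closed form also makes \cref{cactus} easy to spot-check by brute force
for small $n$; the sketch below (ours) assumes the Python package
\texttt{networkx} for the forest and component tests.
\begin{verbatim}
import itertools
import networkx as nx   # assumed available; used only for the checks

def triangle(a, b, c):
    return frozenset({(a, b), (b, c), (c, a)})

def hyperedges(n):
    # Closed form: max{a,b} < c and b = a+1 (mod c-1).
    return [triangle(a, b, c)
            for a, b in itertools.permutations(range(1, n + 1), 2)
            for c in range(max(a, b) + 1, n + 1)
            if b % (c - 1) == (a + 1) % (c - 1)]

for n in range(3, 8):
    G = nx.Graph()
    G.add_nodes_from(itertools.permutations(range(1, n + 1), 2))
    for h in hyperedges(n):
        for v in h:
            G.add_edge(h, v)      # bipartite incidence graph
    assert nx.is_forest(G)                        # acyclic
    assert nx.number_connected_components(G) == 2
\end{verbatim}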
In order to link cycles into a Hamiltonian cycle we will require a \emph{connected} hypergraph. For $n\geq 3$ there is no connected acyclic hypergraph of triangles
with vertex set $[n]^{(2)}$. (This follows from parity considerations: an
acyclic component composed of $m$ triangles has $1+2m$ vertices, but
$|[n]^{(2)}|$ is even.) Instead, we simply introduce a larger hyperedge, as
follows.
\begin{samepage}
\begin{cor}\label{cactus2}
Let $n\geq 5$ and let $a,b,c,d,e\in[n]$ be distinct. There exists a
connected acyclic hypergraph with vertex set $[n]^{(2)}$ such that one
hyperedge is the $6$-hyperedge $\T(a,b,e)\cup \T(c,d,e)$, and all others are
triangles $\T(t)$ for $t\in [n]^{(3)}$.
\end{cor}
\end{samepage}
\begin{proof}
By symmetry, it is enough to prove this for any one choice of $(a,b,c,d,e)$;
we choose $(2,1,4,5,3)$. The result follows from \cref{cactus}, on noting
that $\T(3,4,5)=\T(4,5,3)$ is a hyperedge of the hypergraph constructed
there: we simply unite it with $\T(3,2,1)=\T(2,1,3)$ to form the
$6$-hyperedge.
\end{proof}
\section{The Hamiltonian cycle}
\label{cycle}
We now prove \cref{main} by induction on (odd) $n$. We give
the inductive step first, followed by the base case $n=7$.
The following simple observation will be used in the
inductive step.
\begin{lemma}\label{penultimate}
Let $n\geq 3$ be odd, and consider any Hamiltonian cycle of $\A_n$. For
every $i\in[n]$ there exists a permutation $\pi\in A_n$ with $\pi(n)=i$ that
is immediately followed by a $\tau_n$-edge in the cycle.
\end{lemma}
\begin{proof}
Since the cycle visits all permutations of $A_n$, it must contain a directed
edge from a permutation $\pi$ satisfying $\pi(n)=i$ to a permutation $\pi'$
satisfying $\pi'(n)\neq i$. This is a $\tau_n$-edge, since any other
generator would fix the rightmost element.
\end{proof}
\begin{proof}[Proof of \cref{main}, inductive step]
We will prove by induction on odd $n\geq 7$ the statement:
\begin{equation}
\text{\em there exists a Hamiltonian cycle of $\A_n$
that includes at least one
$\tau_{n-2}$-edge.}\label{ind}
\end{equation}
As mentioned above, we postpone the
proof of the base case $n=7$. For distinct $a,b\in[n]$
define the set of permutations of the form $[\ldots,a,b]$:
$$A_n(a,b):=\Bigl\{\pi\in A_n: \bigl(\pi(n-1),\pi(n)\bigr)=(a,b)\Bigr\}.$$
Let $n\geq 9$, and let $L=(\tau_{s(1)},\tau_{s(2)},\ldots, \tau_{s(m)})$ be
the sequence of generators used by a Hamiltonian cycle of $\A_{n-2}$, as
guaranteed by the inductive hypothesis, in the order that they are
encountered in the cycle starting from $\id\in A_{n-2}$ (where $m=(n-2)!/2$,
and $s(i)\in\{3,5,\ldots,n-2\}$ for each $i$). Now start from any
permutation $\pi\in A_n(a,b)$ and apply the sequence of generators $L$
(where a generator $\tau_k\in A_{n-2}$ is now interpreted as the generator $\tau_k\in A_n$ with the same name). This gives a cycle in $\A_n$ whose vertex set is precisely $A_n(a,b)$. (The two rightmost elements $a,b$ of the permutation are undisturbed, because $L$ does not contain $\tau_n$.) Note that, for given $a,b$, different choices of the starting permutation $\pi\in A_n(a,b)$ in general result in different cycles.
We next describe the idea of the proof, before giving the details. Consider
a cycle cover $\mathcal{C}$ comprising, for each $(a,b)\in[n]^{(2)}$, one cycle $C(a,b)$ with vertex set $A_n(a,b)$ of the form described above (so $n(n-1)$ cycles in
total). We will link the cycles of $\mathcal{C}$ together into a single cycle by
substituting the generator $\tau_n$ at
appropriate points, in the ways discussed in \cref{parity}. The linking
procedure will be encoded by the hypergraph of \cref{cactus2}. The vertex
$(a,b)$ of the hypergraph will correspond to the initial cycle $C(a,b)$.
A $3$-hyperedge $\T(a,b,c)$ will indicate a
substitution of $\tau_n$ for $\tau_{n-2}$ in $3$ of the cycles of $\mathcal{C}$,
linking them together in the manner of \cref{qset3}. The $6$-hyperedge will
correspond to the parity-breaking linkage in which $\tau_n$ is substituted
for occurrences of both $\tau_{n-2}$ and $\tau_{n-4}$, linking $6$ cycles as
in \cref{qset6}. One complication is that the starting points of the
cycles of $\mathcal{C}$ must be
chosen so that $\tau_{n-2}$- and $\tau_{n-4}$-edges occur in appropriate
places so that all these substitutions are possible. To address this, rather than choosing the cycle cover $\mathcal{C}$ at
the start, we will in fact build our final cycle sequentially, using one hyperedge at
a time, and choosing appropriate cycles $C(a,b)$ as we go. We
will start with the $6$-hyperedge, and for each subsequent $3$-hyperedge we will
link in two new cycles. \cref{penultimate} will ensure enough $\tau_{n-2}$-edges for subsequent steps: for any $(a,b,c)\in[n]^{(3)}$, there is a vertex of the form $[\ldots,a,b,c]$ in $C(b,c)$ followed by a $\tau_{n-2}$-edge. The inductive hypothesis \eqref{ind} will provide the $\tau_{n-4}$-edges needed for the initial $6$-fold linkage.
We now give the details. In preparation for the sequential linking procedure,
choose an acyclic connected hypergraph $([n]^{(2)},H)$ according to
\cref{cactus2}, with the $6$-hyperedge being $\T_0\cup \T_0'$, where
$\T_0:=\T(c,d,e)$ and $\T_0':=\T(a,b,e)$, and where we write
\begin{equation}
(a,b,c,d,e)=(n-4,n-3,n-2,n-1,n). \label{abcde}
\end{equation}
Let $N=|H|-1$, and order the hyperedges
as $H=\{h_0,h_1,\ldots,h_N\}$ in such a way that $h_0=\T_0\cup \T_0'$ is the
$6$-hyperedge, and, for each $1\leq i\leq N$, the hyperedge $h_{i}$ shares
exactly one vertex with $\bigcup_{\ell=0}^{i-1} h_\ell$. (To see that this is
possible, note that for any choice of $h_0,\ldots,h_{i-1}$ satisfying this condition, connectedness of the hypergraph implies that there exists $h_i$ that shares \emph{at least} one vertex with one of its predecessors; acyclicity then implies that it
shares exactly one.)
We will construct the required Hamiltonian cycle via a sequence of steps
$j=0,\ldots, N$. At the end of step $j$ we will have a self-avoiding
directed cycle $C_j$ in $\A_n$ with the following properties.
\begin{ilist}
\item The vertex set of $C_j$ is the union of $A_n(x,y)$ over all
$(x,y)\in\bigcup_{i=0}^j h_i$.
\item \sloppy For every $(x,y,z)\in[n]^{(3)}$ such that $(y,z)\in \bigcup_{i=0}^j
h_i$ but $\T(x,y,z)\notin \{\T_0,\T_0',h_1,h_2,\ldots,h_j\}$, there exists a permutation
$\pi\in A_n$
of the form $[\ldots,x,y,z]$ that is followed immediately by a
$\tau_{n-2}$-edge in $C_j$.\fussy
\end{ilist}
We will check by induction on $j$ that the above properties hold. The final
cycle $C_N$ will be the required Hamiltonian cycle. The purpose of the
technical condition (ii) is to ensure that suitable edges are
available for later linkages; the idea is that the
triple $(x,y,z)$ is available for linking in two further cycles
unless it has already been used.
We will describe the cycles $C_j$ by giving their sequences of generators.
Recall that $L$ is the sequence of generators of the Hamiltonian cycle of
$\A_{n-2}$. Note that $L$ contains both $\tau_{n-2}$ and $\tau_{n-4}$, by
\cref{penultimate} and the inductive hypothesis \eqref{ind} respectively. For each of
$k=n-2,n-4$, fix some location $j$ where $\tau_k$ occurs in $L$ (so that
$s(j)=k$), and let $L[\tau_k]$ be the sequence obtained by starting at that
location and omitting this $\tau_k$ from the cycle:
$$L[\tau_k]:=\bigl(\tau_{s(j+1)},\tau_{s(j+2)},\ldots,\tau_{s(m)},\
\tau_{s(1)},\ldots,\tau_{s(j-1)}\bigr).$$
Note that the composition in order of the elements of $L[\tau_k]$ is
$\tau^{-1}_k$.
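In code, $L[\tau_k]$ is a rotate-and-omit operation on the list of generator
subscripts; a minimal sketch (ours, on a toy sequence):
\begin{verbatim}
def rotate_and_omit(L, k):
    j = L.index(k)            # some location with s(j) = k
    return L[j + 1:] + L[:j]  # start just after tau_k, omit it

assert rotate_and_omit([3, 5, 3, 7, 5], 7) == [5, 3, 5, 3]
\end{verbatim}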
For step $0$, let $C_0$ be the cycle that starts at $\id\in A_n$ and uses the
sequence of generators
\begin{gather*}
\tau_n, L[\tau_{n-2}],\tau_n,
L[\tau_{n-4}],\tau_n,L[\tau_{n-4}],\\
\tau_n, L[\tau_{n-2}],\tau_n, L[\tau_{n-4}],\tau_n,L[\tau_{n-4}],
\end{gather*}
(where commas denote concatenation). This cycle is
precisely of the form illustrated in \cref{qset6} (left) by
the solid arcs and lines. The curved arcs represent the
paths corresponding to the $L[\cdot]$ sequences. The
vertex set of each such path is precisely $A_n(u,v)$ for
some pair $(u,v)$; we denote this path $P(u,v)$.
The solid lines represent the $\tau_n$-edges.
Moreover, since \cref{qset6} (right)
lists the vertices (permutations) at the beginning and end
of each path $P(u,v)$, we can read off the pairs $(u,v)$. With
$a,\ldots,e$ as in \eqref{abcde}, the pairs are
$\{(d,e),(c,d),(e,c),(b,e),(a,b),(e,a)\}$. This set equals
$\T_0\cup\T_0'=h_0$, so property (i) above holds for the
cycle $C_0$.
We next check that $C_0$ satisfies (ii). Let
$(x,y,z)\in[n]^{(3)}$ be such that $(y,z)\in h_0$.
The cycle $C_0$ includes a path $P(y,z)$ with vertex set $A_n(y,z)$
and generator sequence $L[\tau_k]$ (where $k$ is
$n-2$ or $n-4$). Let $C(y,z)$ be the cycle that results
from closing the gap, i.e.\ appending a $\tau_k$-edge $f$ to
the end of $P(y,z)$. Note that $P(y,z)$ and $C(y,z)$ both
have vertex set $A_n(y,z)$. By \cref{penultimate} applied
to $\A_{n-2}$, the cycle $C(y,z)$ contains a permutation of
the form $[\ldots,x,y,z]$ immediately followed by a
$\tau_{n-2}$-edge, $g$ say. Edge $g$ is also present
in $C_0$ unless $g=f$. Consulting \cref{qset6}, and
again using the notation in \eqref{abcde}, we see that this happens
only in the two cases $(x,y,z)=(e,c,d),(e,a,b)$. But in
these cases we have $\T(x,y,z)=\T_0,\T_0'$ respectively. Thus
condition (ii) is satisfied at step $0$.
Now we inductively describe the subsequent steps. Suppose
that step $j-1$ has been completed, giving a cycle
$C_{j-1}$ that satisfies (i) and (ii) (with parameter $j-1$
in place of $j$). We will augment $C_{j-1}$ to obtain a
larger cycle $C_{j}$, in a manner encoded by the hyperedge
$h_j$. Let
$$h_j=\T(a,b,c)=\bigl\{(a,b),(b,c),(c,a)\bigr\}$$ (where we no longer
adopt the notation \eqref{abcde}). By our choice of the ordering of $H$,
exactly one of these pairs belongs to $\bigcup_{i=0}^{j-1} h_i$; without loss
of generality, let it be $(b,c)$. By property (ii) of the cycle $C_{j-1}$, it
contains a vertex of the form $[\ldots,a,b,c]$ immediately followed by a
$\tau_{n-2}$-edge, $f$ say. Delete edge $f$ from $C_{j-1}$ to obtain a
directed path $P_{j-1}$ with the same vertex set. Append to $P_{j-1}$ the
directed path that starts at the endvertex of $P_{j-1}$ and then uses the
sequence of generators
$$\tau_n,L[\tau_{n-2}],\tau_n,L[\tau_{n-2}],\tau_n.$$
Since $\order(\tau_n\tau_{n-2}^{-1})=3$, this gives a
cycle, which we denote $C_j$.
The new cycle $C_j$ has precisely the form shown in
\cref{qset3} (left) by the solid arcs and lines, where
$C_{j-1}$ is the thin blue cycle
in the upper left, containing the circled vertex, which is
the permutation $[\ldots,a,b,c]$. The arc is $P_{j-1}$, and the dotted edge
is $f$. As before, the
permutations at the filled discs may be read from
\cref{qset3} (right). Thus, $C_j$ consists of the path
$P_{j-1}$, together with two paths $P(a,b),P(c,a)$ with
respective vertex sets $A_n(a,b),A_n(c,a)$ (the other two
thin blue arcs in the figure), and three $\tau_n$-edges (thick black lines)
connecting these three paths. Hence $C_j$ satisfies
property (i).
We now check that $C_j$ satisfies (ii). The argument is similar
to that used in step $0$. Let $(x,y,z)$ satisfy the
assumptions in (ii). We consider two cases. First suppose
$(y,z)\notin h_j$. Then $(y,z)\in \bigcup_{i=0}^{j-1} h_i$, and
so property (ii) of $C_{j-1}$ implies that $C_{j-1}$ has a vertex of the form
$[\ldots,x,y,z]$ followed by a $\tau_{n-2}$-edge $g$, say.
Then $g$ is also present in $C_j$ unless $g=f$. But in that case we have $(x,y,z)=(a,b,c)$, and so $\T(x,y,z)=h_j$,
contradicting the assumption on $(x,y,z)$.
On the other hand, suppose $(y,z)\in h_j$. Then $(y,z)$ equals
$(a,b)$ or $(c,a)$. Suppose the former; the argument in
the latter case is similar. Let $C(a,b)$ be the cycle
obtained by appending a $\tau_{n-2}$-edge to $P(a,b)$.
Applying \cref{penultimate} shows that $C(a,b)$ contains a vertex of
the form $[\ldots,x,a,b]$ followed by a $\tau_{n-2}$-edge
$g$, say. Then $g$ is also present in $P(a,b)$ unless $x=c$,
but then $\T(x,y,z)=h_j$, contradicting the assumption in
(ii). Thus, property (ii) is established.
To conclude the proof, note that the final cycle $C_N$ is
Hamiltonian, by property (i) and the fact that the hypergraph of \cref{cactus2} has vertex set $[n]^{(2)}$. To check that it includes some
$\tau_{n-2}$-edge as required for \eqref{ind},
recall that $h_N$ has only one vertex in
common with $h_0,\ldots,h_{N-1}$, so there exist $x,y,z$
with $(y,z)\in h_N$ but $\T(x,y,z)\notin H$. Hence property (ii)
implies that $C_N$ contains a $\tau_{n-2}$-edge.
\end{proof}
\begin{proof}[Proof of \cref{main}, base case]
For the base case of the induction, we give an explicit
directed Hamiltonian cycle of $\A_7$ that includes $\tau_5$
at least once. (In fact the latter condition must
necessarily be satisfied, since, as remarked earlier,
\cref{rankin} implies that there is no Hamiltonian cycle
using only $\tau_3$ and $\tau_7$.)
\begin{table}
{\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|}
\hline
row & permutations & generator \\
\hline
1 & $6\overline{7} \overline{7}\overline{7}\s\s\s,
\overline{7}\overline{7}\overline{7} 6 \s\s\s$ & $\tau_5$ \\
2 & $67\s\s\s\s\s,76\s\s\s\s\s$ & $\tau_3$ \\
3 & $567\overline{1}\s\s\s, 576\s\s\s\s$ & $\tau_5$ \\
4 & $2567\s\s\s,4576\s\s\s$ & $\tau_5$ \\
5 & $5671234,5612347,5623714,5637142$ & $\tau_3$ \\
6 & $5623471,5671423$ & $\tau_5$ \\
7 & otherwise & $\tau_7$\\
\hline
\end{tabular}
\vspace{3pt}
} \caption{Rules for generating a directed Hamiltonian
cycle of $\A_7$. Permutations of the given forms should be
followed by the generator in the same row of the table. The
symbol $\s$ denotes an arbitrary element of $[7]$, and
$\overline{a}$ denotes any element other than
$a$.}\label{a7}
\end{table}
\cref{a7} specifies which generator the cycle uses
immediately after each permutation of $A_7$, as a function
of the permutation itself. The skeptical reader may simply
check by computer that these rules generate the required
cycle; a short script doing exactly this is sketched after
the proof. But the rules were constructed by hand; below we
briefly explain how.
First suppose that from every permutation of $A_7$ we apply
the $\tau_7$ generator, as specified in row 7 of the table.
This gives a cycle cover comprising $|A_7|/7=360$ cycles of
size $7$. Now consider the effect of replacing some of
these $\tau_7$'s according to rows 1--6 in succession. Each
such replacement performs a linkage, as in
\cref{qset3,qset6}. Row 1 links the cycles in sets of $3$
to produce $120$ cycles of length $21$, each containing
exactly one permutation of the form $67\s\s\s\s\s$ or
$76\s\s\s\s\s$. Row 2 then links these cycles in sets of
$5$ into $24$ cycles of length $105$, each containing
exactly one permutation of the form $675\s\s\s\s$ or
$765\s\s\s\s$. Rows $3$ and $4$ link various sets of three
cycles, permuting elements $1234$, to produce $6$ cycles.
Finally, rows $5$ and $6$ break the parity barrier as
discussed earlier, uniting these $6$ cycles into one.
\end{proof}
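The following Python sketch (ours) makes the computer check concrete: it
encodes the rows of \cref{a7} (tried in order, first match winning), follows
the prescribed successor starting from the identity, and confirms that the
resulting walk is a self-avoiding cycle through all of $A_7$. The helper
\texttt{step} applies $\tau_k$ on the right, in the conventions of the earlier
sketches.
\begin{verbatim}
NOT7, NOT1, ANY = ('not', 7), ('not', 1), '*'
RULES = [  # (list of patterns, generator subscript), rows of the table
  ([(6,NOT7,NOT7,NOT7,ANY,ANY,ANY), (NOT7,NOT7,NOT7,6,ANY,ANY,ANY)], 5),
  ([(6,7,ANY,ANY,ANY,ANY,ANY), (7,6,ANY,ANY,ANY,ANY,ANY)], 3),
  ([(5,6,7,NOT1,ANY,ANY,ANY), (5,7,6,ANY,ANY,ANY,ANY)], 5),
  ([(2,5,6,7,ANY,ANY,ANY), (4,5,7,6,ANY,ANY,ANY)], 5),
  ([(5,6,7,1,2,3,4), (5,6,1,2,3,4,7),
    (5,6,2,3,7,1,4), (5,6,3,7,1,4,2)], 3),
  ([(5,6,2,3,4,7,1), (5,6,7,1,4,2,3)], 5),
]

def matches(p, pat):
    return all(q == ANY or (q[1] != v if isinstance(q, tuple) else q == v)
               for v, q in zip(p, pat))

def next_gen(p):
    for pats, k in RULES:
        if any(matches(p, pat) for pat in pats):
            return k
    return 7                                   # row 7: otherwise

def step(p, k):                                # p -> p tau_k
    return (p[k - 1],) + p[:k - 1] + p[k:]

start = tuple(range(1, 8))
walk, p = [start], step(start, next_gen(start))
while p != start:
    walk.append(p)
    p = step(p, next_gen(p))
assert len(walk) == len(set(walk)) == 2520     # |A_7| = 7!/2
\end{verbatim}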
\section{Even size}
We briefly discuss a possible approach for even $n$. Recall
that $M_n$ is the maximum length of a cycle $\S_n$ in which
no two permutations are related by an adjacent
transposition.
To get a cycle longer than $M_{n-1}$ we must use $\tau_n$. But this is an odd
permutation, so we cannot remain in the alternating group $A_n$. We suggest
following $\tau_n$ immediately by another odd generator, say $\tau_{n-2}$, in
order to return to $A_n$ (note that $\tau_2$ is forbidden). In order to
include permutations of the form $[\ldots,j]$ for every $j\in[n]$, we need to
perform such a transition (at least) $n$ times in total in our cycle. In the
$i$th transition we visit one odd permutation, $\alpha_i$ say, between the
generators $\tau_n$ and $\tau_{n-2}$. For the remainder of the cycle we
propose using only generators $\tau_k$ for odd $k$, so that we remain in
$A_n$.
In fact, one may even try to fix the permutations $\alpha_1,\ldots,\alpha_n$ in
advance. The problem then reduces to that of finding long self-avoiding
directed paths in $\A_{n-1}$, with specified start and end vertices, and
avoiding certain vertices -- those that would result in a permutation that is
related to some $\alpha_i$ by an elementary transposition. Since there are $n$
$\alpha_i$'s and $n-1$ elementary transpositions, there are $O(n^2)$ vertices
to be avoided in total.
Since, for large $n$, the number of vertices to be avoided is much smaller than $|A_{n-1}|$, we think it very likely that paths of length
$(1-o(1))|A_{n-1}|$ exist, which would give $M_n\geq (1-o(1))n!/2$ as
$n\to\infty$. It is even plausible that $M_n\geq n!/2-O(n^2)$ might be
achievable. The graph $\A_{n-1}$ seems to have a high degree of global connectivity,
as evidenced by the diverse constructions of cycles of close to optimal
length in \cite{horovitz-etzion,yehezkeally-schwartz,zhang-ge}. For a
specific approach (perhaps among others),
one might start with a short path linking the required
start and end vertices, and then try to successively link in short cycles
(say those that use a single generator such as $\tau_{n-1}$) in the manner
of \cref{qset3}, leaving out the relatively few short cycles that contain
forbidden vertices. It is conceivable that the forbidden permutations might
conspire to prevent such an approach, for example by blocking even short paths between the start and end vertices. However, this appears unlikely, especially given the additional flexibility in the choice of $\alpha_1,\ldots,\alpha_n$.
While there appear to be no fundamental obstacles, a proof for general even
$n$ along the above lines might be rather messy. (Of course, this does not preclude some other more elegant approach). Instead, the approach was combined with a
computer search to obtain a cycle of length $315(=6!/2-45)$ for $n=6$, which
is presented below, answering a question of \cite{horovitz-etzion}, and
improving the previous record $M_6\geq 57$ \cite{horovitz-etzion} by more
than a factor of $5$. The case $n=6$ is in some respects harder than larger
$n$: the forbidden vertices form a larger fraction of the total, and $\A_5$
has only two generators, reducing available choices. (On the other hand, the
search space is of course relatively small). Thus, this result also lends support
to the belief that $M_n\geq (1-o(1))n!/2$ as $n\to\infty$.
The search space was reduced by quotienting the graph $\S_6$ by a group of
order $3$ to obtain a Schreier graph, giving a cycle in which the sequence of
generators is repeated $3$ times. The cycle uses the sequence of generators
$(\tau_{k(i)})$ where $(k(i))_{i=1}^{315}$ is the sequence
\newcommand{\z}{\hat{3}}
\begin{align*}
\bigl(
&\mathtt{64\ 55\z5\z\z5555\z555\z555\z\z5555\z5555\z555\z5555\z\z555\z5555\z\z} \\
&\mathtt{64\ 555\z\z5\z55\z\z55\z\z5\z555\z5555\z555\z555\z\z5555\z555\z5555\z\z5}
\bigr)^3.
\end{align*}
(Here, commas are omitted, the superscript indicates that the sequence is repeated three times, and the $3$'s are marked with hats purely as an aid to visual clarity).
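These claims can be verified mechanically. The sketch below (ours) transcribes
the two printed lines, expands the threefold repetition, and checks closure,
self-avoidance, and the distance condition, reusing the \texttt{step} helper
from the previous section.
\begin{verbatim}
line1 = "55 3 5 3 3 5555 3 555 3 555 3 3 5555 3 5555 3 555" \
        " 3 5555 3 3 555 3 5555 3 3"
line2 = "555 3 3 5 3 55 3 3 55 3 3 5 3 555 3 5555 3 555 3" \
        " 555 3 3 5555 3 555 3 5555 3 3 5"
base = ([6, 4] + [int(c) for c in line1.replace(" ", "")]
      + [6, 4] + [int(c) for c in line2.replace(" ", "")])
seq = base * 3
assert len(seq) == 315

start = tuple(range(1, 7))
walk, p = [], start
for k in seq:
    walk.append(p)
    p = step(p, k)
assert p == start                      # the walk closes up...
assert len(set(walk)) == 315           # ...and is self-avoiding

visited = set(walk)
for p in walk:                         # no two vertices at distance 1
    for i in range(5):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        assert tuple(q) not in visited
\end{verbatim}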
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
Bayesian analyses of \ac{GW} data from binary mergers rely on extensive explorations of the posterior probability distribution of detected signals~\cite{Aasi:2013jjl, Veitch:2014wba} and hinge on accurate waveform models, representing the prediction of a \ac{GW} signal originating from a system described by a certain set of parameters \(\theta \).
As the sampling of the posterior distribution for a single \ac{GW} event typically requires the generation of \({\gtrsim}10^{7}\) waveforms, speed in their generation is essential.
This is especially compelling in view of future detectors, for which the rate of events will be significantly higher than the current one.
The in-band duration of signals will also increase and, due to their low mass, \ac{BNS} mergers are most affected by this; they are the focus of this work.
For general relativistic waveform models, speed and accuracy are often at odds.
For example, very fast waveform generation can be obtained with analytical \ac{PN} approximants~\cite{Buonanno:2009zt, Blanchet:2013haa}, but such templates lack in accuracy and tend to bias \ac{PE}.
Although further reducing the computational cost of state-of-the-art models is not strictly necessary for current analyses, it will be for the long signals observed by next generation (XG) detectors.
Indeed, Bayesian analyses of \ac{BNS} signals in XG detectors~\cite{Smith:2021bqc,Pratten:2019sed,Williams:2022vct} have been demonstrated only using phenomenological~\cite{Dietrich:2019kaq} or \ac{PN} approximants~\cite{Schmidt:2019wrl}.
These waveform templates include only partial physical information, and are therefore expected to strongly bias \ac{PE} with XG detectors~\cite{Gamba:2020wgg, Williams:2022vct}.
For example, phenomenological approximants model the effect of spin precession, but do not contain unequal-mass tidal corrections, with the binary matter effects entirely determined by a single effective tidal parameter.
\ac{PN} models are unreliable close to merger and are only available (in the frequency domain) for binaries with spins aligned with the orbital angular momentum.
Incorporating the whole physics content of advanced waveform models (including higher harmonics, precession, eccentricity, self-spin interactions, beyond leading order adiabatic electric and magnetic-type tidal effects, dynamical tides) will be key to avoid biases, but currently can only be accomplished at a great increase in computational cost.
In particular, here we are interested in leveraging the \ac{EOB} approach~\cite{Buonanno:1998gg, Buonanno:2000ef,Damour:2000we,Damour:2001tu,Damour:2008qf, Damour:2009kr, Damour:2014sva, Bohe:2016gbl, Nagar:2018zoe, Lackey:2018zvw}, one of the most accurate state-of-the art frameworks for waveform generators.
In this framework, the Hamiltonian description of the two-body problem in \ac{GR} is mapped to an effective problem of a single body orbiting in a Kerr-like deformed metric.
The effective metric potentials are determined by suitably resummed \ac{PN} expressions that make the model predictive in the fast-motion and strong-field merger regime.
Gravitational waveforms are natively generated in the time-domain using the solution of the EOB equations of motion and a particular factorized and resummed analytical expression of the multipolar \ac{PN} waveform~\cite{Damour:2008gu}.
The \ac{EOB} approach has the advantage of being both accurate to Einstein's equations, and flexible to the addition of analytical (e.g. \ac{PN}) information.
The faithfulness of inspiral-merger-ringdown models is increased by suitably informing them with \ac{NR} data, see e.g.~Refs.~\cite{Riemenschneider:2021ppj,Albertini:2021tbt} for recent work targeted at XG detectors.
Analogously, \ac{BNS} inspiral-merger waveforms are obtained by augmenting the effective interbinary potential and waveform multipoles with tidal terms~\cite{Flanagan:2007ix,Damour:2009wj,Bini:2012gu,Bernuzzi:2012ci,Bernuzzi:2015rla,Hinderer:2016eia,Steinhoff:2016rfi,Akcay:2018yyh}.
Full inspiral-merger-postmerger \ac{BNS} waveforms can be constructed by hybridising the model with \ac{NR}-informed post-merger models~\cite{Breschi:2019srl, Breschi:2022xnc}.
The practical usage of the above \ac{EOB} model is hampered by the requirement of numerically solving an ODE system, which brings a constant-time overhead and constrains the maximum rate of waveform generation.
For current \ac{BNS} analyses, generating the \(\gtrsim 10^7\) templates required easily takes several weeks of CPU time.
A crucial element to improve the EOB model efficiency is the post-adiabatic method, introduced in Ref.~\cite{Nagar:2018gnk} for the {\tt TEOBResumS}~model. The post-adiabatic iterative method yields an efficient yet accurate approximation of the \ac{EOB} Hamiltonian flow, removing the need to solve the related ODE for all but the very last stages of the inspiral.
This technique provides a significant speed up (a factor 10 or more for typical \ac{BNS} signals in the LIGO-Virgo~\cite{TheLIGOScientific:2014jea, TheVirgo:2014hva} band), but it is currently applicable only to quasi-circular mergers.
To further optimise the waveform generation, a desirable feature for a fast approximant is to yield waveforms in the frequency domain, since the likelihood takes a simple form in Fourier space when assuming a Gaussian and stationary noise background.
A time-domain approximant, such as the one mentioned above, needs to be Fourier-transformed before use, which typically entails a slow-down of up to an order of magnitude.
For this reason, a \ac{SPA} was introduced within the \ac{EOB} model {\tt TEOBResumS}, yielding a fast and accurate frequency-domain approximant called {\tt TEOBResumSPA}~\cite{Gamba:2020ljo}, which has been successfully applied to the analyses of GW170817 and GW190425 data, see e.g. Refs~\cite{Gamba:2020ljo, Breschi:2021wzr}.
The evaluation of a frequency-domain waveform approximant typically scales as \(t _{\text{waveform}} \approx t _{\text{overhead}} + N _{\text{points}} t _{\text{point}}\), where \(N _{\text{points}}\) is the number of grid points it is evaluated at.
The per-frequency-point time \(t _{\text{point}}\) is typically of the order of a few hundred nanoseconds, cannot be reduced below the CPU clock period times the number of floating point operations required, and varies much less than \(t _{\text{overhead}}\) across models.
Thanks to the combination of \ac{SPA} and the post-adiabatic approach, the overhead time for {\tt TEOBResumSPA} {} has been reduced to only tens of milliseconds. The fundamental limitation in reducing this number is that, even when evaluating the waveform at a few frequency points, the full Hamiltonian flow must still be computed on a complete radial grid.
In current \ac{BNS} analyses this bears little relevance since the second term is typically dominant.
This leads to \ac{PE} times on the order of a few days on a modern computer cluster, which is acceptable for current event rates, but will not be for XG detectors.
Techniques such as \acp{ROQ}~\cite{Field:2013cfa} can decrease the required value of \(N _{\text{points}}\) so much that the linear term becomes negligible compared to the constant one.
The driving requirement behind this work is therefore to build a model with a much lower \(t _{\text{overhead}}\) --- which can lead to a significantly faster \ac{PE} if combined with \ac{ROQ} --- while remaining faithful to the predictions of \ac{EOB}: this will enable accurate analyses of data from XG detectors.
One of the most promising approaches to achieve the necessary increase in efficiency is template acceleration through \ac{ML}.
The last few years saw a sharp rise in studies on this topic, a review of which can be found in Ref.~\cite{Tiglio:2021ysj}.
Most of these efforts, however, focused on \ac{BBH} signals.
A pioneering study on this was the one of Ref.~\cite{Chua:2018woh}, which developed a neural network to compute the linear combination coefficients of a generic BBH waveform represented on a basis of waveforms.
Along the lines of this work, Ref.~\cite{Khan:2020fso} also reached high performance, while retaining high faithfulness to the training waveforms.
Reference \cite{Schmidt:2020yuu} used instead a \ac{PCA} to drastically reduce the number of basis functions required for waveform reconstruction, while Ref.~\cite{Barsotti:2021wks} applied automated learning to select the best performing regression scheme (although varying only the BBH mass ratio $q$) and Ref.~\cite{Thomas:2022rmc} extended the latter effort to spin-precessing signals.
Finally, Ref.~\cite{Liao:2021vec} used a deep generative model for waveform generation.
It is worth noting that the models listed above work in the time-domain: while this ensures a smooth (hence more easily learnable) physical representation of the \ac{BBH} signal, a Fourier transform is still required to use the model in \ac{PE} applications.
The literature is less rich in the field of binary neutron star (\ac{BNS}) modelling. Reference \cite{Lackey:2016krb} developed a non-spinning surrogate model in the time domain; its parameter space is the same one we use, as discussed later.
Subsequently, Ref.~\cite{Lackey:2018zvw} extended this effort to aligned-spin \ac{BNS}, building a fast frequency-domain surrogate of the spin-aligned model {\tt SEOBNRv4T} \cite{Hinderer:2016eia, Steinhoff:2016rfi} using Gaussian process regression.
In this work, we introduce {{\tt mlgw\_bns}}, a new {\it frequency domain} \ac{BNS} surrogate model which relies on a neural network.
The salient characteristics of this model are: we train on the residuals of the \ac{EOB} waveforms generated by {\tt TEOBResumSPA} {} with respect to a \ac{PN} baseline, which simplifies the relation the network has to learn; we downsample the waveforms as much as possible and apply a \ac{PCA} following the approach of Ref.~\cite{Schmidt:2020yuu}.
This way, the neural network must only learn a relation between the \ac{BNS} parameters $\theta$ and a low-dimensional representation of the waveform, which allows it to be quite shallow, in the end significantly decreasing the waveform computational overhead.
Synergistic usage of this model with \ac{ROQ} compression techniques allows us to showcase more than an order of magnitude improvement in the analysis of current \ac{BNS} signals.
Even larger speedups, achieved for wider bandwidths, will enable future studies to systematically exploit highly accurate \ac{EOB} models in full Bayesian \ac{PE} analyses involving XG detectors.
This paper is organized as follows.
In Sec.~\ref{sec:model}, we describe the details of our method, while Sec.~\ref{sec:performance} is devoted to the performance analysis of our model in terms of timing and accuracy.
The improved capabilities of the resulting model are illustrated in Sec.~\ref{sec:PE}, where we show the results of a realistic \ac{PE} analysis on the \ac{BNS} transient GW170817~\cite{TheLIGOScientific:2017qsa,LIGOScientific:2018mvr}, additionally making use of a reduced order quadrature scheme to fully exploit the potential of our technique.
Sec.~\ref{sec:con} presents final remarks and future research directions.
\paragraph*{Software availability. ---} Our model is released within the public \texttt{python} package {{\tt mlgw\_bns}}, available at \href{https://pypi.org/project/mlgw-bns/}{pypi.org/project/mlgw-bns/}.
The description in this paper refers to version \texttt{0.12.0}.
The package contains both the trained model described here, which can be used to generate waveforms out-of-the-box, as well as the full functionalities required to train new models at will (e.g.~by using a different approximant than {\tt TEOBResumSPA}, or different parameter ranges).
The training time and memory requirements are both relatively small: a model can easily be trained on a laptop in a few hours.
The software we developed to achieve the frequency compression applied in the \ac{PE} stage is available at: \href{https://github.com/GCArullo/JenpyROQ}{github.com/GCArullo/JenpyROQ}.
\paragraph*{Conventions. ---}
We work in geometric units, setting \(G = c = 1\).
The total binary mass is denoted as $M= m_1 + m_2$, the mass ratio as $q = m_1 /m_2 \ge 1$, and the symmetric mass ratio as $\nu = m_1 m_2 / M^2$.
The dimensionless spin vectors are denoted as ${\boldsymbol \chi}_i$ for $i=1,2$ and the spin components aligned with the orbital angular momentum $\textbf{L}$ are labeled as $\chi_i = {\boldsymbol \chi}_{i}\cdot \textbf{L} / |\textbf{L}|$.
The effective spin parameter is defined as \(\chi _{\text{eff}} = (\chi_1 m_1 + \chi_2 m_2 ) / M\).
The quadrupolar tidal polarizability parameters are defined as $\Lambda_{i}=({2}/{3})\,k_{2,i}\,C_i^{-5}$ for $i=1,2$, where $k_{2,i}$ and $C_i$ are the second Love number and the compactness of the $i$-th star, respectively. The reduced tidal parameter,
\begin{align}
\label{eq:LambdaT}
\tilde\Lambda &= \frac{16}{13}
\frac{(m_1 + 12 m_2)\, m_1^4}{M^5}\Lambda_1 + (1\leftrightarrow 2)\,,
\end{align}
determines tidal interactions at leading post-Newtonian-order~\cite{Favata:2013rwa,Damour:2012yf}.
Masses, spins, and tidal parameters are collectively called the \emph{intrinsic parameters} of a \ac{BNS} system, {\it i.e.}~ $\theta_{\rm int} =\{M,q,\chi_{1},\chi_{2},\Lambda_1,\Lambda_2\}$.
The location and orientation of the source are identified by the \emph{extrinsic parameters} $\theta_{\rm ext}=\{D_L, \iota , \alpha, \delta, \psi, t_c,\phi_c\}$, {\it i.e.}~ luminosity distance $D_L$, inclination angle $\iota$, right ascension angle $\alpha$, declination angle $\delta$, polarization angle $\psi$, time of coalescence $t_c$, and phase at the merger $\phi_c$.\footnote{This is the full extrinsic parameter set required to reconstruct the \(h_{\mu \nu }\) tensor, but within {\tt mlgw\_bns} {} concretely the sky position parameters \(\alpha\), \(\delta\) and \(\psi\) are not accepted: the polarizations are returned in a frame located at Earth and aligned with the source.}
The frequency-domain waveform from a compact binary coalescence can in general be written as
\begin{align} \label{eq:emission-mode-decomposition}
h_+ (f) - i h_\times (f) = \frac{1}{D_L} \sum _{\ell=2}^{\infty } \sum _{m=-\ell}^{\ell} h_{\ell m} (f)\, {}_{(-2)}Y_{\ell m} (\iota, \varphi )\, ,
\end{align}
where the functions \(_{(-2)}Y_{\ell m}\) are the spin-weighted spherical harmonics, given {\it e.g.}~ by equations II.7 and II.8 of Ref.~\cite{Ajith:2007a}, while the complex functions \(h_{\ell m} (f)\) are the frequency-domain modes of the GW strain.
The discussion in this paper is restricted to the $(\ell,m)=(2,2)$ mode;
focusing on it, the two \ac{GW} polarizations can be simply written as
\begin{subequations} \label{eq:22-mode-polarizations}
\begin{align}
h_+ (f) &= \frac{1}{D_L} \sqrt{ \frac{5}{4 \pi }} h_{22} (f) \frac{\cos^2 \iota + 1}{2} \\
h_ \times (f) &= \frac{1}{D_L} \sqrt{ \frac{5}{4 \pi }} h_{22} (f) \cos \iota \,.
\end{align}
\end{subequations}
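As an illustration, Eqs.~\eqref{eq:22-mode-polarizations} translate directly into a few lines of \texttt{python}; the sketch below is purely illustrative and does not reproduce the {\tt mlgw\_bns} {} internals (all names are ours):
\begin{verbatim}
import numpy as np

def polarizations_from_h22(h22, iota, distance):
    # Prefactor sqrt(5 / (4 pi)) from the (2, +-2) spin-weighted
    # spherical harmonics evaluated at azimuthal phase zero.
    pre = np.sqrt(5.0 / (4.0 * np.pi)) / distance
    h_plus = pre * h22 * (np.cos(iota) ** 2 + 1.0) / 2.0
    h_cross = pre * h22 * np.cos(iota)
    return h_plus, h_cross
\end{verbatim}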
A relevant scalar product in waveform space is the Wiener product
\begin{align}\label{eq:Wiener_ip}
(a | b) := 4 \Re \int_0^{ \infty } \frac{a^*(f) b(f)}{S_n(f)} \text{d}f\,,
\end{align}
where \(S_n\) is the \ac{PSD} of a given detector.
Results shown in Sec.~\ref{sec:performance} are computed considering \(S_n\) to be the expected Einstein Telescope \ac{PSD}, ET-D~\cite{EinsteinTelescope:2011fda,Hild:2010id}.
In terms of this, the optimal match (or {\it faithfulness}) between waveforms $a$ and $b$ is given by
\begin{align}
\mathcal{F}(a, b)
:= \max_{t_0, \phi_0 } \frac{(a|b)}{\sqrt{(a|a)(b|b)}}\,,
\end{align}
where the maximum is taken over all possible time and phase shifts \(t_0 \) and \(\phi_0\) between the two waveforms.
The mismatch is then defined as $\bar{\mathcal{F}}(a, b) := 1 - \mathcal{F}(a, b)$.
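In practice the integral is discretized; a minimal \texttt{python} sketch of the match computation is given below, assuming one-sided arrays sampled on a uniform frequency grid starting at zero frequency, so that the time-shift maximisation can be performed with a single inverse FFT (a standard trick; names are illustrative):
\begin{verbatim}
import numpy as np

def faithfulness(a, b, psd, df):
    # Norms (a|a)^(1/2) and (b|b)^(1/2) of the two waveforms
    norm_a = np.sqrt(4.0 * df * np.sum(np.abs(a) ** 2 / psd))
    norm_b = np.sqrt(4.0 * df * np.sum(np.abs(b) ** 2 / psd))
    integrand = np.conj(a) * b / psd
    # Each ifft entry is the overlap at a time shift t0 = k / (N df);
    # taking the modulus maximises over the phase shift phi0.
    overlaps = 4.0 * df * len(integrand) * np.abs(np.fft.ifft(integrand))
    return float(overlaps.max()) / (norm_a * norm_b)
\end{verbatim}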
\section{Model construction}
\label{sec:model}
\subsection{Overview}
{\tt mlgw\_bns} {} is a surrogate waveform approximant based on a neural network that learns the relation between five intrinsic parameters of a binary system --- mass ratio, dimensionless spins, and quadrupolar tidal polarizabilities, collectively denoted as $\theta =\{q,\chi_{1},\chi_{2},\Lambda_1,\Lambda_2\}$ --- and the corresponding frequency-domain waveform mode $h_{22}(f;\theta)$.
The binary mass $M$ is not included in $\theta$ since the nontrivial mass scale in the \ac{BNS} problem is fully included in the tidal polarizability parameters.\footnote{As in the scale-invariant binary black hole case, the waveform's frequency dependence is really on the mass-rescaled parameter \(Mf = GMf / c^3\), not on \(f\) alone.}
Concretely, within {\tt mlgw\_bns}{} a fixed reference mass of \(M _{\text{ref}} = 2.8 M_{\odot}\) is chosen and waveforms for generic masses are generated by the appropriate rescaling of both the waveform's amplitude and frequency; this is described in more detail in Sec.~\ref{sec:frequencies}.
Similarly, the extrinsic parameters \(\theta _{\text{ext}}\) can be neglected when constructing an approximant: the dependence on them can be included analytically in the likelihood after a waveform has been generated.
The driving idea behind {\tt mlgw\_bns} {} is to have the neural network be as shallow and small as possible while retaining reconstruction accuracy;
this is accomplished by reducing the dimensionality of the waveform's description.
The first step to this end is to make a training dataset of \emph{residuals} from an analytical \ac{PN} baseline, which means the network only has to learn information in the high-frequency region, where the two models differ: this is described in Sec.~\ref{sec:residuals}.
We then employ a reduced frequency grid and perform a \ac{PCA} in order to decrease the dimensionality of each waveform's representation to about 30 floating point numbers; this is described in Sec.~\ref{sec:dimensionality-reduction}.
Finally, a neural network is trained to reconstruct the relation between
the parameters \(\theta\) and the \(\sim 30\) principal components, as described in Sec.~\ref{sec:neural-network}.
The training datasets for all the aforementioned stages are generated by drawing from the same uniform distribution on the parameters, in the intervals:
$$
q \in [1, 2]\,,\ \ \Lambda _i \in [5, 5000]\,,\ \ \chi _i \in [-0.5, 0.5] \,.
$$
These ranges correspond to a realistic prior choice in \ac{GW} analyses of \ac{BNS} systems.
The random number generator used for the extraction is deterministically re-seeded for every new dataset,
in order to ensure reproducibility as well as independence of the datasets.
The frequency-domain waveforms currently learned by {\tt mlgw\_bns} {} are those generated by the state-of-the-art \ac{EOB} model
{\tt TEOBResumSPA}{}; these will be denoted by a subscript \(\text{EOB}\) in the following discussion.
We train with {\tt TEOBResumSPA}{} frequency domain waveforms as opposed to Fourier transforms of time-domain \texttt{TEOBResumS} waveforms for a few reasons: the two are closer than the intrinsic accuracy of \texttt{TEOBResumS} (\(\mathcal{\bar{F}} \lesssim 5 \times 10^{-4}\)~\cite{Gamba:2020ljo}); the \ac{SPA} waveforms are much smoother than the ones calculated with a \ac{FFT}, and therefore easier to represent with small amounts of frequency points (see Sec.~\ref{sec:downsampling}); the \ac{SPA} waveforms can be natively evaluated at arbitrary frequencies, allowing us to never employ a uniform frequency grid.
Figure~\ref*{fig:flowchart} shows a graphical outline of {\tt mlgw\_bns}'s operation.
\begin{figure*}[ht]
\centering
\pgfdeclarelayer{background}
\pgfdeclarelayer{foreground}
\pgfsetlayers{background,main,foreground}
\definecolor{myred}{HTML}{c16c5a}
\definecolor{myyellow}{HTML}{faeba2}
\definecolor{myorange}{HTML}{de8261}
\definecolor{myviolet}{rgb}{0.258234, 0.038571, 0.406485,}
\definecolor{wrongultramarine}{rgb}{0.07, 0.04, 0.56}
\tikzstyle{generator}=[draw, fill=myviolet!20, text width=5em,
text centered, minimum height=2.5em]
\tikzstyle{parameters}=[generator, fill=myred!45, rounded corners, text width=15em]
\tikzstyle{ann} = [above, text width=5em, text centered]
\tikzstyle{algo} = [generator, text width=15em, fill=white!40,
minimum height=6em, rounded corners]
\tikzstyle{trained} = [parameters, fill=myorange!60, ellipse, text width=2em]
\tikzstyle{arrow} = [ultra thick,->]
\def2{2}
\def2.5{2.5}
\begin{tikzpicture}[node distance=2cm]
\node (eob) [generator] {Effective One Body};
\node (pn) [generator, right of=eob, xshift=2.5cm, yshift=-2.5cm] {Post-Newtonian};
\node (pred) [generator, right of=eob, xshift=7cm] {predicted waveform};
\node (manage) [algo, below of=eob, yshift=-.7cm] {\textbf{Management}
\begin{itemize}
\addtolength{\itemindent}{-.5cm}
\item greedy downsampling (\ref{sec:downsampling})
\end{itemize}
};
\node (dimred) [algo, below of=manage, yshift=-1cm] {\textbf{Dimensionality reduction}
\\
\begin{itemize}
\addtolength{\itemindent}{-.5cm}
\item residual calculation (\ref{sec:residuals})
\item PCA training (\ref{sec:pca})
\end{itemize}};
\node (nn) [algo, below of=dimred, yshift=-1cm] {\textbf{Neural Network}
\\
\begin{itemize}
\addtolength{\itemindent}{-.5cm}
\item optimization (\ref{sec:hyperparameter-optimization})
\item NN training (\ref{sec:neural-network})
\end{itemize}};
\node (trainedpca) [trained, right of=dimred, xshift=2.5cm] {PCA};
\node (trainednn) [trained, right of=nn, xshift=2.5cm] {NN};
\node (manage2) [algo, right of=manage, xshift=7cm] {\textbf{Management}
\begin{itemize}
\addtolength{\itemindent}{-.5cm}
\item extrinsic parameter inclusion
\item resampling to user grid
\end{itemize}
};
\node (dimred2) [algo, right of=dimred, xshift=7cm] {\textbf{Reconstruction}
\begin{itemize}
\addtolength{\itemindent}{-.5cm}
\item PCA reconstruction
\item residual recombination
\end{itemize}
};
\node (nn2) [algo, right of=nn, xshift=7cm] {\textbf{Prediction}
\begin{itemize}
\addtolength{\itemindent}{-.5cm}
\item NN evaluation
\end{itemize}
};
\node (params) [parameters, below of=nn2, yshift=-.7cm] {user parameters \((\theta _{\text{int}}, \theta _{\text{ext}})\)};
\path (nn.south)+(0, -1cm) node (training) {\Large{Training}};
\path (params.south)+(0, -1cm) node (prediction) {\Large{Prediction}};
\draw [arrow] (eob) -- (manage);
\draw [arrow] (pn) -- (dimred);
\draw [arrow] (pn) -- (dimred2);
\draw [arrow] (manage) -- (dimred);
\draw [arrow] (dimred) -- (nn);
\draw [arrow] (nn2) -- (dimred2);
\draw [arrow] (dimred2) -- (manage2);
\draw [arrow] (manage2) -- (pred);
\draw [arrow] (params) -- (nn2);
\draw [arrow] (dimred) -- (trainedpca);
\draw [arrow] (trainedpca) -- (dimred2);
\draw [arrow] (nn) -- (trainednn);
\draw [arrow] (trainednn) -- (nn2);
\begin{pgfonlayer}{background}
\path (manage.west |- manage.north)+(-0.4,0.4) node (a) {};
\path (training.south -| nn.east)+(+0.4,-0.4) node (b) {};
\path[fill=myyellow!20,rounded corners, draw=black!50, dashed]
(a) rectangle (b);
\path (manage2.west |- manage2.north)+(-0.4,0.4) node (a) {};
\path (prediction.south -| nn2.east)+(+0.4,-0.4) node (b) {};
\path[fill=myyellow!20,rounded corners, draw=black!50, dashed]
(a) rectangle (b);
\end{pgfonlayer}
\end{tikzpicture}
\caption{Flowchart for the operation of {\tt mlgw\_bns}.}
\label{fig:flowchart}
\end{figure*}
\subsection{Residuals from a Post-Newtonian baseline} \label{sec:residuals}
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig02.pdf}
\caption{Residuals of 100 \ac{EOB} waveforms with respect to their \ac{PN} counterparts. The \ac{EOB} waveforms are chosen according to a uniform distribution in parameter space.
}
\label{fig:original_residuals}
\end{figure}
We start with a polar representation of the waveform in amplitude and phase as
\(h(f) = A _{\text{EOB}}(f) e^{- i \phi _{\text{EOB}} (f) }\).
Instead of reconstructing the waveform directly, {\tt mlgw\_bns} {} reconstructs its
residuals from a fiducial \ac{PN} model.
The residuals are computed as
\begin{subequations} \label{eq:amplitude-phase-residuals}
\begin{align}
\Delta A (f; \theta) &= \log \left(\frac{A _{\text{EOB}}(f; \theta)}{A _{\text{PN}}(f; \theta)}\right)
\\
\Delta \phi (f; \theta) &= \phi _{\text{EOB}} (f; \theta) - \phi _{\text{PN}} (f; \theta)
\,.
\end{align}
\end{subequations}
These residuals are shown in Fig.~\ref{fig:original_residuals} for 100 sets of parameters.
The complete waveform is recovered from the predicted residuals
\(\Delta A _{\text{pred}} (f; \theta ), \Delta \phi _{\text{pred}}(f; \theta )\)
as
\begin{subequations} \label{eq:amplitude-phase-reconstruction}
\begin{align}
A _{\text{pred}}(f; \theta) &= A _{\text{PN}} (f; \theta) \exp( \Delta A _{\text{pred}} (f; \theta)) \\
\phi _{\text{pred}} (f; \theta) &= \phi _{\text{PN}}(f; \theta) + \Delta \phi _{\text{pred}}(f; \theta)
\,.
\end{align}
\end{subequations}
We use the \texttt{TaylorF2} approximant with 3.5PN-accurate amplitude, pseudo 5.5PN-accurate phase~\cite{Messina:2019uby} with 7.5PN-accurate tidal contributions~\cite{Damour:2012yf,Henry:2020ski} and the monopole-quadrupole 3PN contribution to the phase~\cite[Eqs.\ (50)--(52)]{Nagar:2018plt} (see also~\cite[Eq.\ (41)]{Nagar:2018zoe}).
The phase residuals computed as above typically exhibit large linear trends due to the different choices in the time-domain
alignment between the \ac{EOB} and \ac{PN} models (which corresponds to a linear phase term in the frequency domain).
These trends are not physically meaningful, but even small differences can result in a large effect: the variation over the whole frequency spectrum is of the order of \(2000 \text{Hz}\times 2 \pi \times \Delta t\) radians (for the reference mass), meaning that even single-millisecond shifts will yield tens of radians in difference. Typical shifts between the models used within {\tt mlgw\_bns} {} are of the order of tens of milliseconds, resulting in several hundreds of radians of meaningless phase difference.
In order to remove this effect, the average slope \(\text{d} \Delta \phi / \text{d} f\) is first calculated between the first frequency sample and some higher frequency (typically chosen to be low enough to lie in the region of validity of the PN approximation), and then the corresponding linear term is subtracted from the residuals. Figure~\ref{fig:original_residuals} shows residuals with this procedure already applied.
This means that waveforms returned by {\tt mlgw\_bns} {} are aligned with the corresponding \ac{PN} ones, as opposed to the \ac{EOB} ones.
Since the prediction of the merger time within {\tt mlgw\_bns} {} is modelled on the \ac{EOB} one, this means that the predicted waveforms' mergers fluctuate by the same few tens of milliseconds.
This is inessential for the purposes of inspiral-only parameter estimation, but it can be problematic if we wish to extend the inspiral model with one for the post-merger~\cite{Breschi:2022xnc,Breschi:2022ens}.
A solution to this could be to reconstruct the time-shift dependence on the parameters \(\Delta t(\theta _{\text{int}})\), and de-shift the predicted waveforms after generating them with the \ac{PN} alignment; this is however not implemented in version \texttt{0.12.0} {} of {\tt mlgw\_bns} {} used in this work.
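To make the detrending step concrete, a minimal \texttt{python} sketch is shown below; the alignment frequency of \(20\,\text{Hz}\) is our illustrative choice, not necessarily the one adopted internally:
\begin{verbatim}
import numpy as np

def amplitude_residuals(a_eob, a_pn):
    return np.log(a_eob / a_pn)

def phase_residuals(freqs, phi_eob, phi_pn, f_align=20.0):
    # Residuals with the spurious linear trend removed: the slope is
    # estimated between the first sample and f_align, a frequency
    # assumed to lie in the PN-valid region.
    delta_phi = phi_eob - phi_pn
    i = np.searchsorted(freqs, f_align)
    slope = (delta_phi[i] - delta_phi[0]) / (freqs[i] - freqs[0])
    return delta_phi - delta_phi[0] - slope * (freqs - freqs[0])
\end{verbatim}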
\subsection{Dimensionality reduction} \label{sec:dimensionality-reduction}
Neural networks can be small and simple if the dimensionality of the data they must operate on is itself small.
Fortunately, the default representation of residuals (or waveforms) in frequency space contains a large amount of redundancy: this section discusses our approach to reducing the dimensionality of its representation.
Three steps are employed within {\tt mlgw\_bns} {} to this end: two of them are different techniques for decreasing the number of points in frequency space at which the residuals are sampled, and the third is \ac{PCA}.
The orders of magnitude for how many floating point numbers are needed to represent waveforms or residuals starting at \(5 \text{Hz}\) after each of these steps are as follows (see also Figs.~\ref{fig:downsampling_comparison_amplitude} and~\ref{fig:downsampling_comparison_phase} for a breakdown of where these points are used in frequency space):
\begin{enumerate}
\item the default uniform frequency spacing requires \(\sim 2 \times 10^7\) points per waveform, scaling with \(f_0^{-8/3}\);
\item the multibanding approach reduces this to \(\sim 2 \times 10^5\), scaling with \(f_0^{-5/3}\) \cite{Vinciguerra:2017ngf};
\item the dataset-dependent greedy downsampling approach reduces this to \(\sim 3 \times 10^3\) for full waveforms or \(\sim 10^3\) for residuals;
\item the \ac{PCA} representation, finally, only requires \(\sim 3 \times 10^1\) numbers per waveform.
\end{enumerate}
Uniform spacing is never used within {\tt mlgw\_bns}: a small number of waveforms is generated directly with the multibanded grid in order to train the greedy downsampling, and once this is done all further waveforms are generated on the smaller greedy downsampling grid.
This means that, even when starting from a very low initial frequency, we can easily work with a dataset of waveforms within the RAM of a laptop.
\subsubsection{Multibanding}
\label{sec:multibanding}
``Multibanding'' is the name we give to a technique of generating a frequency grid which is much smaller than the uniform one, not very dependent on the specifics of the dataset, and which may still be used to get a good representation of \ac{CBC} waveforms.
The starting point is the observation that a \ac{CBC} signal will always have a specific chirping profile, with high-frequency information only contained in a short (in time) section at the end.
The default frequency array used in signal processing, for a real-valued signal with duration \(T\) and time spacing \(\Delta t\), will be a uniform array from \(f = 0\) to \(f = 1 / (2 \Delta t)\) (the Nyquist frequency), with spacing \(\Delta f = 1 / T\).
As expected, this means there is no information loss: \(T / \Delta t\) real numbers are mapped to \(T / (2 \Delta t)\) complex numbers.
This array describes high- and low-frequency information for all times: in the \ac{CBC} case this entails a lot of redundancy, since it is already known ahead of time that for the overwhelming majority of the signal there will be no high-frequency information.
We may construct a frequency array which is ``aware'' of this behavior~\cite{Vinciguerra:2017ngf,Smith:2016qas}.
We start from the fact that the duration of a CBC signal starting from a frequency \(f_0\) is \(T \propto f_0^{-8/3}\), with a proportionality constant that can be analytically derived at Newtonian (0PN) order and which depends on the mass and the mass ratio~\cite{Maggiore:1900zz}:
\begin{align}
T = \frac{5}{256} (\pi f_0)^{-8/3} M ^{-5/3} / \nu \,.
\end{align}
Then, we can make a frequency array for which the frequency spacing at each frequency is \(\Delta f (f) \approx 1 / T(f)\).
This will mean we sample the low-frequency region much more finely than the high-frequency one, but locally each frequency band is described with the correct level of detail.
The approach used within {\tt mlgw\_bns} {} differs from the one used by Ref.~\cite{Vinciguerra:2017ngf} in two aspects.
First, whereas they approximate the smoothly-varying \(\Delta f\) by dividing the frequency domain into bands and using a different, uniform frequency spacing for each of them, we construct a frequency array with continuously-varying spacing. Second, while they extend this sampling into the high-frequency regime, we use it only for frequencies lower than a certain pivot, typically \(f _{\text{pivot}} \approx 40 \text{Hz}\), while for higher frequencies we use uniform sampling.
This is a conservative choice, motivated by the fact that at high frequency the 0PN expression for the time to merger cannot be expected to hold, combined with the fact that a uniform array with the spacing defined by \(\Delta f = 1/ T(40 \text{Hz}) \approx 0.02 \text{Hz}\) is not a large computational burden, resulting in only a few tens of thousands of points.
This approach, that we call \textit{multibanding}, needs to know something about the dataset: while the mass is kept fixed during the training, the mass ratio cannot be.
The dependence is \(\Delta f \propto 1 / T \propto \nu \), and \(\nu \) scales inversely with the mass ratio \(q\) (which is \(>1\) here).
Therefore, the \emph{smallest} \(\Delta f\) we should use as a lower bound corresponds to the largest \(q\) within the dataset; note, however, that this characteristic is shared by the uniform sampling, which is also defined by the three quantities \(f _{\text{min}}\), \(f _{\text{max}}\) and \(\Delta f\).
Figures~\ref{fig:downsampling_comparison_amplitude} and~\ref{fig:downsampling_comparison_phase} show histograms for the multibanding approach compared to the standard, uniform-in-frequency approach, for the case of waveforms starting at \(5 \text{Hz}\).
The uniform-in-frequency grid looks tilted in the histogram since the bins represent logarithmic frequency intervals, which increase in absolute width (\(\Delta f\)) as the frequency increases.
The general pattern to observe is that, as we make more and more assumptions about the waveforms we need to represent, the frequency array can shrink.
Multibanding is a rather safe choice, since it makes no more assumptions than uniform sampling, but it still provides at least an order-of-magnitude improvement in typical cases.
The lower two histograms, labelled ``Waveforms'' and ``Residuals'', show the numbers of points that can be achieved when greedily selecting frequencies by requiring they allow us to reconstruct full EOB waveforms or their residuals (described in Sec.~\ref{sec:residuals}) respectively.
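A minimal sketch of such a grid construction is given below, in geometric units with the total mass expressed in seconds (so that frequencies are in Hz); the implementation details are illustrative and do not reproduce the {\tt mlgw\_bns} {} internals:
\begin{verbatim}
import numpy as np

def time_to_merger(f, m_total, nu):
    # Newtonian (0PN) estimate of the time to merger, G = c = 1
    return 5.0 / 256.0 * (np.pi * f) ** (-8.0 / 3.0) \
        * m_total ** (-5.0 / 3.0) / nu

def multibanded_grid(f_min, f_max, m_total, nu, f_pivot=40.0):
    # Continuously-varying spacing df(f) = 1 / T(f) up to the pivot
    freqs = [f_min]
    while freqs[-1] < f_pivot:
        freqs.append(freqs[-1]
                     + 1.0 / time_to_merger(freqs[-1], m_total, nu))
    # Uniform spacing above the pivot, fixed by the spacing there
    df = 1.0 / time_to_merger(f_pivot, m_total, nu)
    freqs.extend(np.arange(freqs[-1] + df, f_max, df))
    return np.array(freqs)
\end{verbatim}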
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig03.pdf}
\caption{Comparison of various ways to sample the amplitudes of a waveform. We show histograms of the arrays of frequencies used for the sampling, in the cases of no multibanding (uniform spacing \(\Delta f = \const\)), multibanding (discussed in Sec.~\ref{sec:multibanding}), and training the greedy algorithm discussed in Sec.~\ref{sec:downsampling} on 128 waveforms or 128 sets of residuals, computed as discussed in Sec.~\ref{sec:residuals}.}
\label{fig:downsampling_comparison_amplitude}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig04.pdf}
\caption{Same as Fig.~\ref{fig:downsampling_comparison_amplitude}, but training the greedy algorithm to reconstruct the phase of the same waveforms.}
\label{fig:downsampling_comparison_phase}
\end{figure}
\subsubsection{Downsampling} \label{sec:downsampling}
While multibanding reduces the size of the frequency arrays by orders of magnitude, especially for very low initial frequencies, we can do even better if we allow a heavier dependence on the specific dataset.
Specifically, we can determine a set of points in frequency space such that any waveform in the dataset, if given at those points only, can be interpolated and retrieved at all frequencies within a certain accuracy.
In order to achieve this goal, a greedy optimization technique is used. First, a set of waveforms is generated on the grid described in the previous section. These waveforms are then downsampled to a sparse grid, which can initially consist of just the endpoints of the domain, and resampled with a cubic spline.\footnote{Cubic interpolation was found to be a good middle ground when accounting for computational complexity (which increases with interpolation order) and greedy grid size (which decreases with interpolation order).}
The reconstruction error can then be measured for each of these waveforms: new points are added to the grid where it is worst.
This procedure is iterated until all the given waveforms can be reconstructed within a certain tolerance, which we select to be \(10^{-5}\) for both amplitude and phase.
The downsampling is performed separately for amplitude and phase.
As the diagram in Fig.~\ref{fig:flowchart} shows, when reconstructing a waveform the ``residual recombination'' step happens before the ``resampling to user grid'' step.
This means that this downsampling procedure, which by itself is a generic algorithm, is applied to the \emph{full EOB waveforms} as opposed to the residuals described in Sec.~\ref{sec:residuals}.
While this requires us to use a slightly larger frequency grid (but still with \(< 10^4\) points), it was found to be generally faster than the alternative.
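A minimal \texttt{python} sketch of this greedy selection follows; for clarity it adds one point per iteration and seeds the grid with four evenly-spaced points, both of which are illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def greedy_downsample(freqs, curves, tol=1e-5):
    # Select grid indices such that every curve (e.g. the phase of
    # each training waveform, sampled on freqs) is recovered by
    # cubic-spline interpolation within tol.
    indices = set(np.linspace(0, len(freqs) - 1, 4, dtype=int))
    while True:
        idx = sorted(indices)
        worst_err, worst_point = 0.0, None
        for curve in curves:
            spline = CubicSpline(freqs[idx], curve[idx])
            err = np.abs(spline(freqs) - curve)
            k = int(np.argmax(err))
            if err[k] > worst_err:
                worst_err, worst_point = err[k], k
        if worst_err < tol:
            return np.array(idx)
        indices.add(worst_point)
\end{verbatim}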
\subsubsection{Principal Component Analysis} \label{sec:pca}
Once the waveform has been downsampled, it is represented with \(n_A\) numbers for the amplitude and \(n_\phi \) for the phase.
Its dimensionality can be further reduced using \ac{PCA}.
We collect all the residuals corresponding to each waveform in an array \(x = [\Delta A, \Delta \phi ] \in \mathbb{R}^{n_A + n_\phi }\) and construct a training dataset out of such arrays, \(\left\lbrace x_i \right\rbrace_i\), of which we may compute the mean \(\mu = \left\langle x \right\rangle\) and the covariance matrix
\begin{align}
C = \left\langle (x - \mu ) (x- \mu )^{\top} \right\rangle \,.
\end{align}
This (symmetric, positive semi-definite) matrix is diagonalized as \(C = V D V^{\top}\), where \(D = \operatorname{diag} (\lambda _i)\) is a diagonal matrix containing the eigenvalues of the covariance matrix, ordered so that \(\lambda _i \geq \lambda _{i+1}\).
The columns of \(V\) are the eigenvectors and, because of the ordering, the first \(k\) eigenvectors correspond to the \(k\) largest eigenvalues.
Projecting a vector \(x\) onto the span of these \(k\) eigenvectors allows us to approximately represent it with only \(k\) numbers.
Specifically, if \(U\) is the \((n_A + n_\phi ) \times k\) submatrix of \(V\) consisting of the \(k\) eigenvectors corresponding to
the largest eigenvalues of the covariance matrix \(C\), we explicitly write the forwards and backwards transformations for \(x\) into its low-dimensional representation \(\widetilde{x}\):
\begin{subequations} \label{eq:pca}
\begin{align}
x &\to \widetilde{x} = U^{\top} (x - \mu ) \\
\widetilde{x} &\to x = U \widetilde{x} + \mu
\,.
\end{align}
\end{subequations}
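Equation~\eqref{eq:pca} corresponds to a few lines of linear algebra; the following illustrative sketch fits the \ac{PCA} on a training matrix whose rows are the vectors \(x_i\):
\begin{verbatim}
import numpy as np

def fit_pca(x_train, k=30):
    # Rows of x_train are the vectors x_i = [Delta A, Delta phi]
    mu = x_train.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x_train, rowvar=False))
    u = eigvecs[:, ::-1][:, :k]  # k leading eigenvectors (descending)
    return mu, u

def reduce(x, mu, u):
    return u.T @ (x - mu)        # x -> x_tilde

def reconstruct(x_tilde, mu, u):
    return u @ x_tilde + mu      # x_tilde -> x
\end{verbatim}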
The number of principal components to keep can be tuned depending on the required final fidelity; including more of them increases the evaluation time of each waveform.
For simplicity, for the remainder of this work we always retain 30 principal components.
In principle this number could also be tuned, and its current value was mainly chosen to be ``safely large''.
This is confirmed by Fig.~\ref{fig:mismatches_by_n_train}: the reconstruction error decreases roughly as the inverse of the number of training points, and the accuracy is never limited by the number of retained \ac{PCA} components down to mismatches \(\mathcal{\bar{F}} \lesssim 10^{-5}\).
As we will discuss in Sec.~\ref{subsec:roq-con}, even with this possibly suboptimal value of \(k\) our model is fast enough not to be the bottleneck in the evaluation of the likelihood.
\subsection{Frequency band}\label{sec:frequencies}
As our detectors improve their sensitivity at low frequency, it is crucial to have a model which can be conveniently evaluated there.
In this section, we discuss the frequency band in which our model is trained, and how we may overcome the inherent limitation of only training down to a given frequency.
For the default model, which is provided with version \texttt{0.12.0} {} of {\tt mlgw\_bns} {} and whose performance is discussed in this work,
the frequency range for which validity is guaranteed is \([5, 2048]\text{Hz}\), while the range of valid total masses is \([2, 4] M_{\odot}\):
this means, as we shall discuss below, that the reference-mass model is trained in the range \(\approx [3.57, 2926] \text{Hz}\).
When the user requests frequencies within the training range, the model is able to directly yield a prediction; however this may be limiting, especially when considering multi-band observations.
The waveform at frequencies lower than the ones in the training range is well-described by the \ac{PN} approximation: therefore, waveforms predicted by {\tt mlgw\_bns} {} are natively hybridized with \ac{PN} ones at low frequency, as Sec.~\ref{sec:low-freq-bound} below describes.
\subsubsection{Mass rescaling}
As mentioned in the introduction, we exclude the total mass \(M\) from the training parameters since the waveform only depends on the combination \(Mf\): this affects the frequency band in which we must train our model.
Suppose the user requires a waveform \(h(f; M, \theta )\) with total mass \(M\).
Then, the overall waveform is computed within {\tt mlgw\_bns} {} as
\begin{align}
h(f) = \frac{M}{ M _{\text{ref}}} h \left(\frac{fM}{M _{\text{ref}}}; M _{\text{ref}}, \theta\right)
\,,
\end{align}
which means that the user-given frequency grid will be shifted by a factor \(M / M _{\text{ref}}\).
In order for this to yield a valid waveform, however, the shifted frequencies must
still lie within the model's training frequency range.
Therefore, if we want our model to be applicable for all frequencies in a range \([f_1, f_2 ]\) and
for all masses in a range \([M_1, M_2]\) we need to train the reference-mass model in a range
\begin{align}
f \in \left[ f_1 \frac{M_1}{M _{\text{ref}}}, f_2 \frac{M_2 }{M _{\text{ref}}} \right]
\,.
\end{align}
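In code, this rescaling amounts to the following sketch, where \texttt{h\_ref} is assumed to be a callable evaluating the reference-mass waveform at arbitrary frequencies (a hypothetical interface, for illustration):
\begin{verbatim}
def rescale_total_mass(freqs, h_ref, m_total, m_ref=2.8):
    # h(f; M) = (M / M_ref) * h(f M / M_ref; M_ref)
    scale = m_total / m_ref
    return scale * h_ref(freqs * scale)
\end{verbatim}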
\subsubsection{High frequency bound}
The model we are training on, {\tt TEOBResumSPA}, describes the inspiral up to merger, which in the mass range of interest typically happens above \(2 \, \text{kHz}\).
After the merger, the remnant (a short- or long-lived neutron star, or a black hole) will emit a post-merger GW signal for which models exist~\cite{Clark:2015zxa,Breschi:2019srl,Easter:2020ifj,Soultanis:2021oia,Wijngaarden:2022sah,Breschi:2022xnc}, but which is considered separately from the \ac{EOB} waveform:
after the merger frequency, {\tt TEOBResumSPA} {} waveforms are tapered with a power law in the amplitude, \(A _{\text{EOB}} \propto f^{-10/3}\), and a linear relation in phase, \(\dot{\phi} _{\text{EOB}} = \dot{\phi} (f _{\text{max}})\), for \(f > f _{\text{max}}\) \cite[eqs.~S11-S12]{Gamba:2020ljo}.
This scaling is enforced so as to ensure that the inverse Fourier transform of these waveforms is close to the time-domain waveform.
Also, it means that the amplitude is guaranteed to remain positive (albeit quickly diminishing) at high frequency.
However, this implies an issue in the residuals computation of Eq.~\eqref{eq:amplitude-phase-residuals}:
the baseline PN approximant is written as a power series in \(v = (\pi M f)^{1/3}\),
which means that there is no guarantee that \( A _{\text{PN}}\) will remain positive in the high-frequency regime; indeed, in practice it often becomes negative, causing the residuals defined in Eq.~\eqref{eq:amplitude-phase-residuals} to diverge.
We fix this by choosing a maximum frequency for the validity of the \ac{PN} model, and setting its amplitude to a constant value after that.
This is not done ``sharply'', since that would propagate a discontinuity to the prediction:
instead, we smoothly connect the expressions within an interval \([f_1, f_2] = [0.01/M, 0.02/M]\) as follows:
for all \(f \in [f_1, f_2 ]\) we write
\begin{align}
A _{\text{PN}}^{\text{new}} (f) = \left(1 - \zeta \left(x(f)\right)\right) A _{\text{PN}} (f) + \zeta \left(x(f)\right) C
\,,
\end{align}
where \(\zeta\colon [0, 1] \to [0, 1]\) is chosen so its derivative at the boundaries vanishes; specifically, we use
\begin{align}
\zeta (x) = \frac{1}{2} \left(1 - \cos(\pi x)\right)
\,,
\end{align}
while
\begin{align}
x(f) = \frac{f - f_1 }{f_2 - f_1}
\,.
\end{align}
The constant \(C\) is chosen to be equal to 20 in natural units; this is somewhat arbitrary,
but it is roughly the value attained by \(A _{\text{EOB}}\) at \(f \sim 0.02/M\),
as demonstrated by the first panel in Fig.~\ref{fig:original_residuals}:
the quantity plotted at \(Mf = 0.02\) is \(\log \left( A _{\text{EOB}}(Mf=0.02) / C \right)\),
and one can see that it changes sign as we vary \(\widetilde{\Lambda}\).
This shows that \(C=20\) is a reasonable middle ground for this parameter.
This choice will only have an impact on the network's ability to learn the residuals;
if they are reconstructed correctly and the same modified \ac{PN} model is used both in training and reconstruction,
the specifics of the modification do not matter, and the high-frequency continuation of our waveforms is equal
to the \ac{EOB} one described at the beginning of this section.
For simplicity, for all frequencies higher than the maximum training one, we return a waveform which is identically equal to zero.
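A minimal sketch of this smooth continuation, written in terms of the mass-rescaled frequency \(Mf\) so that the blending window is \([0.01, 0.02]\), is given below (illustrative names and interface):
\begin{verbatim}
import numpy as np

def zeta(x):
    # Smooth step with vanishing derivative at both endpoints
    return 0.5 * (1.0 - np.cos(np.pi * x))

def continue_pn_amplitude(mf, a_pn, c=20.0):
    # Replace the PN amplitude by the constant C above Mf = 0.02,
    # blending smoothly within [0.01, 0.02]
    x = np.clip((mf - 0.01) / (0.02 - 0.01), 0.0, 1.0)
    return (1.0 - zeta(x)) * a_pn + zeta(x) * c
\end{verbatim}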
\subsubsection{Low frequency bound} \label{sec:low-freq-bound}
For a typical \ac{BNS}, a frequency of \(5 \text{Hz}\) corresponds to about 2 hours before merger.
This is close to the lower frequency limit for a ground-based detector, but
for a multi-band observational campaign (including space- or Moon-based detectors) having a model able to
be evaluated at arbitrarily low frequencies is very convenient.
The architecture in {\tt mlgw\_bns} {} makes this easily achievable:
since we are reconstructing residuals from a \ac{PN} baseline, we may
evaluate the waveform at arbitrarily low frequencies by setting the residuals to zero and just yielding the \ac{PN} waveform, which below \(5 \text{Hz}\) is a very good approximation of the true waveform:
as Fig.~\ref{fig:original_residuals} shows, the residuals approach 0 in the low-frequency regime.
For the phases, by subtracting an arbitrary linear term we can achieve \(\phi (f _{\text{min}}) = 0\) exactly, and \(\dot{\phi} (f _{\text{min}}) \approx 0\) to quite good accuracy, therefore we can simply yield \ac{PN} phases below \(f _{\text{min}}\) and our prediction above it.
For the amplitudes, this is not the case, and a discrepancy of the order of \(\Delta \log A \sim 5 \times 10^{-3}\) remains.
This discontinuity is fixed by a smoothing procedure: PN amplitudes corresponding to frequencies between \(f _{\text{min}} / 2\) and \(f _{\text{min}}\) are rescaled, so that the output of the model is
\begin{equation}
A(f) =
\begin{cases}
A _{\text{PN}} (f) & f < f _{\text{min}} / 2 \\
A _{\text{PN}} (f) + \Delta A\, \zeta \left( \frac{2f}{f _{\text{min}}} - 1 \right) & f _{\text{min}} / 2 \leq f < f_{\text{min}} \\
A _{\text{EOB}} (f) & f \geq f_{\text{min}}
\end{cases}
\,,
\end{equation}
where \(\Delta A = A _{\text{EOB}}(f _{\text{min}}) - A _{\text{PN}}(f _{\text{min}})\) is the residual discrepancy at the junction.
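The same smooth step \(\zeta\) used for the high-frequency continuation implements this junction; an illustrative sketch, assuming both amplitude arrays are available on the full grid, follows:
\begin{verbatim}
import numpy as np

def hybridize_amplitude(freqs, a_pn, a_model, f_min):
    # Absorb the small jump Delta A at f_min into a smooth blend
    # over [f_min / 2, f_min]
    i = np.searchsorted(freqs, f_min)
    delta_a = a_model[i] - a_pn[i]
    x = np.clip(2.0 * freqs / f_min - 1.0, 0.0, 1.0)
    zeta = 0.5 * (1.0 - np.cos(np.pi * x))
    return np.where(freqs >= f_min, a_model, a_pn + delta_a * zeta)
\end{verbatim}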
\subsection{Neural Network} \label{sec:neural-network}
A feed-forward neural network is trained to reconstruct the map \(\theta \to \widetilde{x}\), where \(\theta\) is the vector of the 5 intrinsic parameters considered, while \(\widetilde{x}\) is a 30-dimensional \ac{PCA} representation of the residuals corresponding to the waveform generated by the \ac{EOB} model with the given parameters.
As our neural network we employ a \texttt{MLPRegressor} from the \texttt{scikit-learn} library~\cite{Pedregosa:2011bfd}, and the training is performed with the Adam algorithm for stochastic gradient descent~\cite{Kingma:2017rta}.
As is common, the parameters \(\theta \) are rescaled to have mean \(0\) and standard deviation \(1\).
After the \ac{PCA} reduction, each component in the vector \(\widetilde{x}\) natively has comparable variance, but we may arbitrarily rescale them, which is equivalent to rescaling the eigenvectors in the matrix \(U\) defined in Sec.~\ref{sec:pca}.
Also, we know that the eigenvectors corresponding to the largest eigenvalues \(\lambda_i\) ``matter more'', in that they explain more variance.
Therefore, as a preprocessing step we introduce a fixed rescaling of the vector \(\widetilde{x}\), as \(\widetilde{x}_i \to \widetilde{x}_i \lambda_i^\alpha\) for some tunable choice of \(\alpha \geq 0\).
The distance used during the training is then simply the Euclidean one between these rescaled \(\widetilde{x}\).
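Schematically, the training stage reduces to a few calls to \texttt{scikit-learn}; the sketch below uses random stand-in data and illustrative layer sizes, not the tuned hyperparameters listed in App.~\ref{sec:appendix_B}:
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
theta = rng.uniform(size=(1000, 5))    # stand-in intrinsic parameters
x_tilde = rng.normal(size=(1000, 30))  # stand-in rescaled PCA components

scaler = StandardScaler().fit(theta)
network = MLPRegressor(hidden_layer_sizes=(50, 50),  # illustrative
                       solver="adam", max_iter=500)
network.fit(scaler.transform(theta), x_tilde)

# Prediction for a new parameter vector (the lambda_i**alpha
# rescaling must then be undone before the PCA reconstruction)
x_tilde_pred = network.predict(scaler.transform(theta[:1]))
\end{verbatim}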
\subsubsection{Hyperparameter optimization} \label{sec:hyperparameter-optimization}
Several hyperparameters, which determine the network's properties and performance, must be chosen before training, such as
the number and size of hidden layers in the network, the activation function, the conditions for the termination of the training, the coefficient for the regularization term, and the coefficient \(\alpha\) defined above.
For a complete list, see App.~\ref{sec:appendix_B}, which details all the hyperparameters used within the default network discussed here.
The optimal set of hyperparameters may vary as the number of training waveforms used to train the network may change.
Heuristically, we might imagine that a complex network with many layers would be the best choice
with many thousands of training waveforms, while it would overfit when using only a hundred waveforms for the training,
for which the optimal configuration would be a smaller network.
The specific dependence of the reconstruction efficiency on these parameters is, however, high-dimensional and hard to explore, since evaluating each point requires us to train the whole network.
We evaluate each possible set of hyperparameters by computing its average reconstruction error on a validation dataset, generated independently but from the same distribution as the training dataset; the reconstruction error is measured as the distance defined by
\begin{align} \label{eq:reconstruction-error}
\operatorname{dist}^{2}(\widetilde{x}_{\text{orig}}, \widetilde{x} _{\text{pred}}) = \frac{\lVert x _{\text{orig}}-x _{\text{pred}} \rVert^2}{n_\phi + n_A}
\,,
\end{align}
where \(n_\phi + n_A\) is the dimensionality of the vector \(x\), as defined in Sec.~\ref{sec:pca}: the distance is written in terms of the vectors \(x = [\Delta A, \Delta \phi ]\),
reconstructed from the PCA-reduced \(\widetilde{x} \) predicted by the network.
The hyperparameters are optimized with the \texttt{optuna} package~\cite{Akiba:2019gsa} using a multi-objective
tree-structured Parzen estimator~\cite{Ozaki:2020nor}, where the two cost functions being simultaneously optimized are
\begin{enumerate}
\item the average reconstruction accuracy on a validation dataset measured as in Eq.~\eqref{eq:reconstruction-error};
\item the estimated time required for the generation of the training waveforms,
quantified by \(100 \text{ms}\) times the number of training waveforms, plus the
time needed to train the network.
\end{enumerate}
The training and validation datasets are randomized in each iteration.
The use of these two ``opposed'' cost functions allows for a Pareto front of optimal parameters to be computed.
This is a collection of parameter sets corresponding to different training dataset sizes;
once this optimization has been run, for any given dataset size we have a set of good hyperparameters to train the network.
Such a collection --- with dataset sizes ranging from 50 to \(10^5\) training waveforms --- is provided with version \texttt{0.12.0} {} of {\tt mlgw\_bns}, and Fig.~\ref{fig:pareto_front} shows the validation errors as a function of training dataset size.
When creating a new model, a lookup may then be performed to recover the locally optimal hyperparameters for the amount of data available to the model.
This is efficient since it allows us to train new networks without re-running the optimization when the parameter
space utilized remains relatively similar to the one used during the optimization procedure;
we have however found that with significant changes to the parameter space ({\it e.g.}~ including versus not including spin)
the optimization had to be re-run since it was giving suboptimal results.
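Schematically, the optimization loop looks as follows; the hyperparameter names, ranges, and the stand-in cost functions are purely illustrative (a real objective would train the network and measure the validation error of Eq.~\eqref{eq:reconstruction-error}):
\begin{verbatim}
import optuna

def objective(trial):
    n_train = trial.suggest_int("n_train", 50, 100_000, log=True)
    pc_exponent = trial.suggest_float("pc_exponent", 0.0, 1.0)
    hidden_size = trial.suggest_int("hidden_size", 10, 200)
    # Stand-in cost functions, for illustration only
    validation_error = (1.0 + pc_exponent) / (n_train * hidden_size)
    cost_seconds = 0.1 * n_train + 0.01 * hidden_size * n_train ** 0.5
    return validation_error, cost_seconds

study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=100)
pareto_trials = study.best_trials  # Pareto-optimal hyperparameter sets
\end{verbatim}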
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig05.pdf}
\caption{Pareto front for the hyperparameter optimization. The vertical axis shows the average error, computed as in Eq.~\eqref{eq:reconstruction-error}. The flattening observed at large training dataset sizes is not necessarily real: computational constraints prevented a large number of trials from being performed in that region.}
\label{fig:pareto_front}
\end{figure}
\section{Model performance}
\label{sec:performance}
\subsection{Accuracy}
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig06.pdf}
\caption{Kernel Density Estimate representation of the mismatches
between the waveforms reconstructed by {\tt mlgw\_bns} {} and the corresponding ones
generated by the reference waveform generator, {\tt TEOBResumSPA}, for uniformly-distributed sets of parameters \(\theta _{\text{int}}\) in the training ranges, and with constant total mass \(M = M _{\text{ref}} = 2.8M_{\odot}\).
The curve labelled as ``PN only'' is obtained by comparing the baseline PN waveforms with the corresponding EOB ones, {\it i.e.}~ setting the reconstructed residuals to zero; for the other curves we use the number indicated for both the training of the
\ac{PCA} and for the training of the network, so the overall number of waveforms used is twice \(N\).
The same 4096 validation waveforms are used to generate each curve.
The mismatch is computed according to the predicted Einstein Telescope PSD, ET-D \cite{EinsteinTelescope:2011fda,Hild:2010id}, within the band \([3.57, 2926]\text{Hz}\) (see Sec.~\ref{sec:frequencies}).}
\label{fig:mismatches_by_n_train}
\end{figure}
Figure \ref{fig:mismatches_by_n_train} shows the mismatches between the reconstructed waveforms and the corresponding EOB ones. The mismatches are computed on validation datasets generated with the same distribution as the training ones, but with differently-seeded random number generators. The mismatches are computed according to the predicted Einstein Telescope PSD, ET-D \cite{EinsteinTelescope:2011fda,Hild:2010id}.
As shown by the figure, the mismatch \(\bar{\mathcal{F}}\) decreases roughly as \(\bar{\mathcal{F}} \sim 1 / N _{\text{train}} \).
The reconstructed residuals corresponding to the best model of Fig.~\ref{fig:mismatches_by_n_train} (trained with \(2^{17} = 131072\) waveforms) are shown in Fig.~\ref{fig:reconstruction-residuals}.
As one might expect, the residuals significantly differ from zero only in the high-frequency region, like the original residuals.
When considering the magnitude of the phase residuals, note that the logarithmic frequency axis distorts what may be linear trends: the temporal alignment chosen in the plot was not optimized to correspond to the best-match one, but instead to align the waveforms at low frequency.
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig07.pdf}
\caption{Residuals of 100 reconstructed waveforms with respect to the reference EOB ones. The parameters are uniformly distributed.
}
\label{fig:reconstruction-residuals}
\end{figure}
\subsection{Speed}
\begin{figure}[ht]
\centering
\includegraphics[width=.48\textwidth]{fig08.pdf}
\caption{Benchmarks of the evaluation time required for one waveform,
with {\tt TEOBResumSPA} {} and with {\tt mlgw\_bns}. These times are computed by averaging 100 trials
at each point, and the frequencies at which the waveforms are evaluated
are always taken to be equally spaced between \(5 \text{Hz}\) and \(2048 \text{Hz}\) for simplicity
--- distorting the grid does not affect timing.
For both approximants, we also show a fit with a model \(t = t_{\text{o}} + t_{\text{p}} N\).}
\label{fig:benchmarking-evaluation}
\end{figure}
The evaluation times for {\tt mlgw\_bns} {} are shown in Fig.~\ref{fig:benchmarking-evaluation}
and compared to the evaluation times of {\tt TEOBResumSPA}{}.
Both templates exhibit a similar behavior in the number of sampling points: \(t(N _{\text{sample}}) \sim t_{\text{o}} + t_{\text{p}} N _{\text{sample}}\).
There is an approximately constant cost to evaluate the waveforms at small values of $N _{\text{sample}}$, while for large $N _{\text{sample}}$ the evaluation time scales linearly.
This is due to the fact that, for both templates, there are operations that are approximately independent of the number of evaluation points.
For {\tt mlgw\_bns}, these are running the parameters through the neural network and recomposing the result through \ac{PCA}.
For {\tt TEOBResumSPA} {}, these are the solution of the Hamiltonian flow using the post-adiabatic EOB iteration (at a fixed number of points) and the subsequent ODE evolution for the last few orbits before merger \cite{Nagar:2018gnk}.
The linear regime is instead, for both templates, caused by the time needed to interpolate the waveform to each of the finely-spaced user-given frequency points, and by other linear-time operations such as combining amplitude and phase into the Cartesian representation of the waveform.
The linear-time operations taken by the two approximants are comparable; {\tt TEOBResumSPA} {} is implemented in C and {\tt mlgw\_bns} {} in \texttt{python}, but several components in the latter are just-in-time compiled thanks to \texttt{numba} \cite{Lam:2015a}.
While the constant term \(t_{\text{o}}\) might be whittled down by optimizing the implementation, the linear term cannot be completely removed --- the program will have to do at least a few floating point operations for each point we are resampling at.
Therefore, if we want fast waveform evaluation it is important to use as small a number of points as we can, while retaining the desired accuracy.
Several approaches have been suggested towards this goal for \ac{PE} purposes:
the simpler ones are similar in spirit to what has been discussed in Sec.~\ref{sec:multibanding},
using a smart coarser sampling than what the ``natural'' FFT grid would be.
More sophisticated approaches include \acp{ROQ} (discussed below in the context of \ac{PE}) or relative binning \cite{Zackay:2018qdy, Leslie:2021ssu}.
In Tab.~\ref{tab:wf_timings} we show a breakdown of the use of time within an evaluation of {\tt mlgw\_bns}, in the case of 1000 grid points.
\begin{table}[t]
\caption{Timing breakdown for the evaluation of a waveform on 1000 grid points with {\tt mlgw\_bns}. Values will fluctuate across evaluations; this table is only meant to be indicative of the ratios between them.}
\begin{tabular}{lclc}
\hline
\hline
Task & Time [\(\mu\)s] & Subtask & Time [\(\mu\)s] \\
\hline
\multirow{2}{*}{Resampling} & \multirow{2}{*}{841} & Spline creation & 728 \\
&& Spline evaluation & 113 \\
\hline
\multirow{2}{*}{PN evaluation} & \multirow{2}{*}{653} & Amplitude & 434 \\
& & Phase & 219 \\
\hline
\multirow{3}{*}{PCA+NN} & \multirow{3}{*}{397} & NN & 326 \\
& & PCA & 41 \\
& & Misc. & 30 \\
\hline
\multirow{3}{*}{Postprocessing} & \multirow{3}{*}{289} & Include extrinsic & 157 \\
&& Compute \(h = A e^{-i \phi }\) & 40 \\
&& Misc. & 90 \\
\hline\hline
Total & 2180 &&
\end{tabular}
\label{tab:wf_timings}
\end{table}
\section{Parameter estimation}
\label{sec:PE}
To showcase the benefits brought by our model in a realistic setting, we perform PE studies on the binary neutron star (\ac{BNS}) transient
GW170817~\cite{TheLIGOScientific:2017qsa,LIGOScientific:2018mvr}.
In Sec.~\ref{subsec:teob-mlgw}, we first perform a full-scale validation, showing that \ac{GW} inference using {{\tt mlgw\_bns}} yields results compatible with the ones obtained with {{\tt TEOBResumSPA}}.
Then, in Sec.~\ref{subsec:roq-con} we discuss and apply compression techniques capable of reducing the number of frequency nodes on which {{\tt mlgw\_bns}} needs to be evaluated for PE purposes. This step allows us to fully exploit the benefits of our model, which displays the largest gain compared to {{\tt TEOBResumSPA}} for a smaller number of frequency nodes (see Fig.~\ref{fig:benchmarking-evaluation}).
Sec.~\ref{subsec:roq-pe} finally repeats the \ac{PE} analysis combining {{\tt mlgw\_bns}} and such compression methods, showcasing the more than order-of-magnitude speed gain obtainable with our \ac{ML} technique against {{\tt TEOBResumSPA}} in a full-fledged \ac{PE} analysis.
In particular, we analyze the (deglitched) GWOSC data of LIGO and Virgo centered around GPS time 1187008857
with a sampling rate of 4096~Hz and a duration of 128~s, considering the frequency range [23, 2000]~Hz.
Our \ac{PE} relies on the MPI-parallelized {{\tt bajes}} pipeline~\cite{Breschi:2021wzr} and the
${\tt dynesty}$ \cite{Speagle:2020} nested sampler.
The reported errors correspond to the 90\% confidence intervals and the $\log$ symbol refers to the natural logarithm.
The mass prior is chosen to be flat in the mass components $m_{1,2}$, although the
sampling is then performed in $({\cal M},q)$, with ranges wide enough to capture the full posterior width.
We sample on aligned-spin components, with an isotropic prior bounded by $\chi_{1,2}\le 0.5$.
The prior on the tidal parameters is uniform in the ranges $\Lambda_{1,2}\in[5,5000]$
and the luminosity distance employs a volumetric prior in $D_L\in[1,75]~{\rm Mpc}$.
Other priors are set according to standard prescriptions in \ac{GW} astronomy~\cite{Breschi:2021wzr}.
We do not assume prior knowledge on electromagnetic counterparts.
We include spectral calibration envelopes with 10 logarithmic-spaced nodes for each detector.
For an overview of Bayesian inference of \ac{GW} signals see Refs.~\cite{Veitch:2009hd,Veitch:2014wba,thrane_talbot_2019,Breschi:2021wzr}.
\subsection{Full grid {\mlgwbns} -- {{\tt TEOBResumSPA}} comparison}\label{subsec:teob-mlgw}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.96\textwidth]{fig09.pdf}
\caption{Corner plot of the posterior distribution for selected parameters reconstructed for GW170817, with {\tt mlgw\_bns} {} (orange) and {\tt TEOBResumSPA} (black).
The contours report the 50\% and the 90\% credibility regions.
}
\label{fig:corner_posterior_mb_teob}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.96\textwidth]{fig10.pdf}
\caption{Corner plot of the posterior distribution for selected parameters reconstructed for GW170817, in both cases with {\tt mlgw\_bns},
but when using an \ac{ROQ} technique or a full frequency grid evaluation.
The contours report the 50\% and the 90\% credibility regions.
}
\label{fig:corner_posterior_roq}
\end{figure*}
Using the settings discussed above, GW170817 is analyzed with {\mlgwbns} and {{\tt TEOBResumSPA}}
in order to compare performances and verify the consistency of the results.
The sampling employs 3000 live points, an evidence tolerance of 0.1, a maximum number of Markov-Chain Monte Carlo steps of 12000 and 5 auto-correlation times before accepting a point. We analytically marginalise over the coalescence time $t_c$ and phase $\phi_c$.
The two waveform approximants achieve compatible measurements, within the stochasticity of the sampler.
Figure \ref{fig:corner_posterior_mb_teob} shows the comparison between {\mlgwbns} and
{{\tt TEOBResumSPA}} posterior distributions for selected parameters of interest.
We recover ${\cal M}={{1.1975}^{+0.0003}_{-0.0002}}~{\rm M_{\odot}}$,
the mass ratio is constrained to $q<2.07$ at the 90\% confidence level
and the reduced tidal parameter corresponds to ${\tilde\Lambda}={{365}^{+522}_{-254}}$.
The recovered posteriors are consistent with previous similar
studies~\cite{TheLIGOScientific:2017qsa,LIGOScientific:2018mvr,Abbott:2018wiz,Gamba:2020ljo,Breschi:2021wzr}.
Moreover, the two models recover similar Bayes' factors ($\log{\cal B} \simeq 482$),
and signal-to-noise ratios (${\rm SNR}={32}$),
validating the faithfulness of {\mlgwbns} with respect to the training template in a realistic application.
We observe only a mild improvement in execution time for {\mlgwbns} compared to {{\tt TEOBResumSPA}}.
This is expected given the uniform frequency grid with $(f_{\text{max}} - f_{\text{min}}) \times T = (2000-23) \times 128 = 253056$
evaluation points. In fact, Fig.~\ref{fig:benchmarking-evaluation} shows that for this number of points
the advantage in generating waveforms using {\mlgwbns} is not enormous.
Significant speedups can instead be achieved by relying on grids smaller than $10^4$ points.
This naturally calls for the usage of compression techniques, capable of restricting
the required number of frequency nodes used in computing the likelihood, the subject of the remainder of this section.
\subsection{Reduced order quadrature construction}\label{subsec:roq-con}
Reduced order modeling, which is referred to as \ac{ROQ}s in \ac{GW} astronomy when combined with discrete empirical interpolation techniques,
is a method of eliminating information redundancy present in sets of parametric functions (in our case, the gravitational waveforms as functions of the physical parameters of the binary system, such as masses and spins) when evaluated on a discrete set of points (in our case, the frequency grid).
By selecting a small number of waveforms' ``basis elements'' and an equal number of discrete interpolation frequency points, \ac{ROQ}s are capable of dramatically speeding up both waveform evaluation and integrals involving them, such as the Wiener inner products (see Eq.~\eqref{eq:Wiener_ip}) entering the standard \ac{GW} likelihood.
This is achieved by sufficiently accurate -- and fast to evaluate -- \textit{interpolants}, built on a large training dataset.
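Schematically, once the bases are constructed offline, the two inner products entering the likelihood reduce to short weighted sums over the empirical nodes. The following \texttt{python} sketch follows the standard construction of Ref.~\cite{Smith:2016qas}; the weights (which fold in the data, the PSD and all conjugation conventions) and the node placements are assumed to be precomputed, and this is not the {\tt bajes} implementation.
\begin{verbatim}
# Sketch of ROQ-compressed inner products.
# w_lin  : complex weights, one per linear empirical node
# w_quad : real weights, one per quadratic empirical node
# h_lin, h_quad : waveform evaluated at the two sets of nodes
import numpy as np

def roq_log_likelihood_terms(h_lin, h_quad, w_lin, w_quad):
    d_h = np.real(np.sum(w_lin * h_lin))                 # (d|h) piece
    h_h = np.real(np.sum(w_quad * np.abs(h_quad) ** 2))  # (h|h) piece
    return d_h - 0.5 * h_h  # log L up to a waveform-independent term
\end{verbatim}
The gain is immediate: the sums run over the number of basis elements rather than over the full frequency grid.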
In the context of \ac{GW} astronomy, early development and applications of \ac{ROQ}s to \ac{GW} searches were presented in Refs.~\cite{Field:2011mf, Caudill:2011kv}.
An extended mathematical analysis (notably, including convergence estimates) was presented in~\cite{Antil:2012wf}, while the construction of surrogate models using related techniques was pioneered in Ref.~\cite{Field:2013cfa}.
Applications to PE were introduced in~\cite{Canizares:2013ywa, Canizares:2014fya}, and the extension to precessing signals \ac{PE} was achieved in~\cite{Smith:2016qas}, also including many improvements such as mass-frequency partitions and an adaptive frequency sampling strategy. \ac{ROQ} acceleration of tests of \ac{GR} was considered in~\cite{Meidam:2017dgf}.
Most of the methods used in the aforementioned applications are implemented in the \texttt{GreedyCpp} code.\footnote{Available at: \href{https://bitbucket.org/sfield83/greedycpp}{bitbucket.org/sfield83/greedycpp}}
More recently, \ac{ROQ}s of precessing signals containing higher harmonics were presented in Ref.~\cite{Qi:2020lfr}, while Ref.~\cite{Smith:2021bqc} used \ac{ROQ} methods to demonstrate the feasibility of analysing \ac{BNS} merger signals detected by the next generation of ground-based detectors.
The interested reader may refer to~\cite{Antil:2012wf, Field:2013cfa, Smith:2016qas} for an introduction to the concepts used below.
Ref.~\cite{Qi:2020lfr} introduced a set of modifications in how the initial basis elements are constructed compared to previous literature, aiming at improving the efficiency of basis construction.
The related algorithm was released in a public \texttt{python} package, labeled \texttt{PyROQ}.\footnote{Available at: \href{https://github.com/qihongcat/PyROQ}{github.com/qihongcat/PyROQ}}
We modified and generalised this algorithm, added numerical stability checks, and restructured the software to make it more modular and easily usable with modern (typically \texttt{python}-based) waveform approximants.
Details of our algorithm, labeled \texttt{JenpyROQ},\footnote{Available at: \href{https://github.com/GCArullo/JenpyROQ}{github.com/GCArullo/JenpyROQ}} and GW170817 \ac{ROQ} interpolants construction are presented in Appendix A.
For the \ac{PE} analysis discussed above, we obtained a sufficiently accurate basis with 267 (10) linear (quadratic) basis elements,
achieving a linear (quadratic) frequency axis reduction factor of 950 (25300).
\subsection{Parameter estimation with reduced order quadrature interpolation}\label{subsec:roq-pe}
\begin{table}[t]
\caption{Waveform generation ($\rm t_{\rm wf} $) and likelihood inner-products ($\rm t_{\rm ip} $) timings when using ROQs or a full frequency grid evaluation.
We report results for both a single-detector ($\rm N_{det} = 1$) and a three-detector ($\rm N_{det} = 3$) network.
The total likelihood evaluation time is simply $\rm t_{\rm tot} \simeq \rm t_{\rm wf} +\rm t_{\rm ip}$, since the costs of other likelihood operations are comparatively negligible.
The \ac{ROQ} approximation results in a PE speedup factor of 18 (12) in the one (three) detector case.}
\begin{tabular}{@{}lcccc}
\hline\hline
\multicolumn{5}{c}{Timings [ms]} \\
\hline
\hline
$(\rm N_{det}, \rm ROQ) $ & (1, no) & (1, yes) & (3, no) & (3, yes) \\
\hline
$\rm t_{\rm wf} $ & 69.8 & 2.2 & 69.8 & 2.2 \\
$\rm t_{\rm ip} $ & 15.3 & 2.5 & 45.9 & 7.5 \\
$\rm t_{\rm tot}$ & 85.2 & 4.7 & 115.7 & 9.7 \\
\hline\hline
\end{tabular}
\label{tab:ROQ_timings}
\end{table}
To predict the expected speedup on a \ac{PE} run using the \ac{ROQ} interpolants described above, it is sufficient to compute $\rm t_{\rm tot} = \rm t_{\rm wf} +\rm t_{\rm ip}$, where $\rm t_{\rm wf}$ indicates the waveform (Eq.~\ref{eq:emission-mode-decomposition}) generation time
and $\rm t_{\rm ip}$ the evaluation time of the likelihood inner products (including interpolants evaluation):
all other operations (e.g. detector projection) are negligible compared to these two costs.
Typical values for these times when using an \ac{ROQ} technique or a full frequency grid evaluation are reported in Table~\ref{tab:ROQ_timings}.
For a single detector, the predicted \ac{ROQ} speedup factor is $85.2~ \rm ms / 4.7~ \rm ms \sim {}18$.
For three detectors (the case of interest in our realistic application), the total speedup becomes: $115.7~ \rm ms / 9.7~ \rm ms \sim {}12$.
These numbers imply that when relying on {{\tt mlgw\_bns}} and an \ac{ROQ} scheme, the waveform evaluation cost is no longer the dominant one.
For this reason, the expected \ac{PE} speedup (12) is a factor of three smaller than the waveform evaluation speedup (35) inferred from Fig.~\ref{fig:benchmarking-evaluation}.
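As a cross-check, these predictions follow directly from the entries of Table~\ref{tab:ROQ_timings}:
\begin{verbatim}
# Predicted ROQ speedups from the timings table above (values in ms).
t_full = {1: 69.8 + 15.3, 3: 69.8 + 45.9}   # t_wf + t_ip, full grid
t_roq  = {1:  2.2 +  2.5, 3:  2.2 +  7.5}   # t_wf + t_ip, ROQ
for n_det in (1, 3):
    print(n_det, round(t_full[n_det] / t_roq[n_det], 1))  # 18.1, 11.9
\end{verbatim}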
We validate this by repeating the GW170817 analysis in the previous section, employing {{\tt mlgw\_bns}} both times but using either a \ac{GW} likelihood built with the \ac{ROQ} interpolants
constructed above, or a standard likelihood computation.
We do not apply time-marginalisation in this case, since we have not interfaced the \ac{ROQ} formulation with the time-marginalised likelihood; hence, we increase the values of the sampler settings to avoid any convergence issues altogether.
We employ 5000 live points, an evidence tolerance of 0.1, a maximum number of Markov-Chain Monte Carlo steps of 12000 and 10 auto-correlation times before accepting a point.
We explore $t_c$ within the bounds [24.7, 25.0]~s, using a discretisation composed of 3000 points.
\ac{PE} results obtained with the \ac{ROQ} settings discussed above or with the standard likelihood are statistically indistinguishable,
as shown in Fig.~\ref{fig:corner_posterior_roq}.
However, with 24 nodes, each comprising 2 Intel Xeon E5-2650v4 2.20~GHz 12-core CPUs, the sampling runtimes and relative speedup are:
$\rm t^{ROQ=0}_{samp} / \rm t^{ROQ=1}_{samp} = 49\rm h 20\rm m / 4\rm h 14\rm m \sim 11.7$, in very good agreement with the predictions presented above.
Pre-sampling interpolant construction took 9~minutes per detector with these settings.
Finally, we stress that the speedup resulting from the combination of {\mlgwbns} and \ac{ROQ} will bear a more dramatic impact when applied to longer frequency axes.
For example, in the case of full inspiral-merger-postmerger \ac{BNS} signals analyses, with a lower frequency bound of ${\sim}5$ Hz and reaching up to ${\sim}8$ kHz,
applications of similar techniques will provide a speedup larger than three orders of magnitude compared to a uniform grid.
\section{Conclusions}
\label{sec:con}
In this work we have introduced {\tt mlgw\_bns}, a new \ac{ML} surrogate waveform approximant in the frequency-domain for spin-aligned \ac{BNS} mergers, designed for applications to both current and future \ac{GW} detectors.
Our model is trained on accurate {\tt TEOBResumSPA}~\ac{EOB} waveforms, faithfully represented with a fidelity larger than the accuracy of the baseline SPA model against the native time-domain \ac{EOB} model $(\mathcal{\bar{F}} \lesssim 10^{-5})$.
At the same time, thanks to several dimensionality reduction steps, {\tt mlgw\_bns} {} delivers a considerable increase in efficiency.
By performing careful benchmark tests with varying frequency grids, we estimate a speed-up of $\sim 30$ with respect to {\tt TEOBResumSPA} {}, when evaluating on frequency axes composed of fewer than $\sim 10^4$ points, which can reach up to $\sim 35$ for fewer than $\sim 10^2$ frequency points.
Combined with \ac{ROQ} techniques, an overall \ac{PE} acceleration of more than an order of magnitude is achieved for current \ac{BNS} analyses -- as we explicitly demonstrated re-analysing GW170817 using a reduced basis.
Thanks to the improved performance of our model, in our investigations the likelihood cost is no longer dominated by the waveform generation time, but by inner products computations, making additional decreases in the evaluation time of our \ac{ML} model less relevant.
If the inner products computation cost can be reduced in future \ac{PE} implementations, it will be important to explore further optimisations of the algorithm, such as tuning the number of \ac{PCA} components and the greedy downsampling reconstruction tolerance, or improving the hyperparameters selection procedure.
Moving towards future detectors, even more dramatic improvements can be obtained for \ac{PE} in the ET band.
Since the number of empirical nodes will still be $O(10^2)$ even at high SNR~\cite{Smith:2021bqc}, \ac{ROQ} interpolants interfaced with the frequency-domain {\tt TEOBResumSPA}~\ac{EOB} approximant would allow for a waveform generation speed-up of \({\sim}50\), compared to a standard uniform grid when analysing a signal starting from $5 \rm Hz$.
Instead, given the extremely low overhead of our \ac{ML} model, the combined usage of \ac{ROQ} and {\tt mlgw\_bns} {} will provide a massive speed-up of more than \({\sim}10^3\) for the same configuration, without loss of accuracy.
Other than exploiting fast \ac{PE} techniques, our \ac{ML} model can even enable them.
In fact, posterior sampling acceleration through {\it e.g.}~ the application of Hamiltonian nested sampling \cite{Betancourt:2011rgh}, as well as forecasting with Fisher matrix studies, can be easily achieved thanks to the intrinsically differentiable architecture of {\tt mlgw\_bns} {}: a planned neural network upgrade is to yield not only the waveform polarizations \(h_{+, \times}\) but also their derivatives with respect to the parameters, {\it i.e.}~ \(\partial h_{+, \times} / \partial \theta_i\).
The knowledge of gradients can be also exploited in template bank generation~\cite{Coogan:2022qxs}, allowing for a fast computation of a metric approximation for the match and for coverage of a large dimensional parameter space: our model will facilitate the generation of the first \ac{BNS} template bank including tidal effects.
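As an illustration of the forecasting applications this would enable, the following is a minimal Fisher-matrix sketch for a single detector on a uniform frequency grid; \texttt{waveform} and \texttt{psd} are hypothetical placeholders (not part of the {\tt mlgw\_bns} interface), and central finite differences stand in for the derivatives that the planned autodiff upgrade would return directly.
\begin{verbatim}
# Hypothetical Fisher-matrix sketch; `waveform(theta)` returns the
# frequency-domain strain, `psd` the noise PSD on the same grid.
import numpy as np

def fisher_matrix(waveform, theta, psd, df, eps=1e-6):
    grads = []
    for i in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += eps
        tm[i] -= eps
        grads.append((waveform(tp) - waveform(tm)) / (2 * eps))
    # F_ij = 4 Re [ sum_f dh_i conj(dh_j) / S_n(f) ] df
    n = len(theta)
    return np.array(
        [[4 * df * np.real(np.sum(grads[i] * np.conj(grads[j]) / psd))
          for j in range(n)] for i in range(n)])
\end{verbatim}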
On the other hand, the baseline model and the physics content of {\tt mlgw\_bns} {} will require improvements
in order to meet the accuracy prerequisites of XG observatories.
While the simplicity in re-training {\tt mlgw\_bns} {} will allow it to remain up to date with future
enhancements of tidal \ac{EOB} models (such as self-spin interactions, higher order tidal effects, dynamical tides), less straightforward will be to incorporate: higher modes~\cite{Nagar:2020pcj}, precession~\cite{ Akcay:2020qrj,Gamba:2021ydi}, eccentricity~\cite{Chiaramello:2020ehz, Nagar:2021gss}\footnote{All these features are already implemented both in the native time-domain {\tt TEOBResumS} {} model and in {\tt TEOBResumSPA} {}, with the exclusion of eccentricity, only available in the time-domain waveform.} and a frequency-domain postmerger completion~ \cite{Breschi:2022ens,Breschi:2022xnc}.
We leave such extensions of {\tt mlgw\_bns} {} to future work, but briefly discuss possible strategies to tackle them.
Higher order ($\ell>2$) modes break the simple dependence on the inclination angle \(\iota\) described by Eq.\eqref{eq:22-mode-polarizations}, requiring the modes to be reconstructed separately, with a corresponding slowdown in waveform evaluation.
GPU acceleration~\cite{Thomas:2022rmc} could be employed to ameliorate this.
Precession effects could be immediately included relying on \ac{ML}-reconstructed higher modes, and subsequently applying a twisting~\cite{Schmidt:2010it, Schmidt:2012rh, Gamba:2021ydi} describing the generic spin dynamics.
Finally, eccentricity introduces modulations which make the time-to-frequency map non-monotonic: this prevents a straightforward application of \ac{SPA}, which we use to generate our training datasets. This problem could be cured by moving from \ac{SPA} to shifted uniform asymptotics \cite{Klein:2018ybm,Klein:2014gds}.
In summary, {\tt mlgw\_bns}{} provides both a concrete step forward towards feasible and accurate \ac{PE} with XG detectors, and a more efficient alternative to current \ac{EOB} \ac{BNS} models for present-day analyses.
\begin{acknowledgments}
JT and SB thank Michela Mapelli for supporting this project and early discussions.
GC thanks Hong Qi for discussions on \texttt{PyROQ} and Rory Smith, Carl-Johan Haster for useful insights on integrating detector calibration uncertainties with ROQ interpolants.
MB and SB acknowledge support by the EU H2020 under ERC Starting Grant, no.~BinGraSp-714626.
MB and RG acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) under Grant No. 406116891 within the Research Training Group RTG 2522/1.
GC acknowledges support by the Della Riccia Foundation under an Early Career Scientist Fellowship.
GC acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 847523 ‘INTERACTIONS’.
%
This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration.
%
LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes.
%
Computations were performed on {\scshape ARA}, a resource of Friedrich-Schiller-Universit\"at Jena supported in part by DFG grants INST 275/334-1 FUGG, INST 275/363-1 FUGG and EU H2020 BinGraSp-714626. Postprocessing was performed on the {\scshape Tullio} server at INFN Turin.
\noindent {{\tt mlgw\_bns}} is publicly available at:
\href{https://github.com/jacopok/mlgw\_bns}{github.com/jacopok/mlgw\_bns}
\noindent \texttt{JenpyROQ} is publicly available at:
\href{https://github.com/GCArullo/JenpyROQ}{github.com/GCArullo/JenpyROQ}
\noindent {{\tt TEOBResumSPA}} is publicly available at:
\href{https://bitbucket.org/eob\_ihes/teobresums/}{bitbucket.org/eob\_ihes/teobresums/}
\noindent {{\tt bajes}} is publicly available at:
\href{https://github.com/matteobreschi/bajes/tree/release/v0.3.0}{github.com/matteobreschi/bajes}.
\noindent The Bayesian analyses presented in this work have been performed with {{\tt bajes}} version {\tt 0.3.0}, also available on \href{https://pypi.org/project/bajes/0.3.0/}{\scshape PyPI}.
%
\end{acknowledgments}
\newpage
\section{Field Failures}
\label{sec:fieldfailures}
In this section, we introduce the key concepts that are relevant in our study: \emph{field failure}, \emph{field fault} and \emph{field-intrinsic fault}.
\begin{pdefinition}{Field Failure}
A \emph{field failure} is a software failure experienced in a production environment.
\end{pdefinition}
\begin{pdefinition}{Field Fault}
A \emph{field fault} is a fault that is present in a software program deployed and running in a production environment.
\end{pdefinition}
Field faults may or may not cause field failures, depending on the execution conditions in the field.
\begin{pdefinition}{In-house Faults and Failures}
The term \emph{in-house} refers to the development environment.
\emph{In-house failures} indicate failures that occur when testing the software system in the development environment and \emph{in-house faults} indicate the causes of the failures exposed during testing.
\end{pdefinition}
The distinction between \emph{field} and \emph{in-house} failures depends only on the time the failures are exposed and not on the nature of the fault. The same failures may be \emph{field failures} if occurring in the field and \emph{in-house failures} if revealed during testing. We differentiate faults by their nature by introducing the new concept of field-intrinsic faults and the corresponding taxonomy.
Field faults might be either faults that simply escape the testing phase as a consequence of an inaccurate testing process, or problems that are hard or sometimes even impossible to reveal in-house before the software is executed in the field.
Distinguishing between these two classes of field faults is extremely important, because they call for different methods and techniques to be effectively addressed. Faults that simply escape the testing phase as a consequence of an inaccurate testing process can be addressed by improving the in-house testing and analysis activities, while faults that are hard or sometimes even impossible to reveal in-house should be addressed with methods that operate in the field.
The empirical data reported in this paper indicate three main categories of factors that make software faults hard to detect in-house: faults impossible to activate in-house, faults that depend on unknown conditions, and faults that depend on ``uncountably'' many configurations. \emph{Faults impossible to activate in-house} are faults that depend on conditions that cannot be simulated in the laboratory.
\emph{Faults that depend on unknown conditions} are faults that are activated by undocumented situations which cannot be thus identified with any systematic approach.
\emph{Faults that depend on "uncountable" many inputs and conditions} are faults characterized by an input and configuration spaces so large that cannot be effectively addressed in-house either exhaustively or selectively. For example the huge heterogeneity of unique devices, operating systems, apps versions and configurations that characterize the mobile phone market cannot be tested exhaustively and do not present similarities that support any effective selective approach, that is, faults may be revealed in-house by chance, but would escape any feasible testing campaign no matter how effectively designed.
\begin{figure}[t!]
\centering
\includegraphics[width=7cm]{images/application_context.pdf}
\caption{General production environment}
\label{fig:arch}
\end{figure}
Distinguishing between faults that survive testing due to inadequate quality processes and faults that are inherently hard to detect, whether because they are impossible to activate or because they depend on unknown or uncountably many execution conditions, is important to devise verification and validation strategies. Witnessing the presence and quantity of the different classes of faults is important to define suitable V\&V campaigns.
We start the study and capture the different nature of faults with the new concept of field-intrinsic faults that we experimentally investigate in detail in the rest of this paper.
\begin{pdefinition}{Field-intrinsic Fault}
A \emph{field-intrinsic fault} is a field fault that is inherently hard to detect in-house, either because it is impossible to activate in-house or because it depends on unknown or ``uncountably'' many conditions.
\end{pdefinition}
Although some field-intrinsic faults can be revealed and removed in-house by chance, most field-intrinsic faults escape any reasonable pre-deployment V\&V activity and manifest only in the field. In the remainder of this paper we characterize and classify field-intrinsic faults and support the need of new field V\&V activities.
We report the results of an empirical study of a body of field faults that we found in the fault repositories of different applications aiming to analyze the distribution of field-intrinsic faults in production environments.
Figure~\ref{fig:arch} shows the different elements that comprise a production environment and that play a key role in field-intrinsic faults.
The \emph{software in the field} (\emph{SIF}) represents a software application or a software system running in the field. The SIF receives inputs and produces outputs. The SIF receives the \emph{inputs} in the form of data and stimuli from both the SIF users and other systems interacting with the SIF.
The SIF \emph{outputs}
might be either visualized for the users or dispatched to other systems. While executing, the SIF may interact with a \emph{field} that includes several entities \emph{that are not under the control of the SIF}: multiple types of \emph{resources}, such as files and databases, which might be accessed by the SIF during computations, and third-party components that provide services to the SIF, such as \emph{plugins} that extend the capabilities of the SIF with additional features, \emph{drivers \& services} that provide a range of services to the SIF, and the \emph{operating system} that defines the basic runtime environment of the SIF.
Finally, the SIF may communicate with other applications and services using the \emph{network}.
The role of the environment in field failures leads to the concept of \emph{failure context}:
\begin{pdefinition}{Failure Context}
A \emph{failure context} is the execution context of a failure, that is, the specific state that the elements in the field must have to trigger the failure.
\end{pdefinition}
\section{Conclusion}
\label{sec:conclusion}
\balance
This paper reports the results of an empirical study about the characteristics of field failures, that is, failures observed in production environments.
In detail, we introduce the concept of field-intrinsic faults as faults that are inherently hard to detect in-house and more effectively detectable in the field. Field-intrinsic faults are a relevant subset of the more general category of field faults, which characterizes faults in the field regardless of why they escaped in-house testing.
We report our findings about the high frequency of field-intrinsic faults in the analyzed bug reports (field-intrinsic faults represent 70\% of the analyzed field faults), obtaining initial evidence that there is a relevant amount of faults that cannot be effectively addressed in-house and should be addressed directly in the field.
We qualitatively analyze the cases, and identify four main reasons for the presence of field-intrinsic faults: cases impossible to replicate in-house, combinatorial explosion of the cases that should be tested, unknown application conditions, and unknown environment conditions.
We investigate the characteristics of these faults to determine the elements that may make them intrinsically hard to detect. We identify the need to reason on the state of the application and the need to interpret the outputs of the application as two key features of techniques designed to reveal these faults.
We are currently continuing the experimental evaluation of reports extracted from the bug repositories of applications in the same domain, to assess the quantitative data reported in this paper, and in different domains, to study the impact of the domains on field failures, as well as investigating techniques that
\CHANGED{reveal}
field-intrinsic faults.
\section{Findings} \label{sec:discussion}
The experimental data that we collected to answer the research questions lead to some interesting findings:
\finding{Most of the failures that can be observed in the field are caused by field-intrinsic faults}
Our experimental data indicate that about 70\% of the field failures that we analyzed are caused by field-intrinsic faults, that is, are caused by faults that might be hardly revealed in house.
These faults are caused by four challenges: combinatorial explosion, unknown environment conditions, unknown application conditions, and situations impossible to reproduce.
This result calls for approaches that can deal with these classes of failures in the field.
\finding{Combinatorial explosion is a relevant cause of undetected field-intrinsic faults}
Combinatorial explosions are notably hard to address in testing and analysis techniques.
Our experimental investigation indicates that, \begin{changed}despite numerous techniques developed to tackle the problem of generating test cases that adequately cover interactions of parameters in a software application~\cite{lei2008ipog,nie2011survey}\end{changed}, combinatorial explosion \begin{changed}still\end{changed} plays a prominent role in
\CHANGED{preventing the detection of}
field-intrinsic faults.
Differently from other contexts, in the case of field-intrinsic faults, the source of combinatorial explosion is not the user input (only 18\% of the failures are caused by specific \CHANGED{combinations of} inputs) but the status of the field elements.
\finding{The interaction with the environment is almost always a relevant factor in field-intrinsic faults}
The vast majority of the field-intrinsic faults (78\% in our study) requires some forms of interactions with the environment to be activated.
Resources and operating systems are the most relevant field elements involved in field-failures, but also drivers, plugins and the network are often important.
This result indicates that techniques to reveal field-intrinsic faults must take into consideration the production environment in which the system is executed.
\finding{Value and system field-faults are more frequent than timing field-faults} The ability to analyze the output produced by a system, including the ability to detect crashes, is sufficient to detect most of the field-intrinsic failures, with a rate of timing field failures as low as 5\% of the cases.
\finding{The oracle problem affects about half of the field-intrinsic faults}
Our experimental analysis indicates that 43\% of the failures can be detected by intercepting unhandled events, for example system crashes, and error messages.
\CHANGED{Domain specific} oracles are necessary to address the remaining 57\% of the cases.
\CHANGED{This calls for techniques and methods to derive strong automatic oracles for field testing.}
\finding{Field failures can be commonly revealed with short sequences of actions}
Our experimental analysis provides evidence that few steps (three or fewer actions in 77\% of the cases) are usually needed to make the SIF fail from a failure-prone state.
This suggests that detecting states that offer opportunities for running test and analysis routines might be more important than studying techniques for generating tests composed of long sequences of actions.
\section{Introduction}
Software field failures are failures that occur in the field with sometimes severe consequences on users and organizations, such as
customer dissatisfaction, economic losses and legal issues.
\emph{Field failures} are caused by faults that escape the in-house testing activities and are not detected and repaired before the software is released in the field.
We denote such faults as \emph{field faults}.
Field failures may depend on weak testing activities and poor development practices.
However, they may also derive from factors that prevent the failures from being detected and the corresponding faults from being removed before the software is released, such as
when the conditions that trigger the failure are impossible to reproduce in the testing environment and when the number of combinations to be executed goes beyond any reasonable limit.
An example of conditions impossible to reproduce in-house is the extraordinary system load that derives from millions of people connected over the Internet to watch an exceptional sport event like the final match of the European Champions League or the US Super Bowl streamings~\cite{superbowlWeb2015}.
It is impossible to reproduce the same environment conditions to accurately test the system in-house for revealing the uncovered and failure-prone behaviors that may occur in the field, as it happened in 2016 when the CBS app failed to stream the Super Bowl match to several customers~\cite{superbowlAppNotworking}.
An example of an amount of combinations impractical to cover with in-house testing is the extraordinary cardinality of the Microsoft environment configurations~\cite{Murphy-InVivo-ICST-2009}, which reaches trillions of combinations of the configuration parameters.
Field faults that cannot be detected with in-house testing approaches might be more easily addressable in the field where the diversity and complexity of the execution environment could be exploited in the verification activity.
Field faults have attracted the interest of both academia, mainly in the context of service-based~\cite{Hielscher2008,Sammodi2011} and caching systems~\cite{Murphy-InVivo-ICST-2009}, and industry, with approaches like Netflix that injects faults in the production systems to validate scenarios that are impossible to test in-house~\cite{Basiri:Netflix:ISSRE:2016}.
Studies of field faults have considered many aspects, such as fault distribution~\cite{Hamill-TrendsInFaults-TSE-2009,Fan-NuclearFailures-SF-2013}, fault locality~\cite{Hamill-TrendsInFaults-TSE-2009}, fault locations~\cite{Ostrand-FaultDistribution-ISSTA-2002}, activities and types of human errors that introduce faults~\cite{Leszak-ClassificationOfDefects-JSS-2002}, relations between fault types, failure detection and failure severity~\cite{Hamill-FaultTypesDetectionSeverity-SQJ-2014}, and evolution of faults during bug fixing~\cite{Meulen-FaultsFailureBehaviour-ISSE-2004}.
Despite the growing interest in field faults and the design of approaches to address different kinds of faults,
there is still no study on the nature of field problems that indicates whether they can be better addressed in-house by improving testing techniques and methodologies, or in the field by exploiting the many instances of the same application running within several heterogeneous environments.
In this paper we present a study which provides an initial characterization of field faults and the consequent failures in the field: %
(i) we introduce a set of characteristics that make faults hardly detectable in-house, (ii) we study the characteristics of failures reported by the users from three ecosystems, and (iii) we discuss the factors that make these failures likely observable only in the field.
Our results indicate that:
\begin{itemize} [leftmargin=*]
\item 70\% of the problems observed in the field are extremely hard if not impossible to detect with in-house testing approaches, and are potentially easy to detect in the field;
\item 78\% of the problems that are hard to detect in house can be observed only in the presence of resources available in the field, for example new plugins, files and network connections, further emphasizing the role of the field to reveal these problems.
\end{itemize}
These results corroborate the intuition that we need more \emph{in-field software verification approaches} that exploit the resources available in the field to complement classic in-house V\&V strategies.
The analysis is based on bug reports from three ecosystems - Eclipse, OpenOffice and Nuxeo - and gives initial evidence of the predominance of field faults that can be hardly revealed in-house. The results of our analysis are publicly available at \url{http://www.lta.disco.unimib.it/tools/field/}.
The paper is organized as follows. Section~\ref{sec:fieldfailures} proposes a taxonomy of field failures. Sections~\ref{sec:rqs} and~\ref{sec:subjects} present the research questions that we investigated and the Ecosystems considered in our study, respectively. Section~\ref{sec:procedure} discusses the empirical procedure we followed to investigate the research questions. Section~\ref{sec:results} presents the results of our study about the nature and diffusion of field failures.
Section~\ref{sec:discussion} discusses the main findings,
Section~\ref{sec:related} presents related work, and
Section~\ref{sec:conclusion} summarizes the results presented in the paper.
\section*{Acknowledgment}
This work has been partially supported by the H2020 Learn project, which has been funded under the ERC Consolidator Grant 2014 program (ERC Grant Agreement n. 646867) and the GAUSS national research project, which has been funded by the MIUR under the PRIN 2015 program (Contract 2015KWREMX).
\bibliographystyle{IEEEtran}
\section{Test Obligations}
Test obligations specify the general requirements of an application's testing process; in this section we identify test obligations that are interesting in a field testing context. We associate multiple test opportunities with each obligation to identify and exploit potential testing situations that arise during the applications' regular use.
The test opportunities that we identify for each test obligation are derived from common \FFs{} patterns that emerge from the analysed data. Test opportunities exploit the obtained knowledge to determine how and when to execute tests during applications' regular use.
\subsection{Obligation: functionality/resource coverage}
This obligation involves covering functionalities of our application after a resource used by such functionalities is modified or a new resource is introduced. A resource modification happens when the resource type, value, location or permissions change. More in detail we can specify the meaning of change for each of these categories:
\begin{itemize}
\item \emph{location}: a resource is deleted or moved to a different file system folder (local) or to a different URL (remote)
\item \emph{type}: resource format is modified
\item \emph{value}: resource contents are modified
\item \emph{permissions}: resource read/write permissions are modified
\end{itemize}
\subsubsection{Pattern: modified resources location}
This pattern presents when a resource used by the application under test is no longer available at its original location. In general we have a specific resource $r$ in a location $l$ (with $L = \{l_1, l_2, ... l_n\}$ acceptable locations) used by some functionalities $F_r = \{f_1,f_2,...,f_n\}$. If $r$ is moved to a location $l'$ outside our set $L$ of acceptable locations, this might precede a failure in our application when the functionalities in $F_r$ try to manage the resource location change. \luca{why distinguish between abstract and concrete locations?}.
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure is preceded by a relocation or deletion of resources used by the faulty functionality that triggers the failure. This happens when an external application, or a functionality other than the faulty one, modifies a resource's location. The failure is typically triggered when the resource's location modification is handled by the faulty functionality. This scenario also allows us to define a rollback recovery strategy if the resources' location modification is done by a specific functionality inside our application.
\paragraph{\textbf{Matching failures}}
\paragraph{Subversive bug report 482565}
\smallskip
This is a Subversive fault which triggers a failure that manifests in this way: the repository of an Eclipse team multi-project is relocated; as soon as Eclipse starts, it realizes that the relocation took place and asks the user what to do; the user selects ``Change repository location URL and relocate rest of projects'', but after ``ok'' is hit, when asked to relocate the first project, the same prompt is shown in a loop. If ``cancel'' is hit after the loop starts, the prompt for the relocation of the second project is shown, together with error reports.
\paragraph{Subversive bug report 373125}
\smallskip
This fault triggers a java.lang.IllegalArgumentException: Element not found exception when refactoring a project by moving its location from the Eclipse workspace to elsewhere in the file system. The failure is triggered because the refresh operation after the refactoring is split into several asynchronous parts.
\paragraph{Subversive bug report 480521}
\smallskip
To trigger the failure caused by this fault one may proceed in this way: move a Java class in an svn project to a different package and synchronize, then in Eclipse try to resolve conflicts; at this point an unhandled event loop exception is triggered.
\paragraph{EGit bug report 479964}
\smallskip
\luca{this fault is a bit borderline, it is not actually caused by relocation or deletion of resources but rather it handles deletion of resources in a wrong way, also it is categorized as "bad testing process"}
This fault triggers a failure when deleting an Eclipse project from the workspace if one selects and deselects the option "Also delete working tree". The failure is caused by the fact that the option "Remove the project in the repository from workspace" remains checked and hence the procedure deletes all contents on disk.
\paragraph{EGit bug report 413887}
\smallskip
\luca{this fault is a bit borderline, it is not actually caused by relocation or deletion of resources but when a new branch is created in the repository via EGit or when the repository is removed the system fails with an exception if non local project are present in the workspace (method IResource.getLocation() returns null) }
\paragraph{\textbf{Test Opportunity}}
\paragraph{Test Trigger}
\smallskip
We can simply take a resource location change from $l \in L$ to $l' \notin L$ as the trigger for the tests we defined. If the modification is performed by a functionality inside our testing set we can ignore the test cases that work on that functionality (the resource modification must come from another source).
\paragraph{Test Strategy}
\smallskip
It is clear that this failure pattern requires an application to use \emph{shared external resources} (repositories, files or databases, for example); given this premise, we can identify a test strategy that involves checking whether the shared resources have been relocated and whether the functionalities that use them still work. To do this we need to monitor the current local and remote resources which the application can read/write and the list of the application's functionalities that perform these read/write operations (or the subset of them that we want to test). Once we have these elements we can test the functionalities that we selected when a modification of a shared resource is detected. In general we have a set of test suites $T = \{t_1,t_2,...,t_n\}$ in which each test suite tests a functionality, a set of resources $R = \{r_1, r_2,...,r_m\}$ and for each resource $r_i$ a set of allowed locations $L_i = \{l_{i1},l_{i2},...,l_{ik}\}$. We then identify $m$ subsets of $T$, $T_r = \{T_1,T_2,...,T_m\}$, where $T_i$ is the subset of test suites for the functionalities that use the $i^{th}$ resource in our list. When a resource $r_i$ inside $R$ is moved to a location $l'$ outside the set of allowed locations $L_i$, we execute all of the test suites in $T_i$.
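A minimal sketch of this strategy follows; resource names, locations and suite identifiers are purely illustrative.
\begin{verbatim}
# Field-test trigger: when resource r_i leaves its set of allowed
# locations L_i, run the suites T_i of the functionalities using it.
allowed_locations = {
    "repo":   {"/srv/svn/projA", "https://svn.example.org/projA"},
    "config": {"/etc/app/app.conf"},
}
suites_using = {
    "repo":   ["test_checkout", "test_sync", "test_relocate"],
    "config": ["test_load_settings"],
}

def on_resource_moved(resource, new_location, run_suite):
    if new_location not in allowed_locations.get(resource, set()):
        for suite in suites_using.get(resource, []):
            run_suite(suite)
\end{verbatim}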
\subsubsection{Pattern: new resources type}
This pattern requires some functionalities in the application under test to work on resources of different types. In this case the operations that the functionality applies to one type of resource might not work correctly on another type. The pattern presents when a functionality in the application that is usually applied to a resource of a specific type is suddenly applied to a resource of a different type. In general we have a functionality $f$ that can be applied to a set of resources $R = \{r_1,r_2,...,r_n\}$ of different types (by type and resource here we do not exclusively mean native types and resources, e.g. files or databases, but also more abstract, application-specific types, e.g. an Eclipse Maven project or an Eclipse Java project). This may lead to a failure when a functionality that is usually applied to a resource of type $r_1$ is applied to a resource of type $r'$ where $r' \in R \setminus \{r_1\}$.
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure presents when a functionality is applied to a new type of resource.
\paragraph{\textbf{Matching failures}}
\paragraph{Subversive bug report 482565}
\smallskip
The failure highlighted by this bug report is that external files in an Eclipse project are marked as obstructed; this means that some functionalities cannot be applied to this type of resource.
\paragraph{EGit bug report 484494}
\smallskip
The failure highlighted by this bug report is that when there are symlinks in a project, comparing versions with a diff does not show the symlink content on the local side (the index side is displayed correctly).
\paragraph{EGit bug report 474019}
\smallskip
When cloning a Gerrit repository over HTTP, EGit recognizes it correctly and configures it as a Gerrit repository, but when it is cloned via SSH it does not.
\paragraph{EGit bug report 326526}
\smallskip
When pushing commits via SSH with public key authentication, EGit asks for a password, but even if it is entered correctly it keeps showing the password dialogue without pushing the committed changes.
\paragraph{EGit bug report 474750}
\smallskip
\luca{might also be unexpected user input}
When a UTF-8 encoded file is read via the commit viewer, invalid characters are shown.
\paragraph{EGit bug report 435866}
\smallskip
EGit asks three times for the password and then fails with a TransportException when trying to fetch a repository via HTTPS.
\paragraph{\textbf{Test Opportunity}}
\paragraph{Test Trigger}
\smallskip
Defining a specific trigger in this case is not trivial: we might identify the triggering action in the user creating or importing a new resource type, or in the application allowing a functionality to be used on different types of resources.
\subsubsection{Pattern: large resource}
This pattern describes the failures that are triggered by a large resource being handled incorrectly by a functionality of the application under test. An example of incorrectly handling a large resource is trying to load a large database table in memory even if it does not fit, or executing a query on such a table and timing out before the results can be generated. In general we have a functionality $f$ that is usually applied to a set of resources $R = \{r_1,r_2,...,r_n\}$, each resource with size $s_i \le s_{safe_i}$; the pattern emerges when the functionality is applied to a resource which has size $s_i > s_{safe_i}$.
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure presents when a functionality is handling a large resource in a wrong way.
\paragraph{\textbf{Matching failures}}
\paragraph{Subversive bug report 472752}
\smallskip
The situation described by the user is that of large projects in the same workspace blocking the save operation of Eclipse. When the user tries to save a file, a message box is presented with the message ``wait for user operation to complete''. This is due to Subversion updating the SVN cache, and was fixed by moving long-running code out of the event listener using queue/asynchronous processing.
\subsection{Obligation: functionality/service coverage}
This obligation is similar to the previous one, but instead of changes in resources we are now interested in changes in services. By services we mean everything that is used by functionalities of the application under test via API. We are interested in testing our application when a service changes (e.g. it is updated to a new version).
\subsubsection{Pattern: service update}
\luca{generalize}
This pattern presents when a service used by the application is updated. In general we have a service $s$ used by some functionalities $F_s = \{f_1,f_2,...,f_n\}$ of our application; when this service is updated to a new version $s'$, a failure might be triggered when the functionalities in $F_s$ are used.
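A possible realization of the corresponding trigger is sketched below; version probing is application specific and all names are illustrative.
\begin{verbatim}
# Service-update trigger: when a service s used by the
# functionalities F_s changes version, re-run their test suites.
def check_service_update(service, get_version, last_seen,
                         suites, run_suite):
    current = get_version(service)
    if current != last_seen.get(service):       # s updated to s'
        last_seen[service] = current
        for suite in suites.get(service, []):   # suites covering F_s
            run_suite(suite)
\end{verbatim}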
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure is preceded by an update of a service used by the faulty functionality that triggers the failure. This happens when an external service that the functionality uses via API is updated. The failure is typically triggered when the service's API are called by the faulty functionality.
\paragraph{\textbf{Matching failures}}
\paragraph{Subversive bug report 429992}
\smallskip
Here a fault is introduced when Java is updated to version 1.8, which introduces lambda expressions; this makes the EclipseLink plugin silently ignore the classes that contain such expressions. \luca{Can a Java update be considered as a service update?}
\paragraph{Subversive bug report 467743}
\smallskip
Here a fault is introduced when Eclipse is updated from version 4.3 to 4.4.2, which introduces lambda expressions; this makes the EclipseLink plugin crash with an IncompatibleClassChangeError. \luca{Same as before, this fits the update part of the pattern, but this is not really a service.}
\subsubsection{Pattern: new plugin installed}
This pattern presents when a new plugin is installed in the application framework. In general we have a new plugin $p$ which uses some functionalities $F_p = \{f_1,f_2,...,f_n\}$ of our application (we can see these functionalities as services); when this plugin is installed, we are interested in making sure that all of the functionalities in $F_p$ are working properly.
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure is triggered by a plugin using a faulty functionality of our application.
\paragraph{\textbf{Matching failures}}
\subsection{Obligation: functionality/user input coverage}
This obligation involves covering functionalities of our application that ask for user input. This includes forms that must be filled by the user, command line arguments, GUI buttons, etc.
\subsubsection{Pattern: out-of-domain input value}
\luca{unexpected input value instead of this? e.g. non latin characters}
This pattern presents when the user inputs an out-of-domain value for a specific functionality. This happens, for example, when non-alphanumeric characters are entered in a ``name'' form field, when the same button is clicked multiple times, or when invalid command line arguments are entered. Out-of-domain inputs are interesting since these are the cases in which the application failures related to user input are most likely to manifest.
In general we have a functionality $f$ that accepts a set of user inputs $I_f = \{i_1,i_2,...,i_n\}$, and a set of input domains $S_d = \{D_1,D_2,..,D_n\}$. This pattern presents when we observe $\exists\, i_j \in I_f \mid i_j \notin D_j$.
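The membership check itself is straightforward, as the following sketch shows; field names and domain predicates are illustrative.
\begin{verbatim}
# Out-of-domain trigger: each input field i has a domain predicate
# D_i; any observed value outside its domain is a test opportunity.
domains = {
    "name": str.isalnum,
    "port": lambda v: v.isdigit() and 0 < int(v) < 65536,
}

def out_of_domain_inputs(observed):
    return [field for field, value in observed.items()
            if field in domains and not domains[field](value)]

# A trailing blank in "name" violates its domain:
assert out_of_domain_inputs({"name": "proj ", "port": "80"}) == ["name"]
\end{verbatim}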
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure is preceded by an out-of-domain user input. The failure is typically triggered by the application functionality not checking the user input correctly.
\paragraph{\textbf{Matching failures}}
\paragraph{Subversive bug report 459010}
\smallskip
This fault presents whenever a user tries to open a folder with a blank space at the end with the Subversive UI: SVN issues a ``repository folder children operation failed'' error. \luca{Can we consider a blank in a folder name an out of domain value?}
\paragraph{Subversive bug report 460253}
\smallskip
Here the unmarshalling method for a JSON object that has a null attribute incorrectly returns a null value. \luca{A JSON object with an unsetted attribute might not be considered out-of-domain.}
\subsubsection{Pattern: new input sequence}
\luca{in practice this is only feasible if the sequence model is coarse grained}
This pattern presents when the user inputs a new sequence of actions (through clicks in menus, for example); in this case, ``new sequence'' means that the sequence was not previously tested in-house. This type of user behaviour is worth testing because it may lead the testing framework to discover faults that, due to their combinatorial nature, were not discovered during in-house testing.
In general we have a functionality $f$ that accepts as input a set of sequences of user actions $S = \{A_1,A_2,...,A_n\}$ where $A_i = \{a_1,a_2,...,a_n\}$. We are also interested in the set of sequences of actions already tested in-house, $S_{\texttt{inHouse}}$. Sequences that differ from the ones tested in-house are not always easy to discover. The trivial case presents when we have a new sequence longer than every previously tested one ($\exists A_i \in S \mid |A_i| > \max_{A \in S_{\texttt{inHouse}}} |A|$), but we can also observe a new sequence when it is equal to the ones already tested except for one (or more) elements, without being necessarily longer or shorter.
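In its simplest realization, the check compares the observed sequence against the in-house-tested set, as sketched below with hypothetical action identifiers; the subtler cases discussed above (sequences differing in a single element) would require a less naive comparison.
\begin{verbatim}
# Detecting a "new" action sequence against the in-house-tested set.
tested_in_house = {("open", "edit", "save"), ("open", "close")}

def is_new_sequence(actions):
    return tuple(actions) not in tested_in_house

assert is_new_sequence(["open", "edit", "undo", "save"])
\end{verbatim}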
\paragraph{\textbf{Scenario}}
\smallskip
This scenario identifies situations where a failure is preceded by a new sequence of user actions. The failure is typically triggered by the application being in an incorrect state when the last user action is executed.
\paragraph{\textbf{Matching failures}}
\paragraph{Subversive bug report 470990}
\smallskip
Here a broken menu entry is reported by a user who navigated to Preferences / Team / SVN / Label Decorations in the Eclipse menu; this entry was probably overlooked by developers and hence constitutes a new input sequence. \luca{bad testing?}
\paragraph{\textbf{Test Opportunity}}
\paragraph{Test Trigger}
\smallskip
The trigger for our testing framework is a user input sequence recognized as new.
\paragraph{Test Strategy}
\smallskip
When we encounter a sequence of actions that was not previously tested, we are interested in making sure that the application state is consistent and that other actions executed in that state do not cause the system to crash. In general, when a sequence of user actions brings the application into a state $s$, we have a set of actions $A = \{a_1,a_2,...,a_n\}$ that are available in that state; we want to execute each one of these actions and observe the application behaviour.
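A minimal sketch of this exploration step follows; the sandboxing and observation hooks are hypothetical.
\begin{verbatim}
# From a state s reached via an unseen sequence, try every available
# action on a sandboxed copy of the state and record any crash.
def explore_new_state(state, available_actions, sandbox_copy, observe):
    for action in available_actions(state):
        trial = sandbox_copy(state)       # isolate side effects
        try:
            action(trial)
        except Exception as failure:      # unhandled event ~ crash
            observe(state, action, failure)
\end{verbatim}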
\subsubsection{Pattern: large input size}
This pattern presents when we have a large user input that was not previously observed or tested.
In general we have a functionality $f$ that accepts user inputs $i$ with maximum size $s_{\texttt{max}}$ and safe size $s_{\texttt{safe}}$, where sizes up to $s_{\texttt{safe}}$ were previously observed and tested. This pattern presents when we observe a user input with size between $s_{\texttt{safe}}$ and $s_{\texttt{max}}$.
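The corresponding trigger reduces to a size-threshold check:
\begin{verbatim}
# Large-input trigger: a test opportunity arises when an observed
# input size exceeds the largest size tested in-house (s_safe)
# while remaining within the accepted maximum (s_max).
def large_input_opportunity(size, s_safe, s_max):
    return s_safe < size <= s_max
\end{verbatim}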
\section{Experimental Procedure} \label{sec:procedure}
For our analysis, we identified as faults the bugs labeled as \textit{confirmed}, \textit{verified} or \textit{resolved}, and we inspected all the bugs reported for the three Eclipse plugins from January 2015 to December 2015 for a total of 412 analyzed bug reports, and all the bugs reported for both OpenOffice and Nuxeo from September \nth{1} 2016 to October \nth{1} 2016, for a total of 99 and 56 bug reports inspected, respectively.
For each bug report, we inspected the information about the failure, the inputs, the execution conditions and the failure impact.
We discarded the bug reports containing only a memory dump and a stack trace, which might be useful for developers, but are not useful for the purpose of our investigation, and studied in detail a total of 119 bug reports: 63 for Eclipse, 26 for OpenOffice, and 30 for Nuxeo.
\smallskip
\noindent \textbf{RQ1: Why are faults not detected at testing time?{}}
We investigated why faults have not been revealed in-house but have been detected only in the field by examining the conditions that caused the failures, to identify the factors that contribute to the failures and are extremely hard to test in-house.
We labeled each fault as \emph{bad-testing} if we could not find any such factor, and as \emph{field-intrinsic} otherwise.
We identified four categories of \emph{field-intrinsic} faults that
we discuss in the next section, where we characterize the identified classes of faults, and we report both qualitative and quantitative data.
We used only the faults labeled as \emph{field-intrinsic} to answer the other three research questions.
\smallskip
\noindent \textbf{RQ2: Which elements of the field are involved in field failures?{}}
For each bug report, we identified the elements of the field shown in Figure~\ref{fig:arch} that play an essential role in the failure.
\smallskip
\noindent \textbf{RQ3: What kinds of field failures can be observed?}
\input{tableFailureTypes}
We studied the characteristics of field failures to identify their attributes and classify them.
Better understanding the nature of field failures is essential for developing techniques for testing applications in the field without uncontrolled side effects.
Some types of failures might be easier to detect and control than others.
For example, exception and error messages are easy to detect and usually do not cause loss of data because the application itself detects and handles these erroneous situations; system crashes are also easy to detect, but may cause loss of user data; incorrect results may be hard to detect, and may silently compromise the user data and the overall computation.
We carefully analyzed the failure taxonomies proposed by Bondavali and Simoncini~\cite{Bondavalli-FailureClassification-FTDCS-1990}, Aysan et al.~\cite{Aysan-ErrorModeling-COMPSAC-2008}, Avizienis et al.~\cite{Avizienis-FailureTaxonomy-TDSC-2004}, Chillarege et al.~\cite{ChillaregeTSE1992}, and Cinque et al.~\cite{Cinque-MobilePhoneFailureTaxonomy-DSN-2007} to identify the candidate attributes for field failures, and exhaustively inspected the bug reports in our data set to identify the most relevant attributes for characterising field failures: \emph{failure type} and \emph{detectability}.
\smallskip
\emph{\textbf{Failure Type}}
The failure type characterizes a failure according to the way it appears to an observer external to the system.
\smallskip
We identified three possible categories of failure types, \emph{value}, \emph{timing} and \emph{system} failures, and we further detailed each type in three subtypes, for a total of nine failure types, which we use in the next sections to categorize bug reports, and which are summarised in Table~\ref{table:failuretypes}.
\begin{description} [leftmargin=!]
\item[Value failures] occur when the SIF produces incorrect outputs: an \emph{invalid value}, a \emph{value out of domain} or an \emph{error message}.
For example, in a functionality that returns the ZIP code of a city, a \emph{value failure} of type \emph{invalid value} occurs when the SIF returns the ZIP code associated with a city different from the input one, a \emph{value failure} of type \emph{out of domain} occurs when the SIF returns a malformed ZIP code, and a \emph{value failure} of type \emph{error message} occurs when the SIF returns a message that reports an internal error that prevented retrieving a ZIP code.
\item[Timing failures] occur when the SIF produces some outputs at a wrong time: too early (\emph{early timing}), too late (\emph{late timing}) or never (\emph{omission}).
\item[System failures] occur when the SIF is blocked (\emph{halting failure}), has stopped running (\emph{crash}), or does not respond reliably to the input stimuli (\emph{unstable behavior}).
\end{description}
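To make the three value-failure subtypes concrete, the following sketch shows a hypothetical implementation of the ZIP-code functionality used as example above (the class name, method and values are ours, and are not taken from any of the analyzed subjects):
\begin{verbatim}
// Hypothetical sketch of the ZIP-code functionality discussed above.
public class ZipCodeService {

    // Expected correct behavior: getZipCode("Springfield") returns "62701".
    public String getZipCode(String city) {
        // Invalid value:       a well-formed ZIP code of the wrong city,
        //                      e.g. returning "10001" instead of "62701".
        // Value out of domain: a malformed ZIP code, e.g. "62-70A".
        // Error message:       a message reporting an internal error,
        //                      e.g. "ERROR: ZIP code lookup failed".
        return "62701";
    }
}
\end{verbatim}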
\smallskip
\emph{\textbf{Detectability}}
The detectability attribute characterizes the difficulty of detecting the failure.
\input{tableDetectability}
We distinguish four levels of detectability, \emph{signaled}, \emph{unhandled}, \emph{silent} and \emph{self-healed}, based on both the ability of the system to detect the failure and the ability of an external observer to notice a misbehavior without specific system knowledge, as summarized in Table~\ref{table:failuredetectability}.
\begin{description}[leftmargin=!]
\item [Signaled failure:] a failure that the system detects and reports.
A simple example of a signaled failure is an application that opens a popup window to inform the user that the application will be unexpectedly closed because of a memory problem;
\item[Unhandled failure:] a failure that the system does not handle and that leads to a crash. The system does not detect the failure, while the user trivially detects the uncontrolled crash of the application without requiring any knowledge about the application;
\item[Silent failure:] a failure that the system does not detect, letting the application continue operating
without producing any signal that a user can recognize as a failure without prior knowledge of the application.
A simple example of silent failure is a flight simulator that simulates the flight conditions imprecisely and that a user cannot detect without a specific knowledge of the flight simulation system.
\item[Self-healed failure:] a failure that the system detects and overcomes transparently to the user. The user continues using the application without noticing any problem. Self-healed failures are common in systems exploiting redundancy to mask failures, such as Hadoop~\cite{Hadoop}.
\end{description}
\smallskip
\textbf{RQ4: How many steps are needed to reproduce a field failure?{}}
For each failure, we identified a sequence of steps that cause the failure, aiming for, though not necessarily proven to be, a minimal sequence.
For the interactive subjects, we identified steps with GUI actions such as opening windows, entering data in fields, and clicking on menus and buttons.
We counted the steps that lead to a failure by considering the sequence of operations
\CHANGED{described}
in the bug reports submitted by users.
When creating a bug report, users intuitively identify a critical state that may lead to the failure and submit both the information about the critical state, typically described in a declarative way, and a sequence of operations that lead to a system failure from the critical state, typically described in an operational way.
For example in the Open Office bug report \#126930, the state to trigger the failure is characterized by the availability of a certain file, and the steps to reproduce the failure consist of opening the file, scrolling the document, selecting a frame, and enlarging the frame.
We identified the minimal subset of the actions reported by the user that are needed to cause the failure from the critical state indicated in the bug report, which often corresponds to the minimal number of actions needed to reproduce the failure~\cite{roehm2015automated}.
The number of steps needed to reproduce a failure is important information for estimating the complexity of testing techniques that work in the field and reveal failures by monitoring the status of the application to detect failure-prone states, and by executing test cases of appropriate complexity when a failure-prone state is detected.
\section{Related Work}
\label{sec:related}
Our study provides an initial characterization of the factors that might cause field failures. The most closely related work includes empirical studies about software faults and techniques to address the problem of testing applications in the field. We discuss both categories below.
\smallskip
\textbf{Empirical Studies}
Most of the studies on software faults and failures focus on the distribution of faults and failures across the components of a system~\cite{Ostrand-TheDistributionOfFaults-ISSTA-2002,Fenton-QuantitativeAnalysisOfFailuresTSE-2000, Runeson-FailureStudy-TSE-2007, GalinacGrbac-TSE-FailureStudy-2013,Shatnawi-2008}. Those studies and ours address different goals, but share some hypotheses and observations: they all (i) distinguish between pre-release and post-release faults, that is, faults detected during and after development, respectively, (ii) provide evidence that often a small subset of the components of a system contains most of the post-release faults, and (iii) indicate that the size of a component is not a good predictor of the post-release fault density~\cite{Ostrand-TheDistributionOfFaults-ISSTA-2002,Fenton-QuantitativeAnalysisOfFailuresTSE-2000, Runeson-FailureStudy-TSE-2007, GalinacGrbac-TSE-FailureStudy-2013}.
The results reported in existing studies might be exploited to improve the design of techniques for predicting failures and locating faults in the field, \CHANGED{for example Wu et al.~\cite{Wu-QuantitativeFailuresAnalysis-ESEM-2008} report a direct dependency between the quality of the testing process and the density of pre- and post-release faults in the source files}. However, they do not contribute to identifying the factors that may cause and explain the presence of field failures.
Both Hamill et al.~\cite{Hamill-FaultTypesDetectionSeverity-SQJ-2014} and Fan et al.~\cite{Fan-NuclearFailures-SF-2013} proposed taxonomies of field faults, with a coarse granularity and with a focus on nuclear applications, respectively.
Hamill et al. identify a set of field-fault categories at a granularity level that is much higher than ours, and thus provide limited support for analyzing the characteristics of field failures in detail.
Hamill et al. conclude that coding faults are the major cause of field failures, but do not further analyze the characteristics of the faults, such as the nature of the triggers of the failures, as done in this paper, where we discuss the relation between field failures and the interactions with external resources.
Fan et al. show that in the nuclear industry software field failures mostly depend on design choices~\cite{Fan-NuclearFailures-SF-2013}. Although the paper does not investigate why these faults have not been revealed at an earlier stage of development, some of the failure causes reported in the paper are consistent with the results that we obtained. For example, the presence of failures caused by incorrect assumptions and unexpected execution conditions is consistent with the failures that we reported under the unknown environment or application conditions categories. The consistency with the results reported by our study increases the confidence in the validity of the results reported in this paper.
\begin{changed}
Grottke et al.~\cite{grottke2010empirical} and Cotroneo et al.~\cite{cotroneo2013fault} study Bohrbugs and Mandelbugs in safety-critical applications and open-source software, respectively. Both studies focus on the variation in the proportion of Bohrbugs and Mandelbugs during the application life cycle and on their impact on the time to fix a bug.
Both studies reveal some dependency of Mandelbugs on the interaction of the software with field elements. Our results confirm that the dependency of some bugs on environment interactions also persists after deployment.
Lutz et al.~\cite{lutz2004empirical} classify safety-critical anomalies using Orthogonal Defect Classification, focusing on how to improve safety-critical software development process.
Our study, instead, identifies anomaly types and targets failures observed after deployment.
\end{changed}
\smallskip
\textbf{Testing In the Field}
Field testing techniques aim to reveal faults that escape in-house testing before such faults cause the system to fail in the field.
Field testing has been addressed quite recently with In Vivo Testing~\cite{Murphy-InVivo-ICST-2009}, Skoll~\cite{Skoll2007} and in the context of Web services~\cite{Hielscher2008, DenaroTosi2009,Sammodi2011}.
In Vivo testing is a technique for identifying faults that are triggered only in specific program states~\cite{Murphy-InVivo-ICST-2009}. The approach targets Java classes and consists of executing a predefined set of test cases while the software is running in the field.
Test cases are executed within a sandboxed replica of the system, with a parametric frequency and at randomly selected times.
In Vivo testing can detect faults that can be triggered with a predefined set of inputs provided by the software developers, but cannot detect faults that are triggered with inputs that have not been identified at testing time. Moreover, the strategy does not take into account the factors that, according to our study, play a key role in field failures, such as the interaction with the field.
Skoll aims to identify the faults that have not been detected at testing time because of the combinatorial explosion of the configuration options and
\CHANGED{the characteristics of the environment~\cite{Skoll2007}}.
Skoll distributes testing tasks across machines of volunteering end-users to extensively explore and test the configuration space.
Like In Vivo testing, Skoll neither generates test cases nor reuses runtime data to enhance testing activities. Thus, it cannot detect faults that are revealed by inputs not considered by the software engineers who implemented the test cases deployed in the field.
In the context of Web-service composition, online testing is used to trigger predefined self-adaptation strategies when Web services do not behave as expected, for example because the service specification has been updated or because the service itself is down
~\cite{Hielscher2008, DenaroTosi2009, Sammodi2011}.
These approaches target a specific class of software systems, and rely on strategies and inputs predefined by the software engineers.
The results reported in this paper call for novel testing and analysis techniques that can be executed in the field to reveal field-intrinsic faults. These techniques must have the ability to test situations that cannot be foreseen at development time and must face the challenge of dealing with field elements, such as local resources and services, without interfering with the user activity.
\section{Research Questions}
\label{sec:rqs}
We studied the nature and distribution of field failures by analyzing a set of bug reports produced by the end-users of three ecosystems, and
we articulated the roadmap of our study in terms of four research questions.
\begin{researchquestion}
RQ1: Why are faults not detected at testing time?{}
\end{researchquestion}
We analyzed the field failures in the subject studies to identify the main factors that cause faults to survive testing and persist in the field.
\begin{researchquestion}
RQ2: Which elements of the field are involved in field failures?{}
\end{researchquestion}
We analyzed the dependencies of field failures on the field itself, to identify which elements of the field are involved in the failures.
\begin{researchquestion}
RQ3: What kinds of field failures can be observed?{}
\end{researchquestion}
We clustered the field failures reported in the subject studies into classes, and identified some relevant types according to their impact and detectability.
\begin{researchquestion}
RQ4: How many steps are needed to reproduce a field failure?{}
\end{researchquestion}
We estimated the minimal set of steps required to reproduce the field failures.
We identified steps with user actions, since the subject studies are reactive GUI applications.
\section{Results}
\label{sec:results}
\subsection*{RQ1: Why are faults not detected at testing time?{}}
We analyzed the bug reports to distinguish faults that are due to insufficient testing (\emph{bad testing} (BT)) from \emph{field-intrinsic faults}.
We further analyzed the field-intrinsic faults and identified four types of conditions that lead to field-intrinsic faults and that we use to classify such faults: \emph{Irreproducible Execution Condition} (IEC), \emph{Unknown Application Condition} (UAC), \emph{Unknown Environment Condition} (UEC), and \emph{Combinatorial Explosion} (CE).
The identified classes of faults
comprise a complete taxonomy for the faults in the bug reports that we analyzed, and represent an initial general framework for classifying field faults.
\subsubsection*{Irreproducible Execution Condition (IEC) Faults}
IEC faults are faults that can be revealed only under conditions that cannot be created in-house. This may depend on the impossibility of reproducing the complexity of the whole field environment, the inability of creating the specific failing execution, or the evolution of the environment and of the interactions with the SIF.
The safety critical routines to be executed in the case of natural disasters are good examples of execution conditions that might be impossible to reproduce in-house. Although a disaster can be simulated to some extent, a major natural disaster, for instance an earthquake or a tsunami, cannot be fully reproduced in-house, and some field-intrinsic faults may depend on extraordinary combinations of events that can be observed only in real conditions.
Similarly, the behavior of a system for an increasing number of users who interact with the application according to patterns that are not entirely predictable is often hard to test, especially for extreme situations, such as the extraordinary online streaming services workload experienced in the Super Bowl night~\cite{superbowlAppNotworking}.
The evolving variety of configurations, for instance versions of operating systems, drivers and plugins, is a good example of unpredictable changes to the interactions between the SIF and the environment (hereafter \emph{SIF-environment interactions}).
New versions of plugins or drivers, or entirely new ones, distributed after the most recent SIF release might trigger faults that were impossible to reveal in-house before the release itself.
An example of such situation is the fault described in the EclipseLink bug report \#429992.
EclipseLink is an Eclipse plugin for developing Java applications that uses JPA to map objects into databases. The bug report indicates that EclipseLink silently ignores classes that contain lambda expressions: even if an object should be persisted in the database because its class includes the \emph{@Entity} annotation, no table for persisting the object is generated in the database.
Since lambda expressions have been introduced only in Java 8, it was impossible to test the combination of lambda expressions with JPA annotations when the EclipseLink plugin was developed, before the release of Java 8.
EclipseLink should not have been affected by the presence of lambda expressions and should have supported the persistency of the classes regardless of the presence of lambda expressions.
However, due to an unforeseen compatibility issue, EclipseLink stopped working correctly when processing classes with lambda expressions.
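As an illustration, the following hypothetical class (ours; the actual failing code is referenced in bug report \#429992) shows the kind of code involved in the failure: the class is a regular JPA entity, and the lambda expression in one of its methods should be irrelevant for persistence:
\begin{verbatim}
import javax.persistence.Entity;
import javax.persistence.Id;
import java.util.List;

@Entity // the class should be mapped to a database table
public class Order {
    @Id
    private long id;
    private List<Double> amounts;

    public double total() {
        // The Java 8 lambda below is unrelated to persistence; still,
        // according to the bug report, its presence caused EclipseLink
        // to silently skip the class, so no table was generated.
        return amounts.stream().mapToDouble(a -> a).sum();
    }
}
\end{verbatim}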
\subsubsection*{Unknown Application Condition (UAC) Faults}
UAC faults are faults that can be revealed only with input sequences that, although executable in-house, depend on conditions of the application that are unknown before field execution, and thus cannot be captured in in-house test suites.
An example of field failures that derive from unknown conditions is the Eclipse Subversive report \#459010, which indicates that Subversive fails when retrieving folders whose name terminates with a blank character. This corner case is not documented in the specifications, and is hard to reveal with in-house testing because of the lack of information that may suggest designing test cases covering this specific situation. Structural test suites do not address this problem either, since many problems of this type are due to missing code, as in the case of this fault.
Another example of a UAC fault is the Eclipse \#440413 bug report, which describes a fault in method \texttt{convertObjectToString} of class \texttt{XMLConversionManager} that converts any object to a proper string representation.
The method works properly except when used to convert a \texttt{Big\-Decimal} representing a number in scientific notation, since it returns a string that encodes a number in scientific notation and not a plain number as expected.
We verified that this case is not mentioned either in the \texttt{XMLConversionManager} specification or in the API documentation, and is thus hidden to the testers who did not reveal the bug during testing and discovered it in the field after the software has been released.
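The Java-level behavior underlying this failure can be reproduced with standard \texttt{BigDecimal} calls (a minimal sketch, ours; the actual \texttt{XMLConversionManager} code differs):
\begin{verbatim}
import java.math.BigDecimal;

public class BigDecimalNotation {
    public static void main(String[] args) {
        BigDecimal n = new BigDecimal("1E+3"); // scientific notation
        // toString() preserves the scientific notation: prints "1E+3"
        System.out.println(n.toString());
        // toPlainString() yields the plain representation that callers
        // of convertObjectToString expect: prints "1000"
        System.out.println(n.toPlainString());
    }
}
\end{verbatim}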
In our experimental analysis, we did not always have access to the specification of the software. When this happened, we classified a fault as UAC when the inputs that lead to the failure are largely unrelated to the purpose of the functionality that fails, assuming that such cases were not defined in the specifications.
Thus, our classification may not be perfectly accurate.
\subsubsection*{Unknown Environment Condition (UEC) Faults}
UEC faults are faults that can be revealed only with information about the environment that is not available before field execution.
UEC faults are hardly detectable with in-house test cases designed without a complete description of the constraints on the SIF-environment interactions.
The full range of behaviors of third-party services that the SIF accesses through the network is a good example of information that is rarely completely available at design time, and thus a possible source of UEC faults.
An example of UEC fault is the Eclipse bug report \#394400 that indicates that EclipseLink may fail with a \texttt{NullPointerException} when executed under heavy load on the Oracle JRockit VM.
The issue depends on the behavior of the Just In Time compilation feature of the JRockit VM that may reorder the operations executed within method \texttt{isOverriddenEvent} so that it returns an incomplete result.
This undocumented behavior is responsible for the EclipseLink exception.
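Although we do not have access to the exact code, the failure pattern is that of an unsynchronized lazy computation whose intermediate state becomes visible when operations are reordered. The following sketch (hypothetical, ours) illustrates the pattern with a cache that is published before it is completely filled:
\begin{verbatim}
import java.util.HashMap;
import java.util.Map;

public class EventInfo {
    private Map<String, Boolean> cache; // shared, not synchronized

    public boolean isOverriddenEvent(String event) {
        if (cache == null) {
            // The cache is published before it is fully populated: a
            // reordering JIT may let another thread observe the field
            // assignment before the puts below complete, so the lookup
            // can return an incomplete result, or a null entry whose
            // unboxing throws a NullPointerException.
            cache = new HashMap<>();
            cache.put("preInsert", compute("preInsert"));
            cache.put("postInsert", compute("postInsert"));
        }
        return cache.get(event);
    }

    private boolean compute(String event) { return false; }
}
\end{verbatim}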
\subsubsection*{Combinatorial Explosion (CE) Faults}
Even when the behavior of both the application and the environment is fully specified and can be replicated in-house, the number of combinations of cases to be tested may grow to a magnitude that cannot be fully covered in-house.
There are many sources of combinatorial explosion in software applications, such as the many possible configurations and preferences, the combinations of inputs and states, and the many environments, for instance operating systems and hardware devices, that can be used to run an application.
A well-known example of combinatorial explosion is the set of hardware devices, operating systems and configurations that comprise the execution conditions of smartphone applications, which can almost never be fully tested in-house.
An example of a CE fault is the fault described in the Eclipse bug report \#484494, which indicates that the diff feature of the Subversive plugin does not work when comparing a file to a symlink of a file that has been moved.
Changing the location of a file referred to by a symlink and then using the symlink as part of a comparison is a legal combination of operations among the huge set of combinations that correspond to the sequence: $\langle$change the status of a resource, use the changed resource as part of a computation$\rangle$.
Systematically testing all these combinations commonly exceeds any reasonable, albeit impressively large, testing budget, because of the many ways resources can be changed independently from each other.
In our analysis, we observed that only a small percentage of CE cases are due to specific inputs (18\% of the cases), while the rest of the CE cases are due to field elements.
\subsubsection*{Bad Testing (BT) Faults}
We conclude our taxonomy with a discussion of BT faults, which we classify in our experimental analysis as \emph{field} but not \emph{field-intrinsic} faults.
BT faults are faults that are not detected in-house due to weaknesses of the testing process.
We include in this class all the faults in the field that do not belong to any of the previously described classes.
An example of BT fault is the fault reported in the Subversive bug report \#326694, which indicates that Subversive erroneously reports as conflicting two identical files that have accumulated the same set of changes on two different branches.
Since detecting conflicts is a primary feature of this
plugin, developers should have tested a basic case like the presence of the same changes in two distinct branches.
\subsubsection*{Taxonomy}
The taxonomy that we proposed in this paper opens new scenarios of increasing complexity.
BT faults simply substantiate the need for improving the in-house testing process and do not introduce new challenges for the software testing community. UAC, UEC and CE faults call for new techniques to enrich well-designed test suites with test cases identified in the field while experiencing faults caused by unpredictable (UAC and UEC) or impossible-to-exhaustively-test (CE) conditions. The main challenge, which has been only partially addressed so far, is to record execution sequences that lead to failures in the field, and to reproduce them either in the field or in-house to identify and remove the faults.
Since they cannot be reproduced in-house, IEC faults further challenge the software testing community with the problem of executing failing test cases in the field.
The main challenges are to reveal failures by executing test cases in the field, which requires controlling the execution of the test cases in the usually complex field context, and to prevent any side effects for the users.
\subsubsection*{Quantitative analysis}
Figure~\ref{fig:rq1} summarizes the quantitative results of our empirical investigation.
The bar chart indicates the number of faults classified in the five categories discussed above, and shows that field-intrinsic faults (the sum of the IEC, UAC, UEC and CE columns) are the majority of the field faults in our data set.
Field-intrinsic faults represent 70\% of the analyzed bug reports, thus confirming that field faults cannot be addressed by simply enhancing the testing process, but call for specific in-field approaches.
The bar chart indicates that \emph{combinatorial explosion} (CE) is the most frequent cause of field-intrinsic faults, while \emph{Irreproducible Execution Condition} (IEC) is the least common source of faults.
Unknown execution conditions of either the application or the environment (UAC and UEC faults) are also relatively frequent cases.
The dominance of CE faults is not surprising: the behavior of SIFs is influenced by many factors that can never be exhaustively tested in-house.
The many combinations that are hard to design, foresee and test in-house can be orders of magnitude easier to address in the field, where such diversity is spontaneously and implicitly available.
Our analysis identified few \emph{Irreproducible Execution Condition} (IEC) faults, all caused by evolution of the SIF-environment interactions that emerged after the deployment of plugins not available at the time of testing, before the deployment of the SIF in the field.
The scarce presence of IEC faults may depend on the nature of the applications that we analyzed.
In other domains the presence of IEC faults might be higher.
Consider for instance the domain of embedded software, where the interactions with the physical world might sometimes be extremely hard to test.
\begin{changed}
We observed a similar trend for the three subjects: a predominance of CE faults (with OpenOffice being the highest at 73\%) and a total of 10--20\% of faults falling into the UEC/UAC categories.
It is worth noting that the two IEC faults we identified were both on the Eclipse platform.
\end{changed}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{images/RQ1.png}
\caption{RQ1: Why are faults not detected at testing time?{}}
\label{fig:rq1}
\end{figure}
\subsection*{RQ2: Which elements of the field are involved in field failures?{}}
We analyzed the bug reports to study the role of field elements in field failures, and to validate the intuitive hypothesis that many field-intrinsic faults may be hard to reveal in-house because their activation may depend on one or more elements that must be present in the field and must be in the right state to produce the failure.
Below, we discuss the role played by the field elements that we introduced in Figure~\ref{fig:arch}, provide \CHANGED{concrete} examples,
and discuss the quantitative data from the experimental data sets.
\paragraph{\textbf{Resources}}
Software applications typically interact with many resources during the computation.
For instance, many applications read from and write to persistent units, such as files and databases.
Causes of field failures may involve resources in many ways. In our investigation, we observed two main cases: interactions between SIF and resources (hereafter \emph{SIF-resource interactions}) that lead to performance problems and SIF-resource interactions leading to functional problems. The unbearable amount of time for SIFs to process some large resources and SIFs incorrectly handling resources of a particular type are examples of performance and functional problems triggered by SIF-resource interactions, respectively.
An example of SIF-resource interaction that triggers a performance problem is described in the OpenOffice bug report \#95974.
The OpenOffice writer crashes when trying to open a \texttt{.odt} document longer than 375 pages. The failure causes the CPU usage and the disk access rate to increase to $100\%$, and the application window simply crashes after one minute of unresponsiveness, activating the recovery wizard.
\paragraph{\textbf{Plugins}}
The plugin mechanism is a common solution to extend applications with new functionality in the field.
In the presence of plugins, applications work as operating systems that embed the plugin executions, and interact with the plugins to access specific functionalities.
Applications and plugins are developed and maintained independently, and evolution on either side may trigger failures due to unforeseen interactions.
For example, the EGit bug report \#383376 indicates that the repository search does not work on GitHub due to an unforeseen interaction with the Mylyn GitHub connector plugin.
\paragraph{\textbf{Operating system}}
Many applications can be executed on different versions of different operating systems. The interactions of a SIF with a specific version of an operating system may trigger failures that are not experienced otherwise.
An example of a problem involving the operating system is the failure documented in the OpenOffice bug report \#126622 that describes how the OpenOffice writer does not correctly handle functionalities involving tables and queries under OSX. The failure prevents OpenOffice from closing, and forces the users to restart the operating system.
\paragraph{\textbf{Drivers and services}}
Applications often interact with third-party drivers and services, whose availability depends on the production environment. During in-house development, specific combinations might remain untested and failures unrevealed.
For example, the fault documented in the Eclipse Egit bug report \#435866 indicates that the Eclipse Egit version control system fails to open the required network connections due to some unexpected changes of the authentication methods implemented in the Eclipse connection service.
\paragraph{\textbf{Network}}
Many software applications use the network to access resources or functionalities that are not available locally. With a plethora of different network protocols available, failures might be triggered when an application uses a specific protocol.
For instance, the Nuxeo bug report \#20481 describes a failure caused by a connection timeout that occurs when users download big zip files. Nuxeo does not handle connection timeouts properly and does not clean up temporary files, which leads to resource exhaustion.
\CHANGED{\paragraph{\textbf{None}} In a few cases the field-intrinsic faults do not depend on any interaction between the field elements and the SIF. Although not depending on any field element, these faults are still extremely hard to reveal at testing time, for instance because they can be revealed only by selecting a specific input out of a combinatorial number of cases.}
This is the case of the OpenOffice bug report \#126953, which indicates that when changing the format of a paragraph written with the Verdana font to italics bold, OpenOffice incorrectly adds blank lines before each occurrence of the brackets '(' and ')', and the text within the brackets disappears. This failure can be triggered only with a specific combination out of the millions of combinations of font types, characters and font properties: the use of the Verdana font, and the presence of brackets, when changing the font to italics bold.
\subsubsection*{Quantitative analysis}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{images/RQ2.png}
\caption{RQ2: field elements involved in the failure}
\label{fig:rq2}
\end{figure}
Figure~\ref{fig:rq2} quantifies the impact of the different field elements on faults, by indicating the amount of faults affected by each type of field element.
The causes are not exclusive, since the same fault may involve multiple field elements.
In Figure~\ref{fig:rq2}, bar \emph{none} reports the number of bug reports that describe failures that do not involve any field element.
Our analysis shows that interactions with the resources are the main cause of field-intrinsic faults (49\% of the cases).
Interactions with the operating systems are also a relevant cause of field-intrinsic faults (20\% of the cases).
Network, drivers \& services, and plugins have been all observed as causes of field-intrinsic faults at least once, but they are collectively observed in a small proportion of the cases (10\% of the cases in total). In total, 78\% of the field-intrinsic faults interact with a field element.
Although the data reported in Figure~\ref{fig:rq2} may be biased by the experimental setting, they already provide important information to define a research road map in the study of techniques to reveal and fix field-intrinsic faults.
\subsection*{RQ3: What kinds of field failures can be observed?}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{images/RQ4.png}
\caption{RQ3 - Failure Type}
\label{fig:rq4}
\end{figure}
We analyzed the distribution of failure types, and investigated the issues related to detectability.
Figure~\ref{fig:rq4} plots the distribution of the failure types presented in Table~\ref{table:failuretypes}.
Most failures (51 out of 83 failures corresponding to 61\% of the analyzed failures) are \emph{value failures}, that is, executions that produce incorrect results.
The most frequent case of \emph{value failures} is the generation of invalid outputs, followed by the generation of error messages and the production of
\CHANGED{values out of domain (OOD in Figure~\ref{fig:rq4})}.
System failures are also frequent (28 out of 83 failures corresponding to 34\% of the analyzed failures).
They mostly lead to system crashes, and only occasionally to either unstable behaviors or system halt.
Only a small set of the failures that we analyzed are due to the timing aspect (4 out of 83 failures corresponding to 5\% of the analyzed failures). We observed few late timing and omission failures, and no early timing failures.
The results indicate that the generation of incorrect values (either invalid values, values out of domain or error messages) and systems crashes are the main classes of field failures (they represent 74 out of 83 failures corresponding to 89\% of the analyzed failures).
These results, and in particular the low frequency of timing failures, might depend on the domain that we investigated (desktop applications extensible with plugins and Web applications).
We expect different frequencies of failure types in other domains: In particular, we expect an increasing frequency of timing failures in embedded systems, where the synchronization among the software components plays a relevant role.
\smallskip
Figure~\ref{fig:rq4detect} plots the distribution of failures by detectability according to the classes presented in Table~\ref{table:failuredetectability}.
A relatively high portion of failures are detected because the failures are either \emph{signaled} by the application itself (14 out of 83 failures corresponding to 17\% of the analyzed failures) or \emph{unhandled} (25 out of 83 failures corresponding to 30\% of the analyzed failures) causing a system crash.
Such failures
\CHANGED{can be easily detected.}
On the contrary, \emph{silent} failures (44 out of 83 failures corresponding to 53\% of the analyzed failures) are
\CHANGED{hard to detect}
without some specific knowledge about the expected behavior of the application in response to certain stimuli, pointing to the well known oracle problem~\cite{Barr:OracleSurvey:TSE:2015}.
These results suggest that testing strategies working in the field without
\CHANGED{exploiting domain specific oracles}
could hardly reveal more than half of the field-intrinsic faults.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{images/RQ4detect.png}
\caption{RQ3 - Detectability}
\label{fig:rq4detect}
\end{figure}
The considered subjects do not include mechanisms to automatically recover from failures at runtime, and thus we have not observed any occurrence of \emph{self-healed} failures.
\subsection*{RQ4{}: How many steps are needed to reproduce a field failure?}
\begin{figure}
\centering
\includegraphics[width=8.4cm]{images/RQ3.png}
\caption{RQ4: Steps required to trigger a failure}
\label{fig:rq3}
\end{figure}
\CHANGED{
As discussed in Section~\ref{sec:procedure}, we computed the number of user actions necessary to trigger the failures by considering the operations that are described in the bug reports, limited to the ones essential for reproducing the failure.}
Figure~\ref{fig:rq3} plots the distribution of the field-intrinsic faults by the number of steps required to reproduce the failure.
We were not able to determine the number of steps required to reproduce the failure in 13 out of the 83 analyzed failures (16\% of the cases, corresponding to bar \emph{no info}), while we determined the number of steps required to reproduce the failure in 70 out of the 83 analyzed failures (84\%), and observed that a large amount of failures can be reproduced with no more than three steps (54 out of 70 reproducible failures, corresponding to 77\% of the reproducible failures).
These results provide useful information for designing field-testing approaches: they suggest that only a few actions are necessary to reproduce a failure once a failure-prone state is reached, and indicate that field testing strategies should focus more on detecting failure-prone states than on generating long action sequences to reproduce the failures.
\section{Study Subjects} \label{sec:subjects}
We selected a set of desktop and web applications that (i) are available with the source code, (ii) are widely adopted and are thus good representatives of well used applications, and (iii) give access to publicly available bug reports, which are needed to study bug reports submitted by end-users.
We thus selected multiple applications from three ecosystems:
\begin{description} [leftmargin=!]
\item[Eclipse] and in particular its well-known and widely used plugins: the \emph{Subversive} SVN client for Eclipse~\cite{Subversive}, the \emph{EGit} Eclipse Team provider for Git~\cite{EGit} and the \emph{EclipseLink} plugin for developing code using the Java persistence API~\cite{EclipseLink}.
The bug reports are accessible on the Eclipse Bugzilla bug tracking system~\cite{EclipseBugzilla}.
\item[OpenOffice] is one of the most popular open source office applications~\cite{OpenOffice}. The bug reports are accessible on the Apache OpenOffice Bugzilla bug tracking system~\cite{OpenOfficeBugzilla}.
\item[Nuxeo] is a Web-based content management system used to develop many popular Web sites~\cite{nuxeo}. The Nuxeo issue tracking is Jira~\cite{jira}. \end{description}
\subsection*{Threats to validity} \label{sec:threats}
We collected our experimental data from the bug reports of desktop applications extensible with plugins (Eclipse and OpenOffice) and Web applications (Nuxeo) by examining a limited although reasonable amount of bug reports.
\begin{changed}The results give early evidence of the nature of the failures that can be experienced in plugins and Web applications, and need further studies to be generalized to other kinds of applications and to be quantitatively assessed.\end{changed}
We defined the classification schema, and analyzed the bug reports
manually.
Two authors have independently analyzed the bug reports, and all the authors have discussed the conflicting cases until reaching a consensus.
Although the process we followed should mitigate the risk of misinterpretation of the cases, we cannot fully exclude clerical errors in our analysis.
The raw data and the detailed material that we refer to in the paper are publicly available for independent inspection and further use.
The bug reports that we examined might sometimes be inaccurate. They may, for example, include partial information about the failures.
Although we cannot fully eliminate this potential issue, we believe that possibly incomplete bug reports considered in the experiments may have reduced the number of field-intrinsic faults that we identified, thus only pessimistically affecting the results.
In particular, the lack of information about a failure may have increased the chance of a fault being erroneously classified as an irreproducible execution condition fault, while the unknown conditions about the application or the environment may have reduced the number of faults classified as combinatorial explosion faults.
We thus regard our result that 70\% of the analyzed bug reports correspond to field-intrinsic faults as a conservative underapproximation of the field-intrinsic faults present in the examined applications.
The measurements of the angles $\alpha$, $\beta$ and $\gamma$ of the Unitarity Triangle (UT)
at the B-factories are providing precision tests of the Standard Model (SM) description of $CP$
violation. This description is provided by the Cabibbo-Kobayashi-Maskawa (CKM)
quark-mixing matrix~\cite{CKMmatrix,Wolf}. We summarize the experimental constraints on the $\alpha$
UT angle obtained from $B$-meson decays to $\pi\pi$, $\rho\rho$ and $\rho\pi$ with the {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}}
experiment at the SLAC National Accelerator Laboratory. The {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} detector and the PEP-II
accelerator are described elsewhere~\cite{BaBar}.
\section{Analysis Method}
\subsection{General formula}
The decay of a neutral $B$-meson into a pair of $\pi$ or $\rho$ mesons, $B \ensuremath{\rightarrow}\xspace hh$ ($h = \pi,\rho$), occurs via
two topologies: a tree-level process and a one-loop penguin diagram. The $CP$ parameter $\lambda_{hh}$,
defined by $\lambda_{hh} = \frac{p}{q}\frac{\overline{A}}{A}$, where $q$ and $p$ are the complex coefficient
that link the mass and the flavor eigenstates in the $B$ system, and $A$ ($\overline{A}$) is the $B^0$
($\overline{B}^0$) decay amplitude, can be expressed in terms of $\alpha$ as
\begin{equation}
\label{lambda_1}
\lambda_{hh} = e^{2i\alpha}
\frac{1-(|V^*_{td}V_{tb}|/|V^*_{ud}V_{ub}|)P/Te^{-i\alpha}}
{1-(|V^*_{td}V_{tb}|/|V^*_{ud}V_{ub}|)P/Te^{i\alpha}}~,
\end{equation}
where $T$ and $P$ are complex amplitudes dominated by tree and penguin topologies, respectively.
The quantity experimentally measured is the time-dependent decay rate
\begin{equation}
\label{TDdecayAmp_1}
f_{Q_{\rm tag}} = \frac{e^{-|\Delta t|/\tau}}{4\tau}
\left[
1 -
Q_{\rm tag}C_{hh}\cos(\Delta m_d\Delta t) +
Q_{\rm tag}S_{hh}\sin(\Delta m_d\Delta t)
\right]~,
\end{equation}
where $\tau$ is the neutral $B$ lifetime and $\Delta m_d$ is the $B^0\overline{B}^0$ oscillation frequency.
$\Delta t$ is the proper time difference between decays of the $B$ to $hh$ ($B_{\rm rec}$), and
the second $B$ in the event, denoted by $B_{\rm tag}$. The $Q_{\rm tag}$ parameter is related to the
flavor of the $B_{\rm tag}$: $Q_{\rm tag} = +1 (-1)$ if the $B_{\rm tag}$ is a $B^0$ ($\overline{B}^0$). The
$CP$-violating asymmetries $C_{hh}$ and $S_{hh}$ are related to the $\lambda_{hh}$ parameter by
\begin{equation}
\label{SandC}
S_{hh} = 2{\mathcal Im}(\lambda_{hh})/(1 + |\lambda_{hh}|^2)~,
~~~~~~~~
C_{hh} = (1 - |\lambda_{hh}|^2)/(1 + |\lambda_{hh}|^2)~.
\end{equation}
$S_{hh}$ reflects the $CP$ violation induced by the interference between the mixing and decay processes;
$C_{hh}$ is the direct $CP$-violating asymmetry which comes from the interference between different decay
topologies. In the absence of penguin contributions ($P = 0$), $C_{hh}$ vanishes and $S_{hh}$ is simply
related to the CKM angle $\alpha$ by $S_{hh} = \sin(2\alpha)$.
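Explicitly, setting $P = 0$ in Eq.~\ref{lambda_1} gives $\lambda_{hh} = e^{2i\alpha}$, hence $|\lambda_{hh}| = 1$ and Eq.~\ref{SandC} reduces to
\[
C_{hh} = 0~,
\qquad
S_{hh} = \frac{2\,{\mathcal Im}(e^{2i\alpha})}{2} = \sin(2\alpha)~.
\]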
In the more general case of the $B^0(\overline{B}^0) \ensuremath{\rightarrow}\xspace \rho^{\pm}\pi^{\mp}$ decays, the time-dependent decay rate is
given by
\begin{equation}
\label{TDdecayAmp_2}
f^{\rho^{\pm}\pi^{\mp}}_{Q_{\rm tag}} = (1 \pm {\mathcal A}_{\rho\pi})\frac{e^{-|\Delta t|/\tau}}{4\tau}
\left[1 - Q_{\rm tag}(C_{\rho\pi} \pm \Delta C_{\rho\pi})\cos(\Delta m_d\Delta t)
+ Q_{\rm tag}(S_{\rho\pi} \pm \Delta S_{\rho\pi})\sin(\Delta m_d\Delta t)\right]~,
\end{equation}
where the $\pm$ sign depends on whether the $\rho$ meson is emitted by the $W$ boson or comes from the spectator
quark. ${\mathcal A}_{\rho\pi}$ is the direct $CP$ violation parameter measuring the asymmetry between the
$\rho^+\pi^-$ and $\rho^-\pi^+$ final states, while $\Delta S_{\rho\pi}$ and $\Delta C_{\rho\pi}$, which arise from the fact
that two production modes of the $\rho$ are possible, are dilution terms and have no $CP$ content.
\subsection{The isospin analysis}
Using the strong isospin symmetry, the angle $\alpha$ can be extracted up to discrete ambiguities from
the $CP$-violating asymmetries defined above~\cite{LipkinNirQuinSnyder}. The decay amplitudes of the
isospin-related final states obey the pentagonal relations
\begin{equation}
\label{isospin_1}
\sqrt{2}(A^{+0}_{\rho\pi} + A^{0+}_{\rho\pi}) = 2A^{00}_{\rho\pi} +
A^{+-}_{\rho\pi} +
A^{-+}_{\rho\pi}~,
~~~~~~~
\sqrt{2}(\overline{A}^{+0}_{\rho\pi} + \overline{A}^{0+}_{\rho\pi}) = 2\overline{A}^{00}_{\rho\pi} +
\overline{A}^{+-}_{\rho\pi} +
\overline{A}^{-+}_{\rho\pi}~;
\end{equation}
where $A^{ij}_{\rho\pi} = A(B^0~{\rm or}~B^+ \ensuremath{\rightarrow}\xspace \rho^i\pi^j)$ and
$\overline{A}^{ij}_{\rho\pi} = A(\overline{B}^0~{\rm or}~B^- \ensuremath{\rightarrow}\xspace \rho^i\pi^j)$, $i,j = +,-,0$. With the use
of these relations, 12 unknowns (6 complex amplitudes with one unphysical phase, and the CKM angle $\alpha$) are
to be determined while 13 observables are available: $S_{\rho\pi}$, $C_{\rho\pi}$, $\Delta S_{\rho\pi}$,
$\Delta C_{\rho\pi}$, ${\mathcal A_{\rho\pi}}$; four average branching fractions ${\mathcal B}(B \ensuremath{\rightarrow}\xspace \rho\pi)$;
two time-dependent $CP$-violating asymmetries in the $B^0 \ensuremath{\rightarrow}\xspace \rho^0\pi^0$ decay ($S^{00}_{\rho\pi}$,
$C^{00}_{\rho\pi}$) and two direct $CP$ asymmetries in $B^+ \ensuremath{\rightarrow}\xspace \rho^+\pi^0$ and $B^+ \ensuremath{\rightarrow}\xspace \rho^0\pi^+$ decays.
In the case of $B \ensuremath{\rightarrow}\xspace hh$ ($h=\pi,\rho$), Eq.~\ref{isospin_1} simplifies to the triangular relations
\begin{equation}
\label{isospin_2}
\sqrt{2}A^{+0}_{hh} = A^{+-}_{hh} + A^{00}_{hh}~,
~~~~~~~
\sqrt{2}\overline{A}^{+0}_{hh} = \overline{A}^{+-}_{hh} + \overline{A}^{00}_{hh}~.
\end{equation}
The information counting then leads to 6 unknowns and 7 observables: 3 branching fractions
${\mathcal B}(B\ensuremath{\rightarrow}\xspace hh)$; $C_{hh}$, $S_{hh}$, $C^{00}_{hh}$, $S^{00}_{hh}$. In the $\pi\pi$ system
$S^{00}_{\pi\pi}$ is impossible to measure (as the $\pi^0$ is reconstructed from two-photon decays,
there is no way to measure the decay vertex), so one is left with 6 observables: $\alpha$ can
be extracted with an 8-fold ambiguity within $[0,\pi]$~\cite{GronauLondon}.
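The origin of the 8-fold ambiguity can be sketched as follows (a standard counting argument, recalled here for orientation). The time-dependent asymmetries determine an effective angle $\alpha_{\rm eff}$ through
\[
S_{\pi\pi} = \sqrt{1 - C_{\pi\pi}^2}\,\sin(2\alpha_{\rm eff})~,
\]
which fixes $2\alpha_{\rm eff}$ only up to the reflection $2\alpha_{\rm eff} \ensuremath{\rightarrow}\xspace \pi - 2\alpha_{\rm eff}$, i.e. a 2-fold ambiguity for $\alpha_{\rm eff}$ in $[0,\pi]$. Each of the two triangles of Eq.~\ref{isospin_2} can moreover be flipped about its base, giving a 4-fold ambiguity on the penguin-induced shift $\alpha - \alpha_{\rm eff}$, for a total of $2 \times 4 = 8$ solutions.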
\section{Experimental Results}
\subsection{$B \ensuremath{\rightarrow}\xspace \pi\pi$ and $B \ensuremath{\rightarrow}\xspace \rho\rho$}
The various branching fractions and $CP$-asymmetries measured in $B \ensuremath{\rightarrow}\xspace \pi\pi$ and $B \ensuremath{\rightarrow}\xspace \rho\rho$
decays are summarized in Table~\ref{tab:pipi_rhorho}. In the case of charged decays the charge asymmetry
is defined as ${\mathcal A}_{CP}(B \ensuremath{\rightarrow}\xspace hh) = -C_{hh}$. The measurements are sufficiently well established
to perform an isospin analysis.
\begin{table}[hbt!]
\begin{center}
\begin{TableSize}
\begin{tabular}{cccc}
\hline
{\bf Mode} & ${\mathbf {\mathcal B}(10^{-6})}$ & ${\mathbf C}$ & ${\mathbf S}$ \\
\hline
$\pi^+\pi^-$ & $5.5 \pm 0.4 \pm 0.3$~\cite{BF_BTopippim}
& $-0.68 \pm 0.10 \pm 0.03$~\cite{LattestBTopipi}
& $-0.25 \pm 0.08 \pm 0.02$~\cite{LattestBTopipi} \\
$\pi^0\pi^0$ & $1.83 \pm 0.21 \pm 0.13$~\cite{LattestBTopipi}
& $-0.43 \pm 0.26 \pm 0.05$~\cite{LattestBTopipi}
& -- \\
\hline
$\rho^+\rho^-$ & $25.5 \pm 2.1^{+3.6}_{-3.9}$~\cite{BTorhoprhom}
& $0.01 \pm 0.15 \pm 0.06$~\cite{BTorhoprhom}
& $-0.17 \pm 0.2^{+0.05}_{-0.06}$~\cite{BTorhoprhom} \\
$\rho^0\rho^0$ & $0.92 \pm 0.32 \pm 0.14$~\cite{BTorho0rho0}
& $0.2 \pm 0.8 \pm 0.3$~\cite{BTorho0rho0}
& $0.3 \pm 0.7 \pm 0.2$~\cite{BTorho0rho0} \\
\hline
{\bf Mode} & ${\mathbf {\mathcal B}(10^{-6})}$ & ${\mathbf {\mathcal A_{CP}}}$ & \\
\hline
$\pi^{\pm}\pi^0$ & $5.02 \pm 0.46 \pm 0.29$~\cite{BF_BTopippi0}
& $0.03 \pm 0.08 \pm 0.01$~\cite{BF_BTopippi0}
& \\
\hline
$\rho^{\pm}\rho^0$ & $23.7 \pm 1.4 \pm 1.4$~\cite{BTorhoprho0}
& $-0.054 \pm 0.055 \pm 0.010$~\cite{BTorhoprho0}
& \\
\hline
\end{tabular}
\end{TableSize}
\end{center}
\caption{\em Summary of {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} measurements of $B \ensuremath{\rightarrow}\xspace \pi\pi$ and $B \ensuremath{\rightarrow}\xspace \rho\rho$ decays.
The measurements for the $\rho\rho$ system corresponds to the longitudinal component of the decay
rate. The errors quoted are statistical and systematic, respectively.
\label{tab:pipi_rhorho}}
\end{table}
The present measurement for the $\pi^+\pi^-$ mode excludes the absence of $CP$ violation
$(C_{\pi\pi},S_{\pi\pi}) = (0,0)$ at a C.L. of $6.7\sigma$. The relatively high branching
fraction of the $\pi^0\pi^0$ mode tends to separate the 8-fold ambiguities in the
$\alpha$ extraction, which only allows a weak constraint on $\alpha$ to be set.
With the current experimental measurements two of the eight ambiguities are nearly merged.
The range $[23^o,67^o]$ in $\alpha$ is excluded at the $90\%$ C.L.~\cite{LattestBTopipi}.
The solution is in agreement with the global CKM fit~\cite{CKMfitter,UTfit} which gives the
range $[71^o,109^o]$ at $68\%$ C.L.
The analysis of $B \ensuremath{\rightarrow}\xspace \rho\rho$ is potentially complicated due to the possible presence
of three helicity states for the decay. The helicity zero state, which corresponds to longitudinal
polarization of the decay, is $CP$-even but the helicity $\pm 1$ states are not $CP$ eigenstates.
Fortunately this complication is avoided by the experimental finding that the dominant polarization
is longitudinal, $f_L(\rho^+\rho^-) = 0.992 \pm 0.024^{+0.026}_{-0.013}$~\cite{BTorhoprhom},
$f_L(\rho^0\rho^0) = 0.75^{+0.11}_{-0.14} \pm 0.05$~\cite{BTorho0rho0} and
$f_L(\rho^+\rho^0) = 0.950 \pm 0.015 \pm 0.006$~\cite{BTorhoprho0} ($f_L \equiv \Gamma_L/\Gamma$,
where $\Gamma$ is the total decay rate and $\Gamma_L$ is the rate of the longitudinally-polarized mode).
The $B^0 \ensuremath{\rightarrow}\xspace \rho^0\rho^0$ branching fraction is small compared with that of the $B^+ \ensuremath{\rightarrow}\xspace \rho^+\rho^0$
mode, which indicates that the penguin-to-tree ratio ($P/T$, cf. Eq.~\ref{lambda_1}) is small compared with
that of the $B \ensuremath{\rightarrow}\xspace \pi\pi$ system~\cite{LipkinNirQuinSnyder}. This has the effect of merging the
different ambiguities in the extraction of $\alpha$. The latest $B^0 \ensuremath{\rightarrow}\xspace \rho^0\rho^0$ {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}}
results present the first measurement of the time-dependent $CP$ asymmetries $C^{00}_L$ and $S^{00}_L$. The inclusion of
these measurements has the effect of partially lifting the 8-fold degeneracy on $\alpha$: the data favor only
two solutions out of eight~\cite{BTorho0rho0,BTorhoprho0}. These two effects allow a strong
constraint to be set on $\alpha$, where only two solutions survive, corresponding to
$\alpha = (92.4^{+6.0}_{-6.5})^o$ at $68\%$ C.L.~\cite{BTorhoprho0} for the one in agreement with
the global CKM fit~\cite{CKMfitter,UTfit}.
\subsection{$B \ensuremath{\rightarrow}\xspace \rho\pi$}
The $B \ensuremath{\rightarrow}\xspace \rho\pi$ measurement reported here is a time-dependent amplitude analysis of $B^0 \ensuremath{\rightarrow}\xspace (\rho\pi)^0$.
The interferences between the intersecting $\rho$ resonance bands are modeled over the whole Dalitz Plot
using the isobar model~\cite{IsobarModel}. This allows determination of the strong phase differences from the
interference pattern, which permits direct extraction of the angle $\alpha$ with reduced ambiguities.
The Dalitz amplitudes and time-dependence are contained in the 26 coefficients of the bilinear form-factor
terms occurring in the time-dependent decay rate, which are determined from a likelihood fit.
The values obtained for these coefficients are converted back into the quasi-two-body $CP$ observables
(c.f. Eq.~\ref{TDdecayAmp_2}), which are more intuitive in their interpretation. Table~\ref{tab:rhopi}
reports the experimental findings on these observables~\cite{RhoPi}.
\begin{table}[hbt!]
\begin{center}
\begin{TableSize}
\begin{tabular}{lc|lc}
\hline
{\bf Observable} & {\bf Value} & {\bf Observable} & {\bf Value} \\
\hline
$C_{\rho\pi}$ & $ 0.15 \pm 0.09 \pm 0.05$ & $S_{\rho\pi}$ & $-0.03 \pm 0.11 \pm 0.04$ \\
$\Delta C_{\rho\pi}$ & $ 0.39 \pm 0.09 \pm 0.09$ & $\Delta S_{\rho\pi}$ & $-0.01 \pm 0.14 \pm 0.06$ \\
$C^{00}_{\rho\pi}$ & $-0.10 \pm 0.40 \pm 0.53$ & $S^{00}_{\rho\pi}$ & $ 0.04 \pm 0.44 \pm 0.18$ \\
$A_{\rho\pi}$ & $-0.14 \pm 0.05 \pm 0.02$ & & \\
\hline
\end{tabular}
\end{TableSize}
\end{center}
\caption{\em Summary of {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} measurements from the time-dependent amplitude analysis of
$B^0 \ensuremath{\rightarrow}\xspace (\rho\pi)^0$ decays. The errors quoted are statistical and systematic, respectively.
\label{tab:rhopi}}
\end{table}
These measurements allow the determination of $\alpha = (87^{+45}_{-13})^o$ at $68\%$ C.L.,
with almost no constraint at $95\%$ C.L. This result is particularly interesting as there is a unique
solution in the $[0,180]^o$ range, which helps to break the ambiguities obtained from the $\pi\pi$ and
$\rho\rho$ results. A hint of $CP$ violation is obtained at the level of $3\sigma$.
\section{Summary}
Several analyses have been conducted in {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} to extract the angle $\alpha$ of the UT. In the last few years
the measurements of this angle have become increasingly precise. The measurements provided from the
$B \ensuremath{\rightarrow}\xspace \pi\pi/\rho\rho/\rho\pi$ modes give complementary constraints on $\alpha$. For the $B \ensuremath{\rightarrow}\xspace \rho\rho$ system,
the inclusion of the $S^{00}_{L}$ observable allows two of the 8-fold ambiguities on $\alpha$ to be favored, and
the relatively large ${\mathcal B}(B^+ \ensuremath{\rightarrow}\xspace \rho^+\rho^0)$, with respect to ${\mathcal B}(B^0 \ensuremath{\rightarrow}\xspace \rho^0\rho^0)$,
causes the ambiguities to degenerate into two peaks, improving the precision of the constraint. The measurements
from the $B^0 \ensuremath{\rightarrow}\xspace (\rho\pi)^0$ time-dependent amplitude analysis give direct access to $\alpha$, disfavoring
the ambiguities. The combined constraint averaging all the $\pi\pi$, $\rho\rho$ and $\rho\pi$ measurements from
{\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} and Belle gives $\alpha = (89.0^{+4.4}_{-4.2})^o$ at $68\%$ C.L. (see Fig.~\ref{fig:alpha_average_wa}),
which is in good agreement with the global CKM fit~\cite{CKMfitter,UTfit}.
\vspace{-0.7cm}
\begin{figure}[h!]
\begin{center}
\begin{minipage}{.49\linewidth}
\hspace{0.5cm}
\vspace{-0.8cm}
\includegraphics[width=6.0cm,keepaspectratio]{ckm_alpha_winter09_wa.eps}
\end{minipage}
\end{center}
\caption
{\label{fig:alpha_average_wa}
{\em Constraints on $\alpha$, provided by the CKMfitter group~\cite{CKMfitter}, expressed as one minus the
confidence level as a function of angle. The constraints are constructed averaging the {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} and
Belle measurements for the $\pi\pi$ (dotted green curve), $\rho\rho$ (dash-dotted blue curve) and $\rho\pi$
(dashed red curve) systems. The solid filled green curve represents the combined constraint using all the
systems.
}}
\end{figure}
\vspace{-0.8cm}
\section{Acknowledgements}
I would like to thank the organizers of the Lake Louise Winter Institute 2009 for an enjoyable and stimulating
conference, and my {\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} colleagues for their assistance and helpful discussions.
\section{References}
\section{Introduction}
The next decade will see a dramatic improvement in our ability to probe the Universe, with major leaps in capabilities occurring nearly simultaneously across many new facilities. Each of these new facilities will enable transformative science, but joint analyses of the resultant datasets will be more powerful and robust than what can be achieved with any individual instrument. In this whitepaper we focus on the case for, and implications of, joint analyses and cross-correlations between datasets that span different experiments and projects within the Cosmic Frontier. The promise of such analyses is that they both enable new science and increase the robustness of the core science being pursued by each project. Notably, cross-survey analyses will improve the constraints on cosmic acceleration that drive the design and requirements for cosmological surveys in which DOE has invested, and also leverage those investments to constrain other aspects of fundamental physics that are important for our understanding of the Universe. At present, however, cross-survey analyses can be challenging to initiate, organize and fund. One of the main goals of this whitepaper is to advocate for the creation of clear pathways to support cross-survey analyses as part of the core mission of DOE's Cosmic Frontier.
As an illustration of the diversity of possible cross-survey analyses, Fig.~\ref{fig:multiwavelength} presents simulated maps of a patch of the Universe as measured by different cosmological probes. Each probe is connected to the same underlying large scale structure, and as a result, all of these probes are correlated. By cross-correlating probes from different surveys, new information about cosmological structure can be extracted. The improvements to our understanding of the Universe that are enabled by joint analyses of multiple surveys are remarkably diverse. Some prominent examples include:
\begin{itemize}
\item {\bf Improved robustness of cosmological constraints}. Analyses of cross-correlations between surveys will increase the robustness of cosmological constraints by breaking degeneracies with nuisance parameters that degrade single-survey constraints.
Additionally, cross-correlations can provide tight constraints on astrophysical sources of systematic uncertainties, such as baryonic feedback and intrinsic alignments of galaxies.
\item {\bf Improved cosmological constraints from the evolution of large scale structure}. Cross-correlations of galaxy surveys with gravitational lensing of the CMB offer the prospect of tight constraints on structure at high redshift, improving constraints on dark energy, modified gravity, and the sum of the neutrino masses. Cross-correlations with line intensity mapping experiments offer similar benefits if long wavelength modes along the line of sight can be recovered.
\item {\bf Improved cosmological constraints from the abundance and clustering of galaxy clusters}. Galaxy clusters have the potential to be powerful cosmological probes, but realizing this potential will require control of astrophysical and systematic uncertainties. Cross-survey, multi-wavelength studies of galaxy clusters offer the prospect of significantly improved constraints on cluster masses and other properties.
\item {\bf Improved cosmological constraints from overlapping imaging and spectroscopic surveys.} Spectroscopic surveys can provide high-accuracy redshift information and improved source classification for objects detected in overlapping imaging surveys, enabling improved cosmological constraints. At the same time, imaging surveys provide a complete census of galaxies that allows for compilation of targets for spectroscopic surveys, as well as measurements of structure inaccessible to spectroscopic surveys.
\item {\bf Improved constraints on non-Gaussianity}. By exploiting multi-tracer techniques and the high-redshift reach of CMB lensing, cross-correlations offer the prospect of tight constraints on primordial non-Gaussianity and inflationary models.
\item {\bf A census of baryons}. The thermal and kinematic Sunyaev Zel'dovich effects measured by CMB surveys provide a 2D snapshot of the distribution and thermal state of baryons throughout the Universe. By cross-correlating these measurements with probes of known redshift --- such as galaxies --- 3D information about the baryons can be recovered. Upcoming surveys will also allow for a first detection of the polarized SZ effect, providing a new tool to study the distribution of baryons.
\end{itemize}
\begin{figure*}
\centering
\includegraphics[scale=0.65]{Figures/mdpl2_maps}
\caption{Simulated maps of the same patch of the Universe, as measured with several different cosmological probes (from left to right): dark matter halos (detectable via the galaxies they host), galaxy clusters (with the size of the circles indicating the cluster mass), gravitational lensing of the CMB ($\kappa_{\rm CMB}$), the thermal Sunyaev Zel'dovich effect (tSZ), the kinematic Sunyaev Zel'dovich effect (kSZ), the cosmic infrared background (CIB), and gravitational lensing of galaxy shapes (shading indicates the convergence, $\kappa_{\rm gal}$, while white lines indicate the shear, $\gamma$). Although each probe is very different, they are all sourced by the same underlying large scale structure, and are therefore correlated. Joint analyses of these different probes can yield access to new cosmological information about the underlying structure. Simulated data from Omori (in prep.). }
\label{fig:multiwavelength}
\end{figure*}
Measuring cross-correlations between different cosmological probes requires overlapping measurements on the sky. As shown in Fig.~\ref{fig:footprint2}, the survey strategies of several operational and planned DOE-funded cosmic surveys --- including optical imaging, spectroscopic, and CMB surveys --- have significant overlap. While we illustrate the overlap for three specific surveys, many of the opportunities and challenges discussed here can be applied to any cross-survey analysis; we list other relevant surveys (and the corresponding acronyms used throughout the text) in Table~\ref{tab: survey_summary}. Given the significant overlap on the sky of future cosmic surveys, there is potential to harness the power of cross-correlations between them. However, as we discuss below, actually performing such analyses to maximize the science return from cross-correlations will require significant additional investments in simulation and analysis infrastructure, as well as mechanisms for improved cross-survey collaboration.
\begin{figure}
\centering
\includegraphics[width=15cm]{Figures/footprint_modified.png}
\caption{
Future cosmic surveys will have large regions of overlapping coverage, enabling opportunities for cross-correlations. Here, we illustrate this overlap for three prominent future optical imaging (LSST), spectroscopic (DESI), and CMB (CMB-S4) surveys. The other future surveys listed in Table~\ref{tab: survey_summary}, omitted for clarity, are also expected to have significant overlap.
}
\label{fig:footprint2}
\end{figure}
\begin{table*}
\caption{List of recently commenced and planned cosmological surveys.}
\label{tab: survey_summary}
\centering
\begin{tabular}{p{0.3\linewidth}|p{0.5\linewidth}|p{0.1\linewidth}}
\hline
Type & Experiment (Acronym) & Reference \\
\hline
\multirow{3}{*}{Optical/NIR Imaging} & Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) & \cite{LSST,lsst-science} \\
& Euclid & \cite{Euclid} \\
& Nancy Grace Roman Space Telescope (Roman) & \cite{Dore2019} \\
\hline
\multirow{5}{*}{Optical/NIR Spectroscopy} & Dark Energy Spectroscopic Instrument (DESI) & \cite{desi19} \\
& 4-metre Multi-Object Spectroscopic Telescope (4MOST) & \cite{4MOST} \\
& Maunakea Spectroscopic Explorer (MSE) & \cite{MSE} \\
& MegaMapper & \cite{Schlegel:2019} \\
& Nancy Grace Roman Space Telescope (Roman) & \cite{Dore2019} \\
\hline
\multirow{3}{*}{ \shortstack[l]{Cosmic Microwave Background \\ (Large Aperture) }} & Simons Observatory (SO) & \cite{SimonsObs} \\
& CMB-Stage 4 (CMB-S4) & \cite{S4} \\
& CMB-High Definition (CMB-HD) & \cite{CMB-HD-Snowmass} \\
\hline
\multirow{2}{*}{Line Intensity Mapping} & Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) & \cite{SPHEREx} \\
& Packed Ultra-wideband Mapping Array (PUMA) & \cite{PUMA} \\
\hline
\multirow{2}{*}{X-ray} & eROSITA & \cite{EROSITA} \\
& Athena & \cite{Athena} \\
\hline
\end{tabular}
\end{table*}
We describe examples of specific cross-survey analyses that enable these many improvements to cosmological constraints in \S\ref{sec:LOIs}. Extracting cosmological information from these measurements will present new modeling and analysis challenges, which we discuss in \S\ref{sec:model_analysis}. Not unexpectedly, these joint analyses have associated costs, implications and impacts on the individual projects. We discuss several growth opportunities for addressing these issues and maximizing science return in \S\ref{sec:recs}.
\section{Science from joint probes}
\label{sec:LOIs}
\subsection{CMB lensing $\times$ galaxies}
Lensing of the CMB by the gravitational potentials associated with large-scale structure measured in future galaxy surveys offers several key science opportunities. First, it provides a means of measuring the gravitational potentials, and hence matter density, at high redshifts where this becomes infeasible with galaxy lensing. This, in turn, gives strong constraints on the amplitude of the power spectrum with redshift \cite{Omori:2019, White:2021yvw, Krolewski:2021yqy} which is crucial to tests of modified gravity \cite{Pullen:2015}, neutrinos \cite{Yu:2018tem} and dark energy \cite{Yu:2021vce}, and directly bears upon the tensions being raised by lower redshift probes. By using relativistic tracers (photons), CMB lensing provides access to the space-space perturbations to the metric, which in combination with redshift-space distortions, provides a test of gravity. By giving access to different tracers of the fluctuations at the highest redshifts and largest scales, a combination of galaxy clustering and CMB lensing may provide our tightest constraints on primordial non-Gaussianity \cite{Schmittfull2017,Snowmass2021:Inflation}. Joint analyses of cross-correlations between galaxy surveys and CMB lensing can also exploit the fact that parameter dependencies of cross-correlations are typically different from those of intra-survey correlations, enabling significant degeneracy breaking \cite{2016MNRAS.461.4099B}.
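To make the basic measurement concrete, the following minimal sketch (file names and $\ell$ range are placeholders, and the maps are assumed to share pixelization and coordinates) estimates a galaxy--CMB-lensing cross-power spectrum with standard \texttt{healpy} tools:
\begin{verbatim}
import numpy as np
import healpy as hp

# Hypothetical inputs: a CMB lensing convergence map, a galaxy
# overdensity map, and a common binary sky mask (HEALPix format).
kappa = hp.read_map("kappa_cmb.fits")
delta_g = hp.read_map("delta_g.fits")
mask = hp.read_map("mask.fits")
fsky = mask.mean()

# Pseudo-Cl cross-spectrum of the masked maps; dividing by fsky is
# the simplest correction for partial sky coverage.
cl_cross = hp.anafast(kappa * mask, delta_g * mask, lmax=2000) / fsky
ell = np.arange(cl_cross.size)
\end{verbatim}
A production analysis would replace the simple $f_{\rm sky}$ scaling with a full pseudo-$C_\ell$ treatment of mask-induced mode coupling, but the sketch illustrates how directly such cross-spectra can be formed once overlapping maps exist.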
The technique of CMB lensing tomography \cite{Giannantonio:2016}, enabled by CMB surveys such as CMB-S4 and CMB-HD, and galaxy catalogs from surveys such as LSST, Euclid and Roman, will allow for the creation of mass maps in broad redshift slices out to redshifts as high as 5, making possible new precision tests of cosmology. Such results explore the connection between visible baryons and the underlying dark-matter scaffolding. CMB lensing also enables calibration of cluster masses at high redshift \cite{Baxter:2015}, allowing the abundance of galaxy clusters to be used as an additional probe of dark energy and neutrino masses (see also \S~\ref{sec:clusters}).
In addition to new science opportunities and improved cosmological constraints, cross-correlations between galaxy surveys and CMB lensing have the potential to make cosmological analyses more robust.
The complex astrophysics that determines where galaxies form (``biasing'') and their intrinsic shapes (``intrinsic alignments'') can substantially affect the cosmological interpretation of galaxy surveys and the cosmological constraints inferred from them at the level of the 2-point correlations \cite{Krause:2016,Yao:2017,Blazek:2019} as well as higher-order statistics \cite{2012MNRAS.427..442T,2012MNRAS.423.1663T,2012MNRAS.419.1804T}. Correlating galaxy positions and lensing with CMB lensing, an independent mass tracer, provides a powerful method to improve our understanding of these effects and to mitigate systematic biases. In a similar vein, cross-correlations of galaxy surveys with CMB lensing can be used to calibrate multiplicative biases that impact galaxy lensing measurements, allowing the data to self-calibrate, rather than relying on e.g. image simulations to constrain these biases \cite{Schaan:2017}. Finally, cross-correlations between galaxy surveys and CMB lensing measurements have the advantage of being largely immune to additive systematics in the galaxy survey or CMB lensing observables. As long as such systematics do not impact both fields being correlated, they are suppressed in cross-correlation.
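Schematically, if the measured shear is biased as $\hat\gamma=(1+m)\,\gamma$, the auto- and cross-spectra respond differently to $m$,
$$
C_\ell^{\hat\gamma\,\kappa_{\rm CMB}}=(1+m)\,C_\ell^{\gamma\,\kappa_{\rm CMB}}, \qquad
C_\ell^{\hat\gamma\hat\gamma}=(1+m)^2\,C_\ell^{\gamma\gamma},
$$
so that suitable combinations of the two isolate $(1+m)$ up to ratios of theory spectra; this is the essence of the self-calibration idea of \cite{Schaan:2017}.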
\subsection{Thermal/Kinetic Sunyaev Zel'dovich effect $\times$ galaxies}
The Sunyaev Zel'dovich (SZ) effect \cite{SZ} is caused by inverse Compton scattering of CMB photons with free electrons in the late-time Universe. The \textit{thermal} SZ (tSZ) effect results when the electrons have high temperature, while the \textit{kinematic} SZ (kSZ) effect results from electrons with non-zero bulk velocity with respect to the CMB frame. Cross-correlations of CMB data with measurements of late-time structure offer the prospect of measuring these two effects, and thereby accessing new information about the distribution, thermal state, and dynamics of baryons. This information can in turn be used to improve cosmological constraints.
\vspace{0.1in}
\noindent \textbf{Constraining baryonic feedback}
Although baryons make up only 16\% of the matter budget in the Universe, their impact on the matter distribution represents a challenge for cosmologists. A combination of star formation, supernovae and ejecta from active galactic nuclei gives rise to several complex and energetic processes --- collectively known as \textit{feedback} --- that redistribute matter, causing significant impact (roughly 10\%) on the matter power spectrum at small scales. Our inability to accurately model the impact of feedback on the matter distribution limits our ability to use measurements on small scales to constrain cosmology \cite{DESy1:2017, Huang:2019, Amodeo:2020mmu, Amon2022}.
The tSZ and kSZ effects offer the prospect of directly probing the diffuse ionized gas that is so impacted by feedback, and thus improving our understanding of its effects on the matter distribution. The tSZ effect is sensitive to the electron gas pressure, and is therefore sensitive to changes in the thermal energy of the gas and its distribution. The kSZ, on the other hand, offers the possibility of measuring the ionized gas density.
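In standard notation, the two observables are
$$
y(\hat n)=\frac{\sigma_T}{m_e c^2}\int n_e\,k_B T_e\;dl, \qquad
\frac{\Delta T_{\rm kSZ}}{T_{\rm CMB}}(\hat n)=-\frac{\sigma_T}{c}\int n_e\,v_{\rm los}\;dl,
$$
where $n_e$, $T_e$ and $v_{\rm los}$ are the free-electron density, temperature and line-of-sight peculiar velocity, and the kSZ expression assumes the optically thin limit. This makes explicit that the tSZ traces the integrated electron pressure $P_e=n_e k_B T_e$, while the kSZ traces the velocity-weighted electron density.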
Cross-correlations are essential to this program, since the tSZ and kSZ by themselves provide only a line-of-sight integral of the ionized gas properties. By cross-correlating tSZ measurements with low-redshift structure, gas properties at different redshifts can be extracted, enabling powerful tests of feedback models \cite{Hill:2018,Pandey:2020,Pandey:2021,Troster:2021}. The kSZ effect, on the other hand, depends on both the gas density and its velocity. Only by using prior information on the gas velocity --- from, for example, galaxy surveys --- can information about the gas density be extracted. A combination of kSZ and tSZ, together with mass distribution information from lensing, can be used to determine the full thermodynamic information of the halos, including the amount of feedback, the fraction of non-thermal pressure, and the temperature profile \cite{Battaglia:2017neq, Amodeo:2020mmu}. The direct access to the gas properties through the SZ effect allows for a direct calibration of baryon effects in weak lensing \cite{Amodeo:2020mmu, AtacamaCosmologyTelescope:2020wtv}. Moreover, ``projected fields'' techniques \cite{Ferraro:2016ymw, Hill:2016dta, Kusiak:2021hai} have been developed to handle photometric data and will reach their full potential with the next generation of cosmic surveys.
\vspace{0.1in}
\noindent \textbf{Cosmology with the kSZ effect}
Since the kSZ signal is proportional to the galaxies' peculiar velocity, the latter can be reconstructed by using high resolution CMB maps together with galaxy catalogs \cite{Smith2018}. The reconstruction typically has lower noise on larger scales, often the most affected by primordial physics.
For example, the cross-correlations of the kinematic SZ effect with an overlapping galaxy survey can yield tight constraints on local primordial non-Gaussian fluctuations, characterized by the parameter $f_{\rm{NL}}$ \cite{Munchmeyer2018}. Reaching a target of $\sigma(f_{\rm{NL}}) < 1$ would disfavor a wide class of multi-field inflation models, shedding light on the nature of inflation~\cite{Alvarez:2014vva, Ferraro:2014jba,Smith2018,Munchmeyer2018,Deutsch:2017ybc,Contreras2019,Cayuso2018}. Cross-correlations of galaxy surveys, such as LSST, with proposed CMB experiments, such as CMB-S4 and CMB-HD have the potential to reach this exciting threshold~\cite{Munchmeyer2018,Sehgal:2019nmk,CMB-HD-Snowmass,Snowmass2021:Inflation}.
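The physical origin of this sensitivity is the scale-dependent halo bias induced by local non-Gaussianity; in a common convention,
$$
\Delta b(k,z)\simeq 2\,(b-1)\,f_{\rm NL}\,\delta_c\,\frac{3\,\Omega_m H_0^2}{2\,c^2 k^2\, T(k)\, D(z)},
$$
where $\delta_c\simeq 1.686$, $T(k)$ is the matter transfer function and $D(z)$ is the growth factor (normalization conventions vary). The $1/k^2$ enhancement on large scales is precisely where kSZ velocity reconstruction is most powerful, which is why the combination is so effective.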
\vspace{0.1in}
\noindent \textbf{Polarized SZ}
The polarized SZ effect~\cite{Sazonov:1999zp} describes the process by which CMB polarization arises along the line of sight to galaxies and clusters due to the scattering by free electrons in the objects when the incident CMB intensity exhibits quadrupolar anisotropy.
Upcoming CMB surveys such as CMB-S4 and CMB-HD will have the sensitivity necessary to detect and characterize this effect~\cite{Hall:2014wna,Deutsch:2017cja,Louis:2017hoh,Meyers:2017rtf,S4,CMB-HD-Snowmass}. The combination of future galaxy surveys and CMB surveys will allow for the reconstruction of three-dimensional maps of the remote temperature quadrupoles, thereby enabling several cosmological applications of the polarized SZ effect including: accessing additional information about cosmological fluctuations on the largest scales~\cite{Kamionkowski:1997na,Seto:2005de,Deutsch:2017ybc,Meyers:2017rtf}, probing late time structure formation and dark energy~\cite{Cooray:2002cb,Cooray:2003hd}, measuring the baryon content of galaxy clusters~\cite{Louis:2017hoh}, and searching for primordial gravitational waves~\cite{Alizadeh:2012vy,Deutsch:2018umo}.
\subsection{Cross-correlations with Line Intensity Mapping}
Line intensity mapping (LIM) is an emerging and potentially powerful observational technique to map the large-scale structure (LSS) over a wide range of scales and redshifts \cite{kovetz2017}. LIM detects the cumulative, unresolved emission of molecular and atomic spectral lines from galaxies together with the intergalactic medium. Measurements of the line frequency and spatial fluctuations in the line intensity provide a 3D map of the underlying dark matter distribution. Leveraging synergies between LIM at millimeter wavelengths and LIM of the neutral hydrogen 21cm line, together with optical galaxy surveys and CMB lensing, can significantly enhance the scientific return from each probe. The expected gain of cross-correlation analyses is the result of degeneracy breaking between cosmological parameters, sample variance cancellation, control of systematics, and improved calibration of nuisance parameters.
\vspace{0.1in}
\noindent \textbf{LIM $\mathbf{\times}$ galaxies}
The cross-correlation between mm-wave LIM and upcoming optical galaxy surveys is a promising means by which to reduce systematic uncertainties on large scales, such as from Galactic dust extinction and stellar contamination. For mm-wave surveys that accurately recover low $k_\parallel$ modes, this cross-correlation can furthermore improve the calibration of the redshift distribution in imaging surveys by reducing uncertainties in photometric redshift measurements (i.e., ``clustering-based'' redshift estimation \cite{Menard:2013aaa,Alonso:2017dgh,Cunnington:2018zxg,Guandalin:2021sxw}). The potential strength of LIM for this task is twofold: accurate measurement of redshifts together with wide redshift coverage extending well beyond the spectroscopic surveys.
\vspace{0.1in}
\noindent \textbf{LIM $\mathbf{\times}$ CMB lensing}
While CMB lensing is an observationally clean signal, the information content is limited due to the breadth of the projection kernel along the line of sight, with significant weighting of modes beyond $z\sim 2$. LIM cross-correlations provide a unique opportunity to derive tomographic information at very high redshift that is not achievable with most spectroscopic galaxy surveys; unlocking this potential will require a dedicated effort to clean large-scale modes along the line of sight from foreground contamination via a variety of reconstruction methods \cite{Zhu:2015zlh,Zhu:2016esh,Modi:2019hnu,Li:2020uug,Darwish:2020prn}. Such an effort would be quite fruitful: by mitigating degeneracies between growth, line bias and mean brightness temperature, and through exploitation of cosmic variance cancellation, these cross-correlations can significantly improve the constraints on early dark energy, modified gravity, neutrino mass measurements, and local non-Gaussianity \cite{Schmittfull:2017ffw,Yu:2018tem,Wilson:2019brt,Sailer2021}.
\vspace{0.1in}
\noindent \textbf{LIM $\mathbf{\times}$ LIM}
Although 21cm auto-spectra will face challenges from Galactic foreground contamination, cross-correlating intensity maps between 21cm and other lines (such as CO and/or [CII]) is a potentially powerful spectroscopic tracer at $z>3$ that could provide convincing evidence of the cosmological origin of high-redshift 21cm emission \cite{Lidz:2011dx}. Multi-line cross-correlations also open up the possibility of marginalizing over astrophysical uncertainty associated with the Epoch of Reionization (e.g., by tracing the scale at which the cross-correlation changes sign) \cite{Chang:2019xgc, Gong:2011mf}. Furthermore, considering higher-order cross-statistics can potentially improve the reliability with which line bias factors can be extracted, while presenting challenges that are interesting in their own right \cite{Beane:2018pmx}.
\subsection{Cross-correlations between imaging and spectroscopic surveys}
Imaging and spectroscopic surveys provide highly complementary information which, when combined over the same area of sky, can significantly improve upon the capabilities of either alone. Joint-analyses of overlapping imaging and spectroscopic survey datasets will allow new tests of cosmology and fundamental physics, and can make core cosmological studies more robust to systematic uncertainties.
Below, we summarize some of the ways that
combining data from a wide-field, highly-multiplexed optical and near-infrared multi-object spectroscopic (MOS) survey (e.g. DESI, 4MOST, MegaMapper, MSE, Roman) and from overlapping photometric surveys (e.g. LSST, Roman, Euclid) can significantly increase the return from both datasets and unlock additional scientific opportunities.
\vspace{0.1in}
\noindent \textbf{Photometric redshift calibration}
Photometric redshifts are a critical tool for imaging surveys. If these redshift estimates have an undetected systematic bias, dark energy inference can be catastrophically affected \citep[see e.g.,][]{Hearin2010,LSSTDESC2018}. Direct calibration via a large, representative spectroscopic redshift sample may not be possible given the depth of future imaging surveys \citep{Newman2015}. However, methods based upon cross-correlating the locations of photometric galaxies with spectroscopic samples can provide an alternative route for photometric redshift calibration \citep{Newman2008}.
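A minimal sketch of the clustering-based estimator, assuming the per-bin angular correlation amplitudes have already been measured (inputs and function names here are placeholders) and neglecting the redshift evolution of the photometric sample's bias:
\begin{verbatim}
import numpy as np

def clustering_dndz(w_sp, w_ss, dz):
    """Estimate a normalized redshift distribution from clustering.

    w_sp[i]: cross-correlation amplitude of the photometric sample
             with spectroscopic galaxies in redshift bin i.
    w_ss[i]: auto-correlation of the spectroscopic sample in bin i,
             used to divide out the spectroscopic bias.
    dz:      width of the redshift bins.
    """
    dndz = np.asarray(w_sp) / np.sqrt(np.asarray(w_ss))
    return dndz / np.sum(dndz * dz)  # normalize to unit integral
\end{verbatim}
Real analyses additionally marginalize over bias evolution and magnification, but the core idea, namely that the redshift dependence of the cross-correlation amplitude traces the redshift distribution of the photometric sample, is as simple as the sketch suggests.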
\vspace{0.1in}
\noindent \textbf{Characterizing intrinsic alignments}
Intrinsic alignments (IA) of galaxy shapes
are a known contaminant to weak gravitational lensing \citep{Brown2002,Troxel2015}. If not accounted for, their presence can generate significant biases in cosmological analyses \citep{Kirk12, Krause16, Blazek:2019,Yao2017}. With a wide-field MOS we can measure the cross-correlation between the positions of bright galaxies with spectroscopy and the intrinsic shapes of the fainter galaxies used for lensing, constraining IA models \citep{2014MNRAS.445..726C,Singh14,Johnston18}.
\vspace{0.1in}
\noindent \textbf{Characterizing strong lensing systems}
Upcoming imaging surveys will discover significantly more strong gravitational lenses than are currently known \citep[e.g.,][]{Collett15}. Strong lensing science requires redshifts for both lens and source. Many lenses are bright enough for redshift measurements via targeted fibers within very wide-area surveys, enabling identification of the systems best suited for follow-up observations as described in \citep{single}.
\vspace{0.1in}
\noindent \textbf{Spectroscopy for supernova cosmology}
Type Ia supernovae (SNe Ia) provide a mature probe of the accelerating universe \citep[e.g.,][]{2018arXiv181102374D}, and their use as standardizable candles is an immediate route to measuring the equation of state of dark energy. However, a major systematic uncertainty is the photometric classification and redshift measurement of the supernovae. Wide-field spectroscopy can address this in two ways. First, spectroscopic observations are used for the classification of live SNe and the construction of optimized, large, homogeneous and representative training sets needed for purely photometric classifiers that may be used for the next generation of SN Ia cosmology \citep[e.g.,][]{2016ApJS..225...31L}. Second, spectroscopy is used to obtain redshifts for host galaxies of SNe that have faded away. While conventional SN Ia cosmology analyses rely on spectroscopic follow-up of live SNe, new analyses \citep[e.g.][]{jones2018,campbell2013, hlozek2012} show that it is possible to take advantage of even larger samples of SNe after obtaining spectroscopic redshifts of their host galaxies.
\vspace{0.1in}
\noindent \textbf{Testing General Relativity on cosmological scales}
Combining cross-correlations between galaxy density and lensing with measurements of redshift-space distortions in galaxy clustering allows for tests of gravity on cosmological scales \citep{Ishak2019}. A sample of weak lensing galaxies with spectroscopic overlap would enable measurement of statistics sensitive to the nature of gravity, such as $E_{G}$
\cite{Zhang2007,Reyes2010}. For example, combining LSST and DESI/4MOST would enable multiple determinations of $E_G$ to $\sim 0.004$, roughly 10 times more precise than current constraints.
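For reference, $E_G$ compares the metric potentials probed by lensing with the velocity field probed by redshift-space distortions, and is estimated in practice from a ratio of spectra,
$$
E_G=\frac{c^2\,\nabla^2(\psi+\phi)}{3H_0^2\,a^{-1}\,\delta}
\quad\longrightarrow\quad
\hat E_G(\ell)=\Gamma\,\frac{C_\ell^{\kappa g}}{\beta\,C_\ell^{gg}},
$$
where $\beta=f/b$ is measured from redshift-space distortions and $\Gamma$ is a known geometrical prefactor (conventions vary; see \cite{Zhang2007,Pullen:2015}). In general relativity $E_G\simeq\Omega_{m,0}/f(z)$ on linear scales, so a measurement at the $\sim 0.004$ level is a stringent null test of gravity.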
\subsection{Multi-wavelength studies of galaxy clusters}
\label{sec:clusters}
Clusters of galaxies---the largest gravitationally-bound systems in the universe---are prominent tracers of cosmic structure.
The abundance, spatial distribution, and other properties of these massive systems are highly sensitive to the physical laws and phenomena that govern how structure grows over time.
Clusters provide sensitive constraints on dark energy parameters, the sum of the masses of the neutrino species, primordial non-Gaussianity, and the density of dark matter (see e.g., \cite{allen11} for a recent review). For some cosmological models, clusters have the potential to be the \textit{most} constraining probe \cite{2016arXiv160407626D}. For these reasons, the ability to detect and characterize galaxy clusters is a key science driver of current and future cosmological surveys in the optical, submillimeter and X-ray.
There are several upcoming surveys that will provide cosmologically interesting cluster observations, each offering distinct advantages, and we highlight a few of the largest here as examples of the potential for cross-wavelength analyses. LSST and Euclid will provide deep observations of the sky at optical through infrared wavelengths. In addition to compiling large ($>100,000$) samples of clusters identified via galaxy overdensities, these surveys will provide critical weak lensing mass calibration and redshift information for cluster samples selected in other surveys. eROSITA and Athena represent the next generation of X-ray surveys. The ongoing all-sky eROSITA mission is expected to discover $\sim$100,000 galaxy clusters (primarily at lower redshifts) in a way that is highly complementary to the optical and tSZ surveys. Athena (scheduled for launch in 2031) will have unprecedented sensitivity, allowing detection of galaxy clusters and groups to $z\sim 2$ in a modest-area survey as well as extensive targeted characterization of clusters detected at other wavelengths \cite{2013arXiv1306.2307N}, with observables highly complementary to those at other wavelengths.
\vspace{0.1in}
\noindent \textbf{Mitigation of systematic effects through multi-wavelength observations}
The main challenge confronting cluster cosmology is the presence of difficult-to-characterize systematic uncertainties.
The next decade offers an unprecedented confluence of expansive new multi-wavelength cosmic surveys and new computational and simulation capabilities which, if fully leveraged, will improve control of these errors and enable us to maximize the potential of these extreme systems to probe the composition and physical laws of the universe.
As clusters are multi-component systems and traced by numerous signatures (e.g., from overdensities of galaxies at optical and infrared wavelengths, hot gas detectable at X-ray and millimeter wavelengths, and as significant mass peaks in lensing data), employing data from all available telescopes will enable analyses that are not susceptible to any single set of astrophysical systematics or observational biases. For instance, optical imaging can provide precise weak lensing mass calibration of SZ-selected clusters \cite{Miyatake:2019,Stern:2019}. SZ observations, on the other hand, can be used to improve the robustness of optical cluster selection \cite{Costanzi:2021,Grandis:2021}.
\vspace{0.1in}
\noindent \textbf{High redshift clusters}
The nature of the inverse Compton scattering process that gives rise to the tSZ makes the effect independent of redshift, providing a means to detect galaxy clusters out to high redshift.
With their deep and wide fields covering a large amount of volume, and
the ultra-deep fields imaging lower-mass clusters, CMB-S4 and CMB-HD will provide effective probes of the crucial regime of $z \gtrsim 2$, when galaxy clusters were vigorously accreting new hot gas while at the same time forming the bulk of their stars. The CMB-S4 and CMB-HD catalogs will be more than an order of magnitude larger than current catalogs based on tSZ or X-ray measurements, and will contain an order of magnitude more clusters at $z > 2$ than will be discovered with current surveys. Additionally, the gravitational lensing maps reconstructed from CMB survey data will provide unique mass calibration measurements for the highest redshift systems.
\vspace{0.1in}
\noindent \textbf{Combining with spectroscopic surveys}
Finally, while not expected to be drivers of cluster catalog production, wide-area spectroscopic surveys such as DESI and future, more ambitious surveys will also play critical roles in mitigating systematic biases in next-generation cluster analyses: large spectroscopic samples are crucial for calibrating photometric redshifts, both for the cluster samples themselves and for the source galaxies used in weak lensing mass calibration. Spectroscopic surveys are also important for constraining certain cluster cosmology systematics (e.g., quantifying the contamination of optical cluster samples by line-of-sight structure).
\section{Modeling \& Analysis challenges}
\label{sec:model_analysis}
Through measurements made with a panoply of instruments, cosmological data in the next decade will usher in a new era of large-scale, multi-wavelength, and overlapping surveys that opens up the exciting prospect of \emph{analyzing all datasets simultaneously}. As we have demonstrated above, significant additional information can be extracted from correlations between these datasets. However, modeling multi-survey correlations necessarily requires additional work beyond that typically undertaken by single surveys. Here we highlight some of the unique challenges presented by such analyses.
Part of the power of cross-survey analyses will come from the fact that they utilize information from multiple surveys at once from both linear and non-linear scales. Theoretical forecasts of the cosmological constraining power of the nonlinear regime now date back many years \cite{zentner_etal13,reid2014,krause_etal17,Shirasaki2020,salcedo2020}, and indicate that using smaller-scale information can result in factors of 2-4 improvement on dark energy constraints beyond present-day capabilities. Of course, the natural question arises as to whether these gains can be realized in practice, or whether the need to marginalize over nuisance parameters capturing systematic uncertainty result in an excessive loss of constraining power. Recent work analyzing BOSS galaxy samples has shown that these gains could indeed be a reality \cite{wibking_etal20,lange_hearin_2021,chapman_etal21}; by including measurements from nonlinear scales, these recent analyses have achieved a full factor of 2 improvement beyond previous BOSS analyses that restricted attention to the quasi-linear regime. Harvesting information from nonlinear scales nonetheless requires an expansion of the model parameter space, and so incorporating multi-wavelength capabilities into these models would enable analyses to leverage information from multiple surveys to break nuisance parameter degeneracies. There is a clear opportunity for synergy in the development of these capabilities, since numerous forecasts have also established the potential to achieve comparable gains by leveraging multi-wavelength large-scale structure measurements that jointly analyze galaxy clusters \cite{salcedo_etal20_cluster_crossx,nicola_etal20,eifler_etal21}.
Further enhancements beyond standard analyses of large-scale structure come from utilizing higher-order statistics. While second-order statistics such as clustering and lensing have been the default method in analyzing cosmological data to date, higher-order statistics are expected to unveil new information about astrophysics \cite{behroozi_etal21}, the galaxy--halo connection \cite{tinker_etal08,wang_etal19}, as well as cosmology \cite{Uhlemann_etal20,banerjee_abel_2020}. Carrying out such analyses together with multi-survey cross-correlations has potential to improve control over systematic effects such as fiber collisions \cite{guo_etal12_fiber_collisions} and baryonic effects \cite{foreman_etal20}, and so in order to achieve maximal and robust returns from higher-order statistics, it will be necessary for the community to invest at qualitatively new levels in the development of sophisticated modeling efforts with capability to address these challenges.
Contemporary efforts to derive cosmological constraints from nonlinear scales (both with and without use of higher-order statistics) are typically built upon simplistic empirical models such as the Halo Occupation Distribution (HOD). Due to the very formulation of HOD-type models, incorporating new constraints from more than a single tracer galaxy population requires a significant expansion of the parameter space, and/or reliance upon plausibly-violated assumptions about the galaxy-halo connection. Thus conventional halo occupation models actually {\em penalize} attempts to incorporate new constraining data. This older generation of models was devised at a time when the reliability of cosmological simulations to resolve halo substructure was not yet established, and so halo occupation models are founded upon host halos identified at a particular simulated snapshot; considerable progress has been made during the intervening years on the quality of both cosmological simulations as well as the associated data products, and subhalo catalogs with merger trees are becoming widely available for high-resolution, survey-scale simulations \cite{Chuang_etal19_unit_sims,heitmann_etal_last_journey,Ishiyama_etal21_uchuu,bose_etal21}. In this sense, conventional techniques designed to harvest cosmological information in the nonlinear regime, such as halo occupation models, bear the mark of the single-survey era in which they were developed.
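To make the parameter-counting concern concrete, here is a minimal sketch of the standard five-parameter HOD mean occupations (parameter values are illustrative defaults, not fits to any survey):
\begin{verbatim}
import numpy as np
from scipy.special import erf

def mean_ncen(log_m, log_mmin=12.0, sigma_logm=0.3):
    # Mean central occupation: a smoothed step in halo mass.
    return 0.5 * (1.0 + erf((log_m - log_mmin) / sigma_logm))

def mean_nsat(log_m, log_m0=12.2, log_m1=13.3, alpha=1.0):
    # Mean satellite occupation: a power law above a cutoff mass,
    # conventionally modulated by the central occupation.
    m = 10.0 ** np.asarray(log_m)
    excess = np.clip(m - 10.0 ** log_m0, 0.0, None)
    return mean_ncen(log_m) * (excess / 10.0 ** log_m1) ** alpha
\end{verbatim}
Every additional tracer population modeled this way introduces its own set of occupation parameters, which is exactly the proliferation described above when multiple samples from multiple surveys are analyzed jointly.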
Historically, generating physically realistic multi-wavelength predictions has required modeling approaches such as {\em hydrodynamical simulations} or {\em Semi-Analytic Models} (SAMs). While such models remain irreplaceable in the effort to understand the detailed physics of galaxies and clusters, the scientific payload of multi-wavelength cross-correlations can only be delivered with expansive explorations of parameter space based on high-resolution, Gpc-scale simulations, and so direct constraints based on traditional implementations of these models may be out of reach for the 2020s. Thus, \textit{theoretical techniques with practical capability to conduct complete, multi-survey cosmological inference currently do not exist, and so the field of theoretical cosmology is ill-equipped for the quality, richness, and volume of cosmological data that will arrive in the 2020s.}
Considerable recent progress has been made by a new generation of empirical models that bridge the gap between the level of complexity achieved by SAMs and the computational efficiency of empirical models, e.g., UniverseMachine \cite{behroozi_etal18} and EMERGE \cite{moster_etal17}. The ability of these models to make CPU-efficient predictions across redshift is quite promising, but significant further advances are needed on both the modeling and computation side for this new approach to conduct multi-wavelength inference with survey-scale simulations and emerging computing architectures. Towards this end, an emerging trend spanning numerous fields in computational science \cite{Kochkov2021_ML_CFD,jaxmd2020,hafner_veros_2018} is to build prediction pipelines within software frameworks for automatic differentiation such as JAX \cite{jax2018github} and TensorFlow \cite{tensorflow2015_whitepaper}, which are being actively developed to support the performance needs of contemporary deep learning applications. This approach to generating autodiff-based predictions has now been applied in a variety of cosmological applications, including simulations of the density field \cite{modi_lanusse_seljak_2021_flowpm}, halo models \cite{jax_cosmo}, and simulation-based modeling of galaxy SEDs \cite{hearin_etal21_dsps,hearin_etal21_shamnet}; in addition to creating the capability to leverage gradient information via autodiff, large-scale structure pipelines constructed in this fashion naturally leverage the performance of these libraries on GPUs and other accelerator devices, and thereby anticipate the computing resources that will be available in the 2020s. In order to meet the predictive needs associated with the incoming flood of multi-wavelength astronomical data, and to maximize the scientific returns of the upcoming surveys, we consider it critical and urgent for the cosmology community to invest in the development of a new generation of modeling approaches that builds upon this progress and addresses the key limitations of contemporary techniques.
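As a toy illustration of the autodiff pattern (this is not any survey's actual pipeline), the following snippet differentiates a simple two-parameter spectrum prediction through a goodness-of-fit statistic using JAX:
\begin{verbatim}
import jax
import jax.numpy as jnp

def model_cl(params, ell):
    # Toy angular power spectrum: an amplitude and a tilt.
    amplitude, tilt = params
    return amplitude * (ell / 100.0) ** tilt

def chi2(params, ell, data, sigma):
    resid = (model_cl(params, ell) - data) / sigma
    return jnp.sum(resid ** 2)

# Gradients with respect to the model parameters come for free,
# enabling gradient-based optimization and sampling on GPUs.
grad_chi2 = jax.grad(chi2)
g = grad_chi2(jnp.array([1.0, -1.2]),
              jnp.arange(100.0, 1000.0), jnp.zeros(900), 1.0)
\end{verbatim}
The same pattern scales to simulation-based predictions, which is what makes the frameworks cited above attractive for survey-scale inference.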
Beyond the technical challenges associated with cross-survey analyses, there are also practical difficulties associated with this work. Any such analysis necessarily requires detailed knowledge of data products generated by multiple surveys. Some of this information may be proprietary, and not easily shared. Previous cross-survey analyses have typically waited until data products become public (thereby delaying results) or have operated through cross-survey memoranda of understanding (MoU). Relative to single-survey analyses, analyses conducted through MoU are often subject to additional bureaucratic hurdles that can delay progress and unnecessarily increase workloads. These difficulties can be significant enough to discourage cross-survey analyses, a clearly suboptimal outcome.
\section{Growth Opportunities}
\label{sec:recs}
In the previous sections we have detailed a wide range of opportunities enabled by combining and cross-correlating large, multi-wavelength, datasets from multiple cosmological experiments. In particular, joint-probe analyses across surveys have the potential to provide complementary information to single-probe experiments that could otherwise be limited by astrophysical and observational systematic effects. We have also highlighted some of the challenges that such analyses face. To capitalize upon these opportunities and address the associated challenges, a qualitatively new level of investment in cross-survey, joint-probe infrastructure is required -- this includes simulations, associated modeling, coordination of data sharing, survey strategy, and training for the next-generation of scientists in a way that transcends any individual project or collaboration. The required investments are substantial, but they are critical for the next generation of cosmic surveys to fully realize their potential. Below we present a summary of future opportunities for growth that have potential to multiplicatively enhance the scientific returns of cosmological surveys in the 2020s:
\begin{itemize}
\item \textbf {Joint simulations:} Nearly all of the multi-probe analyses discussed above require high-fidelity synthetic data that is validated against observational data. These simulations can be computationally very expensive, and an intensive human-resource effort is required in order to generate synthetic data of sufficiently high quality to merit this expense. Considerable progress has been made in this area in recent years, but efforts are typically limited to an individual survey, or even an individual probe in isolation. For example, most CMB simulations do not include physically realistic models of galaxy populations at low redshift, and synthetic datasets tailored for optical surveys of galaxies do not commonly include realistic treatments of the diffuse gas that can be observed in CMB surveys via, e.g., the SZ effect.
As a result, there is a steeply increasing need in the field for simulations that are suitable for multi-wavelength cross-correlation analyses. This widespread need reflects a key opportunity for further growth in the area of generating multi-survey synthetic data, and the wider cosmology community stands to greatly benefit from substantially increased support for these efforts.
\item \textbf{Joint modeling and analysis:} Current toolkits such as \texttt{Cobaya} \cite{Cobaya}, \texttt{Monte Python} \cite{MontePython}, \texttt{CosmoLike} \cite{cosmolike}, and \texttt{CosmoSIS} \cite{Zuntz:2015} have been successful in combining a number of ``standard'' large-scale structure probes and deriving posteriors on cosmological parameters through Bayesian analyses. Sophisticated modeling efforts with capability to make multi-wavelength predictions that leverage high-resolution simulations are commonly implemented in custom codebases that require highly specialized techniques in order to infer cosmological parameters in a Bayesian fashion. Here there exists another exciting opportunity to fully integrate a new generation of simulation-based models together with cosmological inference pipelines, leveraging new technologies such as machine learning methods, GPU interfaces, automatic gradient approaches, and likelihood-free inference methods.
\item \textbf{New initiatives enabling joint analyses:} By construction, multi-survey analyses in the era of large collaborations are often not hosted under one single collaboration with well-established communication structure and analysis tools. In the present day, such analyses are often enabled by MoUs and other agreements, or carried out with public data. This structure could create an inherent barrier for multi-survey analyses, and suppress potential opportunities for exciting discoveries, while new levels of effort in cross-survey collaboration could offer major benefits to the scientific returns of future surveys. Such initiatives could include coordination of survey strategy to ensure overlap, joint-processing of data, and coordination of cross-survey blinding strategies. New funding lines that focus on multi-survey cross-correlation analyses could be an effective, modest way to address some of these limitations. The scope of these problems, however, warrants consideration of new ``centers'' focusing on development of joint simulation/modeling/analysis tools, as well as training/education for the next generation cosmologists who will be confronted with data in the 2020s that is of a qualitatively new character from previous decades.
\item \textbf{Support for proposed cosmic survey instruments:} The enormous potential of joint analyses discussed in this white paper is necessarily built on the success of single-probe experiments. Enabling cross-survey analyses requires support for wide-field cosmic surveys including those listed in Table~\ref{tab: survey_summary}, and many more described in accompanying Snowmass white papers \citep{CMB-HD-Snowmass,CF4_DESI2_Snowmass,CF3_DM_facility_Snowmass, mmLIM_Snowmass}. In return, joint-probe analyses will provide critical and complementary information to the understanding of cosmic acceleration and other fundamental physics.
\end{itemize}
\bibliographystyle{JHEP}
Let $L$ be a rational function with complex coefficients.
Any representation of $L$ in the form $L=P\circ W$, where $P$ and $W$ are
rational functions of degree greater than one and the symbol $\circ$ denotes the superposition of functions, that is $P\circ W=P(W)$, is called a decomposition of $L.$
Two decompositions $L=P_1\circ W_1$ and $L=P_2\circ W_2$ of the same function $L$
are called equivalent if there exists a rational function $\mu$ of degree one such that
$$P_2=P_1\circ \mu, \ \ \ W_2=\mu^{-1}\circ W_1.$$
One of the main problems of the decomposition theory
of rational functions is to describe
possible solutions of
the equation
\be \label{ma} L=P_1\circ W_1=P_2\circ W_2\ee in the case where decompositions $L=P_1\circ W_1$ and $L=P_2\circ W_2$
are not equivalent.
In the case where $L$ is a polynomial, a description of solutions of \eqref{ma} was given by Ritt in his paper \cite{r2}
which was a starting point of the decomposition theory of rational functions.
Roughly speaking, in this case solutions of \eqref{ma} up to equivalency reduce
either to the solutions
$$z^n \circ z^rR(z^n)=z^rR^n(z) \circ z^n,$$
where $R$ is a polynomial, and $r\geq 0,$ $n\geq 1,$ or to the solutions
\be \label{gfd} T_n \circ T_m= T_m \circ T_n,\ee
where $T_n, T_m$ are the Chebyshev polynomials, which can be defined by the equality $T_n(\cos \theta)=\cos n\theta.$
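For instance, taking $n=2,$ $r=1,$ $R(z)=z+1$ in the first family gives
$$z^2 \circ z(z^2+1)=z^2(z^2+1)^2=z(z+1)^2 \circ z^2,$$
while for \eqref{gfd} with $n=2,$ $m=3$ one checks directly from $T_2=2x^2-1,$ $T_3=4x^3-3x$ that
$$T_2\circ T_3=32x^6-48x^4+18x^2-1=T_3\circ T_2\,(=T_6).$$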
A description of solutions of \eqref{ma} in the case
where $L$ is a Laurent polynomial, or more generally any rational function with at most two poles, was obtained in the papers \cite{pak},\cite{zi}.
For arbitrary rational functions, a description of solutions of \eqref{ma} is known only in particular cases. Namely, a description of commuting rational functions (that is, of solutions of \eqref{ma} with $P_1=W_2$ and
$P_2=W_1$) was given in the classical papers of Julia, Fatou, and Ritt \cite{f}, \cite{j}, \cite{r}, and recently a description of semi-conjugate rational functions (that is, of solutions of \eqref{ma} with $P_1=W_2$) was given in \cite{pse}.
The decomposition theory of polynomials turned out to be closely related to the following so-called ``polynomial moment problem''. Let $P,Q$ be complex polynomials; what are the conditions implying that the equalities
\be \label{4} \int_{0}^1 P^idQ=0, \ \ \ i\geq 0,\ee hold?
Indeed, it is easy to see using the change $z\rightarrow W(z)$ that \eqref{4} is satisfied whenever there exist
polynomials $\widetilde P,$ $\widetilde Q,$ and $W$ such that
\be \label{2}
P=\widetilde P\circ W, \ \ \
\ Q=\widetilde Q\circ W, \ \ \ W(0)=W(1).
\ee
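Spelling out the change of variables: under \eqref{2} we have
$$\int_{0}^1 P^i\,dQ=\int_{0}^1 \widetilde P^i(W)\,\widetilde Q^{\,\prime}(W)\,W'\,dz=\int_{W(0)}^{W(1)}\widetilde P^i\,d\widetilde Q=0,$$
since $W(0)=W(1).$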
Furthermore, it was shown in \cite{pm} that if polynomials $P,$ $Q$ satisfy \eqref{4}, then there exist polynomials $Q_j$ such that $Q=\sum_j Q_j$ and the equalities
\be \label{cc}
P=\tilde P_j\circ W_j, \ \ \
Q_j=\tilde Q_j\circ W_j, \ \ \ W_j(0)=W_j(1)
\ee hold
for some polynomials $\tilde P_j, \tilde Q_j, W_j$. Thus, the most interesting solutions of the polynomial moment problem
arise from polynomials having ``multiple'' decompositions
\be \label{sol} P=\widetilde P_1\circ W_1=\widetilde P_2\circ W_2=\dots=\widetilde P_s\circ W_s.\ee
Polynomial solutions of \eqref{sol} were described in the paper \cite{pakk}, where the cor\-responding generalization of the result of Ritt about solutions of \eqref{ma}
was obtained. Notice that in the study of the polynomial moment problem
one can restrict oneself to the case where the considered polynomials have real coefficients. However,
the results of \cite{pm}, \cite{pakk} imply that in the real case a description of solutions of \eqref{4} is only a bit easier than in the complex one.
The polynomial moment problem naturally appears in the study of the center problem for the Abel differential equation with polynomial coefficients (see e. g. the recent papers \cite{bry}, \cite{bpy} and the bibliography therein) which is believed to be a simplified analog
of the center problem for the Abel differential equation whose coefficients are
trigonometric polynomials over $\R$. In turn, the latter problem
is closely related to the classical center-focus problem of Poincar\'e
(\cite{cher}).
In the same way as the center problem for the Abel equation with polynomial coefficients leads to the polynomial moment problem,
the center problem for the Abel equation with trigonometric coefficients leads to the following ``trigonometric moment problem''.
Let $$p=p(\cos \theta,\sin \theta),\ \ \ q=q(\cos\theta ,\sin\theta )$$ be trigonometric polynomials over $\R$, that is elements of the ring $\R_t[\theta]$ generated over $\R$ by the functions
$\cos \theta$, $\sin \theta$.
What are the conditions implying that the equalities
\be \label{1} \int_0^{2\pi }p^idq=0, \ \ \ i\geq 0,\ee
hold? As in the case of the polynomial moment problem, one can consider a complexified version of this problem
(see
\cite{ppre}, \cite{ppz}, \cite{abc}). However, examples constructed in \cite{ppz}, \cite{abc} suggest that in the trigonometric case the complex version of the problem may be much more complicated than the real one.
Again, a natural sufficient condition for \eqref{1} to be satisfied is related with compositional properties of $p$ and $q$.
Namely, it is easy to see that if there exist $P, Q\in \R[x]$ and $w\in \R_t[\theta]$ such that
\be \label{c} p=P\circ w, \ \ \ \ q=Q\circ w, \ee
then \eqref{1} hold. Furthermore, if for given $p$ there exist several such $q$ (with different $w$),
then \eqref{1} obviously holds for their sum.
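Again the change of variables makes this explicit: if \eqref{c} holds, then
$$\int_0^{2\pi }p^i\,dq=\int_0^{2\pi }P^i(w)\,Q'(w)\,w'\,d\theta=\int_{w(0)}^{w(2\pi)}P^i\,dQ=0,$$
since $w,$ being a trigonometric polynomial, satisfies $w(2\pi)=w(0).$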
Thus, the trigonometric moment problem leads to the problem of description of solutions
of the equation
\be \label{mai} p=P_1\circ w_1=P_2\circ w_2,\ee where $p,w_1,w_2\in \R_t[\theta]$ and $P_1, P_2 \in \R[x],$ and
the main goal of this paper is to provide such a description. Notice that, besides its relation to the trigonometric moment problem,
functional equation \eqref{mai} or its shortened version
\be \label{maii} P_1\circ w_1=P_2\circ w_2,\ee where as above $w_1,w_2\in \R_t[\theta]$ and $P_1, P_2 \in \R[x],$
seems to be
interesting in its own right.
In particular, it contains among its solutions the best-known
trigonometric identity $\sin^2 \theta=1-\cos^2 \theta$.
Observe that the problem of description of solutions of \eqref{maii} subsumes the problem of description of polynomial solutions of \eqref{ma} over $\R$ since for any
polynomial solution of \eqref{ma} and any $w\in \R_t[\theta]$
we obtain a solution of \eqref{maii} setting
$$w_1=W_1\circ w, \ \ \ w_2=W_2\circ w.$$
Further, observe that if $P_1,P_2,w_1,w_2$ is a solution of \eqref{maii}, then for any $k\in \N$ and $b\in \R$
we obtain another solution $P_1,P_2,\widetilde w_1,\widetilde w_2$ setting
$$ \widetilde w_1(\theta) = w_1(k\theta+b) , \ \ \ \widetilde w_2(\theta) = w_2(k\theta+b). $$
Finally, if $P_1,P_2,w_1,w_2$ is a solution of \eqref{maii}, then
for any $U\in \R[t]$ we obtain another solution $\widetilde P_1,\widetilde P_2,w_1,w_2$
setting
$$\widetilde P_1=U\circ P_1, \ \ \ \widetilde P_2=U\circ P_2.$$
Let $p$ be an element of $\R_t[\theta]$, and
$p=P_1\circ w_1$ and $p=\widetilde P_1\circ \widetilde w_1$ be two decompositions of
$p$, where $w_1,\widetilde w_1 \in \R_t[\theta]$ and $P_1,\widetilde P_1\in \R[x]$.
We will call
these decompositions equivalent, and will use the notation $P_1\circ w_1\sim \widetilde P_1\circ \widetilde w_1$,
if there exists $\mu\in \R[x]$ of degree one such that
$$\widetilde P_1=P_1\circ \mu, \ \ \ \widetilde w_1=\mu^{-1}\circ w_1.$$
We also will use the symbol $\sim$ for equivalent decompositions of rational functions defined earlier.
Under the above notation our main result about solutions of \eqref{maii} may be formulated as follows.
\bt Assume that
$P_1, P_2 \in \R[x]$ and $w_1,w_2\in \R_t[\theta]$ are not constant and satisfy the equality
$$P_1\circ w_1=P_2\circ w_2.$$
Then, up to a possible replacement of $P_1$ by $P_2$ and $w_1$ by $w_2$, one of the following conditions holds:
\vskip 0.2cm
\noindent 1) \ \
There exist $U,\widetilde P_1,\widetilde P_2,W_1,W_2\in \R[x]$ and $\widetilde w \in \R_t[\theta]$ such that
$$ P_1=U\circ \widetilde P_1, \ \ \ P_2=U\circ \widetilde P_2, \ \ \
w_1=W_1\circ \widetilde w, \ \ \ w_2=W_2\circ\widetilde w ,\ \ \ \widetilde P_1\circ W_1=\widetilde P_2\circ W_2,$$
and either
\begin{itemize}
\item[a)] $$
\widetilde P_1\circ W_1\sim z^n \circ z^rR(z^n),
\ \ \ \ \ \ \widetilde P_2\circ W_2\sim z^rR^n(z) \circ z^n,$$
where $R\in \R[x]$, $r\geq 0,$ $n\geq 1,$ or
\item[b)] $$
\widetilde P_1\circ W_1\sim T_{n} \circ T_m, \ \ \ \ \ \
\widetilde P_2\circ W_2\sim T_m\circ T_n, $$
where $T_{n},T_{m}$ are the Chebyshev polynomials, $m,n\geq 1,$ $\GCD(n,m)=1$;
\end{itemize}
\vskip 0.2cm
\noindent 2)\ \ There exist
$U, \widetilde P_1, \widetilde P_2\in \R[x],$ $\widetilde w_1,$ $\widetilde w_2\in \R_t[\theta],$ and a polynomial $W(\theta)=k\theta +b,$ where $k\in \N,$ $b\in \R$, such that
$$ P_1=U\circ \widetilde P_1, \ \ \ P_2=U\circ \widetilde P_2, \ \ \
w_1=\widetilde w_1\circ W, \ \ \ w_2=\widetilde w_2\circ W,\ \ \ \widetilde P_1\circ \widetilde w_1=\widetilde P_2\circ \widetilde w_2,$$
and either
\begin{itemize}
\item[c)]
$$\widetilde P_1\circ \widetilde w_1\sim z^2 \circ \,
\cos \theta\, S(\sin \theta), \ \ \ \ \ \ \widetilde P_2\circ \widetilde w_2\sim (1-z^2)\, S^2(z)\circ \sin\theta ,$$
where $S\in \R[x]$, or
\item[d)]
$$\widetilde P_1\circ \widetilde w_1\sim -T_{nl} \circ \cos\left( \frac{(2s+1)\pi}{nl}+m\theta\right), \ \ \ \ \ \
\widetilde P_2\circ \widetilde w_2\sim T_{ml} \circ \cos(n\theta),$$
where $T_{nl},T_{ml}$ are the Chebyshev polynomials, $m,n\geq 1,$ $l>1$, $0\leq s < nl,$ and $\GCD(n,m)=1$.
\end{itemize}
\et
Notice that solutions of types a) and b) reduce to polynomial solutions of \eqref{ma}, while
solutions of type c) generalize the identity $\sin^2 \theta=1-\cos^2 \theta$. Further, solutions
of type d) can be considered as a generalization of the identity $$T_n\circ \cos m\theta= T_m\circ \cos n\theta,$$ although
this identity itself is an example of a solution of type b) since $$\cos m\theta=T_m\circ \cos\theta, \ \ \
\cos n\theta=T_n\circ \cos\theta.$$
Our approach to functional equation \eqref{maii} relies on the isomorphism
$$\phi:\, \ \cos \theta\rightarrow \left(\frac{z+1/z}{2}\right), \
\sin \theta\rightarrow \left(\frac{z-1/z}{2i}\right),$$
between the ring $\R_t[\theta]$ and a subring of the ring $\C[z,1/z]$ of complex Laurent polynomials.
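Notice that $\phi$ amounts to the substitution $z=e^{i\theta}$, and that it respects the relation $\cos^2\theta+\sin^2\theta=1$:
$$\left(\frac{z+1/z}{2}\right)^2+\left(\frac{z-1/z}{2i}\right)^2=\frac{z^2+2+z^{-2}}{4}-\frac{z^2-2+z^{-2}}{4}=1.$$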
Clearly, any decomposition $p=P\circ w$ of $p\in \R_t[\theta]$, where $P\in \R[x]$ and $w\in \R_t[\theta]$, or more generally where
$P\in \R(x)$ and $w$ is contained in the quotient field $\R_t(\theta)$ of $\R_t[\theta]$, descends to a decomposition
$\phi(p)=P\circ \phi(w)$ of $\phi(p)$, making it possible to use results about decompositions of Laurent polynomials into compositions of rational functions
for the study of decompositions of trigonometric polynomials.
The paper is organized as follows. In the second section we recall some basic facts about decompositions of Laurent polynomials and prove their analogues for decompositions in $\R_t[\theta]$.
We also
show (Corollary \ref{c1}) that if $p\in \R_t[\theta]$, then any equivalency class of decompositions of $\phi(p)\in\C[z,1/z]$ into a composition of rational functions over $\C$ contains a representative which lifts to a decomposition $p=P\circ w$, where
$P\in \R(x)$ and $w\in \R_t(\theta)$. This result shows that the decomposition theory for $\R_t[\theta]$ is ``isomorphic''
to the decomposition theory for a certain subclass of complex Laurent polynomials,
and permits one to deduce results about decompositions in $\R_t[\theta]$
from the ones in $\C[z,1/z]$.
Finally, in the third section of the paper, based on the results of the second section and results about decompositions of Laurent polynomials, we prove
Theorem 1.1.
\section{Decompositions in $\R_t[\theta]$ and in $\C[z,1/z]$}
The goal of this section is to show that decomposition theory for $\R_t[\theta]$ can be considered as a ``part'' of
the decomposition theory of complex Laurent polynomials.
It is well known that $\R_t[\theta]$ is isomorphic to
a subring of the field $\R(x)$, where the isomorphism $\psi:\,\R_t[\theta]\rightarrow\R(x)$ is defined by the formulas
\be \label{t0} \psi(\sin \theta)=\frac{2x}{1+x^2}, \ \ \ \psi(\cos \theta)=\frac{1-x^2}{1+x^2}.\ee
Furthermore, the isomorphism $\psi$ extends to an isomorphism between $\R_t(\theta)$ and $\R(x)$,
where
$$x=\psi\left(\tan(\theta/2)\right)=\psi\left(\frac{\sin \theta}{1+\cos\theta}\right).$$
In particular, this implies by the
L\"uroth theorem that any subfield $k$ of $\R_t(\theta)$ has the form $k=\R(b)$ for some $b\in \R_t(\theta).$
In this paper however we will use the isomorphism $\phi$, defined by the formulas
\be \la{t} \phi(\cos \theta)= \frac{z+1/z}{2}, \ \ \
\phi(\sin \theta)=\frac{z-1/z}{2i},\ee
between the ring $\R_t[\theta]$ and a subring of the ring $\C[z,1/z]$ of complex Laurent polynomials,
which seems to be more convenient for the study of compositional properties of $\R_t[\theta]$.
For brevity, we will denote the ring $\C[z,1/z]$ by ${\cal L}[z]$ and the image of
$\R_t[\theta]$ in ${\cal L}[z]$ under the isomorphism $\phi$ by ${\cal L_{\R}}[z]$. It is easy to see that ${\cal L_{\R}}[z]$ consists of Laurent polynomials
$L$ such that $\bar L(1/z)=L(z),$ where $\bar L$ denotes the Laurent polynomial obtained from $L$ by complex conjugation of all its coefficients.
Clearly, the isomorphism $\phi$ extends to an isomorphism between $\R_t(\theta)$ and ${\cal L_{\R}}(z)$, where ${\cal L_{\R}}(z)$
consists of rational functions $R$ satisfying the equality
$\bar R(1/z)=R(z).$
Any decomposition $p=P\circ w$, where $p\in \R_t[\theta]$,
$P\in \R(x),$ and $w\in \R_t(\theta)$, obviously descends to a decomposition
$\phi(p)=P\circ \phi(w)$, where $\phi(p)\in {\cal L_{\R}}[z]$ and $\phi(w)\in {\cal L_{\R}}(z)$.
However, it is clear that $L=\phi(p)$ may have decompositions $L=A\circ B,$ where $A,B\in \C(z),$ such that coefficients of $A$ are not real and
$B$ is not contained in ${\cal L_{\R}}(z)$. In this context the following simple lemma is useful.
\bl \label{l0} Let $L\in {\cal L_{\R}}(z)$ and
$L=A\circ B$ be a decomposition of $L$ into a composition of rational functions $A,B\in \C(z).$
Then the inclusion $B\in {\cal L_{\R}}(z)$ implies the inclusion $A\in \R(x)$.
\el
\pr Indeed, since $L,B\in {\cal L_{\R}}(z)$, we have:
$$A\circ B=\bar A\circ \bar B\circ 1/z=\bar A\circ B,$$ implying that $\bar A=A.$ \qed
\vskip 0.2cm
We will call a Laurent polynomial $L$ proper if $L$ is neither a polynomial in $z$, nor a polynomial in $1/z,$ or in other words
if $L$ has exactly two poles.
The lemma below is a starting point of the decomposition theory of Laurent polynomials (see \cite{pak},\cite{zi}).
\bl \label{l1} Let $L=P\circ W$ be a decomposition of $L\in {\cal L}[z]$ into a composition of rational functions $P,W\in \C(z).$
Then there exists $\mu\in \C(z)$ of degree one
such that either $P\circ \mu $ is a polynomial and $\mu^{-1}\circ W$ is a Laurent polynomial, or
$P\circ \mu $ is a Laurent polynomial and $\mu^{-1}\circ W=z^d$, $d\geq 1.$
\el
\pr Indeed, it follows easily from
$$L^{-1}\{\infty \}=W^{-1}\{P^{-1}\{\infty \}\}\subseteq \{0,\infty\}$$ that either $P^{-1}\{\infty \}$ consists of a single point $a\in \C\P^1$ and $W^{-1}\{a \}\subseteq\{0,\infty\},$
or $P^{-1}\{\infty \}$ consists of two points $a,b\in \C\P^1$ and $W^{-1}\{a,b \}=\{0,\infty\}.$
In the first case
there exists a rational function
$\mu\in \C(z)$ of degree one such that $P\circ \mu $ is a polynomial and $\mu^{-1}\circ W$ is a Laurent polynomial (which is proper if
and only if $L$ is proper). In the second case there exists $\mu\in \C(z)$ of degree one such that $P\circ \mu $ is a proper Laurent polynomial and $\mu^{-1}\circ W=z^d,$ $d\geq 1$.
\qed
\vskip 0.2cm
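For example, the proper Laurent polynomial $L=z^2+z^{-2}$ admits decompositions of both types appearing in Lemma \ref{l1}:
$$z^2+\frac{1}{z^2}=\left(x^2-2\right)\circ \left(z+\frac{1}{z}\right)=\left(x+\frac{1}{x}\right)\circ z^2,$$
where in the first decomposition the outer function is a polynomial and the inner one is a Laurent polynomial, while in the second one the outer function is a proper Laurent polynomial and the inner one is $z^d$ with $d=2$.
\vskip 0.2cm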
The following statement is a ``trigonometric'' analogue of Lemma \ref{l1} and essentially is equivalent to
Proposition 21 of \cite{ggl} and to Theorem 5 of \cite{ga1}.
Notice however that the proofs given in \cite{ggl}, \cite{ga1}
are much more complicated than the proof given below. The idea to relate decompositions in $\R_t[\theta]$ with decompositions in ${\cal L}[z]$
was proposed in \cite{pakov}.
\bl \label{l2} Let $p=P\circ w$ be a decomposition of $p\in \R_t[\theta]$ into a composition of $P\in \R(x)$ and $w\in \R_t(\theta).$ Then there exists a rational function
$\mu\in \R(x)$ of degree one such that either $P\circ \mu \in \R[x]$ and $\mu^{-1}\circ w\in \R_t[\theta]$, or
$P\circ \mu\in \R(x)$ and $\mu^{-1}\circ w=\tan(d\theta/2)$, $d\geq 1.$
\el
\pr
Setting $$L=\phi(p), \ \ \ W=\phi(w)$$ and considering the equality $L=P\circ W$,
we conclude as above that either
\be \label{qaz} P^{-1}\{\infty \}=\{a\} \ \ {\rm and} \ \ W^{-1}\{a \}=\{0,\infty\}\ee for some $a\in \C\P^1,$ or
\be \label{qaz1} P^{-1}\{\infty \}=\{a,b\} \ \ {\rm and} \ \ W^{-1}\{a,b \}=\{0,\infty\}\ee
for some $a, b\in \C\P^1.$
Since any polynomial with real coefficients is a product of linear and quadra\-tic polynomials with real coefficients, if \eqref{qaz} holds, then the first equality in \eqref{qaz} implies that either $P\in \R[x]$ and $W\in {\cal L_{\R}}[z]$, or
$a\in \R$. In the first case, since
$\phi$ is an isomorphism between $\R_t[\theta]$ and ${\cal L_{\R}}[z],$ we conclude that
$w\in \R_t[\theta].$ On the other hand, if $a\in \R$, then setting $\mu=a+1/z$ we see that
$P\circ \mu \in \R[x]$ and $\mu^{-1}\circ W\in {\cal L}[z]$. Furthermore, since $W\in {\cal L_{\R}}(z)$ and $\mu$ has real coefficients,
the function $\mu^{-1}\circ W$ is contained in ${\cal L_{\R}}[z]$ implying
that $\mu^{-1}\circ w\in\R_t[\theta].$
If \eqref{qaz1} holds, then we can modify $\mu\in \C(z)$ from Lemma \ref{l1} so that
\be \la{iuy} \mu^{-1}\circ W=\frac{1}{i}\frac{z^d-1}{z^d+1}=\frac{1}{i}\left(\frac{z^{d/2}-z^{-d/2}}{z^{d/2}+z^{-d/2}}\right)=\phi(\tan(d\theta/2)), \ \ \ d\geq 1.\ee
Furthermore, since the functions $\phi(\tan(d\theta/2))$ and $W$ are contained in ${\cal L_{\R}}(z)$, it follows from Lemma \ref{l0} that
$\mu^{-1}\in \R(x)$. Finally, it is clear that $P\circ \mu\in \R(x)$ and
$\mu^{-1}\circ w=\tan(d\theta/2).$
\qed
\vskip 0.2cm
Notice that if $p=P\circ w$ is a decomposition of $p\in \R_t[\theta]$ such that $P\in \R(x)$
and $w=\tan(d\theta/2)$, $d\geq 1,$ then $P$ has the form $P=A/(x^2+1)^k,$ where $A\in \R[x],$ $k\geq 1,$ and
$\deg A\leq 2k.$ This can be proved by arguments similar to the ones used in the proof of Lemma \ref{l2}.
Alternatively, we can observe that $\tan(d\theta/2)$ considered as a function of complex variable takes all the values in $\C\P^1$ distinct from $\pm i$.
Therefore,
the function $P$ may have poles only at points $\pm i$, since otherwise the composition
$p=P\circ w$ would not be an entire function.
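For example, the half-angle formula
$$\sin \theta=\frac{2x}{x^2+1}\circ \tan(\theta/2)$$
provides a decomposition of this form with $A=2x$ and $k=1$, so that $\deg A=1\leq 2k.$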
\vskip 0.2cm
Two different types of decompositions of Laurent polynomials appearing in Lemma \ref{l1}
correspond to two different types of
imprimitivity systems in their monodromy groups
(for more details concerning decompositions of rational functions with two poles we refer the reader to \cite{mp}).
Namely, if $L$ is a Laurent polynomial of degree $n$
we may assume that its monodromy group $G$ contains the permutation
$$h=(1\,2\,\dots\, n_1)(n_1+1\, n_1+2\, \dots\, n_1+n_2),$$ where
$1\leq n_1 \leq n,$ $0\leq n_2 <n,$
$n_1+n_2=n$. Furthermore, the equalities $n_1=n,$ $n_2=0$ hold if and only if $L$ is not proper.
Denote by $W_{i,d}^1$ (resp. by $W_{i,d}^2$) the set of numbers in the
segment $[1,n_1]$ (resp. $[n_1+1,n_1+n_2]$) that are congruent to $i$
modulo $d$.
Since $h$ must permute blocks of any imprimitivity system of $G$, it is easy to see that if $\f E$ is such a system, then
either there exists a number $d\vert n$ such that
any block of $\f E$ is equal to $W_{i_1,d}^1\cup W_{i_2,d}^2$ for some $i_1,i_2,$ $1\leq i_1,i_2\leq d,$
or there exist numbers
$d_1\vert n,d_2\vert n$
such that
\be \la{rav} n_1/d_1=n_2/d_2 \ee
and
any block of $\f E$ is equal either to $W_{i_1,d_1}^1$ for some $i_1,$ $1\leq i_1\leq d_1,$ or to
$W_{i_2,d_2}^2$ for some $i_2,$ $1\leq i_2\leq d_2.$
The imprimitivity systems of the first type correspond to decompositions $L=A(B),$ where $A$ is a polynomial and $B$ is a Laurent
polynomial, while imprimitivity systems of the second type correspond to decompositions $L=A(B),$ where $A$ is a proper Laurent polynomial and $B=z^d.$
\vskip 0.2cm
The following result coincides with Lemma 6.3 of \cite{pak}. For the reader's convenience we provide below a self-contained proof.
\bl \la{copo1} Let $A,B\in \C[z]\setminus \C$ and $L_1$, $L_2\in {\cal L}[z]\setminus \C$ satisfy
\be \la{tyr} A\circ L_1=B\circ L_2.\ee Assume additionally that $\deg A = \deg B$.
Then either
\be \label{barsu0} B=A\circ w^{-1}, \ \ \ L_2=w\circ L_1
\ee for some polynomial $w\in \C[z]$ of degree one,
or there exist $r\in \N$, $a\in \C$, and a root of unity $\nu $ such that
\be \label{barsu} w_1\circ L_1=\left(z^r+\frac{1}{z^r}\right)\circ (az), \ \ \ w_2\circ L_2=\left(z^r+\frac{1}{z^r}\right)\circ (a\nu z)\ee
for some polynomials $w_1,w_2\in \C[z]$ of degree one. \el
\pr Let $G$ be the monodromy group of the Laurent polynomial $L$ defined by either side of equality \eqref{tyr}. Then equality \eqref{tyr}
implies that $G$ has two imprimitivity systems $\f E_1$ and $\f E_2$ of the first type corresponding to the decompositions in \eqref{tyr}. Furthermore,
since $\deg A = \deg B$, the blocks of these systems have the same cardinality $l=\deg L/\deg A.$
If these systems coincide, then equalities \eqref{barsu0} hold for
some rational function
$w\in \C(z)$ of degree one which obviously is a polynomial.
On the other hand, if they are different, then the imprimitivity system $\f E_1\cap \f E_2$
necessarily belongs to the second type, and has blocks consisting of $r$
elements, where $2r=l.$ In particular, $L$ and $L_1,L_2$ are proper, and
the equalities
\be \label{rfv} L_1=\tilde L_1\circ W, \ \ \ L_2=\tilde L_2\circ W, \ee
hold for some rational functions
$\tilde L_1, \tilde L_2, W$, where $\deg \tilde L_1=\deg \tilde L_2=2.$
Applying now Lemma \ref{l1} to equalities \eqref{rfv} we conclude that
$$L_1=\left(\alpha_0+\alpha_1 z+\frac{\alpha_2}{z}\right)\circ z^r, \ \ \ \ L_2=\left(\beta_0+\beta_1z+\frac{\beta_2}{z}\right)\circ z^r,$$
for some $\alpha_0,\beta_0\in \C,$ and $\alpha_1, \alpha_2,\beta_1, \beta_2\in \C\setminus \{0\}$. Furthermore, equality \eqref{tyr} implies that
$$L_1=\left(\alpha_0+\alpha_1 z+\frac{\alpha_2}{z}\right)\circ z^r, \ \ \ \ L_2=\left(\beta_0+\alpha_1\nu _1 z+\frac{\alpha_2\nu _2}{z}\right)\circ z^r,$$
for some roots of unity $\nu _1,\nu _2$.
The lemma follows now from the equalities
$$\alpha_0+\alpha_1 z^r+\frac{\alpha_2}{z^r}=\left(\alpha_0+\frac{\alpha_1 z}{a^r} \right)\circ \left(z^r+\frac{1}{z^r}\right)\circ (az),$$
$$\beta_0+\alpha_1\nu_1 z^r+\frac{\alpha_2\nu_2}{z^r}=\left(\beta_0+\frac{\alpha_1\nu_1 z}{a^r\nu^r}\right)\circ \left(z^r+\frac{1}{z^r}\right)\circ (a\nu z),$$ where $a$ and $\nu$ are complex numbers satisfying $a^{2r}=\alpha_1/\alpha_2$ and $\nu^{2r}=\nu_1/\nu_2.$
\qed
\vskip 0.2cm
\bc \label{xom} Let $P\in \R[z]$ and $A,B\in \C[z]$ satisfy the equality $P=A\circ B$.
Then $A,B\in \R[z]$ whenever the leading coefficient of $B$ is real.
\ec
\pr Applying Lemma \ref{copo1} to the equality \be \label{pes} A\circ B=\bar A\circ \bar B\ee we conclude that $\bar B=\alpha B+\beta$, where $\alpha, \beta \in \C$.
Comparing the leading coefficients of the polynomials in the last equality we see that $\alpha=1$. It follows now from equality \eqref{pes} that
$A(z)=A(z+\beta)$ implying easily that $\beta=0$.
Finally, it follows from $\bar B=B$ and \eqref{pes} that $\bar A=A.$ \qed
\bl \label{uuii} Let $L=A\circ L_1$ be a decomposition of $L\in {\cal L_{\R}}[z]$ into a composition
of $A\in \C[z]$ and $L_1=\sum_{-n}^nc_iz^i \in{\cal L}[z].$ Assume additionally that
$c_{-n}=1/c_{n}.$ Then the leading coefficient of $A$ is real and $\vert c_n\vert =\vert c_{-n}\vert=1.$
\el
\pr
Let $\alpha$ be the leading coefficient of $A$.
Then $L \in {\cal L_{\R}}[z]$ implies that
\be \label{xyu+} \alpha c_n=\bar \alpha/\bar c_n.\ee
Multiplying this equality by its conjugated we obtain the equality $(\bar c_n c_n)^2=1$
implying that $c_n=1/\bar c_n$ or equivalently that $\vert c_n\vert=1$. Now \eqref{xyu+} implies that $\bar\alpha=\alpha$. \qed
\bt \label{2.1} Let $L=A\circ L_1$ be a decomposition of $L\in {\cal L_{\R}}[z]$ into a composition
of $A\in \C[z]$ and $L_1 \in{\cal L}[z].$ Then there exists a polynomial $v\in \C[z]$ of degree one
such that $A\circ v^{-1}\in \R[x]$ and $v\circ L_1\in {\cal L_{\R}}[z]$.
\et
\pr Since $L$ belongs to ${\cal L_{\R}}[z]$, the equality
$$A\circ L_1=\bar A \circ \bar L_1\circ 1/z$$ holds. Applying to this equality Lemma \ref{copo1} we conclude that either
\be \label{po} \bar L_1\circ 1/z=w\circ L_1,\ee for some polynomial $w=az+b,$ $a,b\in \C,$ or
\be \label{krot} v\circ L_1= cz^r+\frac{1}{cz^r}\ee for some
polynomial $v\in \C[z]$ of degree one and
$c\in \C.$
In the first case, setting $L_1=\sum_{-n}^nc_iz^i$, we see that \eqref{po} implies the equalities
$$ \bar c_{-i}=ac_i,\ \ \ 0 <\vert i \vert \leq n.$$ Taking $c_{-i}\neq 0$, we obtain $$c_{-i}= \overline{a c_i}=\bar a a c_{-i}$$ implying that
$a\bar a=1$ or equivalently that $\vert a\vert =1.$
Set $v=\lambda z+\mu,$ where $\lambda$ satisfies $\lambda^2=a$, and $\mu=\overline{\lambda c_0}.$
Since $\lambda\bar \lambda =1,$ we have:
$$\overline{\lambda c_{-i}}=\bar \lambda ac_{i}=\bar \lambda \lambda^2c_{i}=\lambda c_{i}, \ \ \ 0 <\vert i \vert \leq n.$$
Furthermore,
$$\overline{\lambda c_{0}+\overline{\lambda c_0}}=\lambda c_{0}+\overline{\lambda c_0},$$
and hence
$v\circ L_1\in {\cal L_{\R}}[z]$.
It follows now from the equality \be \label{xer} L=(A\circ v^{-1})\circ (v\circ L_1)\ee by Lemma \ref{l0} that
$A\circ v^{-1}\in \R[z].$
In the second case, it follows from equalities \eqref{krot} and \eqref{xer}
by Lemma \ref{uuii} that $\vert c\vert =1$ implying that $v\circ L_1\in {\cal L_{\R}}[z]$.
Finally, Lemma \ref{l0} implies as above that $A\circ v^{-1}\in \R[z]$.
\qed
\bc \label{c1} Let $L=P\circ W$ be a decomposition of $L\in {\cal L_{\R}}[z]$ into a composition
of $P,W\in \C(z)$. Then there exists a rational function $v\in \C(z)$ of degree one
such that $P\circ v^{-1}\in \R(x)$ and $v\circ W\in {\cal L_{\R}}(z)$.
\ec
\pr Arguing as in the proofs of Lemma \ref{l1} and Lemma \ref{l2} we see that
there exists a rational function $\mu\in \C(z)$ of degree one
such that either $P\circ \mu $ is a polynomial and $\mu^{-1}\circ W$ is a Laurent polynomial, or equality
\eqref{iuy} holds and
$P\circ \mu \in \R(x).$ In the second case the statement of the corollary is obvious, while in the first one it
follows from Theorem \ref{2.1}. \qed
\section{Double decompositions in $\R_t[\theta]$ and in $\C[z,1/z]$}
Recall, that two decompositions $P=A\circ B$ and $P=\widetilde A\circ \widetilde B$ of a function $P\in \C(z)$ into compositions of functions $A,B,\widetilde A,\widetilde B\in \C(z)$
are called equivalent if there exists a function $\mu\in \C(z)$ of degree one such that
$$\widetilde A=A\circ \mu, \ \ \ \widetilde B=\mu^{-1}\circ B.$$
Notice that if both $\widetilde A$ and $A$ (or $\widetilde B$ and $B$) are polynomials, then $\mu$ also is a polynomial. In particular, this is the case
for most of the equivalences considered below. If we consider rational functions defined over an arbitrary field, the definition above is modified in the obvious way (in fact, below we are only interested in the cases where the ground field is $\C$ or
$\R$).
We start by recalling some basic facts about polynomial solutions of the equation
\be \label{tor} A\circ C=B\circ D.\ee
The proposition below reduces a description of solutions of \eqref{tor} to the case where degrees of $A$ and $B$ as well as of $C$ and $D$ are coprime
(see e.g. \cite{mp}).
\bp \la{eng}
Suppose $A,B,C,D\in \C[z]\setminus \C$ satisfy \eqref{tor}. Then there exist
$U, V, \tilde A, \tilde C, \tilde B, \tilde D \in \C[z], $ where
$$\deg U=\GCD(\deg A,\deg B), \ \ \ \deg V=\GCD(\deg C,\deg D),$$
such that
$$A=U\circ \tilde A, \ \ B=U\circ \tilde B, \ \ C=\tilde C\circ V, \ \ D=\tilde D\circ V,$$
and $$\tilde A\circ \tilde C=\tilde B\circ \tilde D. \ \ \ \ \ \ \ \ \ \Box$$
\ep
\noindent In fact, under an appropriate restriction, Proposition \ref{eng} remains true if the coefficients of the polynomials $A,B,C,D$ as well as of $U, V, \tilde A, \tilde C, \tilde B, \tilde D$ belong to an arbitrary field (see Theorem 5, Chapter 1 of \cite{sch}). In particular, Proposition \ref{eng} remains true if
the ground field is $\mathbb R.$
The following result, obtained by Ritt \cite{r2}, describes solutions of \eqref{tor} in the case where
the equalities
\be \label{qwer} \GCD(\deg A,\deg B)=1, \ \ \ \GCD(\deg C,\deg D)=1 \ee
hold, and is known under the name of ``the second Ritt theorem''.
\bt \la{ritt}
Suppose $A,B,C,D\in \C[z]\setminus \C$ satisfy \eqref{tor} and \eqref{qwer}.
Then
there exist $U,\widetilde A, \widetilde B, \tilde C,\tilde D, W\in \C[z]$, where $\deg U=\deg W=1,$ such
that $$ A=U \circ \widetilde A, \ \ \ \ B=U \circ \widetilde B, \ \ \ \
C=\tilde C \circ W, \ \ \ \ D=\tilde D \circ W,\ \ \ \ \widetilde A\circ \tilde C=\widetilde B\circ \tilde D$$
and, up to a possible replacement of $A$ by $B$ and $C$ by $D$, one of the following conditions holds:
$$\widetilde A\circ \tilde C\sim z^n \circ z^rR(z^n), \ \ \ \ \ \
\widetilde B\circ \tilde D\sim z^rR^n(z) \circ z^n,\leqno 1) $$
where $R\in \C[z]$, $r\geq 0,$ $n\geq 1,$ and
$\GCD(n,r)=1;$
\vskip 0.01cm
$$\widetilde A\circ \tilde C\sim T_n \circ T_m, \ \ \ \ \ \ \widetilde B\circ \tilde D\sim T_m \circ T_n,\leqno 2)$$
where $T_n,T_m$ are the Chebyshev polynomials, $m,n\geq 1,$ and $\GCD(n,m)=1.$ \qed
\et
Again, this theorem remains true if the coefficients of all polynomials involved are assumed to be real (see Lemma 2, Chapter 1 of \cite{sch}), and, under an appropriate modification, even if they belong to an arbitrary field (see \cite{za} and Theorem 5, Chapter 1 of \cite{sch}).
\vskip 0.2cm
Recall now the main result of the decomposition theory of Laurent polynomials (see \cite{pak},\cite{zi})
concerning solutions of the equation
\be \label{lau} P_1\circ W_1=P_2\circ W_2, \ee
where $P_1,P_2\in \C[z]$ and $W_1,$ $W_2\in \C[z,1/z].$ We will use the
notation of \cite{paen} (Theorem 3.1).
Notice that the main result of \cite{paen} (Theorem A) may also be used
for a proof of Theorem 1.1. However, the approach using the results of Section 2 is more general and
may be used for solving other problems related to decompositions of trigonometric polynomials.
Set
$$U_n(z)=\frac{1}{2} \left(z^n+\frac{1}{z^n}\right), \ \ \ V_n(z)=\frac{1}{2i} \left(z^n-\frac{1}{z^n}\right).$$
Observe that
\be \label{cos} U_n=\phi(\cos n\theta), \ \ \ V_n=\phi(\sin n\theta).\ee
Indeed, the first formula in \eqref{cos} follows from
the equality
\be \label{poli} T_n\circ \frac{1}{2}\left(x+\frac{1}{x}\right)=\frac{1}{2}\left(x^n+\frac{1}{x^n}\right),\ee
which in its turn is obtained from the definition of the Chebyshev polynomials by the substitution
$x=e^{i\theta}$.
The second one follows from the formulas $$T^{\prime}_n(\cos \theta)\sin \theta=n\sin n \theta$$
and \eqref{poli}.
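For example, for $n=2$, since $T_2=2x^2-1$, we have
$$T_2\circ \frac{1}{2}\left(z+\frac{1}{z}\right)=2\cdot \frac{1}{4}\left(z+\frac{1}{z}\right)^2-1=\frac{z^2+2+z^{-2}}{2}-1=\frac{1}{2}\left(z^2+\frac{1}{z^2}\right)=U_2,$$
in agreement with \eqref{poli} and the first formula in \eqref{cos}.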
Furthermore, it is easy to see that if $c=\cos a+i\sin a,$ where $a\in \R$, then
\be \label{cos1} U_n\circ (cz)=\phi(\cos (n\theta+na)), \ \ \ V_n\circ(cz)=\phi(\sin (n\theta+na)).\ee
\bt \la{irrr}
Let $P_1,P_2\in \C[z]\setminus \C$ and $W_1,$ $W_2\in \C[z,1/z]\setminus \C$
satisfy \eqref{lau}.
Then
there exist $U,$ $\widetilde P_1,$ $\widetilde P_2\in \C[z]$ and
$W,$ $\tilde W_1,$ $\tilde W_2\in \C[z,1/z]$ such
that
$$ P_1=U \circ \widetilde P_1, \ \ \ \ P_2=U \circ \widetilde P_2, \ \ \ \
W_1=\tilde W_1 \circ W, \ \ \ \ W_2=\tilde W_2 \circ W,\ \ \ \ \widetilde P_1\circ \tilde W_1=\widetilde P_2\circ \tilde W_2$$
and, up to a possible replacement of $P_1$ by $P_2$ and $W_1$ by $W_2$, one of the following conditions holds:
$$\widetilde P_1\circ \tilde W_1\sim z^n \circ z^rR(z^n), \ \ \ \ \ \
\widetilde P_2\circ \tilde W_2\sim z^rR^n(z) \circ z^n,\leqno 1) $$
where $R\in \C[z]$, $r\geq 0,$ $n\geq 1,$ and
$\GCD(n,r)=1;$
\vskip 0.01cm
$$\widetilde P_1\circ \tilde W_1\sim T_n \circ T_m, \ \ \ \ \ \ \widetilde P_2\circ \tilde W_2\sim T_m \circ T_n,\leqno 2)$$
where $T_n,T_m$ are the Chebyshev polynomials, $m,n\geq 1,$ and $\GCD(n,m)=1;$
\vskip 0.01cm
$$\widetilde P_1\circ \tilde W_1\sim z^2 \circ U_1
S(V_1), \ \ \ \ \ \ \widetilde P_2\circ \tilde W_2\sim (1-z^2)\,S^2 \circ V_1,\leqno 3)$$
where $S\in \C[z]$;
\vskip 0.01cm
$$\widetilde P_1\circ \tilde W_1\sim -T_{nl} \circ U_m(\varepsilon z), \ \ \ \ \ \
\widetilde P_2\circ \tilde W_2\sim T_{ml} \circ U_n,\leqno 4)$$
where $T_{nl},T_{ml}$ are the Chebyshev polynomials, $m,n\geq 1,$ $l>1$, $\varepsilon^{nlm}=-1$,
and $\GCD(n,m)=1;$
\vskip -0.2cm
$$\widetilde P_1\circ \tilde W_1\sim (z^2-1)^3\circ \left(
\frac{i}{\sqrt{3}}\,V_2+\frac{2\sqrt{2}}{\sqrt{3}}\,U_1\right), \leqno 5) $$
$$\ \ \ \ \ \ \widetilde P_2\circ \tilde W_2\sim (3z^4-4z^3)\circ \left(
\frac{i}{3\sqrt{2}}\,V_3+U_2
+\frac{i}{\sqrt{2}}\,V_1+\frac{2}{3} \right). \qed$$
\et
\vskip 0.2cm
\noindent Notice that if $W_1,W_2$ are polynomials, then $W$ also is a polynomial and either 1) or 2)
holds, in accordance with Proposition \ref{eng} and Theorem \ref{ritt}.
\vskip 0.3cm
\noindent{\it Proof of Theorem 1.1.} Let $P_1, P_2 \in \R[x]$ and $w_1,w_2\in \R_t[\theta]$ satisfy equation \eqref{maii}. Assume first that there exist $w\in \R_t[\theta]$ and $\widehat W_1, \widehat W_2 \in \R[x]$ such that
the equalities
\be \label{vot} w_1=\widehat W_1\circ w, \ \ \ w_2=\widehat W_2\circ w\ee hold.
Then equality \eqref{maii} implies the equality
$$P_1\circ \widehat W_1=P_2\circ \widehat W_2,$$ and it is easy to see using the real versions of Proposition \ref{eng} and Theorem \ref{ritt} that
either case a) or case b) of Theorem 1.1 takes place.
Assume now that such $w$ and $\widehat W_1,$ $\widehat W_2$ do not exist.
Set
$$p=P_1\circ w_1= P_2\circ w_2, \ \ \ L=\phi(p), \ \ \ W_1=\phi(w_1), \ \ \ W_2=\phi(w_2),$$ and apply Theorem \ref{irrr} to equality \eqref{lau}.
Observe that our assumption implies that neither the first nor the second case provided by Theorem \ref{irrr} may take place. Indeed, since $L$ is a proper Laurent polynomial, if one of these cases holds, then the function $W$ is also a proper Laurent polynomial. Therefore,
applying Theorem \ref{2.1} to the equality $W_1=\tilde W_1 \circ W$, we conclude that
there exists a polynomial $v\in \C[z]$ of degree one
such that $\tilde W_1\circ v^{-1}\in \R[x]$ and $v\circ W\in {\cal L_{\R}}[z]$.
Furthermore, applying Lemma \ref{l0} to the equality $$W_2=(\tilde W_2 \circ v^{-1})\circ (v \circ W),$$ we conclude
that $\tilde W_2\circ v^{-1}\in \R[x]$ implying that \eqref{vot} holds for
$$\widehat W_1= \tilde W_1\circ v^{-1}, \ \ \ \widehat W_2= \tilde W_2\circ v^{-1}, \ \ \ w=\phi^{-1}(v\circ W).$$
Consider now, one by one, all the other cases allowed by Theorem \ref{irrr}.
If 3) holds, then
there exist $\mu_1,\mu_2\in \C[z]$ of degree one and $S\in \C[z]$ such that
\be \label{koto} P_1= U\circ z^2 \circ \mu_1^{-1}, \ \ \ W_1 =\mu_1 \circ U_1S(V_1) \circ W,\ee
and
\be \label{kot} P_2= U\circ (1-z^2)\,S^2\circ \mu_2^{-1} , \ \ \ W_2 =\mu_2 \circ V_1\circ W,\ee
for some $U\in \C[z]$ and $W\in {\cal L}[z].$
Notice that changing if necessary $\mu_1$ to $\mu_1\circ (\gamma z)$ and $U$ to $U\circ (\gamma^2 z)$, where $\gamma \in \C$
is the leading coefficient of $S$,
without loss of generality we may assume that the polynomial $S$ is monic.
Furthermore, it follows from Lemma \ref{l1} that $W$ necessarily has the form $cz^k,$ $c\in \C\setminus \{0\}.$
Let $\mu_2=\alpha z+\beta ,$ where $\alpha,\beta\in \C.$
Since $W_2$ is contained in ${\cal L_{\R}}[z]$, the second equality in \eqref{kot} implies that $\bar \beta =\beta$ and, by Lemma \ref{uuii}, that
$\alpha\in \R$ and $\bar c=1/c$. Therefore, $\mu_2\in \R[x]$ and there exists $a\in \R$ such that $c=\cos a +i\sin a$, implying by \eqref{cos1} that $w_2=\mu_2\circ \sin(n\theta +b),$
where $b=na.$
Since $\mu_2\in \R[x],$ applying now Corollary \ref{xom} to the first equality in \eqref{kot}, we conclude that $U\in \R[x]$ and $S^2\in \R[x].$ Moreover, since $S$ is monic, the last equality implies that $S\in \R[x].$ Further,
since $S\in \R[x]$ and $\bar c=1/c$, the Laurent polynomial $U_1S(V_1) \circ W$ is contained in ${\cal L_{\R}}[z]$
implying by Lemma \ref{l0} that $\mu_1\in \R[x].$
Finally, it is clear that $$w_1=\mu_1\circ \cos (n\theta +b)S(\sin(n\theta +b)).$$ Therefore, if \eqref{koto} and \eqref{kot} hold,
we arrive at case c) of Theorem 1.1.
Consider now the case 4).
In this case
there exist $\mu_1,\mu_2\in \C[z]$ of degree one and $U\in \C[z]$ such that
\be \label{koto+} P_1= U\circ -T_{nl} \circ \mu_1^{-1}, \ \ \ W_1 =\mu_1 \circ U_m(\varepsilon z) \circ W,\ee
and
\be \label{kot+} P_2= U\circ T_{ml}\circ \mu_2^{-1} , \ \ \ W_2 =\mu_2 \circ U_n\circ W,\ee
where $W=cz^k,$ $c\in \C\setminus \{0\}.$ As above the second equality in \eqref{kot+} implies that
$\bar c=1/c$ and $\mu_2\in \R[x].$ Further, the first equality in \eqref{kot+} implies that $U\in \R[x]$, and the second equality in \eqref{koto+} implies
that $\mu_1\in \R[x].$ Therefore, taking into account formulas \eqref{cos1}, we conclude that equalities \eqref{koto+} and \eqref{kot+} lead
to the case d) of Theorem 1.1.
Let us show finally that case 5)
cannot take place. Assume the converse. Then
\be \label{vse} W_1=\mu \circ \left(
\frac{i}{\sqrt{3}}\,V_2+\frac{2\sqrt{2}}{\sqrt{3}}\,U_1\right)\circ (cz^k),\ee
where $\mu=\alpha z+\beta,$ $\alpha,\beta, c\in \C.$
Since $W_1\in {\cal L_{\R}}[z]$,
equality \eqref{vse}
implies the equalities
$$\bar \alpha \bar c=-\alpha/c, \ \ \ \bar \alpha \bar c=\alpha/c$$
which are possible only if $\alpha=0$ and $w_1$ is a constant. \qed
\section{Introduction}
\label{intro}
Jet quenching caused by parton energy loss in dense medium has been proposed as a hard probe of the properties of the quark-gluon plasma (QGP) formed in high-energy heavy-ion collisions \cite{Bjorken:1982tu,Gyulassy:1990ye}. The simplest form of jet quenching is the suppression of single inclusive hadron spectra at large transverse momentum, dihadron and $\gamma$-hadron correlation in heavy-ion collisions relative to proton-proton collisions \cite{Wang:1991xy,Wang:1996yh,Vitev:2002pf,Wang:2003mm,Eskola:2004cr,Majumder:2004pt,Renk:2006nd,Qin:2007rn,Zhang:2007ja,Zhang:2009rn,Majumder:2010qh,Zapp:2012ak,Qin:2015srf}. Observation of these jet quenching phenomena among other experimental data on collective phenomena at the Relativistic Heavy-Ion Collider (RHIC) provided the first evidence of the formation of the strongly coupled quark-gluon plasma in high-energy heavy-ion collisions \cite{Adcox:2004mh,Adams:2005dq}. A systematic study of experimental data on suppression of single inclusive hadron spectra in heavy-ion collisions at both RHIC and the Large Hadron Collider (LHC) has provided unprecedented constraints on jet transport coefficients \cite{Burke:2013yra}.
Since the inclusive hadron spectrum at large transverse momentum $p_T$ is the convolution of cross sections of energetic parton production and parton fragmentation functions, in which leading hadrons dominate, the suppression of single inclusive hadron spectra is caused mainly by the energy loss of leading jet partons inside the dense QGP medium, which suppresses the effective jet fragmentation functions at large momentum fraction. The hadron suppression factor is therefore not sensitive to the distribution of soft radiative gluons and recoil partons from jet-induced medium response. This is, however, not the case for fully reconstructed jets in heavy-ion collisions.
Suppression and modification of full jets are also proposed to study jet quenching and properties of QGP medium in high-energy heavy-ion collisions
\cite{Vitev:2008rz,Vitev:2009rd,Qin:2010mn,He:2011pd,Neufeld:2012df,Renk:2012cx,Dai:2012am,Qin:2012gp,Wang:2013cia,Casalderrey-Solana:2014bpa,Chang:2016gjp,Casalderrey-Solana:2016jvj,Milhano:2015mng,Kang:2017frl}.
Jets are collimated clusters of hadrons within a given cone-size in experimental measurements. In elementary hadronic processes such as proton-proton collisions, the jet production cross section can be calculated from perturbative QCD (pQCD) and can describe experimental data to high precision even with relatively small cone sizes \cite{Sterman:1977wj,Cacciari:2008gp,Kang:2017frl}. The cross section is not very sensitive to nonperturbative processes of jet hadronization through fragmentation. In heavy-ion collisions, however, the final jet production cross section is not only modified by parton energy loss of leading partons but also is influenced by how the lost energy is transported in the medium through radiated gluons and recoil medium partons. It is therefore imperative to include the effect of recoil partons and their further propagation in the form of jet-induced medium response as well as the propagation of radiated gluons in the study of jet suppression and medium modification \cite{Wang:2013cia,Tachibana:2015qxa,Casalderrey-Solana:2016jvj,Wang:2016fds,Tachibana:2017syd,KunnawalkamElayavalli:2017hxo,Milhano:2017nzm,Chen:2017zte,Luo:2018pto}.
Contributions from jet-induced medium response to the jet energy within a finite jet-cone size should also be influenced by the collective radial expansion and flow of the medium. They will affect the transverse momentum $p_T$ dependence of jet energy loss in heavy-ion collisions. Since the interaction strengths of gluon and quark with the medium are different due to their color charges, one should also expect a flavor dependence of the jet energy loss. The fractions of gluon and quark jets and their $p_T$ and colliding energy $\sqrt{s}$ dependence are determined by the pQCD cross sections and initial parton distributions in the colliding nuclei. All these combine to give a particular $\sqrt{s}$ and $p_T$ dependence of the jet energy loss that can explain the observed suppression of single inclusive jets in heavy-ion collisions at LHC. The suppression factor for single inclusive jets has been measured in Pb+Pb collisions at two colliding energies, $\sqrt{s}=2.76$ and 5.02 TeV, at LHC~\cite{Aad:2014bxa,Aaboud:2018twu,Khachatryan:2016jfl}. The measured suppression factor has a weak $p_T$ dependence and remains almost the same at the two colliding energies even though the central rapidity density of bulk hadrons increases by about 20\%~\cite{Abbas:2013bpa,Adam:2016ddh}.
In this paper, we will use the Linear Boltzmann Transport (LBT) model \cite{Li:2010ts,Wang:2013cia,He:2015pra,Cao:2016gvr,Cao:2017hhk,Luo:2018pto} for jet interaction and propagation in dense QGP medium to study the suppression of single inclusive jet spectra in high-energy heavy-ion collisions. We will pay particular attention to effects of recoil thermal partons and their further propagation in the dense medium whose evolution in high-energy heavy-ion collisions is described by a 3+1D viscous relativistic hydrodynamic model. We will try to understand the weak transverse momentum and colliding energy dependence of the suppression factor for single inclusive jet spectra in Pb+Pb collisions at LHC energies. We will investigate the effect of recoil thermal partons from jet-induced medium response, their transport in the medium and influence of radial expansion on the effective jet energy loss as well as the transverse momentum and colliding energy dependence of the flavor composition of jets. We will also provide predictions of the cone-size dependence of the jet suppression factor in Pb+Pb collisions at LHC and jet suppression in Au+Au collisions at RHIC energy $\sqrt{s}=200$ GeV.
The remainder of this paper is organized as follows. We will provide a brief description of the LBT model and simulations of jet propagation in the dense medium whose evolution in high-energy heavy-ion collisions is given by the 3+1D CLVisc hydrodynamic model \cite{Pang:2012he,Pang:2014ipa,Pang:2018zzo} in Sec.~\ref{lbt}. In Secs.~\ref{ppjet} and \ref{aajet}, we carry out calculations of the single inclusive jet spectra in both p+p collisions as the baseline and Pb+Pb collisions. Effects of recoil medium partons, diffusion wake due to the back-reaction and underlying event subtractions are studied in detail. Results on single inclusive jet suppression in Pb+Pb collisions at other centralities and at both $\sqrt{s}$=2.76 and 5.02 TeV are presented and compared to experimental data. Section~\ref{sec:jetsuppression} is devoted to the discussion and understanding of the colliding energy and jet transverse momentum dependence of the jet suppression in heavy-ion collisions. In Sec.~\ref{sec:eloss}, we examine in detail effects of transport of recoil partons, radial expansion of the underlying bulk medium and the flavor composition on the effective jet energy loss in heavy-ion collisions. These effects combined with the shape and colliding energy dependence of the initial jet production spectra in p+p collisions can explain the weak transverse momentum and colliding energy dependence of the single inclusive jet suppression factor. They also lead to a unique cone-size dependence of the jet suppression. We will also provide predictions for single inclusive jet suppression at the RHIC energy $\sqrt{s}=200$ GeV in Sec.~\ref{rhicpredict}. A summary and discussion are given in Sec.~\ref{summary}.
\section{The Linear Boltzmann Transport model}
\label{lbt}
The Linear Boltzmann Transport (LBT) model is developed to study jet interaction and propagation in dense QGP medium with a particular emphasis on
thermal recoil partons and their further interaction and propagation through the medium in the form of jet-induced medium excitation (or response). It was
initially developed \cite{Li:2010ts} to study the so-called Mach-cone excitation by jets that travel at nearly the speed of light in the medium in which the velocity of sound is smaller than that of the propagating jets \cite{CasalderreySolana:2004qm,Stoecker:2004qu,Ruppert:2005uz,Chaudhuri:2005vc}. While signals of the Mach-cone excitation are still elusive in both experimental measurements and simulations with realistic hydrodynamic evolution of the medium, the LBT model has become a powerful tool for the study of jet quenching in high-energy heavy-ion collisions. The model has been recently improved with the implementation of the complete set of elastic $2\to 2$ scattering processes \cite{He:2015pra}. Inelastic processes $2\rightarrow 2+n$ with multiple gluon radiation and global energy-momentum conservation have also been implemented more consistently in the latest version \cite{Wang:2013cia,Luo:2016ufs,Cao:2016gvr}. It has been used to describe both single inclusive light and heavy flavor hadron suppression \cite{Cao:2017hhk}, $\gamma$-hadron \cite{Chen:2017zte}, $\gamma$-jet \cite{Wang:2013cia,Luo:2016ruj,Luo:2018pto} and $Z^0$-jet correlations~\cite{Zhang:2018urd}. We will use it to study single inclusive jet suppression in high-energy heavy-ion collisions in this paper.
The basic building block of the LBT model is the linear Boltzmann equations for the transport of both jet shower and thermal recoil partons in QGP,
\begin{eqnarray}
p_a\cdot\partial f_a&=&\int \sum_{b c d } \prod_{i=b,c,d}\frac{d^3p_i}{2E_i(2\pi)^3} (f_cf_d-f_af_b)|{\cal M}_{ab\rightarrow cd}|^2
\nonumber\\ && \hspace{-0.5in}\times \frac{\gamma_b}{2}
S_2(\hat s,\hat t,\hat u)(2\pi)^4\delta^4(p_a\!+\!p_b\!-\!p_c\!-\!p_d)+ {\rm inelastic},
\label{bteq}
\end{eqnarray}
where the summation is over all possible parton flavors and channels of scattering, $f_i=(2\pi)^3\delta^3(\vec{p}-\vec{p_i})\delta^3(\vec{x}-\vec{x_i}-\vec{v_i}t)$ $(i=a,c)$ are the phase-space densities for the jet shower and medium recoil partons before and after the scattering, $f_i=1/(e^{p_i\cdot u/T}\pm1)$ $(i=b,d)$ are phase-space distributions for thermal partons in the QGP medium with local temperature $T$ and fluid velocity $u=(1, \vec{v})/\sqrt{1-\vec{v}^2}$, and $\gamma_b$ is the color-spin degeneracy for parton $b$.
The leading-order (LO) elastic scattering amplitudes $|{\cal M}_{ab\rightarrow cd}|^2$ \cite{Eichten:1984eu} have collinear divergences that are
regularized in the LBT model by a factor \cite{Auvinen:2009qm},
\begin{equation}
S_2(\hat s, \hat t, \hat u) = \theta(\hat s\ge 2\mu_{D}^2)\theta(-\hat s+\mu_{D}^2\le \hat t\le -\mu_{D}^2),
\end{equation}
where $\hat s$, $\hat t$, and $\hat u$ are Mandelstam variables, and
\begin{equation}
\mu_{D}^2 = \frac{3}{2}g^2 T^2,
\label{eq-mud}
\end{equation}
is the Debye screening mass with three quark flavors. The corresponding elastic cross sections are $d\sigma_{ab\rightarrow cd}/d\hat t=|{\cal M}_{ab\rightarrow cd}|^2/16\pi \hat s^2$. We neglect the Bose enhancement (Pauli blocking) for final-state gluons (quarks) and detailed balance of the radiative processes in the current implementation of the Boltzmann transport. The strong coupling constant $\alpha_s=g^{2}/4\pi$ is fixed and will be fitted to experimental data.
In the current version of the LBT model, we only consider gluon radiation induced by elastic scatterings. The differential inclusive rate for gluon radiation is assumed to follow that from the high-twist approach \cite{Guo:2000nz,Wang:2001ifa},
\begin{eqnarray} \la{induced}
\frac{d\Gamma_{a}^{\rm inel}}{dzdk_\perp^2}=\frac{6\alpha_sP_a(z)k_\perp^4}{\pi (k_\perp^2+z^2m^2)^4} \frac{p\cdot u}{p_0}\hat{q}_{a} (x)\sin^2\frac{\tau-\tau_i}{2\tau_f},
\end{eqnarray}
where $P_a(z)$ is the splitting function for the propagating parton $a$ to emit a gluon with the energy fraction $z$ and transverse momentum $k_\perp$, $m$ is the mass of the propagating parton, $\tau_f=2p_0z(1-z)/(k_\perp^2+z^2m^2)$ is the gluon formation time and $\tau_i$ is the time of the last gluon emission.
The elastic scattering rate in the inelastic processes has been factorized into the jet transport coefficient,
\begin{equation}
\hat{q}_{a}(x)=\sum_{bcd}\rho_{b}(x)\int d\hat t q_\perp^2 \frac{d\sigma_{ab\rightarrow cd}}{d\hat t},
\label{eq-qhat}
\end{equation}
which is defined as the transverse momentum transfer squared per mean-free-path in the local comoving frame of the QGP fluid. The parton density $\rho_{b}(x)$ includes the degeneracy factor. The splitting function $P_a(z)$ above contains an infrared divergence, which is regularized by using the Debye screening mass $\mu_D$ as an infrared cut-off for the energy of radiated gluons.
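As a toy numerical illustration of Eq.~(\ref{eq-qhat}) for a single channel (a sketch under our own simplifying assumptions: a single species of scatterers and $q_\perp^2\approx -\hat t$, as appropriate for small-angle scattering; the function names are hypothetical):
\begin{verbatim}
import numpy as np

def qhat_channel(rho, dsigma_dt, t_min, t_max, n=2000):
    # rho       : local density of scatterers (1/fm^3)
    # dsigma_dt : callable, differential cross section d(sigma)/dt
    # t_min/max : integration limits from the S_2 regularization,
    #             t in [-s + mu_D^2, -mu_D^2]
    t = np.linspace(t_min, t_max, n)
    # q_perp^2 ~ -t (small-angle approximation, our assumption)
    return rho * np.trapz(-t * dsigma_dt(t), t)
\end{verbatim}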
In the actual implementation of parton transport simulations in LBT, the probabilities of elastic and inelastic scattering in each small but finite time step $\Delta \tau $ are calculated together to ensure unitarity. The probability for an elastic scattering in a time step $\Delta \tau $ during the propagation of parton $a$ is,
\begin{equation}
P^a_{\rm el}=1-\text{exp}[- \Delta\tau \Gamma_a^{\rm el}(x)],
\end{equation}
where
\begin{equation}
\Gamma_a^{\rm el}\equiv \frac{p\cdot u}{p_0}\sum_{bcd} \rho_b(x)\sigma_{ab\rightarrow cd}
\label{eq-rate}
\end{equation}
is the total elastic scattering rate for parton $a$. The probability for inelastic process is
\begin{equation}
P^a_\mathrm{inel}=1-\exp[-\Delta\tau \Gamma_a^{\rm inel}(x)],
\end{equation}
where
\begin{equation}
\Gamma_a^{\rm inel}=\frac{1}{1+\delta_g^a}\int dz dk_\perp^2 \frac{d\Gamma_a^{\rm inel}}{dzdk_\perp^2}
\end{equation}
is the total gluon radiation rate from parton $a$. The total scattering probability,
\begin{equation}
P^a_\mathrm{tot}=P^a_\mathrm{el}(1-P^a_\mathrm{inel}) +P^a_\mathrm{inel},
\end{equation}
can be separated into the probability for pure elastic scattering (the first term) and that for inelastic scattering with at least one gluon radiation (the second term). Notice that for an infinitesimally small time step $\Delta\tau \rightarrow 0$, the above total scattering probability per unit time is just the sum of the elastic and inelastic scattering rates.
A Poisson distribution with the mean $\langle N^a_g \rangle=\Delta\tau\Gamma_a^{\rm inel}$ is assumed to simulate multiple gluon radiations associated with each elastic scattering. The scattering channel, flavor, energy and momentum of the final partons, recoil partons and radiated gluons are sampled according to the differential elastic scattering cross section and the differential gluon radiation rate, respectively. Global energy and momentum conservation is ensured in each scattering with multiple radiated gluons.
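As a minimal illustration of this bookkeeping, the following Python sketch shows one possible per-time-step update for a single parton. This is our own illustration rather than the actual LBT code; in particular, the inverse-CDF Poisson sampler and the conditioning of the sampled gluon number on at least one emission in the inelastic branch are our assumptions.
\begin{verbatim}
import math, random

def sample_poisson(mean, rng):
    # inverse-CDF sampling of a Poisson variate
    u, p, k = rng.random(), math.exp(-mean), 0
    c = p
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

def scattering_update(gamma_el, gamma_inel, dtau, rng=random):
    # gamma_el, gamma_inel : rates Gamma_a^el, Gamma_a^inel (1/fm)
    # dtau                 : time step (fm/c)
    p_el   = 1.0 - math.exp(-dtau * gamma_el)
    p_inel = 1.0 - math.exp(-dtau * gamma_inel)
    # total = pure elastic + inelastic (at least one gluon)
    p_tot = p_el * (1.0 - p_inel) + p_inel
    if rng.random() >= p_tot:
        return False, 0        # no interaction in this step
    if rng.random() < p_inel / p_tot:
        n = 0                  # inelastic: Poisson number of gluons,
        while n == 0:          # conditioned on n >= 1
            n = sample_poisson(dtau * gamma_inel, rng)
        return True, n
    return True, 0             # pure elastic scattering
\end{verbatim}
For $\Delta\tau\rightarrow 0$ the sketch reproduces the rates, $P^a_{\rm tot}\approx \Delta\tau\,(\Gamma_a^{\rm el}+\Gamma_a^{\rm inel})$.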
In the LBT model, the above scattering probabilities are employed to simulate the change of phase-space distribution for jet shower, recoil medium partons and radiated gluons due to their scattering with thermal partons in the medium. During each scattering, the initial thermal parton $b$ is recorded as ``negative'' partons and they are also allowed to propagate in the medium according to the Boltzmann equation. The energy and momentum of these ``negative" partons will be subtracted from all final observables to account for the back-reaction in the Boltzmann transport equations. They are part of the jet-induced medium excitation and manifest as the diffusion wake behind the propagating jet shower partons \cite{Wang:2013cia,Li:2010ts,He:2015pra}.
In the LBT model we assume that the jet shower parton density and the jet-induced medium response are small in the linear approximation ($\delta f\ll f$) so that one can neglect interactions among jet shower and recoil partons. One considers only interactions of jet shower and recoil partons with thermal medium partons. The bulk medium evolves independently according to a hydrodynamic model that provides spatial and time information on the local temperature and fluid velocity during parton-medium interaction. This linear approximation will break down when the jet-induced medium excitation becomes comparable to the local thermal parton density. To extend LBT beyond the linear approximation, a coupled LBT and hydrodynamic (CoLBT-hydro) model \cite{Chen:2017zte} has been developed in which soft partons from LBT jet transport are fed back to the bulk medium as a source term in the hydrodynamic equations, while energetic partons propagate through the medium, which evolves simultaneously with the source term updated in real time. This coupled approach is important for a detailed study of the jet-induced medium excitation. For the study of jet suppression, the LBT model with the linear approximation will suffice.
In the LBT model, a parton recombination model developed by the Texas A \& M University group within the JET Collaboration~\cite{Han:2016uhh} is used for hadronization of both jet shower and recoil medium partons. The model has been used successfully to describe light flavor hadron suppression in heavy-ion collisions \cite{Chen:2017zte}. In this paper, we will only use the partonic information for jet reconstruction and study single inclusive jet suppression and jet energy loss.
\section{Single inclusive jet spectra in p+p collisions}
\label{ppjet}
For the study of single inclusive jet spectra in high-energy heavy-ion collisions, we have to first provide initial jet shower parton distributions from elementary nucleon-nucleon collisions and then let these jet shower partons propagate in the LBT model through the bulk medium that evolves according to the hydrodynamic model. Each of the initial jet shower partons is assigned a formation time determined from its virtuality, energy and transverse momentum (relative to the jet direction). They start interacting with medium partons only after their initial formation time. We then use the information on the final partons and the FASTJET package \cite{Cacciari:2011ma}, which is specially modified to take into account the subtraction of ``negative" partons, with the anti-$k_{t}$ algorithm to reconstruct jets and calculate the final single inclusive jet spectra.
We will use PYTHIA 8 \cite{Sjostrand:2006za} to simulate production of initial jet shower partons in this study. To ensure enough statistics for initial jet production at any large transverse momentum, we divide the range of transverse momentum into many bins with bin size $dp_{T i}$. We then use PYTHIA 8 to generate initial jet shower partons (with both initial and final state radiation) with a trigger on the transverse momentum transfer $p_{Ti} \in (p_{T i}-d p_{T i}/2, p_{T i}+dp_{T i}/2)$ and the cross section $d\sigma_{\rm LO}^{{\rm pp}(c)}/dp_{T i}$ in the leading-order (LO) perturbative QCD (pQCD) for production of initial hard parton $c$ in p+p collisions. For any given trigger $p_{T i}$, we generate a given number of events for jet production. After jet reconstruction using FASTJET with a given jet-cone radius $R$, one can get an event-averaged single inclusive jet distribution $dN^{\rm jet}_{(c)}(p_{Ti})/dydp_T$ for a given trigger $p_{Ti}$, where $p_T$ and $y$ are the transverse momentum and rapidity of the final jet, respectively, as reconstructed from the final partons with FASTJET. The final single inclusive jet cross section in p+p collisions will be given by,
\begin{equation}
\frac{d^2\sigma^{\rm jet}_{\rm pp}}{dp_Tdy} = \sum_c\int dp_{T i} \frac{d\sigma_{\rm LO}^{{\rm pp}(c)} }{dp_{Ti}}
\frac{d^2N^{\rm jet}_{(c)}(p_{Ti}, p_T)} {dp_T dy},
\label{eq-jetcrs}
\end{equation}
where the LO pQCD cross section for the production of initial hard parton $c$ in p+p collisions is given by
\begin{eqnarray}
\frac{d \sigma^{{\rm pp}(c)}_{\rm LO}}{dp_{Ti}} & = & 2 p_{Ti}\sum_{a,b,d} \int dy_c dy_d x_a f_{a/p} (x_a, \mu^2)
\nonumber\\
& & \times x_b f_{b/p} (x_b, \mu^2) \frac{d\hat\sigma_{ab\to cd}}{dt},
\label{eq:cs.pp}
\end{eqnarray}
where $y_c$ and $y_d$ are rapidities of the final hard partons in the $a+b\rightarrow c+d$ processes, $x_a=x_{Ti}(e^{y_c}+e^{y_d})$ and $x_b=x_{Ti}(e^{-y_c}+e^{-y_d})$ are the light-cone momentum fractions carried by the initial partons from the two colliding protons with $x_{Ti}=2p_{Ti}/\sqrt{s}$, $f_{a/p}(x,\mu^2)$ is the parton distribution function inside a proton at the scale $\mu^2=p_{Ti}^2$ and
$d\hat\sigma_{ab\to cd}/dt$ is the parton level leading order cross section which depends on the Mandelstam variables
$\hat s=x_ax_bs$, $\hat t=-p_{Ti}^2(1+e^{y_d-y_c})$ and $\hat u=-p_{Ti}^2(1+e^{y_c-y_d})$. Because of higher-order corrections through initial and final state radiation in PYTHIA 8, there can be more than two jets in the final state and the transverse momentum $p_T$ of the final leading jets can be different from the value of the trigger $p_{Ti}$.
Shown in Figs.~\ref{jetCS_2760} and \ref{jetCS_twoEnergy} are differential single inclusive jet cross sections with jet-cone size $R=0.4$ as a function of the final jet transverse momentum $p_T$ in different rapidity windows of p+p collisions at $\sqrt{s}=2.76$ and 5.02 TeV, respectively, from PYTHIA 8 as compared to ATLAS experimental data \cite{Aad:2014bxa,Aaboud:2018twu}. PYTHIA 8 can describe the experimental data well. In Fig.~\ref{jetCS_twoEnergy}, we also compare the single inclusive jet spectra at two different colliding energies at LHC. One can see that the shape of the single inclusive jet spectra at $\sqrt{s}=5.02$ TeV is much flatter than at 2.76 TeV, which is determined mainly by the parton distribution functions in a proton.
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{jetCS2760.pdf}
\caption{(Color online) The single inclusive jet double differential cross section as a function of $p_{T}$ in different rapidity bins in p+p collisions at $\sqrt{s} = 2.76$ TeV using anti-$k_{t}$ algorithm with jet cone radius R = $0.4$. The closed symbols are ATLAS experimental data \cite{Aad:2014bxa} while the curves are from PYTHIA 8 simulations. The results for different rapidities are scaled by successive powers of $10^{2}$ for clear presentation.}
\label{jetCS_2760}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{jetCS.pdf}
\caption{(Color online) The inclusive jet double differential cross section as a function of $p_{T}$ in different rapidity bins in p+p collisions at $\sqrt{s} = 5.02$ TeV (solid) using anti-$k_{t}$ algorithm with jet cone radius R = $0.4$ from PYTHIA 8 as compared to ATLAS experimental data \cite{Aaboud:2018twu}. PYTHIA 8 results at $\sqrt{s} = 2.76$ TeV (dashed) are also shown as a comparison. Results for different rapidities are scaled by successive powers of $10^{2}$.}
\label{jetCS_twoEnergy}
\end{figure}
\section{Suppression of single inclusive jet spectra in A+A collisions}
\label{aajet}
\subsection{ Single inclusive jet cross section in A+A collisions}
We assume that the initial production rates of hard partons in A+A collisions are the same as the superposition of nucleon-nucleon collisions,
except that we need to consider the nuclear modification of the initial parton distributions \cite{Eskola:2009uj,Ru:2016wfx}. The jet shower
partons from PYTHIA 8 simulations in each event will then go through medium transport and propagation within the LBT model. Using FASTJET with the
same jet cone-size $R$ for jet reconstruction, we get an event-averaged final single inclusive jet distribution $d\widetilde{N}^{\rm jet}_{(c)}(p_{Ti},{\bf r},{\bf b},\phi_c)/dydp_T$ for any given transverse coordinate $\bf r$ of the binary nucleon-nucleon collision that produces the initial hard partons, the impact-parameter $\bf b$ of the nucleus-nucleus collisions and the azimuthal angle $\phi_c$ of the initial hard parton $c$. The cross section for single inclusive jet production in A+A collision is then given by,
\begin{eqnarray}
\frac{d \sigma^{\rm jet}_{\rm AA}}{dp_{T}dy} & = &\sum_{a,b,c,d} \int d^2{\bf r} d^2{\bf b} t_A(r) t_A(|{\bf b}-{\bf r}|) \frac{d\phi_c}{\pi} dy_c dy_d \nonumber\\
&& \times \int dp_{Ti} p_{Ti} x_a f_{a/A} (x_a, \mu^2) x_b f_{b/B} (x_b, \mu^2)
\nonumber \\
&& \times \frac{d\hat\sigma_{ab\to cd}}{dt} \frac{d\widetilde{N}^{\rm jet}_{(c)}(p_{Ti},p_T,{\bf r},{\bf b},\phi_c)}{dydp_T},
\label{eq:cs.aa}
\end{eqnarray}
where $t_{A}(r)$ is the nuclear thickness function with normalization $\int d^2{\bf r} t_A(r)=A$ and $f_{a/A}(x,\mu^2)$ is the nuclear modified parton distribution function \cite{Eskola:2009uj,Ru:2016wfx} per nucleon. The range of integration over the impact parameter $\bf b$ is determined by the centrality of the nucleus-nucleus collisions according to the experimental measurement.
Interaction between shower and medium partons in heavy-ion collisions will in general reduce the transverse momentum of the final jets, leading to the medium modification of the final single inclusive jet distribution $d\widetilde{N}^{\rm jet}_{(c)}(p_{Ti},{\bf r},{\bf b},\phi_c)/dydp_T$ relative to the vacuum one, $dN^{\rm jet}_{(c)}(p_{Ti})/dydp_T$, in p+p collisions. This will lead to the suppression of the single inclusive jet cross section in heavy-ion collisions. The suppression factor is given by the ratio of the jet cross sections for A+A and p+p collisions normalized by the averaged number of binary nucleon-nucleon collisions,
\begin{equation}
R_{\rm AA}=\frac{1}{\int d^2rd^2b t_A(r) t_A(|{\bf b}-{\bf r}|)} \frac{d\sigma^{\rm jet}_{\rm AA}}{d\sigma^{\rm jet}_{\rm pp}}.
\label{eq:raa}
\end{equation}
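In binned form this is a simple ratio; a minimal sketch (assuming the average number of binary collisions $\langle N_{\rm coll}\rangle$, i.e. the integral of $t_At_A$ over the centrality class, has already been computed):
\begin{verbatim}
import numpy as np

def compute_raa(dsigma_aa, dsigma_pp, n_coll):
    # dsigma_aa : jet cross section per p_T bin in A+A
    # dsigma_pp : jet cross section per p_T bin in p+p
    # n_coll    : <N_coll>, the integral of t_A t_A
    return np.asarray(dsigma_aa) / (n_coll * np.asarray(dsigma_pp))
\end{verbatim}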
In the jet reconstruction with FASTJET we also subtract underlying-event (UE) background in a scheme inspired by the method in the experimental studies \cite{Aad:2012vca}. Seed jets are defined as those with at least one particle whose transverse energy is larger than 3 GeV and with a leading particle whose transverse energy is at least four times the average transverse energy per particle within the jet. The UE background transverse energy density is calculated over the whole area of coverage excluding the area of these seed jets. In heavy-ion collisions, we also include modulation of the UE transverse energy distribution due to anisotropic flow of the bulk medium. This UE transverse energy within the transverse area of each jet is then subtracted from the jet energy in both p+p and A+A collisions. In LBT simulations, only jet shower partons, radiated gluons and recoil medium partons (energy carried by the ``negative" partons is subtracted) are used for jet reconstruction in FASTJET. The UE background is very small as compared to the UE in experimental analyses which includes all hadrons from the bulk medium. The contribution of UE to the jet energy before the subtraction in LBT simulations is about a few percent in central Pb+Pb and much smaller in p+p collisions.
The effect of UE is more important for low energy jets with large jet radii.
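The seed-jet scheme described above can be summarized in a few lines of Python (a schematic sketch with hypothetical data structures; the flow modulation of the UE is omitted, and the background energy is approximated by that of the non-seed jet patches, assuming they tile the acceptance):
\begin{verbatim}
def subtract_ue(jets, total_area):
    # jets : list of dicts with keys
    #   'et'        : jet transverse energy (GeV)
    #   'area'      : jet catchment area
    #   'particles' : list of particle E_T values in the jet
    def is_seed(j):
        lead = max(j['particles'])
        mean = sum(j['particles']) / len(j['particles'])
        return lead > 3.0 and lead >= 4.0 * mean

    # UE density from the acceptance, seed-jet areas excluded
    bkg_et   = sum(j['et']   for j in jets if not is_seed(j))
    bkg_area = total_area - sum(j['area'] for j in jets
                                if is_seed(j))
    rho = bkg_et / bkg_area
    # subtract rho * A_jet from every jet
    return [dict(j, et=j['et'] - rho * j['area']) for j in jets]
\end{verbatim}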
For heavy-ion collisions, we will use PYTHIA 8 to simulate the production of initial jet shower partons which will then propagate through the dynamically evolving QGP medium according to the LBT model. We will neglect the nuclear modification of the initial parton distributions in cold nuclei which should be small in the jet production processes with momentum scale $Q^2> 4000$ GeV$^2$ \cite{Eskola:2009uj,Ru:2016wfx}. We assign a formation time $\tau_0\approx 2k_0/k_T^2$ for each of the initially produced jet shower partons before which the parton is assumed to free-stream without interaction with medium partons.
\subsection{CLVisc hydrodynamics for bulk medium evolution}
For the space-time evolution of the QGP medium in heavy-ion collisions, we use the space-time profile from the CLVisc (3+1)D viscous hydrodynamic model \cite{Pang:2014ipa,Pang:2012he}. CLVisc parallelizes the Kurganov-Tadmor algorithm \cite{KURGANOV2000241} to solve the hydrodynamic equation for the bulk medium and Cooper-Frye particlization on GPU, using Open Computing Language (OpenCL). With a massive amount of computing parallelized on GPUs and Single Instruction Multiple Data (SIMD) vector operations on modern CPUs, CLVisc brings about the best performance increase so far to (3+1)D hydrodynamics on heterogeneous computing devices and provides the event-by-event space-time hydrodynamic profiles for simulations of jet transport within the LBT model in this study. The initial condition for energy-momentum density distributions for event-by-event CLVisc hydro simulations are obtained from partons in A Multi-Phase Transport (AMPT) model \cite{Lin:2004en} with a Gaussian smearing,
\begin{equation}
\begin{aligned}
T^{\mu\nu} &(\tau_{0},x,y,\eta_{s}) = K\sum_{i}
\frac{p^{\mu}_{i}p^{\nu}_{i}}{p^{\tau}_{i}}\frac{1}{\tau_{0}\sqrt{2\pi\sigma_{\eta_{s}}^{2}}}\frac{1}{2\pi\sigma_{r}^{2}}\\
&\hspace{-0.1in} \times \exp \left[-\frac{(x-x_{i})^{2}+(y-y_{i})^{2}}{2\sigma_{r}^{2}} - \frac{(\eta_{s}-\eta_{i s})^{2}}{2\sigma_{\eta_{s}}^{2}}\right],
\end{aligned}
\label{eq:Pmu}
\end{equation}
where $p^{\tau}_{i}=m_{iT}\cosh(Y_{i}-\eta_{i s})$, $p^{x}_{i}=p_{i x}$, $p^{y}_{i}=p_{i y}$
and $p^{\eta}_{i}=m_{i T}\sinh(Y_{i}-\eta_{i s})/\tau_{0}$ for parton $i$, which runs over all partons produced in the AMPT model simulations.
We have chosen $\sigma_{r}=0.6$ fm, $\sigma_{\eta_{s}}=0.6$ in our calculations.
The transverse mass $m_{T}$, rapidity $Y$ and spatial rapidity $\eta_{s}$ are calculated from the parton's four-momenta and spatial coordinates. There is no
Bjorken scaling in the above initial condition because of early parton cascade before the initial time and the uncertainty principle applied to the initial formation time in AMPT. The scale factor $K$ and the initial time $\tau_{0}$ are two parameters that one can adjust to fit the experimental data on central rapidity density of produced hadrons. We will use the ideal version of CLVisc with a parametrized equation of state (EoS) s95p-v1~\cite{Huovinen:2009yb} to obtain the hydrodynamic evolution of the bulk medium in 200 events of heavy-ion collisions in each centrality to simulate jet transport in each bin of the initial transverse momentum transfer $p_{T i}$. We set the width of the bin in the initial transverse momentum transfer to be $\Delta p_{T i}=10$ GeV/$c$ and generate 1000 sets of initial jet showers from PYTHIA 8 in each bin for each of the 200 hydro events. The total number of events of initial jet production for each centrality in each $p_{T i}$ bin is therefore $N_{\rm event}=200\times 1000$. This is also the total number of events in each $p_{Ti}$ bin in p+p collisions.
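For illustration, the $T^{\tau\tau}$ component of Eq.~(\ref{eq:Pmu}) can be deposited on a grid as in the following Python sketch (our own illustration, not the CLVisc code; the default values of $\tau_0$ and $K$ are placeholders):
\begin{verbatim}
import numpy as np

def deposit_ttautau(partons, xg, yg, etag,
                    tau0=0.6, sig_r=0.6, sig_eta=0.6, K=1.0):
    # partons : iterable of (mT, Y, x, y, eta_s) tuples
    # xg, yg, etag : 1D grid coordinates
    X, Yc, H = np.meshgrid(xg, yg, etag, indexing="ij")
    t = np.zeros_like(X)
    norm = 1.0 / (tau0 * np.sqrt(2*np.pi*sig_eta**2)
                  * 2*np.pi*sig_r**2)
    for mT, Y, x, y, eta in partons:
        ptau = mT * np.cosh(Y - eta)  # p^tau; T^{tau tau} ~ p^tau
        t += ptau * norm * np.exp(
            -((X - x)**2 + (Yc - y)**2) / (2*sig_r**2)
            - (H - eta)**2 / (2*sig_eta**2))
    return K * t
\end{verbatim}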
The AMPT model employs the HIJING model \cite{Wang:1991hta,Gyulassy:1994ew} to generate the initial bulk parton or minijet production according to the Glauber model of nuclear collisions with the Woods-Saxon nuclear distribution. The geometrical distribution of the initial triggered jets in the transverse plane is sampled according to the initial minijet distribution in each AMPT event. The same AMPT event also provides the initial condition for the energy-momentum density distribution for CLVisc hydrodynamic simulations of the space-time evolution of the bulk medium in which jet transport is simulated according to the LBT model. The centrality classes of heavy-ion collisions are defined according to the initial parton multiplicity distribution and the averaged number of participant nucleons $\langle N_{\rm part}\rangle$ in each centrality class is computed accordingly. The interaction rate in Eq.~(\ref{eq-rate}) and jet transport coefficient in Eq.~(\ref{eq-qhat}) are both proportional to the medium parton density, which will vanish in the hadronic phase of the bulk medium. The jet-medium interaction will be terminated in the hadronic phase and the final partons will be used for jet reconstruction with FASTJET. Equation (\ref{eq-jetcrs}) will then be used to calculate the differential single inclusive jet cross section per binary nucleon-nucleon pair in heavy-ion collisions within a given centrality class. The suppression factor $R_{\rm AA}(p_T)$ is defined [Eq.~(\ref{eq:raa})] as the ratio between this cross section per binary nucleon-nucleon pair in heavy-ion collisions and the single inclusive jet cross section in p+p collisions, which is calculated from the same PYTHIA 8 events that provide the initial jet shower configurations for simulations of jet transport within LBT.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{RAA_alphas.pdf}
\caption{(Color online) The suppression factor $R_{\rm AA}$ of single inclusive jet spectra in the central rapidity $|y|<2.1$ region of 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV from LBT simulations with different values of $\alpha_{\rm s}$ as compared to the ATLAS data at the LHC \cite{Aad:2014bxa}. UES and ``negative" partons are both included in the jet reconstruction with $R=0.4$ and anti-$k_t$ jet-finding algorithm.}
\label{RAA_alphas}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{chi_010.pdf}
\caption{(Color online) $\chi^{2}$/d.o.f. of LBT fits to ATLAS data \cite{Aad:2014bxa} on $R_{\rm AA}(p_T)$ as a function of $\alpha_{s}$ in 0-10\% central Pb + Pb collisions at $\sqrt{s} = 2.76$ TeV with anti-$k_{t}$ algorithm and jet-cone size $R = 0.4$ in jet rapidity range $|y| < 2.1$, (black line with circle) with ``negative" partons and UES, (red line with square) with ``negative" partons but without UES, (blue line with uptriangle) with UES but without ``negative" partons, and (purple line with downtriangle) without ``negative" partons and UES.}
\label{chi010}
\end{figure}
\subsection{Suppression of single inclusive jet spectra}
Shown in Fig.~\ref{RAA_alphas} are suppression factors for single inclusive jet production in the central rapidity $|y|<2.1$ region of 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV from LBT simulations with different values of the fixed strong coupling constant $\alpha_{\rm s}$ as compared to the ATLAS data at the LHC \cite{Aad:2014bxa}. Underlying event background subtraction (UES) and ``negative" partons due to back-reaction (diffusion wake) have both been included in the jet reconstruction and determination of the final jet transverse momentum using FASTJET with anti-$k_T$ algorithm and jet-cone size $R=0.4$. The central line is the LBT result with a value of $\alpha_{\rm s}=0.15$ that best fits the ATLAS data according to the $\chi^2$ distribution as shown in Fig.~\ref{chi010} in which we also show the $\chi^2$/d.o.f. (degrees of freedom) from fits of LBT results to the ATLAS data with different options on whether ``negative" partons and UES are included in the jet reconstruction from LBT calculations. One can see from Fig.~\ref{chi010} that both ``negative" partons from the back-reaction and the UES have non-negligible effects on the reconstructed jet energy and the suppression factor for single inclusive jet spectra in heavy-ion collisions. Both effects reduce the transverse energy within the cone of the reconstructed jets. These effects are more important for jets with large radii.
The effect of the UE is more important for low-energy jets, while the effect of ``negative" partons is non-negligible for jets at all energies.
With both effects included, one needs smaller interaction strength within the LBT model to fit the experimental data on single inclusive jet suppression in heavy-ion collisions. They, however, do not change the minimum values of $\chi^2$/d.o.f. because of large uncertainties in the experimental data. With slightly different $\alpha_{\rm s}$, they can all describe the experimental data equally well.
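The $\chi^2$/d.o.f. used above is the standard goodness-of-fit measure; a minimal sketch of its evaluation (Python, assuming uncorrelated Gaussian experimental errors, which is our simplifying assumption) reads:
\begin{verbatim}
import numpy as np

def chi2_per_dof(raa_model, raa_data, sigma_data, n_par=1):
    """Chi-square per degree of freedom between model and data,
    evaluated in the same p_T bins; n_par is the number of fitted
    parameters (here only alpha_s)."""
    chi2 = np.sum(((raa_model - raa_data) / sigma_data) ** 2)
    return chi2 / (len(raa_data) - n_par)
\end{verbatim}
Scanning this quantity over a grid of $\alpha_{\rm s}$ values for each option on ``negative" partons and UES yields curves like those shown in Fig.~\ref{chi010}.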
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{RAA_015_4in1.pdf}
\caption{(Color online) The suppression factor $R_{\rm AA}$ of single inclusive jet spectra in the central rapidity $|y|<2.1$ region of 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV from LBT simulations with fixed $\alpha_{\rm s}=0.15$ as compared to the ATLAS data at the LHC \cite{Aad:2014bxa}.
The jet reconstruction with $R=0.4$ and anti-$k_t$ algorithm includes four different options on ``negative" partons and UES: (a) with both ``negative" partons and UES, (b) with ``negative" partons but without UES, (c) with UES but without ``negative" partons, and (d) without ``negative" partons and UES.}
\label{RAA_4opts}
\end{figure}
As another illustration of the effects of ``negative" partons and UES on the single inclusive jet suppression, we show in Fig.~\ref{RAA_4opts} the suppression factors $R_{\rm AA}(p_T)$ for 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV from LBT simulations with fixed $\alpha_{\rm s}=0.15$ and different options on ``negative" partons and UES as compared to the ATLAS data. Both effects lead to a larger jet energy loss and therefore smaller values of the suppression factor, though the effect of ``negative" partons is larger. Without ``negative" partons, the effect of UES is also understandably larger than with ``negative" partons. One can also see this from the $\chi^2$/d.o.f. distribution in Fig.~\ref{chi010} by comparing the effects of UES when ``negative" partons are included or not.
We will examine the effect of ``negative" and recoil partons on the jet energy loss in more detail in the next section.
We note that the fixed value of $\alpha_{\rm s}=0.15$ from the best fits to experimental data is only an effective strong coupling constant in the elastic scattering matrix elements and radiative gluon spectra in the LBT model in which we use the perturbative Debye screening mass in Eq.~(\ref{eq-mud}) to regularize the collinear divergence. It is possible that other non-perturbative physics such as chromo-magnetic monopoles can play a role in the parton-medium interaction \cite{Liao:2008jg,Liao:2008dk,Xu:2015bbz,Xu:2014tda} that can effectively increase the screening mass. Furthermore, the non-zero mass of thermal partons can also reduce the effective thermal parton density significantly in the interaction rate. These can both increase the value of the effective strong coupling constant in LBT in order to fit the experimental data. In the remainder of this paper, we will use this value of fixed $\alpha_{\rm s}$ for all LBT calculations that include both ``negative" partons and UES, unless otherwise specified.
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{RAA_ctrls_rap5.pdf}
\caption{(Color online) LBT results on $R_{\rm AA}(p_T)$ in the central rapidity $|y|<2.1$ region of Pb+Pb collisions at $\sqrt{s}=2.76$ TeV for different centralities as compared to ATLAS data~\cite{Aad:2014bxa}.}
\label{RAA_ctrl}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{RAA_Npart.pdf}
\caption{(Color online) LBT results on $R_{\rm AA}$ in Pb+Pb collisions at $\sqrt{s}=2.76$ TeV as a function of the number of nucleon participants $\langle N_{\rm part}\rangle$ in each centrality bin in two $p_T$ ranges, $p_T=80-120$ (solid) and $180-200$ GeV/$c$ (dashed), as compared to experimental data from ATLAS~\cite{Aad:2014bxa}.}
\label{RAA_Npart}
\end{figure}
With the only adjustable parameter $\alpha_{\rm s}$ fixed through the best fit to the ATLAS data on single inclusive jet suppression in 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV, we can predict the suppression factors for other centralities, rapidities and colliding energies. Shown in Fig.~\ref{RAA_ctrl}
are suppression factors for single inclusive jet spectra in three different centrality bins of Pb+Pb collisions at $\sqrt{s}=2.76$ TeV as compared to the ATLAS data. LBT results agree well with the data within the experimental errors. We have also calculated the inclusive jet suppression factor in 6 different centrality bins of Pb+Pb collisions at $\sqrt{s}=2.76$ TeV and plot it as a function of the mean number of participant nucleons $\langle N_{\rm part}\rangle$ in Fig.~\ref{RAA_Npart} for two different ranges of transverse momentum $p_T=80-120$ (solid line), $180-200$ GeV/$c$ (dashed line) as compared to ATLAS data at $p_T=80-120$ GeV/$c$. The LBT model can also describe well the experimental data on the centrality dependence of the single jet suppression.
In Fig.~\ref{RAA_ctrl_rap}, we show the LBT results on single inclusive jet suppression factors in four different rapidity regions in 0-10\% central (solid lines) and 30-40\% semi-central (dashed lines) Pb+Pb collisions at $\sqrt{s}=2.76$ TeV. The suppression factor has a very weak rapidity dependence within $|y|<2.1$ consistent with ATLAS experimental data.
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{RAA_ctrls_rap4in1.pdf}
\caption{(Color online) LBT results on $R_{\rm AA}(p_T)$ in four different jet rapidities of (red solid) 0-10\% and (blue dashed) 30-40\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV as compared to ATLAS data~\cite{Aad:2014bxa}.}
\label{RAA_ctrl_rap}
\end{figure}
LBT results on the single jet suppression factor in the central rapidity region of 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV are also compared to data from both the ATLAS~\cite{Aad:2014bxa} and CMS \cite{Khachatryan:2016jfl} experiments at the LHC in Fig.~\ref{RAA_Exp}. Data from both experiments are consistent with each other within their respective errors and with LBT calculations.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{RAA_Exp.pdf}
\caption{(Color online) Experimental data on $R_{\rm AA}(p_T)$ from ATLAS \cite{Aad:2014bxa} (red circle) and CMS \cite{Khachatryan:2016jfl} (blue square) for 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV are compared to LBT calculations.}
\label{RAA_Exp}
\end{figure}
\section{Colliding energy and transverse momentum dependence of jet suppression}
\label{sec:jetsuppression}
\subsection{Colliding energy dependence}
In order to calculate the suppression of single inclusive jet spectra at different colliding energies, one first has to provide the initial conditions for the 3+1D hydrodynamic evolution. In our study here we use the initial parton production from the AMPT model as the initial condition for CLVisc hydrodynamic calculations. The scale factor in Eq.~(\ref{eq:Pmu}) is adjusted so that the final charged hadron rapidity density from the hydrodynamic calculation fits the experimental data in 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ and 5.02 TeV, respectively \cite{Abbas:2013bpa,Adam:2016ddh}. There is an increase of about 20\% in the charged hadron multiplicity density from 2.76 to 5.02 TeV. The corresponding event-averaged initial temperature at the center of 0-10\% central Pb+Pb collisions is 469 and 529 MeV at an initial time $\tau_0=0.5$ fm/$c$, respectively, at these two colliding energies.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{RAA_twoEnergy.pdf}
\caption{(Color online) LBT results on $R_{\rm AA}(p_T)$ in central rapidity $|y|<2.1$ for single inclusive jet spectra in 0-10\% central Pb+Pb collisions at $\sqrt{s} = 2.76$ (red dashed line) and 5.02 TeV (blue solid line) as compared to ATLAS data~\cite{Aad:2014bxa,Aaboud:2018twu}.}
\label{RAA_twoEnergy}
\end{figure}
We assume the effective strong coupling constant in LBT is independent of the local temperature in this study and therefore can predict the suppression factor for single inclusive jet spectra in Pb+Pb collisions at $\sqrt{s}=5.02$ TeV as shown in Fig.~\ref{RAA_twoEnergy} together with the latest data from the ATLAS experiment \cite{Aad:2014bxa,Aaboud:2018twu}. One can observe two striking features in the LBT calculations which are consistent with the experimental data. The first feature is the very weak or vanishing colliding energy dependence in the LHC energy range despite the fact that the initial parton density at 5.02 TeV is about 20\% higher than at 2.76 TeV. The second feature is the weak transverse momentum dependence of the jet suppression factor in the range of the experimental coverage which is very different from the suppression factor for single inclusive charged hadrons \cite{Aamodt:2010jd,CMS:2012aa,Khachatryan:2016odn,Acharya:2018qsh}.
\subsection{Jet energy loss distribution}
To understand the colliding energy and transverse momentum dependence of the jet suppression factor, we have to understand the transverse momentum dependence of the average jet energy loss and its fluctuations. For given initial production point ${\bf r}$, impact parameter ${\bf b}$ and propagation direction $\phi_c$, we assume that the medium-modified single inclusive jet distribution is given by the convolution of the jet distribution in vacuum $dN^{\rm jet}_{(c)}(p_{Ti},p_T)/dydp_T$ and the jet energy loss distribution $w_c(\Delta p_T, p_{T}, {\bf r},{\bf b},\phi_c)$,
\begin{eqnarray}
\frac{d\widetilde{N}^{\rm jet}_{(c)}(p_{Ti},p_T,{\bf r},{\bf b,}\phi_c)}{dydp_T}&=&\int d\Delta p_T \frac{d^2N^{\rm jet}_c(p_{Ti},p_T+\Delta p_T)} {dp_T dy}
\nonumber \\
&&\hspace{-0.4in} \times w_c(\Delta p_T, p_{T}+\Delta p_T, {\bf r},{\bf b},\phi_c),
\end{eqnarray}
where we assume that the implicit dependence of the jet energy loss distribution $w_c$ on the initial hard parton's transverse momentum $p_{Ti}$ is only through an explicit dependence on the final jet transverse momentum $p_T$ in vacuum. Averaging over the energy loss fluctuation due to the distribution of the production point and the propagation direction, one can define the energy loss
distribution for a given centrality class of A+A collisions as
\begin{eqnarray}
W^{(c)}_{\rm AA}(\Delta p_T, p_{T})&=&\int d^2{\bf r} d^2{\bf b} t_A(r) t_A(|{\bf b}-{\bf r}|) \frac{d\phi_c}{2\pi} \nonumber \\
&\times& \frac{w_c(\Delta p_T, p_{T}, {\bf r},{\bf b},\phi_c)}{\int d^2{\bf r }d^2{\bf b} t_A(r) t_A(|{\bf b}-{\bf r}|)}.
\end{eqnarray}
The cross section for single inclusive jet production in A+A collision in Eq.~(\ref{eq:cs.aa}) can be rewritten as
\begin{eqnarray}
\frac{d \sigma^{\rm jet}_{\rm AA}}{dp_{T}dy} & = & \int dp_{Ti} d\Delta p_T \frac{d \sigma^{{\rm AA}(c)}_{\rm LO}}{dp_{Ti}} W^{(c)}_{\rm AA}(\Delta p_T, p_{T}+\Delta p_T)
\nonumber \\
&\times& \frac{d^2N^{\rm jet}_c(p_{Ti},p_T+\Delta p_T)} {dp_T dy},
\label{eq:jetaa}
\end{eqnarray}
where the effective LO pQCD jet production cross section per binary nucleon-nucleon interaction is defined as
\begin{eqnarray}
\frac{d \sigma^{{\rm AA}(c)}_{\rm LO}}{dp_{Ti}} & = & 2 p_{Ti}\sum_{a,b,d} \int dy_c dy_d x_a f_{a/A} (x_a, \mu^2)
\nonumber\\
& & \times x_b f_{b/A} (x_b, \mu^2) \frac{d\hat\sigma_{ab\to cd}}{dt}.
\label{eq:LOaa}
\end{eqnarray}
If we neglect the small nuclear modification of parton distribution functions at very large momentum scale~\cite{Eskola:2009uj,Ru:2016wfx}, $d \sigma^{{\rm AA}(c)}_{\rm LO}/dp_{Ti}\approx d \sigma^{{\rm pp}(c)}_{\rm LO}/dp_{Ti}$, the modification factor for single inclusive jet production in A+A collisions can be written as
\begin{eqnarray}
R_{\rm AA} (p_{T}) &\approx& \int d\Delta p_T W_{\rm AA}(\Delta p_T, p_T+\Delta p_T) \nonumber \\
&\times& \frac{d\sigma^{\rm jet}_{\rm p+p}(p_{T} + \Delta p_T)}{d\sigma^{\rm jet}_{\rm p+p}(p_{T})},
\label{shift}
\end{eqnarray}
where $W_{\rm AA}$ is the flavor-averaged jet energy loss distribution for a given centrality class of A+A collisions and jet-cone size $R$. If the average jet energy loss is small, the above jet suppression factor can be approximated with
\begin{equation}
R_{\rm AA} (p_{T}) \approx \frac{d\sigma^{\rm jet}_{\rm p+p}(p_{T} + \langle \Delta p_T\rangle)}{d\sigma^{\rm jet}_{\rm p+p}(p_{T})},
\label{shift2}
\end{equation}
where the average jet energy loss is given by
\begin{equation}
\langle \Delta p_T\rangle(p_T)=\int d\Delta p_T \Delta p_T W_{\rm AA}(\Delta p_T, p_T),
\label{aveloss}
\end{equation}
which should depend on the vacuum jet energy $p_T$, colliding energy $\sqrt{s}$, centrality and the jet-cone size $R$.
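Equations (\ref{shift})--(\ref{aveloss}) are straightforward to evaluate numerically once $W_{\rm AA}$ and the p+p spectrum are specified. The sketch below (Python; a power-law p+p spectrum and a Gamma-shaped energy loss distribution are assumed purely for illustration and are not the LBT results) compares the full convolution with the mean-shift approximation:
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def sigma_pp(pt, n=6.0):
    # Toy power-law p+p jet spectrum (illustrative only).
    return pt ** (-n)

def w_aa(dpt, mean_loss, shape=2.0):
    # Toy energy loss distribution with mean <Delta pT> (assumed Gamma).
    return gamma.pdf(dpt, a=shape, scale=mean_loss / shape)

def raa_convolution(pt, mean_loss):
    # Eq. (shift): convolution of the shifted spectrum with W_AA.
    dpt = np.linspace(0.0, 10.0 * mean_loss, 2000)
    num = np.trapz(w_aa(dpt, mean_loss) * sigma_pp(pt + dpt), dpt)
    return num / sigma_pp(pt)

def raa_mean_shift(pt, mean_loss):
    # Eq. (shift2): shift by the average energy loss only.
    return sigma_pp(pt + mean_loss) / sigma_pp(pt)

print(raa_convolution(100.0, 10.0), raa_mean_shift(100.0, 10.0))
\end{verbatim}
The difference between the two estimates quantifies the role of energy loss fluctuations for a given spectral slope.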
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{pTloss_twoEnergy.pdf}
\caption{(Color online) Average jet transverse energy loss as a function of vacuum jet $p_{T}$ with anti-$k_t$ and $R = 0.4$
in $|y| < 2.1$ of central 0 - 10 \% Pb+Pb collisions at (solid) $\sqrt{s} = 5.02$ TeV and (dashed) 2.76 TeV. Black lines with circles are the LBT results without recoil and ``negative" partons, while red lines with squares are with recoil and ``negative" partons and blue lines with diamonds are with recoil but without ``negative" partons.}
\label{pTloss_twoEnergy}
\end{figure}
To illustrate the colliding energy and transverse momentum dependence of the jet energy loss and its fluctuation, we first show the averaged energy
loss $\langle \Delta p_T\rangle$ in Fig.~\ref{pTloss_twoEnergy} for leading jets in the 0-10\% central Pb+Pb collisions at two colliding energies, $\sqrt{s}=2.76$ and 5.02 TeV, from LBT simulations. In the calculations, the leading jet with a large cone size $R=1$ from PYTHIA 8 in each event and the associated jet shower partons are identified. These jet shower partons are then used for the reconstruction of the vacuum leading jet in p+p collisions with a given jet-cone size $R$ and UES. These same jet shower partons are allowed to propagate through the hydrodynamic medium in LBT and the transverse energy of the final medium-modified leading jet with cone size $R$ is calculated with the same jet-finding algorithm and UES. The difference between the final transverse energies of the vacuum and medium-modified leading jet is defined as the jet transverse energy loss as shown in Fig.~\ref{pTloss_twoEnergy} as a function of the vacuum jet transverse energy. An alternative definition of the jet energy loss is the energy difference between the leading jet in p+p and the leading jet in A+A in the same direction as the vacuum leading jet, with the angular difference smaller than the jet-cone size, $\Delta r<R$. The two definitions give approximately the same results.
The transverse jet energy loss at $\sqrt{s}=5.02$ TeV is indeed about 15\% larger than at $\sqrt{s}=2.76$ TeV in the $p_T=50-400$ GeV/$c$ range when the medium response (recoil and ``negative" partons) is taken into account in the calculation of the transverse energy of the medium-modified leading jet. It increases with the vacuum jet transverse energy logarithmically, similarly to that of a single parton~\cite{Guo:2000nz,Wang:2001ifa}. As we will discuss later in detail, such a weak $p_T$-dependence of the jet transverse energy loss is caused by a combination of effects due to jet-induced medium response, radial expansion and jet flavor (quarks and gluons) composition.
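The jet energy loss definition used here can be sketched compactly; in the snippet below (Python), jets are represented as $(p_T, y, \phi)$ tuples and the matching criterion $\Delta r < R$ is the one quoted above, while everything else is our schematic illustration rather than the actual LBT/FASTJET workflow:
\begin{verbatim}
import math

def delta_r(jet_a, jet_b):
    # Angular distance between jets given as (pT, y, phi) tuples.
    dy = jet_a[1] - jet_b[1]
    dphi = math.remainder(jet_a[2] - jet_b[2], 2.0 * math.pi)
    return math.hypot(dy, dphi)

def jet_pt_loss(vacuum_jet, medium_jets, R=0.4):
    # Match the medium-modified jet to the vacuum leading jet within
    # Delta r < R and return the transverse energy difference;
    # returns None when no medium jet falls inside the cone.
    matches = [j for j in medium_jets if delta_r(vacuum_jet, j) < R]
    if not matches:
        return None
    closest = min(matches, key=lambda j: delta_r(vacuum_jet, j))
    return vacuum_jet[0] - closest[0]
\end{verbatim}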
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{px_SingleJet.pdf}
\caption{(Color online) LBT results on jet energy loss distribution $W_{\rm AA}(x)$ as a function of the scaled jet energy loss $x=\Delta p_T/\langle \Delta p_T\rangle$ in Pb+Pb collisions (a) for three different vacuum jet energies, (b) three different centralities and (c) two different colliding energies at LHC.}
\label{elossdistr}
\end{figure}
We also show the jet energy loss distributions $W_{\rm AA}(\Delta p_T, p_T)$ as a function of the scaled variable $x=\Delta p_T/\langle \Delta p_T\rangle$ from LBT simulations in Fig.~\ref{elossdistr} for leading jets (a) with vacuum transverse momentum $p_T=100, 200, 300$ GeV/$c$ in 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV, (b) for $p_T=300$ GeV/$c$ in Pb+Pb collisions with different centralities (0-10\%, 10-20\%, 20-30\%) at $\sqrt{s}=2.76$ TeV, and (c) for $p_T=300$ GeV/$c$ in 0-10\% Pb+Pb collisions at both $\sqrt{s}=2.76$ and 5.02 TeV. We can see that the jet energy loss distribution has a scaling behavior in the scaled variable $x=\Delta p_T/\langle \Delta p_T\rangle$ approximately independent of the vacuum jet $p_T$ and the colliding energy for a given centrality of heavy-ion collisions. The dependence of the jet energy loss distribution on the vacuum jet energy and colliding energy is only implicit through the average jet energy loss $\langle \Delta p_T\rangle(p_T, \sqrt{s})$. Such a scaling property of the jet energy loss distribution is essentially determined by the fluctuation of the jet energy loss caused by a scattering that can transport jet shower partons to the outside of the jet cone and the average number of such out-of-cone scatterings in a given centrality class of A+A collisions. It can be used to extract the jet energy loss distributions from experimental data on jet spectra in p+p and A+A collisions using the convolution relationship in Eq.~(\ref{shift}) \cite{He:2018gks}. Note that the scaling behavior of $W_{\rm AA}(x)$ will be violated at very large values of $x$ for finite values of the vacuum jet transverse momentum $p_T$ due to energy-momentum conservation since the total jet energy loss is limited by the initial or vacuum jet energy. This violation will only influence the tails of the scaling jet energy loss distributions as seen in Fig.~\ref{elossdistr} where the total jet energy loss is large.
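Testing this scaling numerically amounts to histogramming the per-event losses in the scaled variable; a minimal sketch (Python, our own illustration) is:
\begin{verbatim}
import numpy as np

def scaled_loss_distribution(losses, bins=40, x_max=5.0):
    # Histogram W_AA(x) in x = Delta pT / <Delta pT> for a sample of
    # per-event jet energy losses at fixed vacuum pT, centrality and
    # colliding energy. If the scaling holds, histograms for different
    # pT and sqrt(s) collapse onto a single curve.
    losses = np.asarray(losses, dtype=float)
    x = losses / losses.mean()
    hist, edges = np.histogram(x, bins=bins, range=(0.0, x_max),
                               density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist
\end{verbatim}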
\subsection{Understanding the colliding energy and transverse momentum dependence}
Given the jet energy loss distribution, $p_T$ and $\sqrt{s}$ dependence of the average jet transverse energy loss, one should be able to estimate the suppression of jet spectra by shifting jet production cross section as measured in p+p collisions through Eq.~(\ref{shift}) or (\ref{shift2}). As we can see in Fig.~\ref{jetCS_twoEnergy}, the shape of the single inclusive jet spectra at $\sqrt{s}=5.02$ TeV is much flatter than that at 2.76 TeV in the same $p_T$ range. This colliding energy dependence of the single inclusive jet spectra in p+p collisions is one of the deciding factors that will influence the energy and transverse momentum dependence of the jet suppression factor $R_{\rm AA}(p_T)$.
Shown in Fig.~\ref{RAA_shift_twoEnergy} are the jet suppression factors (dashed lines) obtained by shifting the transverse momentum in the jet production cross section in p+p collisions with the average transverse energy loss as shown in Fig.~\ref{pTloss_twoEnergy} according to Eq.~(\ref{shift2}), together with the full LBT calculations (solid lines) and ATLAS data. Scaling factors of $1.174$ and $1.165$ are applied to the shifted spectra at $\sqrt{s} = 2.76$ and 5.02 TeV, respectively, to keep the total number of inclusive jets the same. One can see that the colliding energy and the transverse momentum dependence of the jet suppression factor can be approximately determined by the behavior of the transverse energy loss and the shape of the initial jet production spectra. The approximate 15\% increase in the transverse energy loss from $\sqrt{s}=2.76$ to 5.02 TeV, as shown in Fig.~\ref{pTloss_twoEnergy}, is mostly offset by the decrease of the slope of the jet $p_T$ spectra (becoming flatter), leading to a suppression factor that has a very weak colliding energy dependence. The initial jet production spectra at both colliding energies are more exponential than power-law-like in the large $p_T$ region due to the fall-off of parton distribution functions in the large momentum-fraction region. This shape of the initial production spectra coupled with the weak $p_T$-dependence of the transverse energy loss in these regions of $p_T$ leads to a very weak $p_T$ dependence of the jet suppression factor. Note that the weak $p_T$ dependence of the jet transverse energy loss is partially caused by the influence of jet-induced medium response on the jet energy within a given cone size $R$ as shown in Fig.~\ref{pTloss_twoEnergy}. A detailed analysis of the colliding energy and $p_T$ dependence of the suppression factor given the initial jet spectra in p+p collisions can provide important information about the jet energy loss distributions according to Eq.~(\ref{shift}). This has been investigated in detail in a separate study \cite{He:2018gks}.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{RAA_shift_twoEnergy.pdf}
\caption{(Color online) Experimental data on $R_{\rm AA}$ for 0-10\% central Pb+Pb collisions at (red solid squares) $\sqrt{s} = 2.76$ TeV and (blue solid circles) 5.02 TeV \cite{Aad:2014bxa,Aaboud:2018twu} as compared to (solid lines) LBT calculations and (dashed) the suppression factor obtained by shifting the jet spectra in p+p collisions by the average jet energy loss from Fig.~\ref{pTloss_twoEnergy} according to Eq.~(\ref{shift2}).}
\label{RAA_shift_twoEnergy}
\end{figure}
\section{Effects of medium response, radial expansion and jet flavor}
\label{sec:eloss}
As we have shown in the previous section, the behavior of the suppression factor for single inclusive jets is closely related to the colliding energy and transverse momentum dependence of the jet energy loss due to jet-medium interaction in an expanding QGP. We will examine in this section the effects of medium response, radial expansion and jet flavor on the jet energy loss in detail.
\subsection{Effects of medium response and radial expansion}
Similar to the calculation of jet energy loss in the last section, we focus on the leading jet in both p+p and central (0-10\%) Pb+Pb collisions. Only the jet shower partons associated with the leading jet within a large jet-cone size $R=1$ in PYTHIA 8 simulations of p+p collisions are used for propagation within LBT in 200 events of hydrodynamic profiles with fluctuating initial conditions for 0-10\% central Pb+Pb collisions. FASTJET is used to calculate the transverse energy of the vacuum and medium-modified leading jet with UE subtraction and the transverse energy loss is calculated for different jet-cone sizes. We choose three different jet-cone sizes $R=0.3$, 0.4 and 0.5 to investigate the dependence on the jet-cone size. To study the effect of radial expansion, we also compare to the case where the same jet shower partons propagate in a static medium with a constant temperature $T=0.28$ GeV and finite length (or propagation time) $L=4$ fm. The length is approximately the average propagation length in 0-10\% central Pb+Pb collisions and the temperature is chosen such that the jet transverse energy loss for $R=0.4$ in the static medium is the same as that of a dynamically evolving medium in 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$ TeV in the lowest $p_T$ bin in our study here.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{pTloss_2760.pdf}
\caption{(Color online) LBT results on average $p_{T}$ loss $\langle \Delta p_T\rangle$ for jets in $|y| < 2.1$ as a function of the vacuum jet $p_{T}$ with anti-$k_t$ algorithm and $R = 0.3, 0.4, 0.5$ for [(a), (c), (e)] hydrodynamic background in central 0 - 10\% Pb+Pb collisions at $\sqrt{s} = 2.76$ TeV and [(b), (d), (f)] static medium at $T = 0.28$ GeV with fixed length $L = 4$ fm. Black lines with circles are results without recoil and ``negative" partons, while red lines with squares are with recoil and ``negative" partons and blue lines with diamonds are with recoil but without ``negative" partons.}
\label{pTloss_2760}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{pTloss_5020.pdf}
\caption{The same as Fig.~\ref{pTloss_2760} except for $\sqrt{s} = 5.02$ TeV.}
\label{pTloss_5020}
\end{figure}
Shown in Figs.~\ref{pTloss_2760} and \ref{pTloss_5020} are the average transverse energy loss as a function of the vacuum jet $p_T$ in 0-10\% central Pb+Pb collisions (left) at $\sqrt{s}=2.76$ and 5.02 TeV, respectively, and a static medium with a constant temperature $T=0.28$ GeV and finite length (propagation time) $L=4$ fm (right) for three different jet-cone sizes $R=0.3$, 0.4 and 0.5. Without the inclusion of medium response (recoil and ``negative" partons) (black lines with circles) the jet transverse energy loss is significantly larger than that with medium response (red lines with squares). Inclusion of ``negative" partons increases the jet energy loss only slightly. The inclusion of the medium response (mainly recoil partons) not only reduces the net jet energy loss but also its dependence on the vacuum jet $p_T$, making the $p_T$-dependence much flatter. As we have seen in the last section, this weaker $p_T$-dependence of the jet energy loss is responsible for the $p_T$-dependence of the jet suppression factor $R_{\rm AA}(p_T)$ given the shape of the vacuum jet spectra in p+p collisions. The reduction of the jet energy loss due to the inclusion of medium response increases with the jet cone-size, since the energy carried by recoil partons is spread to wide angles away from the jet axis. The radial expansion in the hydrodynamic medium helps to transport recoil partons to a wider angle away from the jet axis. This makes the net jet energy loss more dependent on the jet-cone size as compared to the case of jet propagation in a static medium. This is more so for the effect of ``negative" partons. In all scenarios, the jet energy loss in general decreases with the jet-cone size $R$.
\subsection{Flavor dependence}
It is known that gluons lose more than twice as much energy as quarks in a QCD medium and the flavor composition of single inclusive jets in p+p collisions depends on the transverse momentum and colliding energy. The transverse momentum and colliding energy dependence of the average jet energy loss in heavy-ion collisions should also be influenced by the flavor composition of the initial jets. We will examine this in detail here.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{pTlossgq_2760.pdf}
\caption{(Color online) The same as Fig.~\ref{pTloss_2760} but for (solid lines) gluon and (dashed lines) quark jets.}
\label{pTlossgq_2760}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{pTlossgq_5020.pdf}
\caption{The same as Fig.~\ref{pTloss_2760} except for gluon (solid lines) and quark jets (dashed lines) at $\sqrt{s}=5.02$ TeV.}
\label{pTlossgq_5020}
\end{figure}
In the high-energy limit when jet shower parton energy is much bigger than the local temperature $E\gg T$, the $t$-channel gluon and quark scattering cross sections can be approximated by their small angle limits,
\begin{eqnarray}
\frac{d\sigma_{ab}}{dq_\perp^2} &\approx& C_{ab} \frac{2\pi\alpha_{\rm s}^2}{(q_\perp^2+\mu_D^2)^2}, \\
&&\left(C_{gg}=\frac{9}{4}, C_{qg}=1, C_{qq}=\frac{4}{9}\right). \nonumber
\label{eq-small-el}
\end{eqnarray}
One can calculate the elastic parton energy loss,
\begin{eqnarray}
\label{eloss}
\frac{dE_{\rm el}^{a}}{dx}&=&\sum_b \int dq_\perp^2 \frac{d^3k}{(2\pi)^3} f_b(k)\frac{q_\perp^2}{2k^0} \frac{d\sigma_{ab}}{dq_\perp^2} \nonumber \\
&\approx&C_a\frac{3\pi}{2} \alpha_{\rm s}^2 T^2 \ln (\frac{s^*}{4\mu _D^2}) ,
\end{eqnarray}
where $s^*\approx 2.6 ET$~\cite{He:2015pra}. Similarly, the jet transport coefficient as defined in Eq.~(\ref{eq-qhat}) is,
\begin{equation}
\label{<qperp>}
\hat q_a
\approx C_a \frac{42 \zeta(3)}{\pi} \alpha_{\rm s}^2T^3 \ln (\frac{s^*}{4\mu _D^2}),
\end{equation}
where $s^*\approx 5.7ET$~\cite{He:2015pra}. Since the radiative gluon spectrum in Eq.~(\ref{induced}) is proportional to $\hat q_a$, both the elastic and radiative energy loss of a propagating parton in a QGP medium depend on its color charge, $C_F=4/3$ for a quark and $C_A=3$ for a gluon \cite{He:2015pra,CasalderreySolana:2007sw}.
The net energy loss of a jet in a QGP medium should also depend on the color charge of its originator, though the dependence is weaker than for the energy loss of a single parton, since a jet shower contains both quarks (anti-quarks) and gluons whether it originates from a highly virtual quark or gluon.
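For orientation, the analytic estimates in Eqs.~(\ref{eloss}) and (\ref{<qperp>}) can be evaluated directly; the sketch below (Python) uses the leading-order form of the Debye mass, $\mu_D^2=4\pi\alpha_{\rm s}T^2(1+n_f/6)$, which is our assumption here since the precise form is given by Eq.~(\ref{eq-mud}) earlier in the text:
\begin{verbatim}
import math

def mu_d_sq(alpha_s, T, nf=3):
    # LO Debye screening mass squared (assumed form).
    return 4.0 * math.pi * alpha_s * T * T * (1.0 + nf / 6.0)

def dEdx_elastic(alpha_s, T, E, C_a):
    # Eq. (eloss): C_a*(3*pi/2)*alpha_s^2*T^2*ln(s*/4 mu_D^2),
    # with s* = 2.6*E*T.
    arg = 2.6 * E * T / (4.0 * mu_d_sq(alpha_s, T))
    return C_a * 1.5 * math.pi * alpha_s**2 * T**2 * math.log(arg)

def qhat(alpha_s, T, E, C_a):
    # Jet transport coefficient: C_a*(42*zeta(3)/pi)*alpha_s^2*T^3*
    # ln(s*/4 mu_D^2), with s* = 5.7*E*T and zeta(3) = 1.2020569.
    arg = 5.7 * E * T / (4.0 * mu_d_sq(alpha_s, T))
    return (C_a * (42.0 * 1.2020569 / math.pi)
            * alpha_s**2 * T**3 * math.log(arg))

# Casimir factors C_F=4/3 (quark), C_A=3 (gluon): single-parton ratio 9/4.
print(dEdx_elastic(0.15, 0.3, 100.0, 3.0) /
      dEdx_elastic(0.15, 0.3, 100.0, 4.0 / 3.0))
\end{verbatim}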
In PYTHIA 8 simulations, we tag the flavor of a leading jet in p+p collisions by the flavor of the final parton in the hard $2\rightarrow 2$ processes in the direction of the final jet and assign the same flavor tagging to the final jet after propagation in the QGP medium. Shown in Figs.~\ref{pTlossgq_2760} and \ref{pTlossgq_5020} are the averaged net jet transverse energy loss as a function of the vacuum jet $p_T$ for gluon (solid lines) and quark jets (dashed lines) with three different jet-cone sizes ($R=0.3$, 0.4, and 0.5) in the static (right) and hydrodynamic QGP medium (left) in 0-10\% Pb+Pb collisions at $\sqrt{s}=2.76$ and 5.02 TeV, respectively. The energy loss of flavor-tagged jets follows the same trend as the flavor-averaged jet energy loss in Figs.~\ref{pTloss_2760} and \ref{pTloss_5020}. Gluon jets however lose more energy than quark jets. The effect of medium response, inclusion of which reduces the net jet energy loss, is also stronger for gluon jets than quark jets.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{pTlossgqR_2760.pdf}
\caption{(Color online) LBT results on ratios of energy loss of gluon jets over quark jets in $|y| < 2.1$ as a function of the vacuum jet $p_{T}$ with anti-$k_t$ algorithm and $R = 0.3, 0.4, 0.5$ for [(a), (c), (e)] hydrodynamic background in central 0 - 10\% Pb+Pb collisions at $\sqrt{s} = 2.76$ TeV and [(b), (d), (f)] static medium at $T = 0.28$ GeV with fixed length $L = 4$ fm. Black lines with circles are results without recoil and ``negative" partons, while red lines with squares are with recoil and ``negative" partons and blue lines with diamonds are with recoil but without ``negative" partons.}
\label{pTlossgqR_2760}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{pTlossgqR_5020.pdf}
\caption{(Color online) The same as Fig.~\ref{pTlossgqR_2760} except for $\sqrt{s}=5.02$ TeV.}
\label{pTlossgqR_5020}
\end{figure}
To illustrate the difference between gluon and quark jet energy loss, we show in Figs.~\ref{pTlossgqR_2760} and \ref{pTlossgqR_5020} the ratio of gluon to quark jet energy loss from Figs.~\ref{pTlossgq_2760} and \ref{pTlossgq_5020}, respectively. Since jet showers also contain gluons even if they are initiated by a hard quark, the ratio of the net energy loss of a gluon-tagged jet to that of a quark-tagged jet is always larger than one but smaller than 9/4, the ratio of the energy loss of a single gluon to that of a single quark, as seen in the LBT calculation. The ratio of gluon-tagged to quark-tagged jet energy loss with medium response increases from 1.2 to about 1.4 in the $p_T$ range shown. This means the medium sees more of the jet's original color charge for larger vacuum jet $p_T$. Without the medium response, the ratio is slightly smaller. This indicates that the effect of medium recoil is bigger for gluon-tagged jets because of their stronger interaction with the medium and larger energy loss as compared to quark-tagged jets. The ratio is also slightly influenced by the radial expansion and has a moderate dependence on the jet-cone size.
\begin{figure}[!h]
\includegraphics[width=7.5cm]{ngqpT_twoEnergy.pdf}
\caption{(Color online) Transverse momentum dependence of the fraction of (solid lines) gluon jet and (dashed lines) quark jet within $|y|<2.1$ in p+p collisions at (red squares) $\sqrt{s} = 2.76$ and (blue circles) 5.02 TeV from PYTHIA 8 simulations with anti-$k_t$ and $R = 0.3, 0.4, 0.5$. }
\label{ngqpT_twoEnergy}
\end{figure}
To better understand the final flavor-averaged jet energy loss, one also needs to know the initial flavor composition of single inclusive jets as reconstructed with FASTJET. Shown in Fig.~\ref{ngqpT_twoEnergy} are fractions of gluon (solid lines) and quark-tagged jets (dashed lines) as a function of the vacuum jet $p_T$ with three different jet-cone sizes ($R=0.3$, 0.4 and 0.5) in p+p collisions at $\sqrt{s}=2.76$ (red squares) and 5.02 TeV (blue circles). The gluon (quark) jet fraction decreases (increases) with the vacuum jet $p_T$ as determined by the parton distributions inside a nucleon. The fractions have almost no dependence on the jet-cone size. At fixed values of jet $p_T$, the gluon (quark) fraction is bigger (smaller) at higher colliding energy or smaller parton initial momentum fraction $x_T=2p_T/\sqrt{s}$. We have checked that given these flavor compositions, $\gamma_g(p_T)$ and $\gamma_q(p_T)$ in Fig.~\ref{ngqpT_twoEnergy}, and the flavor-tagged jet energy loss, $\Delta p^g_T(p_T)$ and $\Delta p_T^q(p_T)$ in Figs.~\ref{pTlossgq_2760} and \ref{pTlossgq_5020}, one can recover the inclusive jet energy loss in Figs.~\ref{pTloss_2760} and \ref{pTloss_5020} through
\begin{equation}
\langle \Delta p_T\rangle =\gamma_g \langle \Delta p^g_T\rangle + \gamma_q \langle \Delta p_T^q \rangle.
\end{equation}
According to this flavor composition, the quark fraction among the inclusive jets increases with $p_T$. Since the quark jet energy loss is smaller than that of a gluon jet, the $p_T$-dependence of the effective flavor-averaged jet energy loss for single inclusive jets is weaker than that for flavor-tagged jets (quark or gluon). Together with the effect of recoil partons from medium response, this further weakens the $p_T$-dependence of the effective inclusive jet energy loss and consequently leads to the observed $p_T$-dependence of the suppression factor $R_{\rm AA}(p_T)$. As one increases the colliding energy, the gluon jet fraction at fixed $p_T$ increases. This will increase the effective inclusive jet energy loss accordingly. With the increased initial energy density in the bulk medium, the increased inclusive jet energy loss at higher colliding energy is, however, offset by the flatter initial jet spectra and leads to a weak colliding energy dependence of the jet suppression factor.
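A one-line sketch of this decomposition (Python, our own illustration) makes the bookkeeping explicit:
\begin{verbatim}
def flavor_averaged_loss(gamma_g, loss_g, loss_q):
    # <Delta pT> = gamma_g*<Delta pT^g> + (1 - gamma_g)*<Delta pT^q>,
    # with the quark fraction gamma_q = 1 - gamma_g.
    return gamma_g * loss_g + (1.0 - gamma_g) * loss_q
\end{verbatim}
Feeding in the $p_T$-dependent fractions of Fig.~\ref{ngqpT_twoEnergy} and the flavor-tagged losses of Figs.~\ref{pTlossgq_2760} and \ref{pTlossgq_5020} reproduces the inclusive curves, as stated above.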
\subsection{Rapidity dependence of jet suppression}
The jet flavor composition shown in Fig.~\ref{ngqpT_twoEnergy} is averaged over the central rapidity region $|y|<2.1$ which is determined by the flavor dependence of the parton distribution functions (PDF's) inside a proton and the partonic cross sections. The flavor dependence, especially gluons versus quarks, of PDF's is known to vary with the momentum fraction $x$ of partons favoring gluons at small $x$. The jet flavor composition will therefore depend on the rapidity of the final jets. Shown in Fig.~\ref{ngqpT_rapidity} are the gluon (red solid lines) and quark (blue dashed lines) jet fractions as a function of the vacuum jet $p_T$ with jet-cone size $R=0.4$ in different rapidity bins in p+p collisions at $\sqrt{s}=2.76$ TeV. The gluon (quark) fraction decreases (increases) with rapidity at a fixed value of jet $p_T$. The crossing point where the gluon and quark fractions become equal moves to smaller $p_T$ as the rapidity increases. As an illustration of the rapidity dependence of the flavor composition, we plot in Fig.~\ref{RAA_y} the gluon fraction (blue solid line) as a function of rapidity for $80<p_T<100$ GeV/$c$ in p+p collisions at $\sqrt{s}=2.76$ TeV. It decreases from $\gamma_g=0.68$ at $y=0$ to 0.52 at $y=2.1$. According to Fig.~\ref{pTlossgqR_2760}, gluon jets lose about 1.2 times as much energy as quark jets for $p_T=80-100$ GeV/$c$. The jet energy loss $\Delta p_T =\Delta p_T^g \gamma_g +(1-\gamma_g)\Delta p_T^q \approx (1+0.2\gamma_g)\Delta p_T^q$ will only decrease by 2.8\% due to the decrease of the gluon fraction from $y=0$ to 2.1. The jet energy loss for both flavors will decrease from central to large rapidity due to the spatial distribution of the bulk medium density. This rapidity dependence of the jet energy loss is offset by the rapidity dependence of the initial jet spectra which become steeper as a function of $p_T$ at large rapidity. The final jet suppression factor $R_{AA}$ will then have a very weak rapidity dependence within the range $0<|y|<2.1$ as shown in Fig.~\ref{RAA_y} for $80<p_T<100$ GeV/$c$ (red dashed line) (see also Fig.~\ref{RAA_ctrl_rap}) which is consistent with the ATLAS data~\cite{Aad:2014bxa}. Please note that two different observables, gluon fraction $\gamma_g$ and single jet suppression factor $R_{AA}$, are plotted in Fig.~\ref{RAA_y} for convenience.
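The quoted 2.8\% decrease follows directly from the numbers above, as the short check below shows (Python, using the approximation $\Delta p_T^g \approx 1.2\,\Delta p_T^q$):
\begin{verbatim}
def relative_loss(gamma_g, ratio_gq=1.2):
    # Delta pT = (1 + (ratio_gq - 1)*gamma_g) * Delta pT^q
    return 1.0 + (ratio_gq - 1.0) * gamma_g

change = 1.0 - relative_loss(0.52) / relative_loss(0.68)
print(f"{100 * change:.1f}% decrease from y=0 to y=2.1")  # 2.8%
\end{verbatim}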
\begin{figure}[!h]
\includegraphics[width=7.5cm]{ngqpTrap.pdf}
\caption{(Color online) Transverse momentum dependence of the fraction of gluon jet (red solid lines) and quark jet (blue dashed lines) for different jet rapidity $y$ in p+p collisions at $\sqrt{s} = 2.76$ TeV from PYTHIA 8 simulations with anti-$k_t$ and $R = 0.4 $. }
\label{ngqpT_rapidity}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=7.5cm]{RAA_y.pdf}
\caption{(Color online) Rapidity dependence of the initial gluon jet fraction $\gamma_g$ (blue solid line) and jet suppression factor $R_{AA}$ for $80<p_T<100$ GeV/$c$ (red dashed line) in 0-10\% central Pb+Pb collisions at $\sqrt{s} = 2.76$ TeV from LBT simulations with anti-$k_t$ and $R = 0.4 $. Solid squares are ATLAS data~\cite{Aad:2014bxa} on $R_{AA}$. Note that two different observables, gluon fraction $\gamma_g$ and single jet suppression factor $R_{AA}$, are plotted in this figure.}
\label{RAA_y}
\end{figure}
\subsection{Cone-size dependence of jet suppression}
As we have shown in the above subsections, medium response and radial expansion can both influence the net jet energy loss and lead to a stronger jet-cone size dependence. The net jet energy loss decreases with the cone size as jets with a bigger cone size will include more medium recoil partons and radiated gluons. This in principle should also lead to a unique cone size dependence of the single inclusive jet suppression, which should also be influenced by the cone size dependence of the single inclusive jet spectra in p+p collisions. Shown in Fig.~\ref{cone-ratio} are ratios of the single inclusive jet spectra from LBT simulations of 0-10\% central Pb+Pb collisions (solid) as compared to p+p results (dashed) from PYTHIA 8 with different cone sizes in the central rapidity region at $\sqrt{s}=5.02$ TeV. One can see that single inclusive jet spectra are in general smaller for smaller jet-cone size. The bigger energy loss for jets with smaller jet-cone size will further reduce the spectra relative to that with a bigger jet-cone size. Though the magnitude of the jet spectra decreases with smaller jet-cone size, the shape of the spectra is actually flatter [$\sigma(R=0.2)/\sigma(R=0.4)$ and $\sigma(R=0.3)/\sigma(R=0.4)$ both increase with $p_T$]. Since the net jet energy loss increases with smaller jet-cone size, the corresponding jet suppression should be stronger (smaller values of $R_{\rm AA}$), which in turn should be offset somewhat by the flatter jet spectra in vacuum.
\begin{figure}[!h]
\includegraphics[width=7.5cm]{ATLAS_Rratio.pdf}
\caption{(Color online) Ratios of single inclusive jet spectra with different jet cone-size, $\sigma(R=0.2)/\sigma(R=0.4)$ (lines with squares) and $\sigma(R=0.3)/\sigma(R=0.4)$ (lines with circles), as a function of $p_T$ in (solid) 0-10\% central Pb+Pb and (dashed) p+p collisions at $\sqrt{s}=5.02$ TeV from LBT and PYTHIA 8 simulations, respectively.}
\label{cone-ratio}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=7.5cm]{ATLAS_RAA_R2345.pdf}
\caption{(Color online) Suppression factor of single inclusive jet spectra $R_{\rm AA}$ as a function of $p_T$ in central rapidity region of 0-10\% Pb+Pb collisions at $\sqrt{s}=5.02$ TeV from LBT with (solid) and without medium recoil (including ``negative" partons) (dashed) for different jet-cone sizes, $R$=0.5, 0.4, 0.3 and 0.2 as compared to CMS data \cite{Khachatryan:2016jfl} in 0-5\% Pb+Pb collisions at $\sqrt{s}=2.76$ TeV.}
\label{RAA-cone}
\end{figure}
Shown in Fig.~\ref{RAA-cone} are LBT results on the single jet suppression factor with (solid) and without medium recoil (including ``negative" partons) (dashed) as a function of $p_T$ in the central rapidity region of 0-10\% Pb+Pb collisions at $\sqrt{s}=5.02$ TeV for different jet-cone sizes, $R$=0.5, 0.4, 0.3 and 0.2. We observe that the suppression factor increases with the jet-cone size as the net jet energy loss gets smaller for bigger jet-cone size. Without medium recoil, the suppression factors are not only significantly smaller due to increased energy loss but also much less sensitive to the jet-cone size. The jet suppression as measured by the CMS experiment~\cite{Khachatryan:2016jfl} for Pb+Pb collisions at $\sqrt{s}=2.76$ TeV shows almost no jet-cone size dependence. However, the systematic uncertainties are too big to discern the predicted jet-cone size dependence from the LBT simulations shown here. Similar behavior was also predicted in Refs.~\cite{Vitev:2008rz,Vitev:2009rd,Zapp:2012ak,Kang:2017frl}. But the $p_T$-dependence is different in LBT because of the influence of medium response and radial expansion. More precision measurements of the cone size dependence of the jet suppression can therefore elucidate the underlying processes responsible for the final jet suppression.
\section{Predictions at RHIC}
\label{rhicpredict}
As we have shown in this study, the transverse momentum dependence of the single inclusive jet suppression factor in heavy-ion collisions is determined mainly by the $p_T$-dependence of the jet energy loss and the shape of the initial single inclusive jet spectra in p+p collisions. Since the single inclusive jet spectra at the RHIC energy $\sqrt{s}=200$ GeV are much steeper in the available $p_T$ range as shown by PYTHIA 8 results and STAR experimental data~\cite{Abelev:2006uq} in Fig.~\ref{jetCS200}, the single inclusive jet suppression factor at RHIC should have a different transverse momentum dependence from that at LHC, depending on the $p_T$-dependence of the jet energy loss. While fractions of quark and gluon-initiated jets are about the same at around $p_T = 20$ GeV/$c$, jets become mostly quark-dominated at large $p_T$ at RHIC as shown by PYTHIA 8 results in Fig.~\ref{ngqpT_200}. The net energy loss for quark and gluon-initiated jets in the RHIC $p_T$ range is, however, very similar as shown in Fig.~\ref{pTlossgq_200}. The effect of jet-induced medium response is also much smaller in this $p_T$ range and the jet energy loss has a weak dependence on jet cone size, both due to a shorter duration of the QGP phase in central Au+Au collisions at RHIC. The net jet energy loss as shown in Fig.~\ref{pTloss_200} has a weaker transverse momentum and jet-cone size dependence as compared to that at LHC. The combined effect of the steep initial jet spectra at RHIC and weak transverse momentum dependence of the jet energy loss in the $p_T$ range leads to a single inclusive jet suppression factor that actually decreases slightly with the final jet transverse momentum as shown in Fig.~\ref{RAA_200} for Au+Au collisions with three different centralities at $\sqrt{s}=200$ GeV. This is quite different from the $p_T$-dependence of the jet suppression factor at the LHC that increases with $p_T$, though weakly. This unique colliding energy and transverse momentum dependence of the single inclusive jet suppression at RHIC will be important to verify, as one can then directly infer the $p_T$-dependence of the jet energy loss given the measured initial jet production spectra in p+p collisions at the same energy~\cite{He:2018gks}.
\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{jetCS200.pdf}
\caption{(Color online) PYTHIA 8 result of the inclusive jet differential cross section as a function of $p_{T}$ in the central rapidity of p+p collisions at $\sqrt{s} = 200$ GeV with anti-$k_{t}$ algorithm and jet-cone radius $R = 0.4$ as compared to STAR data \cite{Abelev:2006uq}.}
\label{jetCS200}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{ngqpT_200.pdf}
\caption{(Color online) Transverse momentum dependence of the number fraction of (solid) gluon and (dashed) quark jets in p+p collisions at $\sqrt{s} = 200$ GeV from PYTHIA 8 with anti-$k_t$ and jet-cone sizes $R = 0.3, 0.4$ and 0.5.}
\label{ngqpT_200}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=7.5cm]{pTlossgq_200.pdf}
\caption{(Color online) LBT results on the average jet transverse energy loss of (solid) gluon and (dashed) quark jets within $|y| < 2.1$ with anti-$k_t$ algorithm and jet-cone sizes $R = 0.3, 0.4, 0.5$ as a function of the vacuum jet $p_{T}$ in $0 - 10 \%$ central Au+Au collisions at $\sqrt{s} = 200$ GeV. Black lines with circles are without recoil and ``negative" partons, while red lines with squares are with recoil and ``negative" partons and blue lines with diamonds are with recoil but without ``negative" partons.}
\label{pTlossgq_200}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=7.5cm]{pTloss_200.pdf}
\caption{(Color online) The same as Fig.~\ref{pTlossgq_200} except for flavor-averaged jet transverse energy loss. }
\label{pTloss_200}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{RAA_200.pdf}
\caption{(Color online) LBT predictions for $R_{\rm AA}$ of single inclusive jet spectra in Au+Au collisions at $\sqrt{s}=200$ GeV with three different centralities.}
\label{RAA_200}
\end{figure}
\section{Conclusions}
\label{summary}
We have carried out a systematic study of jet energy loss and single inclusive jet suppression in high-energy heavy-ion collisions within the LBT model with CLVisc (3+1)D event-by-event hydrodynamic evolution of the bulk medium which is constrained by the bulk hadron spectra. The LBT model can describe well the dependence of the jet suppression factor $R_{\rm AA}(p_T)$ on the colliding energy, centrality, transverse momentum and rapidity as measured by experiments at LHC. While the average net jet energy loss with a given jet-cone size in Pb+Pb collisions at $\sqrt{s}=5.02$ TeV is larger than that at $\sqrt{s}=2.76$ TeV due to the increased initial bulk medium density and larger fraction of gluon-initiated jets, the final jet suppression factor $R_{\rm AA}(p_T)$ at $\sqrt{s}=5.02$ TeV is actually comparable to or even slightly larger than that at $\sqrt{s}=2.76$ TeV. This colliding energy dependence is mainly determined by the initial jet production spectra in p+p collisions which are harder at 5.02 TeV as compared to that at 2.76 TeV. The weak transverse momentum dependence of the jet suppression factor at both energies is dictated by the initial jet production spectra, the $p_T$-dependence of the net jet energy loss and the jet energy loss fluctuations. We have analyzed the net jet energy loss and its $p_T$-dependence within the LBT model in detail. We found that it is influenced by the inclusion of jet-induced medium response, radial expansion and jet flavor (quark and gluon) composition, all leading to a weaker $p_T$-dependence of the averaged jet energy loss. The inclusion of jet-induced medium response and the influence of radial expansion also lead to a stronger cone-size dependence of the net jet energy loss. We have shown that this will also lead to a unique cone-size dependence of the single jet suppression.
We have also provided predictions for the single inclusive jet suppression factor in Au+Au collisions at the RHIC energy $\sqrt{s}=200$ GeV. Because of the steeper initial jet production spectra, we predict that the jet suppression factor at RHIC actually decreases slightly with $p_T$ in the $p_T<50$ GeV/$c$ range,
though the $p_T$-dependence of net jet energy loss is weaker than that at LHC. Such unique energy and $p_T$-dependence of the jet suppression factor is a direct consequence of the $p_T$-dependence of jet energy loss given the measured initial jet production spectra in p+p collisions. Extraction of the $p_T$-dependence of jet energy loss and energy loss fluctuations will provide an important link between experimental measurement of jet suppression and jet transport properties in quark-gluon plasma in high-energy heavy-ion collisions.
\clearpage
\begin{acknowledgments}
This work was supported in part by the National Science Foundation of China under Grant No. 11221504, by the Major State Basic Research Development Program in China under Grant No. 2014CB845404, by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 and No. DE-SC0013460, and by the US National Science Foundation within the framework of the JETSCAPE collaboration, under Grant No. ACI-1550228 and No. ACI-1550300. This research used GPU workstations at CCNU and computer resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
\end{acknowledgments}
\label{sec:intro}
Type IIP supernovae (SNe~IIP, where ``P'' stands for the plateau in the light curve)
originate from massive stars that retain a significant fraction of the
hydrogen envelope until the core collapse.
A pre-SN star before the explosion has the structure of a red supergiant (RSG)
\citep{GIN_71, Sma_09} which favors the high luminosity at the plateau
powered by the explosion energy.
The plateau with a duration of about 100\,days is followed by the radioactive
tail which, in turn, is powered by $^{56}$Co decay.
SNe~IIP show a broad range of plateau luminosities: from
$\sim$2$\times10^{41}$\,erg\,s$^{-1}$ for subluminous SNe~IIP, e.g.,
SN~2003Z \citep{UCP_07}, to $\sim$10$^{43}$\,erg\,s$^{-1}$ for SN~2009kf,
the most energetic case among ordinary SNe~IIP \citep{UCB_10}.
SN~1987A-like supernovae related to the explosion of a blue supergiant (BSG)
sometimes are classified as peculiar SNe~IIP, although their broad luminosity
maximum at about 90--100\,days is completely powered by the radioactive
decay of $^{56}$Co \citep{Woo_88}.
According to the evolutionary models, SNe~IIP originate from the progenitors
in the range of $9 - M_\mathrm{\,IIP}$ with $M_\mathrm{\,IIP} \approx
25$\Msun for solar metallicity and $M_\mathrm{\,IIP} \approx 40$\Msun for
low metallicities \citep{HFWLH_03}.
The hydrodynamic modelling of the well-observed SNe~IIP recovers the ejecta
masses in the range of 13.1--28.1\Msun for the sample of 10 SNe~IIP
\citep{UC_17}.
This is in line with the ejecta mass range of 12.4--25.6\Msun for the sample
of 9 SNe~IIP, inferred by \citet{Nad_03} using the scaling relations based
on the hydrodynamic models \citep{LN_85}.
These ``hydrodynamic'' ejecta masses combined with the neutron star mass lead
to the minimum progenitor masses in the range of 14--30\Msun without taking
into account the mass loss.
A comparison of the archive photometry of SN~IIP progenitors with the
evolutionary RSG models indicates the lower progenitor masses of 8--18\Msun
\citep{Sma_15}, that brings about yet unsettled tension between the
hydrodynamic masses and the masses recovered from the archival photometry
\citep[but see e.g.][]{PZS_17}.
There is a general consensus that the SN~IIP explosion is caused by the core
collapse, although the conversion of the binding energy of a newly born
neutron star into the kinetic energy of the ejecta is not yet fully
understood.
The preferred mechanism is the neutrino-driven explosion, however the successful
self-consistent model is available so far only for low energy
($\approx$10$^{50}$\,erg) events related to $\approx$10\Msun progenitors
\citep{Jan_17}.
The general physical arguments admit that the neutrino-driven explosion is able
to provide the energy of up to $2\times10^{51}$\,erg \citep{Jan_17}.
The alternative mechanism is the magneto-rotational explosion driven by either
magnetic bipolar jets \citep{LW_70, WMW_02, BMA_18}, or by amplified
toroidal field along the equatorial plane \citep{Bis_71, BMA_18}.
The intrinsic feature of the magneto-rotational explosion seems to be a bipolar
ejecta asymmetry.
Another remarkable property of the magneto-rotational mechanism is its
potential to produce the high-energy events \citep{BDL_07} that could
account for SN~2009kf with the explosion energy of $2.15\times10^{52}$\,erg
\citep{UCB_10}.
The magneto-rotational explosion of SNe~IIP thus might be indicated by
the high explosion energy and the ejecta asymmetry.
The explosion energy of SNe~IIP along with the ejecta mass and the pre-SN
radius can be recovered only via the modelling of the light curve and
expansion velocities of the well-observed objects.
As to the explosion asymmetry, it could be imprinted in the asphericity of
the $^{56}$Ni ejecta.
The latter, in turn, can be revealed via the H$\alpha$ asymmetry at the nebular
stage when the line emissivity closely traces the energy deposition of
gamma-rays and positrons from the $^{56}$Co decay \citep{Chu_07}.
Until recently the most conspicuous H$\alpha$ asymmetry in SNe~IIP was
demonstrated by SN~2004dj \citep{CFS_05} and was interpreted as an outcome
of the bipolar $^{56}$Ni ejecta.
Less pronounced, yet apparent, is the asymmetry of SN~2013ej \citep{MDJ_17}
that shows signatures of the asymmetric high-velocity $^{56}$Ni
ejecta \citep{UC_17}.
The asymmetry shown by the recent type IIP SN~2016X \citep{BDER_19}
significantly outclasses that of SN~2004dj.
The nebular H$\alpha$ profile looks weird: two separated peaks of comparable
intensity with the deep minimum in between.
This H$\alpha$ profile unambiguously indicates the bipolar $^{56}$Ni ejecta
\citep{BDER_19} presumably observed along the bipolar axis.
But this is not the only surprise demonstrated by SN~2016X.
Even more unusual is the absence of the SN~IIP generic [\ion{O}{i}] 6300,
6364\,\AA\ nebular emission.
This emission is barely seen on day 340 and completely absent at day 471
\citep{BDER_19}.
At the moment this puzzling phenomenon remains a challenging problem.
Finally, we couldn't help noticing the unusual occultation effect due
to the internal dust.
On day 471 the red H$\alpha$ peak is significantly attenuated by the dust
which signals the internal dust formation \citep{BDER_19} at the right
time for SNe~IIP.
On day 740 the red H$\alpha$ peak completely disappears, which reflects
an increase in the amount of internal dust.
The surprising fact however is that the blue H$\alpha$ peak does not show
any sign of the additional blueshift caused by the occultation, which
would be present in the case of the central (quasi-)spherical dusty zone
likewise in SN~1987A \citep{LDGB_89, MIW_17} and SN~1999em
\citep{ECP_03}.
The unusual manifestations of SN~2016X raise the question of whether we see
the explosion of a normal massive RSG with the progenitor mass and
the explosion energy typical for explored SNe~IIP, i.e., $M \sim 13-30$\Msun
and $E \sim (0.2-2)\times10^{51}$\,erg \citep{UC_17}, or we face
an extraordinary event.
Fortunately, photometric and spectral observations \citep{HWH_18, BDER_19}
provide us with an excellent basis to explore different aspects of
SN 2016X in detail and possibly to clear up the issue.
Below SN 2016X will be studied in several ways.
We start with the hydrodynamic modelling to determine principal SN parameters,
i.e., the ejecta mass, the kinetic energy, and the pre-SN radius;
the amount of $^{56}$Ni will be obtained directly from the luminosity in
the radioactive tail.
We then recover the $^{56}$Ni distribution in the envelope from the nebular
double-peaked H$\alpha$ by computing the energy
deposition produced by the asymmetric $^{56}$Ni distribution.
This simulation includes effects of the absorption of the H$\alpha$ emission
by the dust, which will permit us to constrain the spatial distribution
of the dusty material.
Finally, we will use X-ray observations during the first 20 days after
the explosion \citep{GDS_16, BDER_19} to infer the pre-SN wind density and
to test the compatibility of the SN hydrodynamic parameters with the
observational effects of the ejecta/wind interaction.
Throughout the paper we use the distance $D = 15.2\pm3.0$\,Mpc \citep{BDER_19}
and the reddening $E(B-V) = 0.04$\,mag \citep{HWH_18}.
The explosion date is set to 2016 January 18.7, as recovered from
the fit of the earliest $V$ magnitudes by the hydrodynamic model.
This moment is only 0.25\,days earlier than that adopted by
\citet{HWH_18}.
\section{Hydrodynamic modelling}
\label{sec:hydro}
The principal SN parameters, i.e., the explosion energy, the ejecta mass, and
the pre-SN radius are recovered via the hydrodynamic description of the
observational bolometric light curve and the expansion velocity at
the photosphere; the latter should be preferably fitted at the early
photospheric stage, when the effects of the internal asymmetry of the
radioactive $^{56}$Ni ejecta are minimal.
Another crucial parameter, the total amount of $^{56}$Ni, is directly inferred
from the flux at the early ($<200$\,days) stage of the radioactive tail.
It is reasonable to start this section by describing the data that constitute
the observational basis for the hydrodynamic modelling of SN~2016X.
\subsection{Observational data}
\label{sec:hydro-odata}
The available $UBVRI$ photometry \citep{HWH_18} is used to recover the
bolometric light curve of SN~2016X.
The filter fluxes are fitted by a black body spectrum multiplied by the
reduction factor that takes into account the spectrum suppression in
the blue and ultraviolet regions primarily due to the line opacity.
The reduction factor depends on the SN age and its wavelength
dependence is recovered from the SN~1987A spectral energy distribution
at different ages \citep{PKS_95}.
The photometric data taken by the 0.8-m Tsinghua University-NAOC telescope (TNT)
and the Lijiang 2.4-m telescope (LJT) \citep{HWH_18} are in good
agreement and used to recover the observational bolometric light curve.
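As an illustration of this procedure, a minimal sketch is given below; the
effective wavelengths, the reduction factors, and the simulated fluxes are
placeholders, not the values actually used (the real reduction factors are
tabulated from the SN~1987A spectral energy distribution).
\begin{verbatim}
# A sketch of the bolometric-flux recovery: filter fluxes are fitted by a
# black body multiplied by a reduction factor; all numbers are placeholders.
import numpy as np
from scipy.optimize import curve_fit

h, c, kB, sigma = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5  # CGS

def planck(lam, T):
    # black-body intensity B_lambda (erg s^-1 cm^-2 cm^-1 sr^-1)
    return 2*h*c**2/lam**5/np.expm1(h*c/(lam*kB*T))

lam = np.array([3650., 4450., 5510., 6580., 8060.])*1e-8  # UBVRI, cm
red = np.array([0.30, 0.60, 0.90, 1.00, 1.00])            # reduction factors

def model(lam_, T, W):
    # W = pi*(R/D)^2 absorbs the photospheric radius and the distance
    return W*red*planck(lam_, T)

rng = np.random.default_rng(0)
f_obs = model(lam, 6000., 1e-20)*(1 + 0.03*rng.standard_normal(lam.size))

(T, W), _ = curve_fit(model, lam, f_obs, p0=(5000., 1e-20))
F_bol = W*sigma*T**4/np.pi   # bolometric flux of the undiluted black body
\end{verbatim}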
The time evolution of the observed photospheric velocity is taken from
\citet{HWH_18}.
In addition to the photospheric velocity, we need the maximal velocity of
the SN ejecta because of its key role in constraining the hydrodynamic model.
The point is that the maximal velocity is sensitive to the choice of the
radius of the pre-SN model and its density structure.
The earliest spectrum that provides us with a reliable value of the maximal
velocity is the spectrum at 4.56\,days.
Our estimate of the velocity from the blue edge of the H$\alpha$ emission at
that moment is $13750\pm500$\kms, in accordance with the value reported
by \citet[Fig.~10]{HWH_18}.
\subsection{Model overview}
\label{sec:hydro-model}
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=0 237 54 138]{fig1.eps}
\caption{
Density distribution as a function of interior mass (Panel \textbf{a}) and
radius (Panel \textbf{b}) for the pre-SN model.
The central core of 1.4\Msun is omitted.
}
\label{fig:denmr}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=8 17 46 250]{fig2.eps}
\caption{
Mass fraction of hydrogen (\emph{black line\/}), helium
(\emph{blue line\/}), CNO elements (\emph{green line\/}),
and Fe-peak elements excluding radioactive $^{56}$Ni
(\emph{magenta line\/}) in the ejected envelope.
}
\label{fig:chcom}
\end{figure}
The one-dimensional hydrodynamic code with the radiation transfer
\citep{Utr_04} is used to explode the hydrostatic non-evolutionary
pre-SN model.
This approach is preferred because the pre-SN model produced by the
stellar evolution computations is generally unable to describe SN~IIP
observations.
This was understood in the wake of SN~1987A \citep{Woo_88}.
A serious modification of the pre-SN model is required, which includes
extended mixing between the metal-rich ejecta and the He-rich and H-rich
envelopes, with the smoothing of steep density gradients.
Hand-made mixing is therefore used for the pre-SN model in order to
imitate, in the one-dimensional model, the mixing produced by the real,
essentially three-dimensional explosion \citep{UWJM_17}.
The acceptable pre-SN model is found via numerical simulations of a set of
models with different SN parameters.
The explosion of the pre-SN model is initiated by a supersonic piston applied to
the stellar envelope at the boundary with the collapsing 1.4\Msun core.
The pre-SN density and chemical composition of the optimal model are shown
in Fig.~\ref{fig:denmr} and Fig.~\ref{fig:chcom}, respectively.
We did not solve the optimization problem rigorously, since this procedure
requires enormous computational efforts.
Instead, the optimal model is recovered as a compromise between the fits to
the observed light curve and the evolution of the velocity at the
photosphere.
\subsection{Optimal model and supernova parameters}
\label{sec:hydro-modpar}
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=0 16 54 245]{fig3.eps}
\caption{
The model bolometric light curve (\emph{blue line\/}) overplotted on
the bolometric luminosities which we recovered from the photometric data
reported by \citet{HWH_18}.
}
\label{fig:blc}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=6 16 54 245]{fig4.eps}
\caption{
The evolution of model velocity at the photosphere defined by the level
$\tau_\mathrm{eff} = 2/3$ (\emph{blue line\/}) and $\tau_\mathrm{T} = 2/3$
(\emph{magenta line\/}) is compared with the photospheric velocities
estimated from the line absorption minimum \citep{HWH_18}.
The red point is the early expansion velocity obtained from the blue
wing of H$\alpha$.
The horizontal line is the model maximal velocity corresponding to
the strongest density peak formed at the shock breakout stage;
the interaction with the wind is neglected.
For the late-time mismatch see Section~\ref{sec:hydro-modpar}.
}
\label{fig:vph}
\end{figure}
The optimal hydrodynamic model satisfactorily reproduces the bolometric light
curve (Fig.~\ref{fig:blc}) and the expansion velocity at the early stage
(Fig.~\ref{fig:vph}).
The model maximal velocity specified by the density peak at 14000\kms
(Fig.~\ref{fig:denni}) is also consistent with the observed maximal velocity
of $13750\pm500$\kms recovered from the H$\alpha$ emission at 4.56\,days.
Note that both definitions of the photosphere, via the effective and the
Thomson opacity, predict similar velocity values.
The model $V$-band light curve satisfactorily fits the initial
behavior of the absolute $V$ magnitude, including the discovery point
(Fig.~\ref{fig:onset}); the shown fit suggests that the explosion occurred
on 2016 January 18.7, i.e., 0.25\,days earlier than the explosion
moment adopted by \citet{HWH_18}.
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=15 18 54 213]{fig5.eps}
\caption{
The gas (\emph{blue line\/}) and $^{56}$Ni (\emph{red line\/}) density
as a function of velocity at $t=50$\,days.
\emph{Dash-dotted line\/} is the exponential fit $\exp(-v/v_0)$.
}
\label{fig:denni}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=0 18 52 244]{fig6.eps}
\caption[]{
$V$-band light curve (\emph{blue line\/}) during the first 20 days for
the optimal model.
The arrow marks the upper limit $V>18.0$\,mag for a non-detection on
January 18.351, the red point is the average of the two earliest $V$
estimates separated by 0.008\,days with an uncertainty of 0.3\,mag,
and the green point is the $V$ magnitude of 15.1 at January 20.586
\citep{BSS_16}.
Crosses are the observational points of \citet{HWH_18}.
The model explosion date is January 18.7.
}
\label{fig:onset}
\end{figure}
The optimal model is specified by the ejecta mass $M = 28$\Msun, the
kinetic energy $E = 1.73\times10^{51}$\,erg, and the pre-SN radius
$R_0 = 436$\Rsun.
The $^{56}$Ni mass directly recovered from the radioactive tail is 0.0295\Msun.
The radial distribution of $^{56}$Ni in the model is a spherical representation
of the bipolar $^{56}$Ni ejecta recovered from the modelling of the
H$\alpha$ profile at the nebular stage (Section~\ref{sec:ha}).
The total density and the density of $^{56}$Ni in the freely expanding envelope
is shown in Fig.~\ref{fig:denni}.
The oscillatory structure of the density distribution in the outermost layers
($v > 14000$\kms) forms at the shock breakout and is related to the
instability of the radiative acceleration due to the strong opacity
dependence on the temperature and density.
The characteristic property of the optimal model is the large fraction of the
kinetic energy residing in the outer layers: the 4\Msun external ejecta,
about 14\% of the total ejecta mass, contain about 50\% of the total kinetic
energy (Fig.~\ref{fig:mev}).
This feature is closely related to the ability of the hydrodynamic model to
reproduce both the initial luminosity peak and the high expansion velocity
of the external layers.
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=51 16 24 188]{fig7.eps}
\caption{
Distribution of velocity and kinetic energy along the mass coordinate
in the SN ejecta of the optimal model on day 50.
}
\label{fig:mev}
\end{figure}
The uncertainty in the derived SN parameters can be estimated by a variation
of the model parameters around the optimal model.
The uncertainty of the distance (see Section~\ref{sec:intro}) implies
a 40\% uncertainty in the bolometric luminosity.
The scatter in the plot of the photospheric velocity versus time
(Fig.~\ref{fig:vph}) suggests an uncertainty of 7\% in the photospheric
velocity.
We estimate the maximal uncertainty of the plateau length as 3\,days, i.e.,
3\% of the plateau duration.
With these uncertainties of observables, we find the errors of
$\pm360$\Rsun for the initial radius, $\pm2.1$\Msun for the ejecta
mass, $\pm0.19\times10^{51}$\,erg for the explosion energy, and
$\pm0.012$\Msun for the total $^{56}$Ni mass.
The model reveals some deviations from the data, which require comments.
The luminosity excess at the plateau (Fig.~\ref{fig:blc}) stems from the
spherical approximation of the bipolar $^{56}$Ni distribution.
Given the bipolar geometry of the $^{56}$Ni ejecta, the escaping flux should
be anisotropic at the late photospheric stage when the radioactivity
contributes to the escaping luminosity.
Furthermore, according to the H$\alpha$ model (Section~\ref{sec:ha}),
the rear $^{56}$Ni component has both a larger mass and a larger velocity
than the front component, which implies that the backside photosphere
is brighter than the front one.
The observed ``isotropic'' luminosity ($4 \pi D^2 f$) defined via the observed
flux $f$ thus underestimates the overall SN~2016X luminosity, which accounts
for the model flux excess at the plateau.
Another mismatch is the lower velocity at the photosphere compared to the
observed values after about 30\,days (Fig.~\ref{fig:vph}).
This disparity also stems from the spherical approximation of the $^{56}$Ni
distribution.
Indeed, the bipolar $^{56}$Ni ejecta result in a prolate shape of the
photosphere with the major axis aligned along the line of sight.
The observed velocities of the absorption minima at the late plateau are
therefore larger than the photospheric velocities of the spherical model.
\subsection{Significance of early stage}
\label{sec:hydro-estage}
The determination of SN~IIP parameters is based on describing the bolometric
or monochromatic light curves and the evolution of expansion velocity at
the photospheric level by means of hydrodynamic modelling.
It is obvious that the derived parameters are more reliable in the case of
a SN~IIP well observed photometrically and spectroscopically from the
explosion moment till the radioactive tail.
Here we would like to emphasize the significant role of the initial
($t < 30$\,days) stage in recovering the SNe~IIP parameters, because
this issue is often overlooked.
The issue was partially explored for the well-observed SN~2005cs \citep{UC_08}.
The optimal hydrodynamic model in this case is characterized by the parameter
set $M_{ej} = 15.9$\Msun, $E = 4.1\times10^{50}$\,erg, and $R_0 = 600$\Rsun.
On the other hand, ignoring the fit to the initial stage permits us to describe
the plateau of the light curve and the evolution of the photospheric velocity
at the ages $t > 30$\,days by the explosion of a 9\Msun pre-SN star with
the parameters $M_{ej} = 7.8$\Msun, $E = 1.4\times10^{50}$\,erg,
and $R_0 = 700$\Rsun \citep{UC_08}.
Neglecting the early stage in the hydrodynamic modelling for SN~2005cs
thus leads to a strong disagreement with the optimal model.
A similar mismatch between the models with the full and reduced approaches was
recently demonstrated for SN~1999em \citep[see][Fig.~13]{UWJM_17}.
An alternative ``easy-to-use'' approach to estimate the basic SN~IIP parameters
is provided by the Litvinova-Nadyozhin relations between the parameters and
the SN observables: the plateau duration, the luminosity and the
photospheric velocity at the middle of the plateau \citep{LN_85}.
Although these relations are based on the extended grid of hydrodynamic models
one should keep in mind that the models are not aimed at the description
of the full data set on the bolometric light curve and the expansion
velocities, so the parameters derived with this approach may differ by
a significant factor from those determined via the hydrodynamic modelling
of a particular SN~IIP.
Another drawback of this approach is neglecting the influence of radioactive
$^{56}$Ni on the light curve.
The hydrodynamic parameters of the well-observed SN~1999em are
$M_{ej} = 19$\Msun, $E = 1.3\times10^{51}$\,erg, and $R_0 = 500$\Rsun
\citep{Utr_07}, whereas the Litvinova-Nadyozhin relations result in
$M_{ej} = 15$\Msun, $E = 0.68\times10^{51}$\,erg, and $R_0 = 414$\Rsun
\citep{Nad_03}, i.e., a 20\% lower ejecta mass and a factor of two lower
explosion energy.
For SN~2016X the Litvinova-Nadyozhin relations suggest $M_{ej} = 23.5$\Msun,
$E = 1.76\times10^{51}$\,erg, and $R_0 = 130$\Rsun, i.e., a 16\% lower ejecta
mass and a pre-SN radius three times smaller than in our optimal model.
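For orientation, a minimal sketch of this approach is given below. The
coefficients correspond to the commonly quoted form of the relations but are
written from memory and should be verified against \citet{LN_85} before any
quantitative use.
\begin{verbatim}
# Litvinova-Nadyozhin-style scaling relations (coefficients approximate).
import numpy as np

def ln85(dt, MV, v3):
    """dt: plateau duration (d); MV: mid-plateau absolute V magnitude;
    v3: mid-plateau photospheric velocity (10^3 km/s)."""
    ldt, lv = np.log10(dt), np.log10(v3)
    E = 10**( 0.135*MV + 2.34*ldt + 3.13*lv - 3.205)  # units of 1e50 erg
    M = 10**( 0.234*MV + 2.91*ldt + 1.96*lv - 1.829)  # Msun
    R = 10**(-0.572*MV - 1.07*ldt - 2.74*lv - 3.350)  # Rsun
    return E, M, R

print(ln85(100., -16.5, 3.0))   # illustrative input values only
\end{verbatim}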
To summarize, neglecting the description of the light curve and expansion
velocities at the early epoch, $t < 30$\,days, can significantly
affect the SN~IIP parameters inferred via the radiation hydrodynamic
modelling of the RSG explosion in the framework of setting described above.
\section{Double-peaked H$\alpha$ and dust distribution}
\label{sec:ha}
The double-peaked H$\alpha$ profile in late nebular spectra of SN~2016X
\citep{BDER_19} is attributed to the bipolar $^{56}$Ni ejecta embedded
in the spherical hydrogen envelope.
The striking feature of SN~2016X is that the H$\alpha$ peaks are fully
separated by the deep minimum \citep[Fig.~1]{BDER_19} which indicates
that we look at the supernova almost along the bipolar axis.
On days 471 and 740 the H$\alpha$ is affected by the dust absorption:
the red peak first gets weaker and on day 740 completely disappears.
The model for the H$\alpha$ at late stages therefore should include absorption
by the internal dust.
Central to our model is the assumption that the bipolar $^{56}$Ni ejecta
do not disturb the overall spherical symmetry of the hydrogen envelope.
For the accepted bipolar $^{56}$Ni distribution the gamma-ray energy
deposition is calculated in the single flight approximation.
The effective absorption coefficient for gamma-quanta of $^{56}$Co decay
is approximated as $k_{\gamma} = 0.06 Y_e$\,cm$^2$\,g$^{-1}$, where
$Y_{e}$ is the number of electrons per nucleon \citep{KF_92}.
Positrons from the $^{56}$Co decay deposit their kinetic energy on the spot.
The H$\alpha$ emissivity is assumed to be proportional to the local deposition
rate; the emissivity saturation due to complete ionization is never
attained at the relevant epochs.
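For illustration, a minimal Monte Carlo sketch of the single-flight
deposition is given below. It is reduced to a spherically symmetric
$^{56}$Ni source for brevity, whereas the actual calculation uses the
bipolar geometry; all discretisation parameters are placeholders.
\begin{verbatim}
# Single-flight gamma-ray deposition: each photon travels along a straight
# ray and deposits all its energy at the sampled absorption point.
import numpy as np
rng = np.random.default_rng(1)

t = 340.*86400.                         # epoch (s)
M, E = 28.*1.989e33, 1.73e51            # ejecta mass (g), energy (erg)
v0 = np.sqrt(E/(6.*M))                  # e-folding velocity of the profile
rho0 = M/(8.*np.pi*(v0*t)**3)
kap = 0.06*0.5                          # k_gamma = 0.06*Y_e with Y_e = 0.5

N, R_Ni, R_max = 20000, 2.0e8*t, 10.*v0*t
r0 = R_Ni*rng.random(N)**(1./3.)        # uniform 56Ni sphere (v < 2000 km/s)
mu = 2.*rng.random(N) - 1.              # emission direction cosine
tau_abs = -np.log(rng.random(N))        # sampled absorption optical depth

ds, dep = 0.02*v0*t, []
for i in range(N):
    s, tau, r = 0., 0., r0[i]
    while tau < tau_abs[i] and r < R_max:
        s += ds
        r = np.sqrt(r0[i]**2 + s*s + 2.*r0[i]*s*mu[i])
        tau += kap*rho0*np.exp(-r/(v0*t))*ds
    if tau >= tau_abs[i]:
        dep.append(r)                   # deposition radius of this photon
print(len(dep)/N)                       # trapped fraction of the gamma rays
\end{verbatim}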
Additional hydrogen ionization via photoionization from
the second level is neglected; this process, however, dominates at
the early (195\,days) nebular stage, when the peak contrast is
relatively small, and on day 142, when the double-peaked structure is not
seen at all \citep{HWH_18}.
The transformation between days 142 and 195 is related to the significant
decrease of the rate of the hydrogen photoionization from the second level
compared to the non-thermal ionization and excitation.
On day 340 the photoionization from the second level is negligible, so the
H$\alpha$ emissivity rate is uniquely linked with the local deposition
of the energy of gamma-quanta and positrons of $^{56}$Co decay,
which favours the reliable inference of the $^{56}$Ni distribution from
the H$\alpha$ profile.
The hydrogen abundance is assumed to be solar and homogeneous throughout
the envelope, except for the central region $v < v_h = 500$\kms, which is
assumed to be hydrogen-free.
The density distribution of the homologously expanding envelope is exponential,
$\rho = \rho_0\exp{(-v/v_0)}$ with $\rho_0\propto t^{-3}$ and $v_0$
determined by the ejecta mass of 28\Msun and the kinetic energy of
$1.73\times10^{51}$\,erg.
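For reference, integrating the exponential profile to infinite velocity
(a standard idealisation) fixes its normalisation through the ejecta mass
and kinetic energy:
\begin{equation}
M = 8\pi\rho_0 (v_0 t)^3, \qquad E = 48\pi\rho_0 v_0^5 t^3,
\end{equation}
so that $v_0 = (E/6M)^{1/2} \approx 720$\kms for the adopted $M$ and $E$.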
\begin{table}
\centering
\caption[]{Parameters of $^{56}$Ni components.}
\label{tab:ni}
\begin{tabular}{lccc}
\hline
\noalign{\smallskip}
Component & $v_s$ & $v_r$ & $\mu$ \\
& \multicolumn{2}{c}{(km\,s$^{-1}$)} & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
front & 1100 & 1100 & 1 \\
rear & 2400 & 1600 & 1.5 \\
central & 0 & 2000 & 0.075 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=52 112 71 127]{fig8.eps}
\caption{
H$\alpha$ line in nebular spectra of SN~2016X (\emph{black line\/}) with
the overplotted model profile (\emph{red line\/}) of
Table~\ref{tab:ni} on days 340, 471, and 740.
Panel \textbf{a}: on day 340 the dust absorption is absent.
Panel \textbf{b}: the spectrum on day 471 is reproduced by the same model
as on day 340 that includes a geometrically thin dusty disk with
the optical depth $\tau_d = 0.85$ shifted by 100\kms towards far side
(see Fig.~\ref{fig:cart}).
Panel \textbf{c}: the spectrum on day 740 with the same model as on day 471,
but for the dust optical depth of 3.
Panel \textbf{d}: the spectrum on day 740 cannot be reproduced with the
dusty sphere (\emph{blue line\/}) nor the geometrically thick dusty
disk (\emph{magenta line\/}).
}
\label{fig:ha}
\end{figure}
The bipolar $^{56}$Ni distribution is represented by the front and rear
homogeneous spheres.
We also tried homogeneous ellipsoids and cones, but with less success.
The central spherical component with the boundary velocity of 2000\kms
is also included.
All the components lie on the same axis, arbitrarily inclined by
$i = 10^{\circ}$ with respect to the line of sight; in fact, the observed
profile admits inclination angles of the bipolar axis in the range
$i \lesssim 20^{\circ}$.
Table~\ref{tab:ni} contains the derived shift ($v_s$), radius ($v_r$),
and relative mass ($\mu$) of the components for the optimal model
(Fig.~\ref{fig:ha}).
Interestingly, the recovered bipolar structure is asymmetric: the mass and
the shift velocity of the rear component are significantly larger compared
to the front component (Table~\ref{tab:ni}).
\begin{figure}
\includegraphics[width=0.9\columnwidth, clip, trim=202 271 79 144]{fig9.eps}
\centering
\caption{
The configuration of the three homogeneous spheres of $^{56}$Ni in the
model of the double-peaked H$\alpha$.
The black vertical line section renders the dusty disk in the model of
H$\alpha$ on day 740.
The slanted line section shows the line of sight (LOS) assuming $10^{\circ}$
inclination.
}
\label{fig:cart}
\end{figure}
On days 471 and 740 the H$\alpha$ is strongly affected by the dust formed in
the inner ejecta \citep{BDER_19}.
Remarkably, an attempt to describe this effect in terms of a central dusty
sphere, as has been done in the case of SN~1987A \citep{LDGB_89}
and SN~1999em \citep{ECP_03}, fails (Fig.~\ref{fig:ha}d).
A geometrically thick dusty disk aligned perpendicular to the bipolar axis
with a diameter-to-thickness ratio of 5 is also ruled out (Fig.~\ref{fig:ha}d).
While the red component can be fully absorbed for both dust configurations,
the blue peak is modified by the dust absorption to an extent that makes
both models unacceptable.
The best fit on day 740 (Fig.~\ref{fig:ha}c) is attained in the model with
a thin dusty disk of radius 2200\kms and optical depth
$\tau_d = 3$, aligned perpendicular to the bipolar axis and shifted by
$\approx$100\kms towards the rear component.
It should be emphasized that the model circular plane disk is an idealization.
In reality, this could be a non-circular, irregular disk-like structure.
On day 471 the same model requires the disk optical depth of 0.85
(Fig.~\ref{fig:ha}b).
The overall configuration of $^{56}$Ni components with the model dusty disk
is shown in Fig.~\ref{fig:cart}.
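A minimal numerical sketch of this configuration is given below. It assumes
zero inclination, an emissivity simply proportional to the local $^{56}$Ni
density (i.e. neglecting the gamma-ray transport), and a coarse grid, so it
is illustrative only.
\begin{verbatim}
# Double-peaked profile from three uniform spheres plus a thin dusty disk.
import numpy as np

v = np.linspace(-4000., 4000., 121)             # velocity grid (km/s)
X, Y, Z = np.meshgrid(v, v, v, indexing='ij')   # Z: line-of-sight velocity

def sphere(zc, vr, mu):
    # uniform sphere of radius vr shifted by zc along the axis, mass mu
    return mu*((X**2 + Y**2 + (Z - zc)**2) <= vr**2)/vr**3

em = (sphere(-1100., 1100., 1.0)      # front (blueshifted) component
    + sphere(+2400., 1600., 1.5)      # rear (redshifted) component
    + sphere(0., 2000., 0.075))       # central component

tau_d, z_d, r_d = 3.0, 100., 2200.    # disk optical depth, shift, radius
behind = (Z > z_d) & (X**2 + Y**2 < r_d**2)
att = np.where(behind, np.exp(-tau_d), 1.0)

profile = (em*att).sum(axis=(0, 1))   # flux versus line-of-sight velocity
\end{verbatim}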
\begin{table}
\centering
\caption[]{Parameters of \ion{Ca}{ii} components.}
\label{tab:ca}
\begin{tabular}{lccc}
\hline
\noalign{\smallskip}
Component & $v_s$ & $v_r$ & $\mu$ \\
& \multicolumn{2}{c}{(km\,s$^{-1}$)} & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
front & 800 & 800 & 1 \\
rear & 2400 & 1600 & 0.2 \\
central & 0 & 2000 & 1.25 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=0.9\columnwidth, clip, trim=124 130 80 220]{fig10.eps}
\centering
\caption{
[\ion{Ca}{ii}]~7291, 7324\,\AA\ doublet in SN~2016X on day 340
(\emph{black line\/}) with the overplotted model profile
(\emph{red line\/}) (Table \ref{tab:ca}) that demonstrates the
dominance of the front component.
Zero radial velocity corresponds to the rest-frame position of
the 7291.47\,\AA\ line.
}
\label{fig:ca}
\end{figure}
It should be emphasized that the absence of the dust extinction on day 340
is consistent with the fact that the dust in SN~1987A and SN~1999em formed
only after day 400.
This, however, raises the question of the origin of the blueshift of the
emission peaks of the [\ion{Ca}{ii}]~7291, 7324\,\AA\ doublet reported by
\citet{BDER_19}.
This cannot be the effect of the internal dust, because even at the early
nebular phase (195\,days) the [\ion{Ca}{ii}] doublet shows a similar
asymmetry.
We suggest that the line asymmetry of the [\ion{Ca}{ii}] doublet is related
to the asymmetry of the luminosity of the front and rear bipolar components
of the [\ion{Ca}{ii}] emission.
This possibility is illustrated in Fig.~\ref{fig:ca} that shows [\ion{Ca}{ii}]
doublet on day 340 with the overplotted synthetic profile.
The model includes three spherical components of homogeneous emissivity similar
to the $^{56}$Ni distribution.
Model parameters, i.e., the shift, radius, and relative luminosities of
the \ion{Ca}{ii} components, are given in Table~\ref{tab:ca}.
These values emphasize the fact that the doublet luminosity of the front
component is 5 times larger than that of the rear one.
This asymmetry can arise from different Ca masses and/or different
ionization and excitation conditions in the front and rear components.
The electron number densities in the components are comparable, so, if the
temperatures are also comparable, the front component
contains a several times larger amount of Ca than the rear component.
It might well be that the asymmetry of the Ca components is another
manifestation of the asymmetry of the $^{56}$Ni components.
The model for the [\ion{Ca}{ii}] profile includes an additional parameter,
the ratio $R$ of the blue-to-red emissivity, which, in turn, depends on the
Sobolev optical depth of the line.
We find that the observed profile requires $R = 1.3$ (compared to $R = 1.5$
for the optically thin case).
The recovered ratio corresponds to the optical depth in the 7291\,\AA\ line
$\tau(7291\mbox{\AA}) = 1.08$.
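As a consistency check, the short sketch below inverts the observed ratio,
assuming the Sobolev escape probability $\beta(\tau)=(1-e^{-\tau})/\tau$ and
a doublet optical-depth ratio of 1.5 set by the statistical weights; under
these assumptions it returns $\tau\approx1.0$, close to the value quoted
above.
\begin{verbatim}
# Blue-to-red [Ca II] doublet ratio versus Sobolev optical depth.
import numpy as np
from scipy.optimize import brentq

beta = lambda tau: -np.expm1(-tau)/tau           # Sobolev escape probability
ratio = lambda tau: 1.5*beta(tau)/beta(tau/1.5)  # -> 1.5 in the thin limit

tau_7291 = brentq(lambda x: ratio(x) - 1.3, 1e-4, 50.)
print(tau_7291)                                  # ~1.0
\end{verbatim}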
Since the \ion{Ca}{ii} luminosity of the front component dominates and its
radius is minimal, the \ion{Ca}{ii} density is maximal in the front
component.
The recovered optical depth therefore refers primarily to the front component.
The Sobolev optical depth can be converted to the \ion{Ca}{ii} mass of
$1.1\times10^{-3}$\Msun for a given volume of the front component.
Assuming a linear scaling between mass and luminosity, we obtain a
total \ion{Ca}{ii} mass of the three components of
$\approx2.7\times10^{-3}$\Msun.
Given a possible contribution of \ion{Ca}{iii} (\ion{Ca}{ii} can be easily
ionized by Ly$\alpha$ quanta), the derived mass of Ca should be considered
a lower limit.
\section{Wind density and X-rays}
\label{sec:wind}
The reported X-ray luminosity of SN 2016X \citep{BDER_19} can be used to
recover the density of the circumstellar (CS) gas lost by the pre-SN.
To this end we employ the interaction model based on the thin shell
approximation \citep{Che_82, Giu_82}.
The model was described earlier \citep{CCL_04}, and we recap here only the
essential points.
The gas swept up by the forward and reverse shocks forms a shell whose
expansion rate is governed by the equations of motion and mass conservation.
The X-ray luminosity of both shocks at the moment $t$ is calculated as
the shock kinetic luminosity multiplied by the radiation efficiency
$\eta = t/(t+t_c)$, where $t_c$ is the cooling time of the postshock gas
calculated for a density four times the preshock density.
The SN density distribution is set to be exponential
$\rho = \rho_0\exp{(-v/v_0)}$, which is in line with the hydrodynamic
model (cf. Fig.~\ref{fig:denni}).
Parameters $\rho_0$ and $v_0$ are specified by the ejecta mass $M_{ej}$ and
the kinetic energy $E$.
The escaping X-rays are subject to the absorption in the SN ejecta and in the
cool dense shell that forms due to the cooling of shocked ejecta in the
reverse shock.
To compare the model X-ray luminosity with the observed values, we take into
account only the X-ray radiation in the range $h\nu < 10$\,keV in
accordance with the reported {\em Chandra} and {\em Swift} data
\citep{BDER_19}.
The X-ray emission from the reverse shock falls into this band, while the
relatively small contribution of the forward shock luminosity is taken
into account by adopting the spectrum $\epsilon^{-0.5}\exp{(-\epsilon/kT)}$.
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=52 132 10 198]{fig11.eps}
\caption{
The X-ray luminosity in the range of $< 10$\,keV and the boundary velocity
of the unshocked ejecta for the CS interaction model.
Panel \textbf{a}: the total escaping X-ray luminosity (\emph{red line\/})
with the contribution of the forward shock luminosity (\emph{blue line\/}).
Dots are the {\em Chandra} and {\em Swift} luminosities in the $0.3-10$\,keV
band \citep{BDER_19}.
Panel \textbf{b}: the model boundary velocity of the unshocked ejecta
(\emph{red line\/}).
Diamonds show the maximal velocity of the ejecta recovered from
the blue wing of the H$\alpha$ emission on day 4.56 and from the blue edge of
the H$\alpha$ absorption component at 28.8\,days in the spectrum of
\citet{HWH_18}.
Inset shows the CS density distribution.
}
\label{fig:xray}
\end{figure}
The model X-ray luminosity reproduces the reported data (Fig.~\ref{fig:xray}a)
for the ejecta parameters $M_{ej} = 28$\Msun, $E = 2\times10^{51}$\,erg, and
the CS density distribution $\rho = \mathrm{const}$ in the range of
$r < 4\times10^{14}$\,cm and $\rho \propto r^{-2}$ for larger radii
(Fig.~\ref{fig:xray}b, inset).
The model boundary velocity of the unshocked ejecta agrees well with the
velocity at 4.56\,days found from the blue wing of the H$\alpha$ emission and
the velocity at 28.8\,days found from the blue edge of the H$\alpha$
absorption component (Fig.~\ref{fig:xray}b).
To summarize, the SN and CSM models reproduce both the X-ray data and the
evolution of the maximal velocity of the unshocked ejecta.
Yet it should be emphasized that the recovered CS density distribution can
also be consistent with other combinations of $M_{ej}$ and $E$, provided their
values obey the scaling $E \propto M_{ej}^{\,0.88}$.
The inferred wind density at $r > 4\times10^{14}$\,cm is characterized by the
parameter $w = \dot{M}/u = 1.9\times10^{14}$\,g\,cm$^{-1}$.
It is useful to express this parameter via convenient units as
$w = \dot{M}_{-6}/u_{10} = 3$, where $\dot{M}_{-6}$ is in units of
$10^{-6}$\Msun\,yr$^{-1}$ and $u_{10}$ is in units of 10\kms.
The inferred value $w = 3$ is three times as large as that of the type IIP
supernovae SN~1999em and SN~2004dj \citep{CCU_07}.
Since the mass-loss rate increases with the stellar mass, one can conclude
that the progenitor of SN~2016X was more massive than those of both
mentioned SNe~IIP.
In order to find the RSG mass-loss rate, one needs to know the wind velocity.
The wind velocity of Milky Way RSGs with a mass of $\sim$30\Msun
(e.g. $\mu$\,Cep and VX\,Sgr) is 20\kms \citep{MJ_11}.
Assuming the same wind velocity for the progenitor of SN~2016X, its mass-loss
rate turns out to be $6\times10^{-6}$\Msun\,yr$^{-1}$.
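The unit bookkeeping behind these numbers is summarised in the short sketch
below.
\begin{verbatim}
# Wind density parameter w = Mdot/u in convenient units.
Msun, yr, kms = 1.989e33, 3.156e7, 1.0e5
w_cgs = 1.9e14                          # g/cm, from the X-ray fit
unit = (1e-6*Msun/yr)/(10.*kms)         # (Mdot_-6 = u_10 = 1) in g/cm
print(w_cgs/unit)                       # ~3.0, i.e. w = Mdot_-6/u_10 = 3
print(3.0*(20./10.)*1e-6)               # Mdot (Msun/yr) for u = 20 km/s
\end{verbatim}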
\section{Discussion and Conclusions}
\label{sec:disc}
\begin{table}
\centering
\caption[]{Hydrodynamic models of type IIP supernovae.}
\label{tab:sumtab}
\begin{tabular}{@{ } l c c @{ } c @{ } c @{ } c c @{ }}
\hline
\noalign{\smallskip}
SN & $R_0$ & $M_{ej}$ & $E$ & $M_{\mathrm{Ni}}$
& $v_{\mathrm{Ni}}^{max}$ & $v_{\mathrm{H}}^{min}$ \\
& (\Rsun) & (\Msun) & ($10^{51}$\,erg) & ($10^{-2}$\Msun)
& \multicolumn{2}{c}{(km\,s$^{-1}$)}\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
1987A & 35 & 18 & 1.5 & 7.65 & 3000 & 600 \\
1999em & 500 & 19 & 1.3 & 3.6 & 660 & 700 \\
2000cb & 35 & 22.3 & 4.4 & 8.3 & 8400 & 440 \\
2003Z & 230 & 14 & 0.245 & 0.63 & 535 & 360 \\
2004et & 1500 & 22.9 & 2.3 & 6.8 & 1000 & 300 \\
2005cs & 600 & 15.9 & 0.41 & 0.82 & 610 & 300 \\
2008in & 570 & 13.6 & 0.505 & 1.5 & 770 & 490 \\
2009kf & 2000 & 28.1 & 21.5 & 40.0 & 7700 & 410 \\
2012A & 715 & 13.1 & 0.525 & 1.16 & 710 & 400 \\
2013ej & 1500 & 26.1 & 1.4 & 3.9 & 6500 & 800 \\
2016X & 436 & 28.0 & 1.73 & 2.95 & 4000 & 760 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
The recovered parameters of the peculiar SN~2016X suggest a rather high ejecta
mass compared to other SNe~IIP (Table~\ref{tab:sumtab}) that were studied
earlier and whose parameters were summarized by \citet{UC_17}.
Allowing for the collapsed core, the pre-SN mass amounts to 29.4\Msun,
which should be considered a lower limit for the main-sequence mass
of the progenitor.
The mass lost by the BSG wind during the main-sequence phase can be estimated
unfortunately only with a large uncertainty.
The theoretical bolometric luminosity of a 30\Msun progenitor
is $\approx$4$\times10^5\,L_{\sun}$ \citep[e.g.][]{EFSMC_13}.
Observed mass-loss rates for a BSG of that luminosity lie in the range
from $5\times10^{-8}$ to $\sim$10$^{-6}$\Msun\,yr$^{-1}$
\citep[cf.][]{KK_17}.
Given the lifetime at the hydrogen burning stage of about $6\times10^6$\,yr
\citep{MMSSC_94}, the lost mass thus turns out in the range of $\sim$0.3 to
$\sim$6\Msun.
At the RSG stage, the recovered mass-loss rate of about
$6\times10^{-6}$\Msun\,yr$^{-1}$ and the RSG lifetime of
$\approx$10$^5$\,yr \citep{MCE_15} result in the lost mass of $\sim$0.6\Msun.
The total mass lost by the progenitor thus lies in the range of 0.9 to
6.6\Msun.
These values combined with the error of the ejecta mass are translated into
the SN~2016X progenitor mass range from 28.2 to 38.1\Msun.
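For clarity, the mass budget compiled above reads
\begin{equation}
M_\mathrm{prog} = (28.0\pm2.1) + 1.4 + (0.3\mbox{--}6.0) + 0.6
\simeq 28.2\mbox{--}38.1,
\end{equation}
in units of \Msun, where the terms are the ejecta, the collapsed core,
the main-sequence wind, and the RSG wind, respectively.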
Even with the minimal progenitor mass of about 30\Msun, SN~2016X turns out
to be the most massive among normal SNe~IIP on the scatter diagrams
$E$ vs. $M_\mathrm{ZAMS}$ and $M$($^{56}$Ni) vs. $M_\mathrm{ZAMS}$
(Fig.~\ref{fig:ennims}).
We speculate that the large progenitor mass could be somehow related to
peculiar manifestations of SN~2016X including the bipolar structure of
the $^{56}$Ni ejecta, the low luminosity of the [\ion{O}{i}] doublet,
and the unusual disk-like distribution of the dusty material.
The pronounced bipolar $^{56}$Ni ejecta are likely produced by the bipolar
explosion asymmetry.
Factors that favored this asymmetry could include a large-scale instability
at the core collapse and/or rotation.
It is likely that the dusty disk-like structure is also an outcome of the
bipolar asymmetry.
This structure could be related either to a dense two-dimensional
condensation in the equatorial plane formed during the explosion, or to
a fragment of the dense shell of the $^{56}$Ni bubble in the far
hemisphere.
\begin{figure}
\includegraphics[width=\columnwidth, clip, trim=0 23 27 27]{fig12.eps}
\caption{
Explosion energy (Panel \textbf{a}) and $^{56}$Ni mass (Panel \textbf{b})
vs. hydrodynamic progenitor mass for SN~2016X and ten other
core-collapse SNe \citep{UC_17}.
}
\label{fig:ennims}
\end{figure}
The absence of the normal [\ion{O}{i}] emission might be explained by
a low amount of synthesized oxygen in the ejected envelope.
There could be two reasons for that: (i) the low-mass progenitor
$M_\mathrm{ZAMS} \approx 10$\Msun with the pre-SN devoid of the oxygen
mantle around the collapsing core, or (ii) the fallback of the oxygen
shell onto the black hole.
The first possibility should be discarded since it contradicts the large
ejecta mass.
The second option cannot be ruled out because we are not aware of the explosion
details.
An alternative possibility is that the O-rich matter in SN~2016X ejecta
is too cold for the normal [\ion{O}{i}] emission.
This situation could arise, if CO and SiO molecules are formed at the nebular
stage all over the oxygen ejecta.
The conjecture follows the findings that the cooling of the O-rich matter
via CO and SiO molecules in SN~1987A strongly suppresses the nebular
[\ion{O}{i}] emission, so the observed [\ion{O}{i}] emission comes out only
from the oxygen matter devoid of molecules \citep{LD_95}.
For SN~2016X the luminosity of the [\ion{O}{i}] 6300, 6364\,\AA\ on day 340
is $\leq$7.7$\times10^{37}$\,erg\,s$^{-1}$ according to the spectrum
reported by \cite{BDER_19}.
The oxygen-core mass of a 30\Msun progenitor is about 8\Msun \citep{WHW_02}.
Taking into account the collapsed 1.4\Msun core and assuming the solar
C/O ratio, we obtain 4.6\Msun of oxygen in the ejecta.
The luminosity of the [\ion{O}{i}] doublet from this amount of oxygen meets
the observation constraint, if the excitation temperature of the oxygen
is $\leq$2000\,K.
This requirement is easily fulfilled since the temperature in the oxygen zone
of SN~1987A cooled by CO and SiO molecules is about 1800\,K during the
first year and later on gets lower \citep{LD_95}.
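To verify this statement, the sketch below estimates the doublet luminosity
assuming a Boltzmann population of the upper level at the excitation
temperature; the adopted transition probabilities are standard literature
values quoted from memory and should be treated as approximate.
\begin{verbatim}
# [O I] 6300, 6364 luminosity from 4.6 Msun of oxygen at T_exc = 2000 K.
import numpy as np

Msun, mH, eV, kB = 1.989e33, 1.673e-24, 1.602e-12, 1.381e-16
N_O = 4.6*Msun/(16.*mH)             # number of O atoms
E_u, g_u, g_l = 1.97*eV, 5., 9.     # 1D2 level energy, statistical weights
A, hnu = 5.6e-3 + 1.8e-3, 3.15e-12  # summed A-value (1/s), photon energy

def L_OI(T):
    n_u = (g_u/g_l)*np.exp(-E_u/(kB*T))  # fractional 1D2 population (<<1)
    return N_O*n_u*A*hnu                 # erg/s

print(L_OI(2000.))  # ~5e37 erg/s, below the observed limit 7.7e37 erg/s
\end{verbatim}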
SN~2016X is presumably a special case in which CO and SiO molecules form
throughout the O-rich matter.
This assumption combined with a moderate amount of $^{56}$Ni would result in
the strong cooling of all the O-rich gas thus inhibiting [\ion{O}{i}]
emission.
One may conjecture that the required Si and O abundance in the oxygen-rich
matter is the outcome of the He and C burning in combination with the
convection and rotation-induced mixing \citep[e.g.][]{Heg_98}.
If efficient CO and SiO cooling occurs in SN~2016X, one expects that
similar high-mass SNe~IIP at the nebular stage should demonstrate strong CO
and SiO emission in the vibrational fundamental and first-overtone bands.
The recovered mass of the SN~2016X ejecta aggravates the well-known disparity
between the relatively high masses of SN~IIP progenitors estimated by means
of the hydrodynamic modelling of the well-observed SNe~IIP
(Table~\ref{tab:sumtab}) and the lower progenitor masses inferred from
the archival photometry using the stellar evolution models.
At the moment the stellar evolution theory is unable to reliably fix the upper
boundary of the mass range producing SNe~IIP, $M_\mathrm{\,IIP}$, because
the resulting loss of the hydrogen envelope depends on the adopted
prescription for mass loss and on rotation effects.
Recent evolutionary calculations of massive stars suggest that
$M_\mathrm{\,IIP} \sim 20$\Msun at the solar metallicity without rotation
\citep{LC_18}.
This is lower than the former estimate of $\sim$30--35\Msun \citep{LC_10}.
The reassessment is caused by a high mass-loss rate taken from \citet{LCZL_05}.
However, one should keep in mind that this mass-loss prescription suffers from
large uncertainties.
In particular, it predicts a mass-loss rate about 1.8\,dex higher
than that of the well-studied Galactic RSGs $\alpha$\,Ori and
$\mu$\,Cep \citep[Fig.~11]{LCZL_05}.
The situation with the value of $M_\mathrm{\,IIP}$ predicted by the theory of
stellar evolution thus looks rather uncertain with the conservative estimate
of $M_\mathrm{\,IIP}$ in the range of 20--35\Msun for stars with the solar
metallicity and zero rotation velocity.
\section*{Acknowledgements}
We thank Subo Dong for kindly sharing spectra of SN~2016X.
V.P.U. is partially supported by Russian Scientific Foundation grant
19-12-00229.
\section{Introduction}
2S 1845$-$024\xspace (also known as GS 1843$-$024) is a transient X-ray source discovered with the {\it Ginga} observatory \citep{1988IAUC.4661....2M,Koyama1990}. It belongs to the class of high-mass X-ray binaries (HMXB). Many of the physical properties of the system and the neutron star (NS) are still unknown. The system contains an X-ray pulsar (XRP) with a spin period $P_{\rm spin}$ = 94.8 s \citep{Makino1988,Zhang1996}. A series of the Burst and Transient Source Experiment (BATSE) observations performed in 1991--1997 detected 10 type I outbursts, revealing an orbital period $P_{\rm orbital}$ = 242.18 $\pm$ 0.01~d for the system \citep{Finger1999}. More outbursts around periastron passage (orbital phase zero) were detected later by different observatories \citep[e.g.,][]{2008ARep...52..138D}. No type II outbursts have yet been detected from the source. The timing analysis allowed the orbital parameters of the system to be determined: a high eccentricity of $e=0.879\pm0.005$ and a projected semi-major axis $a_{\rm x}\sin i = 689 \pm 38$ lt-s, suggesting a high-mass companion ($M > 7\,M_{\rm \odot}$) for 2S 1845$-$024\xspace\ \citep{Finger1999,Koyama1990}.
The companion star in this system has not yet been directly identified. However, the source is classified as a transient Be/XRP based on the outburst pattern and the highly eccentric orbit \citep{Koyama1990,Zhang1996,Finger1999}. In addition, the location of the source in the \citet{Corbet1986} diagram is consistent with a Be/NS binary. The 2--38 keV X-ray spectrum of 2S 1845$-$024\xspace, obtained by the {\it Ginga} Large Area Counter (LAC) and fitted by a power-law with a high-energy cutoff model, revealed a large hydrogen column density $N_{\rm H}$ $\simeq$ $(1.5-3.0) \times 10^{23}$ cm$^{-2}$ in the direction of the source \citep{Koyama1990}. Assuming that the lower limit on $N_{\rm H}$ is accounted for by the interstellar medium, \citet{Koyama1990} estimated the source distance to be about 10 kpc. We emphasize that there are no {\it Gaia} distance measurements available for this source.
The BATSE observations of 2S 1845$-$024\xspace\ also measured a secular long-term spin-up trend at a rate of $\dot{\nu} \sim 2.7 \times 10^{-13}$ Hz s$^{-1}$ during the 1991--1997 period of activity \citep{Finger1999}. Currently, however, the observations provided by the {\it Fermi} Gamma-ray Burst Monitor (GBM) Accreting Pulsars Program \citep[GAPP\footnote{\url{http://gammaray.nsstc.nasa.gov/gbm/science/pulsars/}};][]{Malacaria2020} show that the source has been in a spin-down phase during the last six years. It can, therefore, be inferred that the source had undergone a torque reversal before entering the long-term spin-down trend with a rate $\dot{\nu} \sim -2.4 \times 10^{-13}$ Hz s$^{-1}$ \citep{Malacaria2020}. Because there are no data available for the source in the period between MJD 51560 and 56154, \cite{Malacaria2020} estimated that the torque reversal occurred at MJD 53053 $\pm$ 250 by extrapolating the long-term spin-up and spin-down trends into the gap between the BATSE and GBM observations.
Although there are several X-ray observations available for 2S 1845$-$024\xspace, the properties of the source in the soft and hard X-ray bands have not been fully investigated. Namely, some fundamental parameters, such as the NS magnetic field strength, the type of the companion star, and the distance to the system, are not determined or are still under debate. In the current work, we used a single {\it NuSTAR\xspace}\ observation, which was performed during a normal type I outburst on 2017 April 14, as well as several other archival observations obtained with different X-ray satellites, to perform a detailed temporal and spectral analysis of 2S 1845$-$024\xspace\ in a wide energy band in order to determine its properties.
\section{Observations and data reduction}
Since its discovery, 2S 1845$-$024\xspace\ has been extensively observed by several instruments, such as {\it NuSTAR\xspace}, {\it XMM-Newton\xspace}, {\it Chandra\xspace}\ and {\it Swift\xspace}. The summary of the observations utilized in our work is given in Table~\ref{tab:observations}. Here we focus on the details of the observations obtained by the mentioned X-ray observatories, which were performed at different orbital phases (see Fig.~\ref{fig:orbit}) calculated using the ephemeris $T_{\rm Periastron}$ = 2449616.98$\pm$0.18 (JD) \citep{Finger1999}. The temporal and spectral analysis was done using {\sc heasoft} 6.28\footnote{\url{http://heasarc.nasa.gov/lheasoft/}} and {\sc xspec} 12.11.1b\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XspecManual.html}}. For the spectral analysis, the data were grouped to have at least 25 counts per energy bin in order to use $\chi^2$ statistics, unless otherwise stated in the text.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{orb_model1.pdf}
\end{center}
\caption{Orbital phases corresponding to the date of each observation performed by {\it NuSTAR\xspace}, {\it XMM-Newton\xspace}, {\it Chandra\xspace}\ and {\it Swift\xspace}/XRT.}
\label{fig:orbit}
\end{figure}
\subsection{{\it NuSTAR\xspace}\ observations}
\label{nu-observations}
The {\it NuSTAR\xspace}\ X-ray observatory consists of two identical and independent co-aligned X-ray telescopes focusing the incident X-rays onto two Focal Plane Modules A and B (FPMA and FPMB) \citep{Harrison2013}. The instruments contain four (2$\times$2) solid-state cadmium zinc telluride (CdZnTe) pixel detectors operating in the wide energy range of 3--79 keV. The {\it NuSTAR\xspace}\ instruments provide an X-ray imaging resolution of 18$\hbox{$^{\prime\prime}$}$ full width at half maximum (FWHM) and a spectral energy resolution of 400 eV (FWHM) at 10 keV. 2S 1845$-$024\xspace\ was observed with {\it NuSTAR\xspace}\ on 2017 April 14 for a duration of $\sim$35 ks during the peak of the outburst. In order to reduce the raw data, we followed the standard procedure explained in the official {\it NuSTAR\xspace}\ user guides,\footnote{\url{https://nustar.ssdc.asi.it/news.php\#}} using the {\it NuSTAR\xspace}\ Data Analysis Software {\sc nustardas} v2.0.0 with {\sc caldb} version 20201130. The source and background photons were extracted from circular regions with radii of 90$\hbox{$^{\prime\prime}$}$ and 150$\hbox{$^{\prime\prime}$}$, respectively, for both modules.
\subsection{{\it Swift\xspace}\ observations}
\label{swiftdata}
2S 1845$-$024\xspace\ was observed by the XRT telescope \citep{Burrows2005swift} onboard the \textit{Neil Gehrels Swift Observatory} \citep[{\it Swift\xspace};][]{Gehrels2004swift} several times in the period of 2007--2019. In this study, we used five {\it Swift\xspace}/XRT observations, all obtained in the photon counting (PC) mode, as listed in Table~\ref{tab:observations}. The corresponding spectra were extracted using the online tools\footnote{\url{https://www.swift.ac.uk/user_objects/}} \citep{Evans2009XRTonline} provided by the UK Swift Science Data Centre. Because the count rate in all {\it Swift\xspace}\ observations is below 0.3 counts s$^{-1}$, the data were not affected by the pile-up effect.\footnote{\url{https://www.swift.ac.uk/analysis/xrt/pileup.php}}
One of the {\it Swift\xspace}/XRT observations (ObsID 00088089001) was performed simultaneously with the {\it NuSTAR\xspace}\ observation, allowing us to obtain the spectral parameters in the wider energy band of 0.3--79 keV. The source spectra observed by {\it Swift\xspace}/XRT and {\it NuSTAR\xspace}/FPMA-B were then fitted simultaneously in the energy ranges 0.3--10 and 4--79 keV, respectively, accounting for a difference in normalization.
\begin{table}
\centering
\caption{Observation log of 2S 1845$-$024\xspace.}
\begin{tabular}{cccc}
\hline \hline
ObsID & Start date & Start MJD & Exposure (ks) \\
\hline
\multicolumn{4}{c}{{\it NuSTAR\xspace}} \\
90201056002 & 2017-04-14 & 57857.59 & 34.71 \\
\multicolumn{4}{c}{{\it XMM-Newton\xspace}} \\
0302970601 & 2006-04-11 & 53836.75 & 22.66 \\
0302970801 & 2006-10-06 & 54014.39 & 15.91 \\
\multicolumn{4}{c}{{\it Chandra\xspace}} \\
2692 & 2002-08-18 & 52504.25 & 4.96 \\
2689 & 2002-09-04 & 52521.42 & 14.80 \\
2691 & 2002-09-06 & 52523.31 & 14.76 \\
2690 & 2002-09-12 & 52529.78 & 15.09 \\
10512 & 2009-02-21 & 54883.40 & 5.76 \\
\multicolumn{4}{c}{{\it Swift\xspace}/XRT} \\
00609139000 & 2014-08-10 & 56879.59 & 0.80 \\
00033739001 & 2015-04-14 & 57126.04 & 0.59 \\
00707545000 & 2016-08-06 & 57606.47 & 1.53 \\
00745966000 & 2017-04-06 & 57849.52 & 0.57 \\
00088089001 & 2017-04-14 & 57857.82 & 1.98 \\
\multicolumn{4}{c}{\it UKIDSS/UKIRT} \\
4543927 & 2006-06-12 & 53898.468 & 0.39 \\
6610544 & 2006-06-12 & 53898.472 & 0.36\\
\hline
\end{tabular}
\label{tab:observations}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{PP.pdf}
\end{center}
\caption{\textit{Top panels:} Pulse profile of 2S 1845$-$024\xspace\ in different energy bands obtained from the {\it NuSTAR\xspace}\ observation. Fluxes are normalized by the mean flux in each energy range. The red and blue dashed lines show the main maximum and minimum in the 3--7 keV band, respectively. The black dotted lines in the uppermost panel show the phase segments that were used to extract the phase-resolved spectra. \textit{Bottom panel:} Hardness ratio of the source over the pulse phase calculated as a ratio of normalized count rates in the pulse profiles in the energy bands 18--30 and 3--7 keV.
The hardness ratio of the unity is indicated by the horizontal blue solid line.}
\label{fig:PP}
\end{figure}
\subsection{{\it XMM-Newton\xspace}\ observations}
\label{xmmtData}
The X-ray Multi-Mirror Mission ({\it XMM-Newton\xspace}) \citep{Jansen2001} carries three X-ray telescopes, each with a medium-spectral-resolution European Photon Imaging Camera at the focus, operating in the range of 0.2--10 keV (EPIC-MOS1, -MOS2 and -pn).
2S 1845$-$024\xspace\ was observed by {\it XMM-Newton\xspace}\ twice in 2006, with exposure times of $\sim$23 and $\sim$16 ks, with all three EPIC X-ray instruments. We reduced and analyzed the data following the standard procedure explained in the Science Analysis System (SAS) user guide\footnote{\url{https://xmm-tools.cosmos.esa.int/external/xmm_user_support/documentation/sas_usg/USG/}} using the software SAS version 17.0.0 and the latest available calibration files. We extracted the source spectra and light curves from source-centered circular regions with a radius of 20\hbox{$^{\prime\prime}$}\ for all three instruments. The background was likewise extracted from source-free regions of the same radius in the same chips. We note that there are no MOS1 data available for observation ObsID 0302970601.
\subsection{{\it Chandra\xspace}\ observations}
\label{chandradata}
2S 1845$-$024\xspace\ was observed by the {\it Chandra\xspace}\ Advanced CCD Imaging Spectrometer (ACIS) several times in 2002 and 2009 (see Table~\ref{tab:observations}), providing a total exposure time of 55.4 ks. In all observations the source is located on the ACIS-S3 chip, except for observation ObsID 10512, in which the ACIS-I3 detector was used. Following the standard pipeline procedure,\footnote{\url{https://cxc.cfa.harvard.edu/ciao/threads/index.html}} we reprocessed the data to extract new event files (level 2) using the task {\sc chandra$\_$repro} from the software package {\sc ciao} v4.12 with an up-to-date {\sc caldb} v4.9.1. We then extracted the source and background spectra from circular regions with radii of 10$\hbox{$^{\prime\prime}$}$ and 30$\hbox{$^{\prime\prime}$}$, respectively.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{PF.pdf}
\end{center}
\caption{Energy dependence of the pulse fraction of 2S 1845$-$024\xspace\ obtained from the {\it NuSTAR\xspace}\ observation.}
\label{fig:PF}
\end{figure}
\subsection{UKIDSS/UKIRT observations}
\label{ukidssdata}
In order to study the type of the companion star in 2S 1845$-$024\xspace\ using the methods explained in \cite{Karasev2015} and \cite{Nabizadeh2019}, the magnitudes of the star in the two near-IR filters $H$ and $K$ should be known. We took the magnitude of the counterpart in the $K$ filter from the latest public release of the UKIDSS catalog {\it UKIDSS/GPS DR11 PLUS}\footnote{\url{http://surveys.roe.ac.uk/wsa/}}. However, the magnitude of the source in the $H$ filter is not present in that catalog. To solve this problem, we performed an additional photometric analysis of the UKIDSS image data (id 4543927, observed on 2006 June 12) using PSF-photometry (DAOPHOT II\footnote{\url{http://www.star.bris.ac.uk/~mbt/daophot/}}) methods.
Having obtained the instrumental magnitudes of all stars in the 3\hbox{$^\prime$}{} vicinity of 2S 1845$-$024\xspace, we were able to compare these instrumental magnitudes with the ones in the standard UKIDSS catalog (HAperMag3).
We then selected only the stars brighter than 17 magnitudes in the $H$ filter for this analysis, excluding overexposed objects. Thus we estimated a mean correction value and converted the DAOPHOT magnitude (in the $H$ filter) of the probable counterpart into the observed magnitude in the corresponding filter (see Table~\ref{tab:2S_IR}).
We emphasize that 2S 1845$-$024\xspace\ is not detected in the $J$ filter.
\section{Analysis and results}
\begin{table}
\centering
\caption{Orbital parameters of 2S 1845$-$024\xspace\citep{Finger1999}.}
\label{tab:orbital}
\begin{tabular}{lr}
\hline \hline
Orbital period & 242.18$\pm$0.01 days\\
$T_{\rm Periastron}$ & 2449616.98$\pm$0.18 JD \\
$a_{\rm x}\sin i$ & 689$\pm$38 lt-s \\
Longitude of periastron & 252.2$\pm$9.4 deg \\
Eccentricity & 0.8792$\pm$0.0054 \\
\hline
\end{tabular}
\end{table}
\subsection{Pulse profile and pulsed fraction}
For the timing analysis we used {\it NuSTAR\xspace}\ barycentric-corrected and background-subtracted light curves. The binary motion correction was also applied to the light curves to convert the observed time to the binary-corrected time using the orbital parameters obtained from \citet{Finger1999}, given in Table~\ref{tab:orbital}. The long exposure time and high count rate allowed us to determine the spin period of the NS, $P_{\rm spin}$ = 94.7171(3)~s. To obtain the spin period and its uncertainty, the standard {\sc efsearch} procedure from the {\sc ftool} package was applied to $10^3$ simulated light curves created using the count rates and uncertainties of the original 3--79 keV light curve \citep[see e.g., ][]{Boldin2013}. Considering the wide energy range of {\it NuSTAR\xspace}, we were able to study the pulse profile of the source as a function of energy. For this, we first extracted the source and background light curves in five energy bands: 3--7, 7--18, 18--30, 30--50 and 50--79 keV. We then combined the light curves extracted from the modules FPMA and FPMB in order to increase the statistics.
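For reproducibility, a minimal sketch of this procedure, an epoch-folding
period search with the uncertainty bootstrapped from simulated light curves,
is given below on synthetic data; the statistic is the classical $\chi^2$ of
the folded profile, and all numbers are illustrative.
\begin{verbatim}
# Epoch-folding period search with a bootstrap of the period uncertainty.
import numpy as np
rng = np.random.default_rng(2)

P_true = 94.7171
t = np.arange(0., 35e3, 5.)                 # time grid (s)
err = np.full(t.size, 1.0)
rate = 10. + 3.*np.sin(2.*np.pi*t/P_true) + rng.normal(0., err)

def efsearch(t, rate, periods, nbins=16):
    chi2, mu, var = np.empty(periods.size), rate.mean(), rate.var()
    for j, P in enumerate(periods):
        idx = (((t/P) % 1.)*nbins).astype(int)
        n = np.bincount(idx, minlength=nbins)
        m = np.bincount(idx, weights=rate, minlength=nbins)/n
        chi2[j] = np.sum(n*(m - mu)**2)/var
    return chi2

periods = np.linspace(94.60, 94.84, 241)
best = periods[np.argmax(efsearch(t, rate, periods))]

# bootstrap: repeat on light curves simulated within the errors
bests = [periods[np.argmax(efsearch(t, rate + rng.normal(0., err), periods))]
         for _ in range(100)]               # 10^3 in the actual analysis
print(best, np.std(bests))
\end{verbatim}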
The energy-dependent light curves were folded with the obtained pulse period using the task {\sc efold} from the {\sc xronos} package. The evolution of the pulse profile with energy is shown in the top five panels of Fig.~\ref{fig:PP}. The pulse profiles demonstrate a complicated structure consisting of multiple peaks. The main maximum and the main minimum are around phases 0.1--0.2 and 0.6--0.7, respectively, where the zero phase is chosen arbitrarily. As can be seen, the pulse profile depends on energy, with the multi-peak structure becoming more prominent at higher energies. The most significant changes take place around the main minimum and the main maximum of the profile. This is best illustrated by the hardness ratio constructed using the pulse profiles in the 3--7 and 18--30 keV bands, as shown in the bottom panel of Fig.~\ref{fig:PP}. The hardness ratio shows two clear hardenings of the emission: at the rising part of the main maximum and at the center of the main minimum.
We also calculated the pulsed fraction, determined as $PF = (F_{\max} - F_{\min})/(F_{\max} + F_{\min})$, where $F_{\max}$ and $F_{\min}$ are the maximum and minimum fluxes of the pulse profile, as a function of energy. In the majority of XRPs, the pulsed fraction shows a positive correlation with energy \citep{Lutovinov2009}; however, as shown in Fig.~\ref{fig:PF}, the pulsed fraction of 2S 1845$-$024\xspace\ has values around 40--50\% with no prominent dependence on energy.
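In code, the definition above is simply:
\begin{verbatim}
# Pulsed fraction of a folded profile (array of phase-bin fluxes).
def pulsed_fraction(profile):
    return (profile.max() - profile.min())/(profile.max() + profile.min())
\end{verbatim}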
\begin{table}
\caption{Phenomenological models used to fit the source spectral continuum.}
\label{tab:model}
\begin{tabular}{ll}
\hline \hline
Model & Photon energy distribution \\
\hline
\textsc{cutoffpl} & $N(E) = K{E^{-{\rm \alpha}}} {\rm exp}(-E/\beta)$ \\
\textsc{po$\times$highecut} & $M(E)=K{E^{-{\rm \alpha}}}\exp[(E_{\rm c}-E)/E_{\rm f}]$, ($E \geq E_{\rm c}$) \\
& $M(E) = K{E^{-{\rm \alpha}}}$, ($E \leq E_{\rm c}$) \\
\textsc{npex} & $N(E) = (A_{\rm 1}E^{-\alpha_{\rm 1}}+A_{\rm 2}E^{+\alpha_{\rm 2}})~{\rm exp}(-E/kT)$ \\
\textsc{fdcut} & $N(E) = A_{\rm PL} {E^{-\Gamma}}/[{{\rm exp}((E - E_{\rm cut})/E_{\rm fold}) + 1}]$ \\
\textsc{comptt} & Comptonization model from \citet{Titarchuk1994} \\
\hline
\end{tabular}
\small
\end{table}
\subsection{Phase-averaged spectroscopy}
\label{Phase-averaged}
The simultaneous observations of 2S 1845$-$024\xspace\ obtained with {\it Swift\xspace}/XRT and {\it NuSTAR\xspace}\ allowed us to perform the spectral analysis in a broad band, 0.3--79 keV, for the first time for this source. The broadband spectrum of 2S 1845$-$024\xspace\ shown in Fig.~\ref{fig:best-fit} turned out to have a shape typical of XRPs \citep[][]{Filippova2005}. According to \citet{Koyama1990}, the source continuum can be fitted by a phenomenological model such as a power law with a high-energy exponential cutoff. However, to find the best-fit model, we tested several continuum models, as listed in Table~\ref{tab:model}. As a result, the {\sc fdcut} model could not fit the spectrum, while {\sc cutoffpl}, {\sc npex} and {\sc comptt} gave acceptable fits with $\chi^2$ (d.o.f.) of 2098 (1769), 1787 (1766) and 2007 (1768), respectively. The model {\sc po $\times$ highecut} fitted the spectrum slightly better, with $\chi^2$ (d.o.f.) = 1769 (1767). Therefore, and also to enable a comparison between our results and previous studies, we used this model for both the phase-averaged and the phase-resolved analysis.
The Galactic and intrinsic absorption was modeled using the photoelectric absorption model {\sc tbabs} with abundances from \citet{Wilms2000} and atomic cross-sections adopted from \citet{Verner1996}. We also used a Gaussian emission component to account for the narrow fluorescent iron line at 6.4 keV.
The best-fit composite model ({\sc constant $\times$ tbabs (po $\times$ highecut + gaussian)}) along with the data and the corresponding residuals are shown in Fig.~\ref{fig:best-fit}, and the best-fit parameters with their uncertainties at the 68.3\% (1$\sigma$) confidence level are given in Table~\ref{tab:best-fit}. The fit revealed a large hydrogen column density $N_{\rm H}$ = (22.7$\pm0.7$) $\times 10^{22}$ cm$^{-2}$. We note that the Galactic mean value in the direction of the source is 1.81 $\times 10^{22}$ cm$^{-2}$ \citep{Willingale2013}, which is significantly lower than what we have obtained. This discrepancy can be due to a significant intrinsic absorption in the system. To investigate this, we examined variations of the column density as a function of orbital phase.
We utilized the eleven archival observations (see~Table~\ref{tab:observations}) performed at different orbital phases, as listed in Table~\ref{tab:nh}. Since the data cover only the soft X-ray band below 10 keV, we modeled the spectra using a simple composite model {\sc tbabs $\times$ (po + gaussian)}. We note that the {\it NuSTAR\xspace}\ spectra were also fitted using the same model in the energy range 4--10 keV. Due to the lack of high count statistics in some observations, we were unable to detect the iron emission line and thus fixed the line centroid energy and width to our best-fit values from the joint {\it Swift\xspace}+{\it NuSTAR\xspace}\ data. Consequently, the column densities for different orbital phases were obtained and are given in Table~\ref{tab:nh}. The corresponding X-ray flux for each observation was also calculated in the energy range 0.3--10 keV and is reported in the same table. The data show a strong dependence of $N_{\rm H}$ on the orbital phase as well as a correlation with the flux (see~Table~\ref{tab:nh}). For the observations with lower exposure times, we binned the spectra to have at least 1 count per energy bin and used W-statistics \citep{Wachter1979} in order to get more reliable fits.
We emphasize that the best-fit model shows no evidence of a Cyclotron Resonant Scattering Feature (CRSF) in the broadband source spectrum (see~Fig. \ref{fig:best-fit}). Nevertheless, we searched for a possible cyclotron line following the steps described by \citet{Doroshenko2020}. As a result, we did not detect any absorption feature at any energy with a significance above $\sim$2.4$\sigma$.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{best-fit3.pdf}
\end{center}
\caption{\textit{Top panel:} Broad band X-ray spectrum of 2S 1845$-$024\xspace\ extracted from {\it Swift\xspace}/XRT (green crosses) and {\it NuSTAR\xspace}/FPMA and FPMB (red and black crosses). Solid blue line represents the best-fit model {\sc constant $\times$ tbabs $\times$ (po $\times$ highecut + gau)}.
\textit{Bottom panel:} Residuals from the best-fit model in units of standard deviations.
We emphasize that the {\it Swift\xspace}/XRT spectrum was obtained in the range 0.3--10 keV; however, there are few soft X-ray photons below 3~keV because the spectrum is highly absorbed.
}
\label{fig:best-fit}
\end{figure}
\begin{table}
\caption{Best-fit parameters for the joint {\it Swift\xspace}/XRT and {\it NuSTAR\xspace}\ phase-averaged spectrum approximated with the {\sc constant $\times$ tbabs(powerlaw $\times$ highecut + gaussian)} model.}
\label{tab:best-fit}
\begin{tabular}{lccr}
\hline \hline
Model & Parameters & Unit & Value\\
\hline
\textsc{constant} & {\it NuSTAR\xspace}$^{a}$ & & 1.015$\pm0.003$ \\
& {\it Swift\xspace}/XRT$^{b}$ & & 0.69$\pm0.03$ \\
\textsc{tbabs} & $N_{\rm H}$ & $10^{22}$ cm$^{-2}$ & 22.7$\pm0.7$ \\
\textsc{powerlaw} & $\Gamma$ & & 1.23$\pm0.02$ \\
& norm & ($\times10^{-2}$) & 3.6$\pm0.2$ \\
\textsc{highecut} & $E_{\rm cut}$ & keV & 8.2$\pm0.2$ \\
& $E_{\rm fold}$ & keV & 28.6$\pm0.8$ \\
\textsc{gaussian} & $E_{\rm Fe}$ & keV & 6.35$\pm0.03$ \\
& $\sigma_{\rm Fe}$ & keV & 0.10$^{+0.07}_{-0.09}$ \\
& norm & $10^{-4}$ ph s$^{-1}$cm$^{-2}$ & 1.3$\pm0.3$ \\
\hline
$F_{0.3-79}$$^{c}$ & & $10^{-9}$ erg s$^{-1}$cm$^{-2}$ & 1.07$\pm0.01$\\
$F_{0.3-10}$$^{c}$ & & $10^{-10}$ erg s$^{-1}$cm$^{-2}$ & 4.10$\pm0.09$ \\
\hline
$\chi^2$ & & & 1769 \\
d.o.f. & & & 1767 \\
\hline
\end{tabular}
\small
\tablefoot{
\tablefoottext{a}{Cross-calibration normalization constant between {\it NuSTAR\xspace}/FPMA and FMPB. }
\tablefoottext{b}{Cross-calibration normalization constant between {\it NuSTAR\xspace}/FPMA and {\it Swift\xspace}/XRT.}
\tablefoottext{c}{Unabsorbed X-ray flux.}
}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{phase-res2.pdf}
\end{center}
\caption{Variations of the spectral parameters of the best-fit model as a function of pulse phase. The black crosses from the uppermost to the lowest panel show: neutral hydrogen column density $N_{\rm H}$ in units of $10^{22}$ cm$^{-2}$, photon index, cutoff energy, folding energy. The full energy (3--79 keV) averaged pulse profile of the source is shown in gray in each panel. Errors are 1$\sigma$.
}
\label{fig:phase-res}
\end{figure}
\begin{table*}
\centering
\caption{Spectral parameters of 2S 1845$-$024\xspace\ as a function of orbital phase.}
\label{tab:nh}
\begin{tabular}{lccccc}
\hline \hline
Observatory & ObsID & Orbital phase & $\Gamma$ & $N_{\rm H}$ & Flux$^{a}$ \\
& & & & ($\times 10^{22}$ cm$^{-2}$) & (erg~s$^{-1}$~cm$^{-2}$)\\
\hline
{\it Chandra\xspace}\ & 2691 & 0.003 & 0.08$\pm0.39$ & 52$\pm$11 & 2.4$^{+0.6}_{-0.4} \times 10^{-11}$\\
{\it Swift\xspace}\ & 00033739001 & 0.009 & 0.7$^{+1.0}_{-0.6}$ & 68$^{+22}_{-10}$ & 2.5$^{+3.5}_{-0.9} \times 10^{-10}$ \\
{\it NuSTAR\xspace}+{\it Swift\xspace}$^{b}$ & 90201056002+00088089001 & 0.029 & 1.15$\pm0.03$ & 21.0$\pm0.9$ & 3.14$^{+0.06}_{-0.08} \times 10^{-10}$ \\
{\it XMM-Newton\xspace}\ & 0302970801 & 0.160 & 0.7$\pm0.3$ & 20$^{+4}_{-3}$ & 1.8$^{+0.3}_{-0.2} \times 10^{-12}$ \\
{\it XMM-Newton\xspace}\ & 0302970601 & 0.426 & 1.6$^{+0.6}_{-0.5}$ & 22$^{+6}_{-5}$ & 1.7$^{+1.9}_{-0.6} \times 10^{-12}$ \\
{\it Chandra\xspace}\ & 10512 & 0.748 & 0.4$\pm0.2$ & 9$^{+10}_{-8}$ & 9.6$^{+4.2}_{-3.8} \times 10^{-13}$ \\
{\it Chandra\xspace}\ & 2692 & 0.924 & $-0.3^{+1.3}_{-0.9}$ & 13$^{+11}_{-7}$ & 9.7$^{+2.4}_{-2.1} \times 10^{-13}$ \\
{\it Swift\xspace}\ & 00609139000 & 0.991 & 2.0$\pm0.7$ & 106$^{+18}_{-17}$ & 1.9$^{+5.8}_{-1.0} \times 10^{-9}$ \\
{\it Swift\xspace}\ & 00707545000 & 0.992 & $-0.1^{+0.5}_{-0.7}$ & 31$^{+10}_{-8}$ & 4.7$^{+1.0}_{-0.7} \times 10^{-10}$ \\
{\it Swift\xspace}\ & 00745966000 & 0.996 & 0.4$\pm0.8$ & 32$^{+14}_{-11}$ & 6.0$^{+3.9}_{-1.4} \times 10^{-10}$ \\
\hline
\end{tabular}
\small
\tablefoot{
\tablefoottext{a}{Unabsorbed X-ray fluxes in energy range 0.3--10 keV.}
\tablefoottext{b}{The fit parameters and flux obtained from a joint fit in range 0.3--10 keV.}
}
\end{table*}
\subsection{Phase-resolved spectroscopy}
\label{phase-resolved}
Phase-resolved spectroscopy is a useful technique to study the spatial properties of the emitting region of the NS. Thanks to the good counting statistics, we extracted twenty equally spaced phase bins (see upper panel in Fig.~\ref{fig:PP}) from the available {\it NuSTAR\xspace}\ observation of 2S 1845$-$024\xspace. Each spectrum was fitted with our best-fit model ({\sc constant $\times$ tbabs (po $\times$ highecut + gaussian)}; see Sec.~\ref{Phase-averaged}). As in the phase-averaged spectral analysis, we fixed the iron line width at 0.1 keV for all 20 spectra. The evolution of the fit parameters is shown in Fig.~\ref{fig:phase-res}.
The hydrogen column density $N_{\rm H}$ varies in the range (15--31) $\times 10^{22}$ cm$^{-2}$, showing a marginally significant deviation from a constant. The photon index $\Gamma$ shows a behavior similar to that of $N_{\rm H}$, varying from $\sim$0.7 at the main maximum to $\sim$1.5 at the second minimum of the pulse. The cutoff energy $E_{\rm cut}$ remains almost constant around 8 keV throughout the pulse, with variations between 5.8 and 9.5 keV. The folding energy $E_{\rm fold}$ is more variable, reaching $\sim$48 keV near the second minimum of the pulse and decreasing to 19 keV at the main maximum.
Because of a possible strong internal correlation between $N_{\rm H}$ and $\Gamma$ in the soft X-ray band, we constructed confidence contours for these two parameters using the spectra of phases 0.5 and 0.8, where the parameters take different values (see Fig.~\ref{fig:contours}). Although the values of $N_{\rm H}$ for the two phases agree within the 2$\sigma$ confidence level, the photon index differs significantly, pointing to intrinsic variability of the spectrum.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{contours.pdf}
\end{center}
\caption{Confidence contours of $N_{\rm H}$ versus $\Gamma$ obtained using the best-fit model for the spin phase-resolved spectra at phases 0.5 and 0.8 (see the text). The blue, green and red contours correspond to the 1$\sigma$, 2$\sigma$ and 3$\sigma$ confidence levels obtained using $\chi^2$ statistics for 2 free parameters.
}
\label{fig:contours}
\end{figure}
\subsection{X-ray position and IR companion}
Due to the poor localization of 2S 1845$-$024\xspace, the nature of the optical counterpart in this system is as yet unknown. 2S 1845$-$024\xspace\ is located in the Scutum region, which is crowded with transient XRPs and their companions \citep{Koyama1990Galactic-arm}. In order to determine the source position from the X-ray data, we selected one of the {\it Chandra\xspace}\ observations (ObsID 2689). Using the standard {\sc celldetect} routine,\footnote{\url{https://cxc.cfa.harvard.edu/ciao/threads/celldetect/}} we obtained the source position at R.A. = 18$^{\rm h}48^{\rm m}16\fs8$ and Dec. = $-2\degr25\arcmin25\farcs1$ (J2000).
A total uncertainty of 1$\hbox{$^{\prime\prime}$}$ (at 90\% confidence level radius), including the systematic uncertainty of {\it Chandra\xspace}\ absolute positions,\footnote{\url{https://cxc.harvard.edu/cal/ASPECT/celmon/}} was obtained for the localization accuracy of the source.
We also obtained the astrometrically corrected source coordinates from the averaged image of all available {\it Swift\xspace}/XRT observations using the online XRT products generator.\footnote{\url{https://www.swift.ac.uk/user_objects/}} Based on this, the source is located at R.A. = 18$^{\rm h}48^{\rm m}16\fs91$ and Dec. = $-2\degr25\arcmin26\farcs1$ (J2000) with an error radius of $2\farcs5$ at 90\% confidence level, which is fully consistent with the {\it Chandra\xspace}\ results (see Fig.~\ref{fig:counterpart}).
\subsection{Nature of IR companion}
Using the results of the {\it Chandra\xspace}\ localization and data of the UKIDSS near-IR sky survey, we were able to identify the IR counterpart of 2S 1845$-$024\xspace\ (see Fig.~\ref{fig:counterpart}, left panel). The coordinates and magnitudes of the IR counterpart are given in Table~\ref{tab:2S_IR}. The expected class of the star as well as the distance to it can be estimated using a method successfully applied earlier to a number of sources \citep[see, e.g.,][]{Karasev2015,Nabizadeh2019}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth,trim={0.5cm 10cm 0.5cm 10cm},clip]{UKI_plus_Spitzer_NEW.pdf}
\end{center}
\caption{Images of the sky around 2S 1845$-$024\xspace\ in the $K$-filter obtained by the UKIRT-telescope (GPS/UKIDSS sky survey, left) and in the $3.6\mu$-band obtained by the {\it Spitzer} telescope (right). The red circles indicate an uncertainty for the source position based on the {\it Swift} (dashed line) and {\it Chandra} (solid line) data, respectively. Cyan contours mark two IR objects closest to the X-ray position.
}
\label{fig:counterpart}
\end{figure}
\begin{table}
\centering
\caption{Coordinates and IR-magnitudes of the counterpart of 2S 1845$-$024\xspace\ based on {UKIDSS/GPS} and \textit{Spitzer} data.}
\label{tab:2S_IR}
\begin{tabular}{lr}
\hline \hline
RA & 18$^{\rm h}$48$^{\rm m}$16\fs87 \\
Dec & -02$\degr25\arcmin25\farcs2$ \\
$l$ & 30\fdg4151 \\
$b$ & $-$0\fdg4031 \\
$H$ & $17.82\pm0.04$ \\
$K$ & $15.52\pm0.03$ \\
$[3.6]$ $\si{\micro\metre}$ & $12.74\pm0.07$ \\
$[4.5]$ $\si{\micro\metre}$ & $12.35\pm0.14$ \\
$[5.8]$ $\si{\micro\metre}$ & $11.66\pm0.11$ \\
\hline
\end{tabular}
\end{table}
Comparing the measured color of the source, $(H-K)=2.30\pm0.05$, with the intrinsic colors $(H-K)_0$ of different classes of stars \citep[][all values were converted into the UKIRT filter system via relations from \citealt{Carpenter2001}]{2014AcA....64..261W, 2015AN....336..159W}, we can estimate the corresponding extinction corrections $E(H-K)=(H-K)-(H-K)_0$. 2S 1845$-$024\xspace\ is located far from the Galactic bulge; therefore, we can use a standard extinction law \citep{Cardelli1989} to transform each $E(H-K)$ into the extinction $A_{K}$. In turn, comparing the absolute magnitudes $M_{\rm K}$ of the same classes of stars \citep{2000MNRAS.319..771W, 2006MNRAS.371..185W,2007MNRAS.374.1549W} with the measured magnitude of the source in the $K$-filter, we can estimate a probable distance $D$ to each class of stars as $5-5\log_{10}D=M_{\rm K} - K + A_K$. Results of this approach are shown in Fig.~\ref{fig:IR_class}.
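For a concrete illustration of this distance estimate, consider the minimal sketch below; the absolute magnitude used there is an assumed, representative value for an early-type supergiant and is not a fitted quantity:
\begin{verbatim}
# Distance from 5 - 5*log10(D) = M_K - K + A_K (D in pc)
K   = 15.52   # measured K magnitude (Table tab:2S_IR)
A_K = 4.1     # extinction estimated for OB stars (see below)
M_K = -5.0    # assumed absolute magnitude, illustrative only

D = 10 ** ((K - M_K - A_K + 5) / 5)   # pc
print(D / 1e3)   # ~19 kpc, of the order of the >16 kpc quoted below
\end{verbatim}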
Unfortunately, having magnitudes in only two filters makes it challenging to draw a solid conclusion about the nature of the IR companion; however, the extinction $A_K$ towards the system can be roughly estimated. According to Fig.~\ref{fig:IR_class}, $A_K\simeq4.1$ is obtained for OB stars, including giants and supergiants, and $A_K\simeq3.7$ for red giants. Converting these extinction values into the hydrogen column density $N_{\rm H}$ using the standard relations $A_V=8.93\times A_K$ \citep{Cardelli1989} and $N_{\rm H} = 2.87\times 10^{21} \times A_V$ \citep{Foight2016}, we obtain $N_{\rm H} \simeq(10-11)\times10^{22}$ cm$^{-2}$ for the different types of companion stars. At the same time, the X-ray spectrum revealed a significantly higher column density of 22.7 $\times$ 10$^{22}$ cm$^{-2}$, which is typical for highly absorbed HMXB systems \citep[see, e.g.,][]{Rahoui2008}. This circumstance may indicate that 2S 1845$-$024\xspace\ belongs to this class of binary systems.
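The extinction-to-column-density conversion above is straightforward arithmetic; a short sketch using the two quoted relations:
\begin{verbatim}
# A_V = 8.93 * A_K (Cardelli et al. 1989);
# N_H = 2.87e21 * A_V cm^-2 (Foight et al. 2016)
for A_K in (3.7, 4.1):               # red giants / OB stars
    N_H = 2.87e21 * 8.93 * A_K
    print(A_K, N_H)                  # ~9.5e22 and ~1.05e23 cm^-2
\end{verbatim}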
To clarify the nature of the companion, we also used the mid-IR data obtained by the {\it Spitzer} telescope\footnote{\url{http://www.astro.wisc.edu/sirtf/}} (see Table~\ref{tab:2S_IR}). However, as can be seen from Fig.~\ref{fig:counterpart}, there is another star located near the probable IR counterpart of 2S 1845$-$024\xspace. The spatial resolution of {\it Spitzer} does not allow us to fully resolve these objects (see cyan contours in Fig.~\ref{fig:counterpart}). Therefore, we cannot exclude that the mid-IR fluxes listed in Table~\ref{tab:2S_IR} are affected by the confusion of these two stars.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{S2_source_TYPE_RECALC_NEW.pdf}
\caption{`Distance--extinction' diagram showing, for each stellar class (black dots for normal stars, cyan for Be stars), how far a star of that class would have to be located if it were the counterpart of 2S 1845$-$024\xspace, together with the corresponding extinction towards such a star.}
\label{fig:IR_class}
\end{figure}
Nevertheless, if we assume an OB supergiant (B9Iab, B5Iab, O5Ia, etc.) as the counterpart of 2S 1845$-$024\xspace, the distance to the source is expected to be more than $\sim$16 kpc (see Fig.~\ref{fig:IR_class}). This is in line with \citet{Koyama1990}, who estimated a 10-kpc distance to the source based on the high $N_{\rm H}$ value in the source spectrum. Our spectral analysis also supports these results, as $N_{\rm H}$ varies on the orbital timescale from $\sim$(1--2) $\times$ 10$^{23}$ cm$^{-2}$ at phases around 0.5 to $\sim$10$^{24}$ cm$^{-2}$ around the periastron passage. The lowest value of $N_{\rm H}$ is almost an order of magnitude higher than the Galactic mean value in the direction of the source. This fact, along with the positive correlation of $N_{\rm H}$ with the X-ray flux, points to the presence of a strong stellar wind in the system. Similar behavior is observed in other XRPs with hypergiant optical companions \citep[e.g., GX 301--2,][]{2014MNRAS.441.2539I}. At the same time, we cannot rule out other classes of stars as the companion. Thus, to establish reliably the nature of the IR companion of 2S 1845$-$024\xspace, spectroscopic observations in the near-IR band (e.g., $K$-band spectroscopy) are required. Once the class of the companion star is established, we will be able to use the diagram shown in Fig.~\ref{fig:IR_class} to estimate the distance to the source with high accuracy.
\section{Discussion and Conclusions}
\label{discussion}
In this work, we presented the results of the detailed X-ray and IR analysis of the poorly studied XRP 2S 1845$-$024\xspace\ and its companion during the type I outburst of the source in 2017. For X-ray analysis, we used a single {\it NuSTAR\xspace}\ observation performed during the outburst and several X-ray observations obtained by {\it XMM-Newton\xspace}, {\it Chandra\xspace}\ and {\it Swift\xspace}. For IR analysis, data obtained from UKIDSS/GPS and $\it Spitzer$/GLIMPSE surveys were used.
In order to determine the magnetic field strength of the NS in the system, which was one of our prime goals, we searched for a possible cyclotron absorption line in the broadband {\it NuSTAR\xspace}\ spectrum. No such feature was detected in either the phase-averaged or the phase-resolved spectra of 2S 1845$-$024\xspace. Therefore, either the line does not exist in the considered energy range or it is too weak to be detected with the current sensitivity of the observations. In the former case, considering the lower and upper limits of the operating energy band of the {\it NuSTAR\xspace}\ instruments, we can only estimate the magnetic field strength of the source to be either weaker than $\sim$4 $\times$ 10$^{11}$ G or stronger than $\sim$7 $\times$ 10$^{12}$ G. Further sensitive observations are required to reach a solid conclusion.
In order to determine the nature of the companion and the distance to 2S 1845$-$024\xspace, we performed an analysis of the IR data. However, with magnitudes available in only two ($H$ and $K$) filters, we could only roughly classify the IR companion of 2S 1845$-$024\xspace\ as an OB supergiant star located at a distance of more than $\sim$16 kpc. To establish the nature of the IR companion in this system, as well as the distance to the source, more accurately, sensitive spectroscopic observations in the near-IR band (i.e., $K$-band spectroscopy) are required.
Our conclusion about the class of the optical companion is supported by the X-ray spectral properties of the source.
The good coverage of the binary orbit with observations in the soft X-rays allowed us to investigate the variation of the column density $N_{\rm H}$ as a function of orbital phase, which revealed the presence of a strong stellar wind in the system. However, we emphasize that an extensive study of the iron line is required to support this interpretation \citep[see][]{2014MNRAS.441.2539I}.
The distance to 2S 1845$-$024\xspace\ can also be estimated using the observed fluxes and the presumable luminosity of the source in different states. In particular, in the low state, when the observed flux drops to about $10^{-12}$ erg s$^{-1}$ cm$^{-2}$\ (see Table~\ref{tab:nh}), one can expect the luminosity of the source to be above $\sim$10$^{34}$ erg s$^{-1}$ in the case of ongoing accretion \citep{Tsygankov2017cold-disk,Tsygankov2019cold-disk}, and therefore the distance to the system cannot be below $\sim$10~kpc. On the other hand, the peak luminosity during type I outbursts from transient XRPs can be of the order of 10$^{37}$ erg s$^{-1}$. Taking into account the maximal observed flux from 2S 1845$-$024\xspace\ of around $10^{-9}$ erg s$^{-1}$ cm$^{-2}$, one can estimate an upper limit on the distance of $\sim$15~kpc. These rough estimates agree with the results obtained from the IR data.
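These bounds follow from the standard inversion $L = 4\pi D^2 F$. A minimal sketch, where the luminosities are the order-of-magnitude values assumed above and the peak luminosity is taken as a few $\times$10$^{37}$ erg s$^{-1}$ purely for illustration:
\begin{verbatim}
import numpy as np

KPC = 3.086e21  # cm

def dist_kpc(L, F):
    # distance implied by luminosity L [erg/s] and flux F [erg/s/cm^2]
    return np.sqrt(L / (4 * np.pi * F)) / KPC

print(dist_kpc(1e34, 1e-12))  # ~9 kpc: low state -> D >~ 10 kpc
print(dist_kpc(3e37, 1e-9))   # ~16 kpc: outburst peak -> D <~ 15 kpc
\end{verbatim}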
\begin{acknowledgements}
This work was supported by the grant 14.W03.31.0021 of the Ministry of Science and Higher Education of the Russian Federation. We also acknowledge the support from the Finnish Cultural Foundation through project number 00200764 and 85201677 (AN), the Academy of Finland travel grants 317552, 322779, 324550, 331951, and 333112, the National Natural Science Foundation of China grants 1217030159, 11733009, U2038101, U1938103, and the Guangdong Major Project of the Basic and Applied Basic Research grant 2019B030302001 (LJ). This work is based in part on data of the UKIRT Infrared Deep Sky Survey. Also, part of this work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
\end{acknowledgements}
\bibliographystyle{aa}
\begin{figure}
\insertfig{rnplane}
\caption[$r-n_s$ plane] {
Natural inflation predictions and WMAP3 constraints in the $r$-$\ensuremath{n_s}$
plane. (Solid/blue) lines running from approximately the lower left
to upper right are predictions for constant $N$ and varying $f$,
where $N$ is the number of e-foldings prior to the end of inflation
at which current modes of scale $k = 0.002$ Mpc$^{-1}$
were generated and $f$ is the width of the potential.
The remaining (dashed/red) lines are for constant $f$ and varying
$N$.
The (light blue) band corresponds to the values of $N$ for
standard post-inflation cosmology with $(\textrm{1 GeV})^4 <
\ensuremath{\rho_\textrm{RH}} < V_\textrm{end}$. Filled (nearly vertical) regions are the
parameter spaces allowed by WMAP3 at 68\% and 95\% C.L.'s
(error contours taken from Ref.~\cite{Kinney:2006qm}).
Natural inflation is consistent with the WMAP3 data for
$f \gae 0.7\ensuremath{m_\textrm{Pl}}$ and essentially all likely values of $N$.
}
\label{fig:rnplane}
\end{figure}
Inflation was proposed \cite{Guth:1980zm,Kazanas:1980tx,
Starobinsky:1980te,Sato:1981ds,Sato:1980yn} to solve several
cosmological puzzles: an early period of accelerated expansion explains
the homogeneity, isotropy, and flatness of the universe, as well as the
lack of relic monopoles. While inflation results in an approximately
homogeneous universe, inflation models also predict small
inhomogeneities. Observations of inhomogeneities via the cosmic
microwave background (CMB) anisotropies and structure formation are now
providing tests of inflation models.
The release of three years of data from the Wilkinson Microwave
Anisotropy Probe (WMAP3) \cite{Spergel:2006hy} satellite has generated
a great deal of excitement. First, generic predictions of inflation
match the observations: the universe has a critical density
($\Omega=1$), the density perturbation spectrum is nearly scale
invariant, and superhorizon fluctuations are evident. Second, current
data is beginning to differentiate between inflationary models and
already rules some of them out \cite{Spergel:2006hy,Alabidi:2006qa,
Peiris:2006ug,Easther:2006tv,Seljak:2006bg,Kinney:2006qm,Martin:2006rs,
Peiris:2006sj}. (For example, quartic potentials and generic tree-level
hybrid models do not provide a good match to the data.)
It is the purpose of this paper to illustrate that the model
known as Natural Inflation is an excellent match to current data.
Inflation models predict two types of perturbations, scalar and tensor,
which result in density and gravitational wave fluctuations,
respectively. Each is typically characterized by a fluctuation
amplitude ($\ensuremath{\Pscalar^{1/2}}$ for scalar and $\ensuremath{\Ptensor^{1/2}}$ for tensor, with
the latter usually given in terms of the ratio $r \equiv
\ensuremath{P_T}/\ensuremath{P_\mathcal{R}}$) and a spectral index ($\ensuremath{n_s}$ for scalar and
$\ensuremath{n_\textrm{T}}$ for tensor) describing the mild scale dependence of the
fluctuation amplitude. The amplitude $\ensuremath{\Pscalar^{1/2}}$ is normalized
by the height of the inflationary potential. The inflationary
consistency condition $r = -8 \ensuremath{n_\textrm{T}}$ further reduces the number of free
parameters to two, leaving experimental limits on $\ensuremath{n_s}$ and $r$ as the
primary means of distinguishing among inflation models. Hence,
predictions of models are presented as plots in the $r$-$\ensuremath{n_s}$ plane.
Most inflation models suffer from a potential drawback: to match
various observational constraints, notably CMB anisotropy measurements
as well as the requirement of sufficient inflation,
the height of the inflaton potential must be of a much smaller scale
than that of the width, by many orders of magnitude (\textit{i.e.}, the potential
must be very flat). This requirement of two very different mass scales
is what is known as the ``fine-tuning'' problem in inflation, since
very precise couplings are required in the theory to prevent radiative
corrections from bringing the two mass scales back to the same level.
The natural inflation model (NI) uses shift symmetries to generate a
flat potential, protected from radiative corrections, in a natural way
\cite{Freese:1990rb}. In this regard, NI is one of the best motivated
inflation models.
One of the major results of the paper is shown in \reffig{rnplane}.
The predictions of NI are plotted in the $r$-$\ensuremath{n_s}$ plane for various
parameters: the width $f$ of the potential and number of e-foldings $N$
before the end of inflation at which present day fluctuation modes of
scale $k=0.002$ Mpc$^{-1}$ were produced. $N$ depends upon the
post-inflationary universe and is $\sim$50-60. Also shown in the
figure are the observational constraints from WMAP's recent 3-year
data, which provides some of the tightest constraints on inflationary
models to date \cite{Spergel:2006hy}. The primary result is that NI,
for $f \gae 0.7\ensuremath{m_\textrm{Pl}}$, is consistent with current observational
constraints.
In this paper we take $\ensuremath{m_\textrm{Pl}} = 1.22 \times 10^{19}$ GeV. Our result
extends upon a previous analysis of NI \cite{Freese:2004un}
that was based upon WMAP's first year data \cite{Spergel:2003cb}.
Earlier analyses \cite{Adams:1992bn,Moroi:2000jr} have placed
observational constraints on this model using COBE data
\cite{Smoot:1992td}. Other papers have more recently considered NI in
light of the WMAP1 and WMAP3 data \cite{Alabidi:2005qi,Alabidi:2006qa}.
This paper emphasizes two further results as well. First, we
investigate the running of the spectral index in natural inflation,
\textit{i.e.}\ the dependence of $\ensuremath{n_s}$ on scale, and find that it is small: two
orders of magnitude smaller than the sensitivity of WMAP3 and below the
sensitivity of any planned experiment. Second, we
find how far down the potential the field is at the time structure is
produced, and find that for $f > 5 \ensuremath{m_\textrm{Pl}}$ the relevant part of the
potential is indistinguishable from a quadratic potential. (Still,
the naturalness motivation for NI renders it a superior model to a
quadratic potential as the latter typically lacks
an explanation for its flatness).
We will begin by discussing the model of natural inflation in
\refsec{NI}: the motivation, the potential, the evolution of the
inflaton field, and relating pre- and post-inflation scales. In
\refsec{Fluctuations}, we will examine the scalar and tensor
perturbations predicted by NI and compare them with the WMAP 3-year
data. In \refsec{Running}, we will address the running of the spectral
index. In \refsec{Potential}, we will examine the location on the
potential at which the observable e-folds of inflation take place and
examine where NI falls in the small field/large field/hybrid model
categorization scheme. We conclude in \refsec{Conclusion}.
\section{The Model of Natural Inflation\label{sec:NI}}
\textit{Motivation:}
To satisfy a combination of constraints on inflationary models, in
particular, sufficient inflation and microwave background anisotropy
measurements \cite{Spergel:2003cb,Spergel:2006hy}, the
potential for the inflaton field must be very flat. For a general
class of inflation models involving a single slowly-rolling field, it
has been shown that the ratio of the height to the (width)$^4$ of the
potential must satisfy \cite{Adams:1990pn}
\begin{equation} \label{eqn:Vratio}
\chi \equiv \Delta V/(\Delta \phi)^4 \le {\cal O}(10^{-6} - 10^{-8})
\, ,
\end{equation}
where $\Delta V$ is the change in the potential $V(\phi)$ and $\Delta
\phi$ is the change in the field $\phi$ during the slowly rolling
portion of the inflationary epoch. Thus, the inflaton must be
extremely weakly self-coupled, with effective quartic self-coupling
constant $\lambda_{\phi} < \orderof{\chi}$ (in realistic models,
$\lambda_{\phi} < 10^{-12}$). The small ratio of mass scales required
by \refeqn{Vratio} quantifies how flat the inflaton potential must be
and is known as the ``fine-tuning'' problem in inflation.
A recent review of inflation can be found in Ref.~\cite{Bassett:2005xm}.
Three approaches have been taken toward this required flat potential
characterized by a small ratio of mass scales. First, some simply say
that there are many as yet unexplained hierarchies in physics, and
inflation requires another one. The hope is that all these
hierarchies will someday be explained. In these cases, the tiny
coupling $\lambda_{\phi}$ is simply postulated \textit{ad hoc} at tree
level, and then must be fine-tuned to remain small in the presence of
radiative corrections. But this merely replaces a cosmological
naturalness problem with unnatural particle physics. Second, models
have been attempted where the smallness of $\lambda_{\phi}$ is
protected by a symmetry, \textit{e.g.}, supersymmetry. In these cases (\textit{e.g.},
\cite{Holman:1984yj}), $\lambda_{\phi}$ may arise from a small ratio
of mass scales; however, the required mass hierarchy, while stable, is
itself unexplained. In addition, existing models have limitations.
It would be preferable if such a hierarchy, and thus inflation itself,
arose dynamically in particle physics models.
Hence, in 1990 a third approach was proposed, Natural Inflation
\cite{Freese:1990rb}, in which the inflaton potential is flat due to
shift symmetries. Nambu-Goldstone bosons (NGB) arise whenever a
global symmetry is spontaneously broken. Their potential is exactly
flat due to a shift symmetry under $\phi \rightarrow \phi + \textrm{
constant}$. As long as the shift symmetry is exact, the inflaton
cannot roll and drive inflation, and hence there must be additional
explicit symmetry breaking. Then these particles become pseudo-Nambu
Goldstone bosons (PNGBs), with ``nearly'' flat potentials, exactly as
required by inflation. The small ratio of mass scales required by
\refeqn{Vratio} can easily be accommodated. For example, in the case
of the QCD axion, this ratio is of order $10^{-64}$. While inflation
clearly requires different mass scales than the axion, the point is
that the physics of PNGBs can easily accommodate the required small
numbers.
The NI model was first proposed and a simple analysis performed in
\cite{Freese:1990rb}. Then, in 1993, a second paper followed which
provides a much more detailed study \cite{Adams:1992bn}.
Many types of candidates have subsequently been explored for natural
inflation. For example, WHK and K.T.\ Mahanthappa considered NI
potentials generated by radiative corrections in models with explicitly
broken Abelian \cite{Kinney:1995xv} and non-abelian \cite{Kinney:1995cc}
symmetries, showing that NI models with $f \sim \ensuremath{m_\textrm{Pl}}$ and $f \ll \ensuremath{m_\textrm{Pl}}$
can both be generated in self-consistent field theories.
Ref.~\cite{Kawasaki:2000yn} used shift symmetries
in Kahler potentials to obtain a flat potential and drive natural
chaotic inflation in supergravity. Additionally,
\cite{Arkani-Hamed:2003wu,Arkani-Hamed:2003mz} examined natural
inflation in the context of extra dimensions and \cite{Kaplan:2003aj}
used PNGBs from little Higgs models to drive hybrid inflation. Also,
\cite{Firouzjahi:2003zy,Hsu:2004hi} use the natural inflation idea of
PNGBs in the context of braneworld scenarios to drive inflation.
Freese \cite{Freese:1994fp} suggested using a PNGB as the rolling
field in double field inflation \cite{Adams:1991ma} (in which the
inflaton is a tunneling field whose nucleation rate is controlled by
its coupling to a rolling field). We will focus in this paper on the
original version of natural inflation, in which there is a single
rolling field.
\textit{Potential:}
The PNGB potential resulting from explicit breaking of a shift symmetry
in single field models (in four spacetime dimensions) is generally of
the form
\begin{equation} \label{eqn:potential}
V(\phi) = \Lambda^4 [1 \pm \cos(N\phi/f)] \, .
\end{equation}
We will take the positive sign in \refeqn{potential} (this choice has
no effect on our results) and take $N = 1$, so the potential, of
height $2 \Lambda^4$, has a unique minimum at $\phi = \pi f$ (the
periodicity of $\phi$ is $2 \pi f$).
For appropriately chosen values of the mass scales, \textit{e.g.}\ $f \sim \ensuremath{m_\textrm{Pl}}$
and $\Lambda \sim \ensuremath{m_\textrm{GUT}} \sim 10^{15}$ GeV, the PNGB field $\phi$ can
drive inflation. This choice of parameters indeed produces the small
ratio of scales required by \refeqn{Vratio}, with $\chi \sim
(\Lambda/f)^4 \sim 10^{-13}$. While $f \sim \ensuremath{m_\textrm{Pl}}$ seems to be a
reasonable scale for the potential width, there is no reason to
believe that $f$ cannot be much larger than $\ensuremath{m_\textrm{Pl}}$. In fact, Kim,
Nilles \& Peloso \cite{Kim:2004rp} as well as the idea of N-flation
\cite{Dimopoulos:2005ac} showed that an \textit{effective} potential of
$f \gg \ensuremath{m_\textrm{Pl}}$ can be generated from two or more axions, each with
sub-Planckian scales. We shall thus include the possibility of
$f \gg \ensuremath{m_\textrm{Pl}}$ in our analysis and show that these parameters can fit the
data.
\textit{Evolution of the Inflaton Field:}
The evolution of the inflaton field is described by
\begin{equation} \label{eqn:eom}
\ddot{\phi} + 3H\dot{\phi} + \Gamma\dot{\phi} + \ensuremath{V^{\prime}}(\phi) = 0
\, ,
\end{equation}
where $\Gamma$ is the decay width of the inflaton. A sufficient
condition for inflation is the slow-roll (SR) condition $\ddot{\phi} \ll
3 H \dot{\phi}$. The expansion of the scale factor $a$, with $H =
\dot{a}/a$, is determined by the scalar field dominated Friedmann
equation,
\begin{equation} \label{eqn:friedman}
H^2 = \frac{8\pi}{3\ensuremath{m_\textrm{Pl}}^2} V(\phi) .
\end{equation}
The slow roll (SR) condition implies that two conditions are met:
\begin{eqnarray} \label{eqn:epsilonA}
\epsilon(\phi)
&\approx& \frac{\ensuremath{m_\textrm{Pl}}^2}{16\pi}
\left[ \frac{\ensuremath{V^{\prime}}(\phi)}{V(\phi)} \right]^2
\nonumber\\
&=& \frac{1}{16\pi}
\left( \frac{\ensuremath{m_\textrm{Pl}}}{f} \right)^2
\left[ \frac{\sin(\phi/f)}{1+\cos(\phi/f)} \right]^2 \ll 1
\end{eqnarray}
and
\begin{eqnarray} \label{eqn:etaA}
\eta(\phi)
&\approx& \frac{\ensuremath{m_\textrm{Pl}}^2}{8\pi}
\left[ \frac{\ensuremath{V^{\prime\prime}}(\phi)}{V(\phi)}
- \frac{1}{2} \left(
\frac{\ensuremath{V^{\prime}}(\phi)}{V(\phi)} \right)^2
\right] \nonumber\\
&=& - \frac{1}{16\pi} \left( \frac{\ensuremath{m_\textrm{Pl}}}{f} \right)^2 \, \ll 1.
\end{eqnarray}
Inflation ends when the field $\phi$ reaches a value $\ensuremath{\phi_\textrm{e}}$ such that
$\epsilon(\phi) < 1$ is violated, or
\begin{equation} \label{eqn:phie}
\cos(\ensuremath{\phi_\textrm{e}}/f) = \frac{1 - 16\pi(f/\ensuremath{m_\textrm{Pl}})^2}{1 + 16\pi(f/\ensuremath{m_\textrm{Pl}})^2} \, .
\end{equation}
\refFig{epsilon} illustrates the value of $\epsilon$ during periods
where density fluctuations are produced; one can see that indeed
$\epsilon \ll 1$.
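These SR expressions are easy to evaluate numerically. The sketch below (in units of $\ensuremath{m_\textrm{Pl}}=1$; the value of $f$ is an illustrative choice) computes $\epsilon(\phi)$, the constant $\eta$, and the end-of-inflation field value of \refeqn{phie}, verifying that $\epsilon(\ensuremath{\phi_\textrm{e}})=1$:
\begin{verbatim}
import numpy as np

f = 1.0   # potential width in Planck units (illustrative)

def epsilon(phi, f):
    # slow-roll parameter of Eq. (epsilonA) for V ~ 1 + cos(phi/f)
    return (np.sin(phi/f) / (1 + np.cos(phi/f)))**2 / (16*np.pi*f**2)

eta = -1 / (16 * np.pi * f**2)   # Eq. (etaA): field-independent, ~ -0.02

# Eq. (phie): the field value at which slow roll fails
phi_e = f * np.arccos((1 - 16*np.pi*f**2) / (1 + 16*np.pi*f**2))
print(epsilon(phi_e, f))   # -> 1.0, i.e. inflation ends at phi_e
\end{verbatim}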
\begin{figure}
\insertfig{epsilon}
\caption[Slow roll parameter $\epsilon$]{
The slow roll parameter $\epsilon$ is shown as a function of the
potential width $f$ for various numbers of e-foldings $N$ before the
end of inflation.
The (light blue) band corresponds to the values
of $N$ consistent with the standard post-inflation cosmology, as
given by \refeqn{Nk}, for an end of reheating energy density
$(\textrm{1 GeV})^4 < \ensuremath{\rho_\textrm{RH}} < V_\textrm{end}$, where the lower
bound is a result of nucleosynthesis constraints.
}
\label{fig:epsilon}
\end{figure}
More accurate results can be attained by numerically solving the
equation of motion, \refeqn{eom}, together with the Friedmann
equations. Such calculations have been performed in
Ref.~\cite{Adams:1992bn}, where it was shown that the SR analysis is
accurate to within a few percent for the $f \gae 0.5\ensuremath{m_\textrm{Pl}}$ parameter
space we will be examining. Thus, we are justified in using the SR
approximation in our calculations.
\textit{Relating Pre- and Post-Inflation Scales:}
To test inflationary theories, present day observations must be
related to the evolution of the inflaton field during the inflationary
epoch. Here we show how a comoving scale $k$ today can be related
back to a point during inflation. We need to find the value of $N_k$,
the number of e-foldings before the end of inflation, at which
structures on scale $k$ were produced.
Under a standard post-inflation cosmology, once inflation ends, the
universe undergoes a period of reheating. Reheating can be
instantaneous or last for a prolonged period of matter-dominated
expansion. Then reheating ends at $T <T_\textrm{RH}$, and the
universe enters its usual radiation-dominated and subsequent
matter-dominated history. Instantaneous reheating ($\ensuremath{\rho_\textrm{RH}} = \rho_e$)
gives the minimum number of e-folds as one looks backwards
to the time of perturbation production, while a prolonged period of
reheating gives a larger number of e-folds.
The relationship between scale $k$ and the number of e-folds $N_k$
before the end of inflation has been shown to be \cite{Lidsey:1995np}
\begin{equation} \label{eqn:Nk}
N_k = 62 - \ln\frac{k}{a_0 H_0}
- \ln\frac{10^{16}\,\textrm{GeV}}{V_k^{1/4}}
+ \ln\frac{V_k^{1/4}}{\ensuremath{V_\textrm{e}}^{1/4}}
- \frac{1}{3} \ln\frac{\ensuremath{V_\textrm{e}}^{1/4}}{\ensuremath{\rho_\textrm{RH}}^{1/4}} \, .
\end{equation}
Here, $V_k$ is the potential when $k$ leaves the horizon during
inflation, $\ensuremath{V_\textrm{e}} = V(\ensuremath{\phi_\textrm{e}})$ is the potential at the end of inflation,
and $\ensuremath{\rho_\textrm{RH}}$ is the energy density at the end of the reheat period.
Nucleosynthesis generally requires $\ensuremath{\rho_\textrm{RH}} \gae (\textrm{1 GeV})^4$,
while necessarily $\ensuremath{\rho_\textrm{RH}} \le \ensuremath{V_\textrm{e}}$. Since $\ensuremath{V_\textrm{e}}$ may be of order
$\ensuremath{m_\textrm{GUT}} \sim 10^{15}$ GeV or even larger, there is a broad
allowed range of $\ensuremath{\rho_\textrm{RH}}$; this uncertainty in $\ensuremath{\rho_\textrm{RH}}$ translates
into an uncertainty of 10 e-folds in the value of $N_k$ that
corresponds to any particular scale of measurement today.
Henceforth we will use $N$ to refer to the number of e-foldings prior
to the end of inflation that correspond to scale $k = 0.002$
Mpc$^{-1}$, the scale at which WMAP presents their results%
\footnote{The current horizon scale corresponds to
$k \approx 0.00033$ Mpc$^{-1}$. The difference in these two scales
corresponds to only a small difference in e-foldings of
$\Delta N \lae 2$: while we shall present parameters evaluated at
$k = 0.002$ Mpc$^{-1}$, those parameters evaluated at the current
horizon scale will have essentially the same values (at the few
percent level).
}.
Under the standard cosmology, this scale corresponds to
$N\sim$50-60 (smaller $N$ corresponds to smaller $\ensuremath{\rho_\textrm{RH}}$), with a
slight dependence on $f$. However, if one were to consider
non-standard cosmologies \cite{Liddle:2003as}, the range of possible
$N$ would be broader. Hence we will show results for the more
conservative range $40 \le N \le 70$, in addition to the more limited
standard cosmology range.
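For concreteness, \refeqn{Nk} is straightforward to evaluate. In the sketch below, $h$ and the inflationary energy scale are assumed, representative values, and $V_k \approx \ensuremath{V_\textrm{e}}$ is taken for simplicity:
\begin{verbatim}
import numpy as np

k    = 0.002          # Mpc^-1, the WMAP pivot scale
a0H0 = 0.7 / 2998.0   # Mpc^-1, i.e. h = 0.7 (assumed)
V14  = 2e15           # GeV; V_k^{1/4} ~ V_e^{1/4} (representative)

def N_k(rho_RH_14):
    # Eq. (Nk) with V_k = V_e; rho_RH_14 = rho_RH^{1/4} in GeV
    return (62 - np.log(k / a0H0) - np.log(1e16 / V14)
            - np.log(V14 / rho_RH_14) / 3)

print(N_k(V14))   # instantaneous reheating: N ~ 58
print(N_k(1.0))   # rho_RH^{1/4} = 1 GeV:     N ~ 47
\end{verbatim}
The $\sim$10 e-fold spread between the two limits reflects the reheating uncertainty discussed above.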
\section{\label{sec:Fluctuations} Perturbations}
As the inflaton rolls down the potential, quantum fluctuations lead
to metric perturbations that are rapidly inflated beyond the horizon.
These fluctuations are frozen until they re-enter the horizon during
the post-inflationary epoch, where they leave their imprint on large
scale structure formation and the cosmic microwave background (CMB)
anisotropy \cite{Guth:1982ec,Hawking:1982cz,Starobinsky:1982ee}.
In this section, we will examine the scalar (density) and tensor
(gravitational wave) perturbations predicted by natural inflation
and compare them with the WMAP 3 year (WMAP3) data
\cite{Spergel:2006hy}.
\subsection{\label{sec:Scalar} Scalar (Density) Fluctuations}
The perturbation amplitude for the density fluctuations (scalar modes)
produced during inflation is given by
\cite{Mukhanov:1985rz,Mukhanov:1988jd,Mukhanov:1990me,Stewart:1993bc}
\begin{equation} \label{eqn:Pscalar}
\ensuremath{\Pscalar^{1/2}}(k) = \frac{H^2}{2\pi\dot{\phi}_k} \, .
\end{equation}
Here, $\ensuremath{\Pscalar^{1/2}}(k) \sim \frac{\delta\rho}{\rho}|_\textrm{hor}$
denotes the perturbation amplitude when a given wavelength re-enters the
Hubble radius in the radiation- or matter-dominated era, and
the right hand side of \refeqn{Pscalar} is to be evaluated when
the same comoving wavelength ($2\pi/k$) crosses outside the horizon
during inflation.
Normalizing to the COBE \cite{Smoot:1992td} or WMAP
\cite{Spergel:2006hy} an\-iso\-tropy measurements gives $\ensuremath{\Pscalar^{1/2}} \sim
10^{-5}$. This normalization can be used to approximately fix the
height $\Lambda$ of the potential \refeqn{potential}. The largest
amplitude perturbations on observable scales are those produced
$N \sim 60$ e-folds before the end of inflation (corresponding to the
horizon scale today), when the field value is $\phi = \phi_N$. Under
the SR approximation, the amplitude on this scale takes the value
\begin{equation} \label{eqn:Pscalar2}
\ensuremath{P_\mathcal{R}} \approx \frac{128\pi}{3}
\left( \frac{\Lambda}{\ensuremath{m_\textrm{Pl}}} \right)^4
\left( \frac{f}{\ensuremath{m_\textrm{Pl}}} \right)^2
\frac{[1 + \cos(\phi_N/f)]^3}{\sin^2(\phi_N/f)} \, .
\end{equation}
The values for $\Lambda$ corresponding to $\ensuremath{\Pscalar^{1/2}} = 10^{-5}$ are
shown in \reffig{lambda}. We see that $\Lambda \sim
10^{15}$-$10^{16}$~GeV for $f \sim \ensuremath{m_\textrm{Pl}}$, yielding an inflaton mass
$m_\phi = \Lambda^2/f \sim 10^{11}$-$10^{13}$~GeV. Thus, a potential
height $\Lambda$ of the GUT scale and a potential width $f$ of the
Planck scale are required in NI in order to
produce the fluctuations responsible for large scale
structure. For $f \gg \ensuremath{m_\textrm{Pl}}$, the potential height scales as
$\Lambda \sim (10^{-3}\ensuremath{m_\textrm{Pl}}) \sqrt{f/\ensuremath{m_\textrm{Pl}}}$.
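The normalization can be made explicit by inverting \refeqn{Pscalar2}. The sketch below (Planck units; $f=\ensuremath{m_\textrm{Pl}}$ and $N=60$ are illustrative choices) locates $\phi_N$ via the closed-form e-folding relation for this potential, $N = (16\pi f^2/\ensuremath{m_\textrm{Pl}}^2)\ln[\sin(\ensuremath{\phi_\textrm{e}}/2f)/\sin(\phi_N/2f)]$, and then solves $\ensuremath{P_\mathcal{R}}=10^{-10}$ for $\Lambda$:
\begin{verbatim}
import numpy as np

f, N, P_R = 1.0, 60, 1e-10   # P_R^{1/2} ~ 1e-5 (COBE/WMAP norm)

# end of inflation, Eq. (phie), and the field value N e-folds earlier
phi_e = f * np.arccos((1 - 16*np.pi*f**2) / (1 + 16*np.pi*f**2))
phi_N = 2*f*np.arcsin(np.sin(phi_e/(2*f)) * np.exp(-N/(16*np.pi*f**2)))

# invert Eq. (Pscalar2) for Lambda
x = phi_N / f
pref = (128*np.pi/3) * f**2 * (1 + np.cos(x))**3 / np.sin(x)**2
Lam = (P_R / pref) ** 0.25
print(Lam * 1.22e19)   # ~5e15 GeV, i.e. of order the GUT scale
\end{verbatim}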
\begin{figure}
\insertfig{lambda}
\caption[Potential height scale $\Lambda$]{
The potential height scale $\Lambda$ corresponding to
$\ensuremath{\Pscalar^{1/2}} = 10^{-5}$ is shown as a function of the potential
width $f$ for various numbers of e-foldings $N$ before the end
of inflation.
The (light blue) band corresponds to the values of $N$ consistent
with the standard post-inflation cosmology for
$\ensuremath{\rho_\textrm{RH}} > (\textrm{1 GeV})^4$.
}
\label{fig:lambda}
\end{figure}
The fluctuation amplitudes are, in general, scale dependent. The
spectrum of fluctuations is characterized by the spectral index $\ensuremath{n_s}$,
\begin{equation} \label{eqn:ns}
\ensuremath{n_s} - 1 \equiv \frac{\ensuremath{\mathrm{d}}\ln\ensuremath{P_\mathcal{R}}}{\ensuremath{\mathrm{d}}\ln k}
\approx -\frac{1}{8\pi} \left( \frac{\ensuremath{m_\textrm{Pl}}}{f} \right)^2
\frac{3 - \cos(\phi/f)}{1 + \cos(\phi/f)} \, .
\end{equation}
The spectral index for natural inflation is shown in \reffig{ns}. For
small $f$, $\ensuremath{n_s}$ is essentially independent of $N$, while for
$f \gae 2\ensuremath{m_\textrm{Pl}}$, $\ensuremath{n_s}$ has essentially no $f$ dependence. Analytical
estimates can be obtained in these two regimes:
\begin{equation} \label{eqn:nsA}
\ensuremath{n_s} \approx
\begin{cases}
1 - \frac{\ensuremath{m_\textrm{Pl}}^2}{8 \pi f^2} \, ,
& \textrm{for} \,\, f \lae \frac{3}{4}\ensuremath{m_\textrm{Pl}} \\
1 - \frac{2}{N} \, ,
& \textrm{for} \,\, f \gae 2\ensuremath{m_\textrm{Pl}} \, .
\end{cases}
\end{equation}
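Plugging in representative numbers gives a quick check of both regimes:
\begin{verbatim}
import numpy as np

for f in (0.7, 1.0):                    # small-f regime (Planck units)
    print(f, 1 - 1/(8*np.pi*f**2))      # 0.919, 0.960

for N in (50, 60):                      # large-f regime
    print(N, 1 - 2/N)                   # 0.960, 0.967
\end{verbatim}
The $f = 0.7\ensuremath{m_\textrm{Pl}}$ value sits close to the lower edge of the WMAP3 allowed range, anticipating the constraint derived below.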
Previous analyses of COBE data, based in part on determinations of this
spectral index, have led to constraints on the width of the natural
inflation potential of $f \gae 0.3\ensuremath{m_\textrm{Pl}}$ \cite{Adams:1992bn} and
$f \gae 0.4\ensuremath{m_\textrm{Pl}}$ \cite{Moroi:2000jr}, while analysis of WMAP's first
year data requires $f \gae 0.6\ensuremath{m_\textrm{Pl}}$ \cite{Freese:2004un}. Values of $f$
below these constraints would lead to $\ensuremath{n_s} < 0.9$, reducing fluctuations
at small scales and suppressing higher order acoustic peaks (relative to
lower order peaks) to a level inconsistent with the CMB data. The WMAP
3-year data yield $\ensuremath{n_s} = 0.951_{-0.019}^{+0.015}$
($\ensuremath{n_s} = 0.987_{-0.037}^{+0.019}$ when tensor modes are included in the
fits) on the $k=0.002 {\rm Mpc}^{-1}$ scale%
\footnote{As discussed in \refsec{Running}, the running of the spectral
index $\ensuremath{n_s}$ in natural inflation is so small that the value of $\ensuremath{n_s}$
at the scale of the WMAP3 measurements
is virtually identical to its value on the horizon scale.
}.
This WMAP3 result leads to the somewhat tighter constraint
$f \gae 0.7\ensuremath{m_\textrm{Pl}}$ at 95\% C.L.
\begin{figure}
\insertfig{ns}
\caption[Spectral index $n_s$]{
The spectral index $n_s$ is shown as a function of the potential
width $f$ for various numbers of e-foldings $N$ before the end
of inflation.
The (light blue) band corresponds to the values of $N$ consistent
with the standard post-inflation cosmology for
$\ensuremath{\rho_\textrm{RH}} > (\textrm{1 GeV})^4$.
}
\label{fig:ns}
\end{figure}
\subsection{\label{sec:Tensor} Tensor (Gravitational Wave) Fluctuations}
In addition to scalar (density) perturbations, inflation also produces
tensor (gravitational wave) perturbations with amplitude
\begin{equation} \label{eqn:Ptensor}
\ensuremath{\Ptensor^{1/2}}(k) = \frac{4H}{\sqrt{\pi}\ensuremath{m_\textrm{Pl}}} \, .
\end{equation}
Here, we examine the tensor mode predictions of natural inflation and
compare with WMAP data.
\begin{figure}
\insertfig{ratio}
\caption[Tensor to scalar ratio $r$]{
The tensor to scalar ratio $r \equiv \ensuremath{\frac{P_T}{P_\mathcal{R}}}$ is shown as a function
of the potential width $f$ for various numbers of e-foldings $N$
before the end of inflation.
The (light blue) band corresponds to the values of $N$ consistent
with the standard post-inflation cosmology for
$\ensuremath{\rho_\textrm{RH}} > (\textrm{1 GeV})^4$.
}
\label{fig:ratio}
\end{figure}
Conventionally, the tensor amplitude is given in terms of the
tensor/scalar ratio
\begin{equation} \label{eqn:Pratio}
r \equiv \ensuremath{\frac{P_T}{P_\mathcal{R}}} = 16 \epsilon \, ,
\end{equation}
which is shown in \reffig{ratio} for natural inflation. For small $f$,
$r$ rapidly becomes negligible, while $r \to \frac{8}{N}$ for
$f \gg \ensuremath{m_\textrm{Pl}}$. In all cases, $r \lae 0.2$, well below the WMAP limit of
$r < 0.55$ (95\% C.L., no running).
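The large-$f$ asymptote is simple to tabulate:
\begin{verbatim}
for N in (40, 50, 60, 70):
    print(N, 8/N)   # r = 0.20 ... 0.11, all below the WMAP limit 0.55
\end{verbatim}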
As mentioned in the introduction,
in principle, there are four parameters describing scalar and tensor
fluctuations: the amplitude and spectra of both components, with the
latter characterized by the spectral indices $\ensuremath{n_s}$ and $\ensuremath{n_\textrm{T}}$
(we are ignoring any running here). The amplitude of the scalar
perturbations is normalized by the height of the potential (the energy
density $\Lambda^4$). The tensor spectral index $\ensuremath{n_\textrm{T}}$ is not
an independent parameter since it is related to the tensor/scalar ratio
$r$ by the inflationary consistency condition $r = -8 \ensuremath{n_\textrm{T}}$.
The remaining free parameters are the spectral index $\ensuremath{n_s}$ of the scalar
density fluctuations, and the tensor amplitude (given by $r$).
Hence, a useful parameter space for plotting the model predictions
versus observational constraints is on the $r$-$\ensuremath{n_s}$ plane
\cite{Dodelson:1997hr,Kinney:1998md}. Natural inflation generically
predicts a tensor amplitude well below the detection sensitivity of
current measurements such as WMAP. However, the situation will improve
markedly in future experiments with greater sensitivity such as QUIET
\cite{winstein} and PLANCK \cite{unknown:2006uk}, as well as
proposed experiments such as CMBPOL \cite{Bock:2006yf}.
In \reffig{rnplane}, we show the predictions of natural inflation for
various choices of the number of e-folds $N$ and the mass scale $f$,
together with the WMAP3 observational constraints. Parameters
corresponding to fixed $N=(40,50,60,70)$ with varying $f$ are shown
as (solid/blue) lines from the lower left to upper right. The
orthogonal (dashed/red) lines correspond to fixed $f$ with varying $N$.
The (blue) band are the values of $N$ consistent with standard
post-inflation cosmology for reheat temperatures above the
nucleosynthesis limit of $\sim$1 GeV, as discussed previously. The
solid regions are the WMAP3 allowed parameters at 68\% and 95\% C.L.'s.
For a given $N$, a fixed point is reached for $f \gg \ensuremath{m_\textrm{Pl}}$; that is,
$r$ and $\ensuremath{n_s}$ become essentially independent of $f$ for any
$f \gae 10\ensuremath{m_\textrm{Pl}}$. This is apparent from the $f=10\ensuremath{m_\textrm{Pl}}$ and $f=100\ensuremath{m_\textrm{Pl}}$
lines in the figure, which are both shown, but are indistinguishable.
As seen in the figure, $f \lae 0.7\ensuremath{m_\textrm{Pl}}$ is excluded. However,
$f \gae 0.8\ensuremath{m_\textrm{Pl}}$ falls well into the WMAP3 allowed region and is thus
consistent with the WMAP3 data.
\section{\label{sec:Running} Running of the Spectral Index}
\begin{figure}
\insertfig{nsrun}
\caption[Spectral index running $\nsrun$]{
The spectral index running $\nsrun$ is shown as a function of the
number of e-foldings $N_k$ before the end of inflation for several
values of the potential width $f$ (note that larger $N_k$
corresponds to smaller values of $k$, as in \refeqn{Nk}).
The (light blue) filled region corresponds to the values of $N$
consistent with the standard post-inflation cosmology for
$\ensuremath{\rho_\textrm{RH}} > (\textrm{1 GeV})^4$.
}
\label{fig:nsrun}
\end{figure}
In general, $\ensuremath{n_s}$ is not constant: its variation can be characterized
by its running, $\nsrun$. In this section, we use numerical solutions
to the equation of motion, \refeqn{eom}, as the slow roll approximation
(to the order used throughout this paper) is inaccurate for determining
the running. As shown in \reffig{nsrun}, natural inflation predicts a
small, $\orderof{10^{-3}}$, negative spectral index running. This is
negligibly small for WMAP sensitivities and this model is essentially
indistinguishable from zero running in the WMAP analysis. While WMAP
data prefer a non-zero, negative running of $\orderof{10^{-1}}$ when
running is included in the analysis, zero running is not excluded at
95\% C.L. In Ref.~\cite{Easther:2006tv}, it was shown that the WMAP3
central value for the running would result in $N<30$ for single field,
slow roll inflation, an insufficient amount of expansion to solve the
cosmological problems for which inflation was proposed (\textit{e.g.}\ flatness of
the universe). Reanalysis of the WMAP data with an $N>30$ prior removes
the preference for non-zero running and tightly constrains the running
to be, at most, of $\orderof{10^{-2}}$ \cite{Peiris:2006sj}; this
result, however, applies only for single field inflation models for
which the slow roll formalism is valid. Analysis of WMAP data when
combined with Lyman-$\alpha$ forest and Supernovae Ia data also suggests
a smaller $\orderof{10^{-2}}$ running that is consistent with zero at
about the 1$\sigma$ level \cite{Seljak:2006bg}.
\begin{figure*}
\insertfig{potential60}
\hspace{\stretch{1}}
\insertfig{potential0}
\caption[Inflaton potential]{
The natural inflation potential is shown, along with a quadratic
expansion around the potential minimum. Also shown are
the positions on the potential at 60 e-foldings prior to the end
of inflation (left panel) and at the end of inflation (right panel)
for potential widths $f=(0.5,0.7,1,2,10,100) \ensuremath{m_\textrm{Pl}}$. For
$f \gae 3\ensuremath{m_\textrm{Pl}}$, the relevant portion of the potential is essentially
quadratic during the last 60 e-foldings of inflation.
}
\label{fig:potential}
\end{figure*}
Small scale CMB experiments such as CBI \cite{Mason:2002tm}, ACBAR
\cite{Kuo:2002ua}, and VSA \cite{Dickinson:2004yr} will provide more
stringent tests of the running and hence of specific inflation models.
The predicted running for NI is too small to be detected in even these
experiments: if these experiments definitively detect a strong running
(\textit{i.e.}, excluding a zero/trivial running), natural inflation in the form
discussed here would be ruled out.
\section{\label{sec:Potential} Inflaton Potential and Inflationary Model
Space}
In this section, we will examine the evolution of the inflaton field
$\phi$ along the potential. We will show that the location on the
potential at which the final $\sim$60 e-foldings of inflation occurs
depends on the width $f$ of the potential. We will also show that
natural inflation can fall into either the `large field' or `small
field' categorization defined by \cite{Dodelson:1997hr}, depending again
on the value of $f$.
The natural inflation potential is shown in
\reffig{potential}. For comparison, a quadratic expansion
about the minimum at $\phi = \pi f$ is also shown. Inflation occurs
when the field slowly rolls down the potential and ends at the point
where the field begins to move rapidly (technically, when
$\epsilon \ge 1$). In the right panel of the figure, we show the
location along the potential where inflation ends ($N_k=0$) for various
values of the potential width $f$. In the left panel, the location
along the potential is shown at $N_k=60$ e-foldings prior to the end of
inflation, the approximate time when fluctuations were produced that
correspond to the current horizon. This is not necessarily where
inflation began: the field may have started at any point further up the
potential and produced more than 60 e-foldings of expansion. The
rolling of the field above these points, however, would have produced
modes which are still on super-horizon scales today and hence are
unobservable. In the following discussion, we will be referring only to
the \textit{observable} ($N_k \lae 60$) portion of the inflaton
evolution. For all $f \gae 0.5\ensuremath{m_\textrm{Pl}}$, inflation ends somewhere near the
bottom of the potential, with inflation for larger $f$ ending farther
down the potential than for smaller $f$. We can see, however, that the
start of the observable portion of rolling is spread widely over the
potential. For $f \lae 1\ensuremath{m_\textrm{Pl}}$, current horizon modes were produced
while the field was near the top of the potential. Conversely, for
$f \gae 3\ensuremath{m_\textrm{Pl}}$, those modes were produced near the bottom of the
potential. For $f \geq 5 \ensuremath{m_\textrm{Pl}}$, the observationally relevant portion
of the potential is essentially a $\phi^2$ potential; note, however,
that in natural inflation this effectively power law potential is
produced via a natural mechanism.
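The correspondence with a quadratic potential can be made explicit by expanding \refeqn{potential} about the minimum $\phi = \pi f$:
\[
V(\phi) \simeq \frac{\Lambda^4}{2f^2}\,(\phi - \pi f)^2
= \frac{1}{2}\, m_\phi^2\, (\phi - \pi f)^2 \, ,
\qquad m_\phi = \Lambda^2/f \, ,
\]
valid for $|\phi - \pi f| \ll f$; for large $f$ the last $\sim$60 e-foldings take place entirely within this regime.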
\begin{figure}
\insertfig{rnplane2}
\caption[$r-n_s$ plane]{
Natural inflation predictions in the $r$-$\ensuremath{n_s}$ plane (parameters
and regions labeled as in \reffig{rnplane}), as well as the regions
classifying small field, large field, and hybrid inflation models.
Natural inflation falls into different classes depending on the
potential width $f$: for $f \lae 1.5\ensuremath{m_\textrm{Pl}}$, natural inflation can be
classified as a small field model, while for $f \gae 1.5\ensuremath{m_\textrm{Pl}}$,
natural inflation can be classified as a large field model.
}
\label{fig:rnplane2}
\end{figure}
Due to the variety of inflation models, there have been attempts to
classify models into a few groups. Dodelson, Kinney \& Kolb
\cite{Dodelson:1997hr} have proposed a scheme with three categories:
small field, large field, and hybrid inflation models, which are easily
distinguishable in the SR approximation by the SR parameters $\epsilon$
and $\eta$. Small field models are characterized by $\ensuremath{V^{\prime\prime}}(\phi) < 0$
and $\eta < -\epsilon$, large field models by $\ensuremath{V^{\prime\prime}}(\phi) > 0$ and
$-\epsilon < \eta \leq \epsilon$, and hybrid models by $\ensuremath{V^{\prime\prime}}(\phi) > 0$
and $\eta > \epsilon >0$. To first order in slow roll,
$\ensuremath{n_s} = 1-4\epsilon-2\eta$ and $r = 16\epsilon$, so the categories have
distinct regions in the $r$-$\ensuremath{n_s}$ plane, as shown in \reffig{rnplane2}.
Also shown in the figure are the predictions for natural inflation;
parameters are labeled as in \reffig{rnplane} (which showed the same
predictions, albeit with a logarithmic rather than linear scale). From
\reffig{rnplane2}, it can be seen that natural inflation does not fall
into a single category, but may be either small field or large field,
depending on the potential width $f$. This should not be surprising
from the preceding discussion of the potential. For $f \lae 1.5\ensuremath{m_\textrm{Pl}}$,
$\phi$ is on the upper part of the potential, where $\ensuremath{V^{\prime\prime}}(\phi) < 0$,
at $N_k=60$ and, thus, falls into the small field regime. For
$f \gae 1.5\ensuremath{m_\textrm{Pl}}$, $\phi$ is lower down the potential, where
$\ensuremath{V^{\prime\prime}}(\phi) > 0$, at $N_k=60$ and falls into the large field regime
along with power law ($V(\phi) \sim \phi^p$ for $p>1$) models. (The
large field regime for NI was first noted by Alabidi and Lyth in
Ref.~\cite{Alabidi:2005qi}.) The WMAP3 constraints shown in
\reffig{rnplane} and discussed in \refsec{Fluctuations}, requiring
$f \gae 0.7\ensuremath{m_\textrm{Pl}}$, still allow natural inflation to fall into either of
the small or large field categories.
\section{\label{sec:Conclusion} Conclusion}
Remarkable advances in cosmology have taken place in the past decade
thanks to Cosmic Microwave Background experiments. The release of the
3 year data set by the Wilkinson Microwave Anisotropy Probe is leading
to exciting times for inflationary cosmology. Not only are generic
predictions of inflation confirmed (though there are still outstanding
theoretical issues), but indeed individual inflation models are
beginning to be tested.
Currently the natural inflation model, which is extremely
well-motivated on theoretical grounds of naturalness, is a good fit to
existing data. In this paper, we showed that for potential width $f >
0.7 \ensuremath{m_\textrm{Pl}}$ and height $\Lambda \sim \ensuremath{m_\textrm{GUT}}$ the model is in good
agreement with WMAP3 data. Natural inflation predicts very little
running, an order of magnitude lower than the sensitivity of WMAP.
The location of the field in the potential while perturbations on
observable scales are produced was shown to depend on the width
$f$. Even for values $f>5 \ensuremath{m_\textrm{Pl}}$ where the relevant parts of the
potential are indistinguishable from quadratic, natural inflation
provides a framework free of fine-tuning for the required potential.
There has been some confusion in the literature as to whether natural
inflation should be characterized as a `small-field' or `large-field'
model. In \reffig{rnplane2} we demonstrated that either categorization
is possible, depending on the value of $f$, and that both are in
agreement with data.
Natural inflation makes definite predictions for tensor modes, as shown
in \reffig{rnplane}. Of particular significance is that current
observational constraints place a \textit{lower limit} on the
tensor/scalar ratio for Natural Inflation of order $10^{-3}$, a value
which is within range of proposed future high-precision cosmological
measurements \cite{Kinney:1998md,Friedman:2006zt}. Therefore Natural
Inflation represents a model which is both well-motivated and testable.
\begin{acknowledgments}
CS and KF acknowledge the support of the DOE and the Michigan
Center for Theoretical Physics via the University of Michigan.
CS also acknowledges the support of the William I.\ Fine Theoretical
Physics Institute at the University of Minnesota.
WHK is supported in part by the National Science Foundation under
grant NSF-PHY-0456777.
KF thanks R.~Easther, M.~Turner, and L.~Verde for useful discussions.
\end{acknowledgments}
\section{Model}\label{s:sezione1}
A quantum channel that uses a continuous alphabet can be modeled by a Bosonic field mode
whose phase-space quadratures enable continuous-variable
encoding and decoding \cite{caves94}.
On $n$ uses of such a channel we have to consider $n$ independent Bosonic modes,
described by annihilation
operators $a_k$ for $k=1,\cdots,n$.
\begin{figure}[t]
\begin{center}
\epsfxsize=.8\hsize\leavevmode\epsffile{fig1.eps}
\end{center}
\caption{(Color online) Scheme of the communication scenario: $n$ uses
of the lossy Bosonic channel correspond
to $n$ input Bosonic modes $a_k$ interacting with the
environment modes $b_k$ through $n$ beam splitters.}
\label{f:fig1}
\end{figure}
As depicted in Fig.~\ref{f:fig1} we restrict the analysis
to the case where each $a_k$ interacts with
an environment mode $b_k$ through a
beam splitter of transmittivity $\eta\in[0,1]$, thus modeling lossy channels.
The signal-noise coupling is then characterized by
$U\equiv \otimes_{k=1}^n U_k \label{prima}$
with
\begin{eqnarray}
U_k=\exp\left[\,(a_k^\dag b_k-a_kb_k^\dag)\arctan\sqrt{\frac{1-\eta}\eta}\;\right]\;
\label{Vdefu}\;,
\end{eqnarray}
the unitary operator which satisfies the following transformations~\cite{walls94}
\begin{eqnarray}
U_k a_k U_k^\dag &=&\sqrt{\eta}\; a_k - \sqrt{1-\eta} \; b_k\,,
\nonumber\\
U_k b_k U_k^\dag&=&\sqrt{\eta}\; b_k + \sqrt{1-\eta} \; a_k\,.
\label{unouno}
\end{eqnarray}
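These relations are easy to check numerically. The following \texttt{Python} sketch (our illustration, not from the original analysis) builds $U_k$ on a truncated two-mode Fock space and verifies the first of Eqs.~(\ref{unouno}); since $U_k$ conserves the total photon number, the check restricted to low-lying states is insensitive to the truncation.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

d = 12                                    # local Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, d)), 1)  # annihilation operator
I = np.eye(d)
A, B = np.kron(a, I), np.kron(I, a)       # modes a_k and b_k

eta = 0.7                                 # arbitrary transmittivity
theta = np.arctan(np.sqrt((1 - eta) / eta))
U = expm(theta * (A.conj().T @ B - A @ B.conj().T))

D = U @ A @ U.conj().T - (np.sqrt(eta) * A - np.sqrt(1 - eta) * B)

# restrict to total photon number <= 4, far from the truncation edge
ntot = np.add.outer(np.arange(d), np.arange(d)).ravel()
P = np.diag((ntot <= 4).astype(float))
print(np.linalg.norm(D @ P))   # ~ 1e-13: relation holds on this subspace
\end{verbatim}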
Let ${r}$ be the density matrix in the Hilbert space
${\cal H}_{\mbox{\small{tot}}}^{(n)}\equiv \otimes_{k=1}^n {\cal H}_k$
which describes the input state of the $n$ channel uses.
Here ${\cal H}_k$ is the Hilbert space associated with input mode $a_k$.
For a memoryless channel the environment acts independently on each
$a_k$. This can be described by assuming the modes $b_k$ to be in the same state $\rho_b$.
The output density matrix corresponding to $r$ is hence given by~\cite{getalPRA04}
\begin{eqnarray}
{\cal L}({r})=\mbox{Tr}_b\left[U\:({r}\otimes {{r}}_b)\:U^\dag\right]\,,
\label{mappanew}
\end{eqnarray}
where the trace is performed over the environment's degrees of freedom, initially in the state
${{r}}_b \equiv \rho_b^{\otimes n}$. Because of the tensorial structure of
$U$ and ${{r}}_b$, the map~(\ref{mappanew}) becomes
\begin{eqnarray}
{\cal L}({r})= \otimes_{k=1}^n {\ell}_k ({r})\;,
\label{mappanewTEN}
\end{eqnarray}
with $\ell_k$ being the map on ${\cal H}_k$ associated with the $k$-th channel use which transforms
the density matrix $\rho$ of ${\cal H}_k$ according to
\begin{eqnarray}
\ell_k(\rho)\equiv \mbox{Tr}_{b_k}\left[U_k\:({\rho}\otimes {\rho}_b)\:U_k^\dag\right]
\;.
\label{Vmappaelle}
\end{eqnarray}
A memory channel is characterized by nontrivial correlations between the environment actions
on the different channel uses which cannot be accounted for by Eq.~(\ref{mappanewTEN}).
We model this situation by replacing the separable state $r_b$ of Eq.~(\ref{mappanew})
with the entangled state
\begin{eqnarray}
\tilde{{r}}_b \equiv \Omega_b\; {r}_b \; \Omega_b^{\dag}\,,
\label{mappa1}
\end{eqnarray}
where $\Omega_b$ is a unitary, multi-mode squeezing operator \cite{walls94}
\begin{eqnarray}
\Omega_b \equiv \exp [ \;\sum_{k,k'} ( \xi_{kk'}^* b_k b_{k'}- \xi_{kk'}
b_k^{\dag} b_{k'}^{\dag})]\,,
\label{squeezing}
\end{eqnarray}
which couples the $b_k$ modes through the squeezing parameters $\xi_{kk'}$.
The corresponding output state of the channel is hence described by the map
\begin{eqnarray}
\tilde{\cal L}({r})=\mbox{Tr}_b\left[U\:({r}\otimes \tilde{{r}}_b)\:U^\dag\right]\,.
\label{mappa}
\end{eqnarray}
The dependence of Eq.~(\ref{mappa}) on $n$ is generally
more involved than that of Eq.~(\ref{mappanewTEN}).
Equation (\ref{mappa}) also depends on
the parameters $\xi_{kk'}$; for
$\xi_{kk'}=0$ we have $\Omega_b=\openone$ and $\tilde{\cal L}={\cal L}$.
It is worth noting that in defining the memory channel model~(\ref{mappa})
it is not necessary to assume Eqs.~(\ref{Vdefu}) and (\ref{squeezing}).
As a matter of fact, $U_k$ can be any unitary operator that couples
the $k$-th channel use mode with its noise $b_k$,
while $\Omega_b$ can be any unitary operator
which introduces correlations between the $b_k$.
We will focus on the
case described by Eqs.~(\ref{Vdefu}) and (\ref{squeezing})
since here an interesting simplification occurs.
Our aim is now to relate the memory channel of Eq.~(\ref{mappa})
to the memoryless channel
of Eq.~(\ref{mappanew}).
Let us consider
\begin{eqnarray}
\Omega \equiv \exp[ \;\sum_{k,k'} ( \xi_{kk'}^* a_k a_{k'}- \xi_{kk'}
a_k^{\dag} a_{k'}^{\dag})]\,,\label{squeezing1}
\end{eqnarray}
which represents a multi-mode-squeezing
(unitary) operator acting on the inputs mode $a_k$ with
the same squeezing parameters $\xi_{kk'}$ of~(\ref{squeezing}).
Defining the density matrix
\begin{eqnarray}
\tilde{{r}} \equiv \Omega^{\dag} {r} \; \Omega\,,
\label{squeezing2}
\end{eqnarray}
and using Eq.~(\ref{mappa1})
we rewrite Eq.~(\ref{mappa}) as
\begin{eqnarray}
\tilde{\cal L}({r})=
\mbox{Tr}_b\left[U \; (\Omega \otimes \Omega_b )
\:( \tilde{{r}}\otimes {r}_b ) \: (\Omega^\dag \otimes \Omega_b^{\dag})\;
U^\dag\right]\,.
\label{mappanewintermedio}
\end{eqnarray}
The transformations~(\ref{unouno}) can be used
to verify that
\begin{eqnarray}
U \big( a_k a_{k'} + b_kb_{k'} \big) U^\dag = \; a_k a_{k'} + b_k b_{k'}
\;\label{Vomegann} \;,
\end{eqnarray}
which shows that $\Omega \otimes \Omega_b$ commutes with $U$.
Therefore Eq.~(\ref{mappanewintermedio}) yields
\begin{eqnarray}
\tilde{\cal L}({r})&=&
\Omega \: \mbox{Tr}_b\left[ \;
U \:(\tilde{{r}}\otimes {r}_b) \: U^\dag \;\right]\; \Omega^\dag \;,
\label{mappanew1}
\end{eqnarray}
where: {\em i)} since $\Omega$ does not act on $b_k$,
we have moved it out of the trace operation,
{\em ii)} since $\Omega_b$
is unitary we have used the cyclicity of the trace to eliminate it.
Notice that, apart from the unitary operator $\Omega$, the right-hand side
of Eq.~(\ref{mappanew1})
is a standard memoryless Bosonic channel~(\ref{mappanew})
which couples the input state $\tilde{{r}}$ with
the environment state ${r}_b$; thus we can write Eq.~(\ref{mappanew1}) as
\begin{eqnarray}
\tilde {\cal L}({{r}})&=& \Omega \; {\cal L}(\tilde{{r}}) \; \Omega^\dag \;.
\label{fin}
\end{eqnarray}
This equation shows that the map
$\tilde{\cal L}$ can be decomposed in the following
three operations (see also Fig.~\ref{f:fig2}):
\begin{enumerate}
\item[{\em 1)}]apply the anti-squeezing operator $\Omega^\dag$ to the input state ${r}$;
\item[{\em 2)}]send the resulting state in the channel~$\cal L$;
\item[{\em 3)}]squeeze the final state with $\Omega$.
\end{enumerate}
\begin{figure}[t]
\begin{center}
\epsfxsize=.99\hsize
\leavevmode\epsffile{fig2.eps}
\end{center}
\caption{(Color online) Decomposition of $\tilde{\cal L}$ of Eq.~(\ref{mappa}).
Input states enter the system in $A$ (input) and leave it in $B$ (output).
According to Eq.~(\ref{fin}) we can identify two intermediate steps: in $A^\prime$
the input state has been transformed by
the unitary operator $\Omega^\dag$ and
enters the map $\cal L$;
in $B^\prime$ it is finally transformed by $\Omega$.}
\label{f:fig2}
\end{figure}
Notice that, if the noise parameters $\xi_{kk'}$ are known to the
communicating parties, the unitary operators $\Omega^\dag$
and $\Omega$ at {\em 1)} and {\em 3)}
can always be included in the encoding and decoding stages of
the transmission. In this sense, $\tilde{\cal L}$ and
${\cal L}$ are unitarily equivalent and one expects their ability
to transfer information (classical or quantum) to be the same.
\section{Constrained inputs}\label{s:sezione2}
The ``equivalence'' of $\tilde{\cal L}$ and $\cal L$
is partially broken in the case of constrained inputs \cite{caves94}.
However also in this case, Eq.~(\ref{fin}) can be used to
relate the capacities of these two channels.
Let us consider for example the capacity of the memoryless channel when
the states $\rho_b$ of Eq.~(\ref{mappanew})
are thermal states with average photon number
$M$, i.e.
\begin{eqnarray}
\rho_b\equiv\frac{1}{M+1}\left(\frac{M}{M+1}
\right)^{b_k^\dag b_k}\,.
\label{vacuum}
\end{eqnarray}
Under the hypothesis that the input states ${r}$ of the channel $\cal L$
carry at most $N$ photons
per channel use on average,
\begin{eqnarray}
\mbox{Tr} [\; {r}\; \sum_{k=1}^n a_k^\dag a_k ] \leqslant n N\,,
\label{vacuum1}
\end{eqnarray}
it is believed~\cite{hw01,getalPRL04,sha04}
that the classical capacity $C({\cal L},N)$ of ${\cal L}$ can be
saturated by using Gaussian encodings. These allow one
to achieve a transmission
rate equal to
\begin{eqnarray}
G({\cal L},N) = n \, \left[ \;g(\eta N + (1-\eta) M) - g((1-\eta)M)
\;\right],
\label{capacity}
\end{eqnarray}
where
\begin{eqnarray}
g(x) = (x+1)\ln(x+1) -x\ln x \;,
\label{Vfunzioneg}
\end{eqnarray}
and where the linear dependence on $n$ is a consequence
of the absence of memory effects in the transmission.
Even though the identity $C({\cal L},N)= G({\cal L},N)$
has been proved~\cite{getalPRL04} only for
$M=0$ (environment's vacuum state),
there is strong evidence that it should also
hold for $M>0$.
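The rate~(\ref{capacity}) is straightforward to evaluate numerically; the following sketch (ours, for illustration) computes the per-use rate for given $\eta$, $N$ and $M$, in nats.
\begin{verbatim}
import numpy as np

def g(x):  # entropy of a thermal state with mean photon number x
    return (x + 1) * np.log(x + 1) - x * np.log(x) if x > 0 else 0.0

def rate(eta, N, M):  # G(L, N)/n of the Gaussian-encoding formula
    return g(eta * N + (1 - eta) * M) - g((1 - eta) * M)

print(rate(0.7, 5.0, 0.0))  # pure loss (M = 0):  ~ 2.38 nats per use
print(rate(0.7, 5.0, 1.0))  # thermal noise:      ~ 1.75 nats per use
\end{verbatim}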
\subsection{Upper bounds}\label{s:subsection21}
In the following we derive two independent upper bounds for the
maximum amount of classical information $C(\tilde{\cal L},N)$
that can be reliably transmitted
through the $n$ uses of the memory channel $\tilde{\cal L}$
when its inputs $r$ are constrained
by Eq.~(\ref{vacuum1}).
Equation~(\ref{fin}) establishes that
transmitting ${r}$ into $\tilde{\cal L}$
is equivalent to transmitting $\tilde {r}$ of Eq.~(\ref{squeezing2})
into $\cal L$. The maximum
average photon number $\overline{N}$
per channel use associated with the latter state can be computed using the
transformations (\ref{unouno}). In particular, for $r$ satisfying Eq.~(\ref{vacuum1}) one
can show that
\begin{eqnarray}
\mbox{Tr} [ \; {\tilde {r}} \; \sum_{k=1}^n a_k^\dag a_k ] \leqslant n \overline{N}\,,
\label{ult}
\end{eqnarray}
where
\begin{eqnarray}
\overline{N} = N \;\left[ \; \cosh(4 \overline{d}) + \sinh(4 |\overline{d}|) \;\right]
+ s_1 + s_2 \geqslant N\,.
\label{NBAR}
\end{eqnarray}
In the above expression
$s_1$ and $s_2$ are positive quantities defined in Appendix~\ref{s:appendice1}
and $\overline{d}$ is the eigenvalue of the $n\times n$ matrix $\xi_{k k'}$
(assumed real symmetric for the sake of simplicity)
having maximum absolute value.
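For illustration (a toy example of ours), with a nearest-neighbour memory kernel the eigenvalue $\overline{d}$ and the $N$-dependent part of $\overline{N}$ are readily computed:
\begin{verbatim}
import numpy as np

n, N = 4, 5.0
xi = 0.05 * (np.eye(n, k=1) + np.eye(n, k=-1))  # real symmetric kernel
d = np.linalg.eigvalsh(xi)                      # eigenvalues d_j
dbar = d[np.argmax(np.abs(d))]                  # max |eigenvalue|
# N-dependent part of Nbar; the appendix quantities s1, s2 would be added
print(N * (np.cosh(4 * dbar) + np.sinh(4 * np.abs(dbar))))  # ~ 6.9 > N
\end{verbatim}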
The quantity $\overline{N}$
is the maximum average photon number
per channel use entering the channel $\cal L$ at point $A^\prime$ of
Fig.~\ref{f:fig2} when we feed the channel
$\tilde{\cal L}$ with $N$ photons per use.
We can exploit this fact to conclude that the capacity $C(\tilde{\cal L},N)$ cannot
be greater than the capacity $C({\cal L},\overline{N})$ of the memoryless
channel ${\cal L}$ with $\overline{N}$ average photon number per channel use, i.e.
\begin{eqnarray}
C(\tilde{\cal L},N) & \leqslant& C({\cal L},\overline{N})
\label{capacitylast} \;.
\end{eqnarray}
Clearly this inequality does not depend on the validity of the
conjecture~\cite{hw01,getalPRL04,sha04}. However, in order
to derive an explicit expression for the bound~(\ref{capacitylast}),
it is useful to assume the conjecture~\cite{hw01,getalPRL04,sha04} and to evaluate the
right-hand side of~(\ref{capacitylast}) by means of the function $G({\cal L},\overline{N})$
of Eq.~(\ref{capacity}), i.e.
\begin{eqnarray}
C(\tilde{\cal L},N) & \leqslant& n \,
\left[g(\eta \overline{N} + (1-\eta) M) - g((1-\eta)M)
\right] \;. \nonumber \\
\label{Vcapacitylast}
\end{eqnarray}
An alternative upper bound for $C(\tilde{\cal L},N)$
can be obtained by fixing $n$ and by
assuming the corresponding map $\tilde{\cal L}$ to
represent a memoryless channel.
This allows us to
derive the following inequality \cite{nota1}
\begin{eqnarray}
C(\tilde{\cal L},N) \leqslant
\sup_m \; C_m(\tilde{\cal L}^{\otimes m},N)/ m
\;, \label{add0}
\end{eqnarray}
where $m$ is the number of successive uses of the ``memoryless''
channel $\tilde{\cal L}$ and where~\cite{HSW}
\begin{eqnarray}
C_m(\tilde{\cal L}^{\otimes m},N)
&\equiv& {\max_{p^{(i)},{R}^{(i)} ; N}}\Big\{
S(\tilde{\cal L}^{\otimes m} ({R})) \label{add} \\
&& \qquad -\sum_i\;p^{(i)}
S(\tilde{\cal L}^{\otimes m}
({R}^{ (i)})) \; \Big\} \nonumber \;,
\end{eqnarray}
is the maximum amount of information the two communicating
parties can share by feeding with
probabilities $p^{(i)}$ the $m$ copies of $\tilde{\cal L}$
with messages ${R}^{(i)}\in \left( {\cal H}^{(n)}_{\mbox{\small tot}}
\right)^{\otimes m}$.
Here $S({R}) =-\mbox{Tr} [ {R} \ln {R} ]$ is the von
Neumann entropy
and ${R}=\sum_i \;p^{(i)} {R}^{(i)}$
is the average input of $\tilde{\cal L}^{\otimes m}$.
The maximization in Eq.~(\ref{add}) is performed over all ensembles
$\{ p^{(i)}, {R}^{(i)}\}$ which, for each $\tilde{\cal L}$, satisfy the energy
constraint~(\ref{vacuum1}), i.e.
\begin{eqnarray}
\mbox{Tr}[ {R} \; ( \sum_{k=1}^n a_{k}^\dag a_{k} )^{\otimes m} ] \leqslant m\; n N\,.
\label{vacuumEMME}
\end{eqnarray}
Following Refs.~\cite{getalPRL04,sha04}, we provide an
upper bound for~(\ref{add}) by replacing the first and second terms on the
right-hand side with their maximum and minimum, respectively:
\begin{eqnarray}
C_m(\tilde{\cal L},N) &\leqslant& m \; \max_{{r} ; \; N } \Big\{
S(\tilde{\cal L}({r})) \Big\}
- \min_{ {R} }
\Big\{ S(\tilde{\cal L}^{\otimes m}({R})) \Big\} .
\nonumber \\ \label{add1}
\end{eqnarray}
The subadditivity
of the von Neumann entropy has been used to transform the maximization
over ${R}\in \left( {\cal H}_{\mbox{\small tot}}^{(n)}\right)^{\otimes
m}$ into a
maximization over inputs ${r}$ of ${\cal H}_{\mbox{\small tot}}^{(n)}$,
and the constraint~(\ref{vacuumEMME}) has been dropped in the
minimization.
Equation~(\ref{add1}) establishes that $C_m(\tilde{\cal L},N)$
can be bounded by the difference between
the {\em maximum} output entropy of a {\em single} use ($m=1$) of $\tilde{\cal L}$ and
the {\em minimum} output entropy of the $m$ channel uses: let us compute these quantities.
For inputs $r$
that satisfy the constraint~(\ref{vacuum1}), Eqs.~(\ref{unouno}) and (\ref{mappa})
establish that the maximum
average photon number we can get at the output of the channel $\tilde{\cal L}$ is equal to
$n N_{\mbox{\small out}}$, where
\begin{eqnarray}
N_{\mbox{\small{out}}}&=& \eta N + (1-\eta) ( s_0 M + s_1 ) \;,
\label{energiaOUT}
\end{eqnarray}
with $s_1$ as in Eq.~(\ref{NBAR}) and
\begin{eqnarray}
s_0 \equiv \sum_{j=1}^{n} \cosh(4 d_j)/ n \geqslant 1\;,\label{esse0}
\end{eqnarray}
(see Appendix ~\ref{s:appendice2} for details).
We can hence upper bound the output entropy of $\tilde{\cal L}$ by
$n$ times the entropy $g(N_{\mbox{\small{out}}})$ of a
thermal state whose total average photon number is equal to
$N_{\mbox{\small{out}}}$~\cite{getalPRL04}.
To compute the minimum output entropy of the channel $\tilde{\cal L}$ we
use a conjecture proposed in Ref. \cite{getalPRA04}.
In fact, from Eq.~(\ref{fin}) and the invariance of $S$ under unitary operations, we have
\begin{eqnarray}
\min_{R} \Big\{ S(\tilde{\cal L}^{\otimes m} (R))
\; \Big\}
= \min_{R} \Big\{ S( {\cal L}^{\otimes m} (R) ) \; \Big\} \;
\label{mini1}
\end{eqnarray}
According to the analysis of Ref.~\cite{getalPRA04}
the minimum output entropy of the
channel $\cal L$ should be provided by vacuum input: this result has not
been proven yet but, as in the case of the conjecture Eq.~(\ref{capacity}),
there is strong evidence in support of it (as a matter of fact these
two conjectures are strongly related). Assuming the
conjecture of Ref.~\cite{getalPRA04} we can simplify Eq.~(\ref{mini1}) as
follows,
\begin{eqnarray}
\min_{R} \Big\{ S(\tilde{\cal L}^{\otimes m} (R)) \; \Big\}
= m \; n \; g((1-\eta) M)
\label{mini}\;,
\end{eqnarray}
which, substituted into Eqs.~(\ref{add1}) and (\ref{add0}), gives
\begin{eqnarray}
C(\tilde{\cal L},N) &\leqslant& n \; \big[ \; g \left( \eta \; N + (1-\eta) \;
( s_0 M + s_1 ) \right) \nonumber \\
&& \qquad - g((1-\eta) M) \; \big]
\label{add2}\; .
\end{eqnarray}
The right-hand sides of Eqs.~(\ref{Vcapacitylast}) and (\ref{add2})
are two independent upper bounds for the capacity of the $n$ successive
uses of the memory channel $\tilde{\cal L}$.
They have been derived by assuming the conjectures discussed in
Refs.~\cite{hw01,getalPRL04,sha04} and Ref.~\cite{getalPRA04},
respectively.
Both of them are greater than or equal to
the alleged capacity $G({\cal L},{N})$
of Eq.~(\ref{capacity}) of a memoryless channel $\cal L$ with average
photon number constraint $N$ (this follows, for instance, from the fact that
$g(x)$ is an increasing function of $x$).
\subsection{Lower bound}\label{s:subsection22}
A lower bound for $C(\tilde{\cal L},N)$ can be obtained by providing an
encoding-decoding procedure that allows one to achieve reliable information transfer.
This is not a simple task for a memory channel.
However we can use the decomposition rule~(\ref{fin}) to transform encodings
of $\cal L$ (which are simpler to characterize) into encodings of $\tilde{\cal L}$.
The only known encoding that allows the memoryless channel $\cal L$ to
asymptotically achieve the transmission rate~(\ref{capacity})
requires the sender to feed the channel
with thermal states \cite{getalPRL04,hw01,sha04}.
Suppose that the sender manages to produce a thermal state
at point $A^\prime$ of Fig.~\ref{f:fig2} and assume that the average photon
number of such state is $N^\prime$.
This means that the state of the $n$ modes in $A^\prime$ is given by
\begin{eqnarray}
\tilde r = \bigotimes_{k=1}^n \frac{1}{N^\prime+1} \left( \frac{N^\prime}{N^\prime+1}
\right)^{a^\dag_k a_k}\,.
\label{thermalAprime}
\end{eqnarray}
The corresponding state in $A$ is obtained by inverting the
relation Eq.~(\ref{squeezing2}) and has average photon number equal to
\begin{eqnarray}
\mbox{Tr} [ \; \Omega \; \tilde{r} \; \Omega^\dag \sum_k a_k^\dag a_k ] = n \;
( s_0\; N^\prime + s_1)\,,
\label{finale}
\end{eqnarray}
with $s_0$ and $s_1$ as in Eqs.~(\ref{NBAR}) and (\ref{energiaOUT})
(see Appendix~\ref{s:appendice3} for details).
Since we are allowed to supply at most $N$ average
photons per channel use, we must require
\begin{eqnarray}
s_0\; N^\prime + s_1 \leqslant N\,, \qquad \Longrightarrow
\qquad N^\prime \leqslant \frac{N - s_1}{s_0} \leqslant N\,.
\label{finale1}
\end{eqnarray}
For all $N^\prime$ satisfying the above relation the sender is able to
use the optimal encoding~(\ref{thermalAprime}) to transfer messages with
capacity $G({\cal L},N^\prime)$ given in Eq.~(\ref{capacity}).
This means that for
large enough $n$,
the following inequality holds
\begin{eqnarray}
&&C(\tilde{\cal L},N) \geqslant G({\cal L}, (N - s_1)/{s_0}) \;.
\label{capacitylast11}
\end{eqnarray}
Since $C({\cal L}, N)$ is always greater than the right-hand side
of Eq.~(\ref{capacitylast11}), we cannot claim that
$C(\tilde{\cal L},N)$ is definitely greater than $C({\cal L},N)$.
\section{Conclusions}\label{s:sezione3}
We have discussed a model of quantum \textit{memory}
channel employing continuous alphabets which relies on the use of multi-mode squeezed
(entangled) environment state.
In the simple case of a lossy Bosonic channel we have found a unitary equivalence~(\ref{fin})
between the map $\tilde{\cal L}$ of the memory channel and the map $\cal L$ of its
memoryless counterpart. When no constraints are imposed on the input states,
the two channels are perfectly equivalent in their ability to transfer
information.
As a consequence, entangled inputs may be needed to achieve
an optimal encoding, but they do not improve the channel's performance.
This shows that the role of
entanglement is subtle.
In particular, it seems no longer useful when other unlimited resources are
available.
In the more realistic scenario of energy constrained
input states, we provided upper and lower bounds for the
capacity of the memory channel.
In particular, from Eqs.~(\ref{capacitylast}) and
(\ref{capacitylast11}) we have
\begin{eqnarray}
G({\cal L}, (N-s_1)/s_0) \leqslant C(\tilde{\cal L},N) \leqslant C({\cal L},\overline{N})\,,
\label{ultima}
\end{eqnarray}
which, assuming the conjecture~\cite{hw01,getalPRL04,sha04},
shows that the classical capacity of the memory channel is bounded by
classical capacities of the
memoryless channel ${\cal L}$ having different power constraints.
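To give a feel for the gap in~(\ref{ultima}), the following sketch (ours; we make the toy choice $s_1=s_2=0$, since $s_1$ and $s_2$ are only defined in the appendices) evaluates both bounds per channel use for a small memory kernel:
\begin{verbatim}
import numpy as np

def g(x):
    return (x + 1) * np.log(x + 1) - x * np.log(x) if x > 0 else 0.0

def rate(eta, N, M):
    return g(eta * N + (1 - eta) * M) - g((1 - eta) * M)

xi = 0.05 * np.array([[0.0, 1.0], [1.0, 0.0]])  # n = 2 toy kernel
d = np.linalg.eigvalsh(xi)
s0 = np.mean(np.cosh(4 * d))
dbar = d[np.argmax(np.abs(d))]

eta, N, M, s1, s2 = 0.7, 5.0, 0.5, 0.0, 0.0     # s1 = s2 = 0: toy choice
Nbar = N * (np.cosh(4 * dbar) + np.sinh(4 * np.abs(dbar))) + s1 + s2

print(rate(eta, (N - s1) / s0, M))              # lower bound per use
print(rate(eta, Nbar, M))                       # upper bound per use
\end{verbatim}
As expected, the two bounds bracket a narrow window for weak squeezing and spread apart as the entries of $\xi$ grow.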
It is worth noticing that, because of Eq.~(\ref{fin}),
the above relation generalizes also to all the other
capacities (e.g. quantum capacity, entanglement assisted
capacity \cite{bs98}) of $\tilde{\cal L}$
and ${\cal L}$.
Finally, we believe that the results presented here,
though not giving a conclusive
answer on the usefulness of entanglement versus memory effects,
represent a substantial step in this direction.
Furthermore,
the presented model is fairly general and could be used to study a
variety of specific and practical situations.
For instance, it could be interesting to analyze the case where the squeezing
parameters $\xi_{kk'}$ only connect nearest-neighbour modes, that is, each use is only affected by the previous one.
As such this work paves the way for further studies in Bosonic memory channels.
\section{Introduction}
\subsection{Algebraic and analytic crystal limits}
Let $K$ be a compact semisimple Lie group, $G$ its complex form.
The quantized coordinate ring on $G$ can be constructed in two different flavours.
In the algebraic approach we define a $\mathbb{C}(q)$-algebra $\cO_q[G]$ with $q$ an indeterminate, while in the analytic approach we define a $\mathbb{C}$-algebra $\cO[G_q]$, or a $*$-algebra $\cO[K_q]$ if we are interested in the compact form, with $q \in (0, \infty) \setminus\{1\}$ being a numerical value.
In this paper, we study these algebras in the crystal limit $q \to 0$ (or $q \to \infty$). This is defined quite differently in the two communities.
In the algebraic world, the $q=0$ limit is realized by the theory of crystal bases, due to Kashiwara \cite{Kashiwara:crystal1, Kashiwara:crystal2} and Lusztig \cite{Lusztig:canonical}.
Beginning with the quantized enveloping algebra $\cU_q(\lie{g})$ of $\mathfrak{g}=\mathfrak{k}_\mathbb{C}$ over $\mathbb{C}(q)$ or $\mathbb{Q}(q)$, one replaces the quantized Lie algebra generators $E_i$, $F_i$ by algebraic renormalizations $\tilde{E}_i$, $\tilde{F}_i$, called the Kashiwara operators, and then localizes the finite-dimensional modules at $q=0$. The resulting localized modules admit nice ``crystal'' bases with an elegant combinatorial structure.
This approach was used by \cite{Kashiwara:global, LeclercThibon, Iglesias:bitableaux} to define a crystal basis for the quantized coordinate rings $\cO_q[G]$.
In the analytical world, one is interested in the continuous fields of quantized coordinate rings, in the $C^*$-algebraic sense of Woronowicz \linebreak \cite{Woronowicz:SUq2, Woronowicz:pseudogroups} and Vaksman-Soibelman \cite{VakSoi:SUq2, Soibelman}, as well as the subalgebras of continuous functions on quantized flag varieties. Here, we have a handful of results, in particular due to Hong and Szymanski \cite{HonSzy:spheres, HonSzy:lens}, which show that in certain examples ($\mathrm{SU}_q(2)$, quantum projective spaces, quantum spheres and quantum lens spaces), the continuous field of $C^*$-algebras over $(0,\infty)$ extends to $[0,\infty]$, with fibres at the boundaries given by graph $C^*$-algebras.\footnote{In fact, their results are stronger, showing that their quantized function algebras are isomorphic for all values of $q\neq1$, although only in the $q=0$ limit do the graph algebra generators coincide with matrix coefficients for the quantum group.}
A recent study of the $q=0$ limit of $C(\mathrm{SU}_q(n))$ has also been undertaken in \cite{GirPal}, although without comparing the results to graph algebras.
In fact, it is known that groups with rank larger than one do not admit a description as a graph algebra, see the remarks at the end of the introduction of \cite{HonSzy:spheres}.
To overcome this difficulty, one can look at the more general concept of higher-rank graph algebras, due to Kumjian and Pask \cite{KumPas}.
Roughly speaking, a higher-rank graph is a graph whose edges are classed into $N$ colours, and which is equipped with an equivalence relation on paths such that, for instance, a path of two edges of colours red-blue can always be replaced by an equivalent path of colours blue-red with the same start and end points. In this way, paths $e$ in the graph are equipped with a \emph{degree}, or \emph{coloured length}, $\mathsf{d}(e)\in\mathbb{N}^N$. For the precise definition, see \cref{sec:k-graph_algebra}.
In a recent advance, Olof Giselsson \cite{Giselsson:SU3} has shown that $C(\mathrm{SU}_q(3))$ is isomorphic to the $C^*$-algebra of a $2$-graph, which is reproduced in \cref{fig:graph-SU3}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[
vertex/.style = {align=center, inner sep=2pt},
Rarr/.style = {->, red},
Barr/.style = {->, blue, dotted},
shadow/.style = {white, line width=3pt},
Rloop/.style = {->, red, out=165, in=195, loop},
Bloop/.style = {->, blue, out=15, in=-15, loop, dotted}
]
\node (v1) at ( 0, 0) [vertex] {$\bullet$};
\node (v2) at (-2,-1) [vertex] {$\bullet$};
\node (v3) at ( 2,-1) [vertex] {$\bullet$};
\node (v4) at (-2,-2) [vertex] {$\bullet$};
\node (v5) at ( 2,-2) [vertex] {$\bullet$};
\node (v6) at ( 0,-3) [vertex] {$\bullet$};
\draw [Rloop] (v1) edge (v1);
\draw [Bloop] (v1) edge (v1);
\draw [Barr] (v2)--(v1);
\draw [Rarr] (v3)--(v1);
\draw [Rloop] (v2) edge (v2);
\draw [Bloop] (v2) edge (v2);
\draw [Rarr, transform canvas={xshift=-0.5em}] (v4)--(v2);
\draw [Barr] (v4)--(v2);
\draw [Rarr] (v5)--(v2);
\draw [Rarr] (v6)--(v2);
\draw [Rloop] (v3) edge (v3);
\draw [Bloop] (v3) edge (v3);
\draw [shadow] (v4)--(v3);
\draw [Barr] (v4)--(v3);
\draw [shadow] (v6)--(v3);
\draw [Barr] (v6)--(v3);
\draw [Rarr] (v5)--(v3);
\draw [Barr, transform canvas={xshift=0.5em}] (v5)--(v3);
\draw [Rloop] (v4) edge (v4);
\draw [Bloop] (v4) edge (v4);
\draw [Rarr] (v6)--(v4);
\draw [Barr] (v6)--(v5);
\draw [Rloop] (v5) edge (v5);
\draw [Bloop] (v5) edge (v5);
\draw [Rloop] (v6) edge (v6);
\draw [Bloop] (v6) edge (v6);
\end{tikzpicture}
\caption{Giselsson's $2$-graph for $SU(3)$. Note that this is not just the Bruhat graph for $SU(3)$---there are additional edges.}
\label{fig:graph-SU3}
\end{figure}
The main result of this article is that the coordinate ring of any quantized compact semisimple Lie group admits a $q = 0$ limit which is a higher-rank graph algebra.
The construction of the higher-rank graph, which is summarized below and makes use of the theory of crystal bases, will make clear the link between the algebraic and analytic approaches to the crystal limit.
The generators of the $C^*$-algebra of our higher-rank graphs can be represented by tensor products of shift operators.
They have a much simpler algebra structure than the algebras $C(K_q)$, even at $q=1$, since giving an explicit formula for a product of matrix coefficients requires the use of Clebsch-Gordan formulas, whereas the product rules for graph algebras are purely combinatorial. In this way, our results can be seen as a non-commutative geometric counterpart of the simplifications afforded by crystal basis theory in the algebraic context.
\subsection{Statement of main results}
The quantum groups $\cO[G_q]$ with $q$ specialized in $(0, \infty) \setminus\{1\}$ can be represented faithfully on a Hilbert space using the work of Soibelman \cite{Soibelman}. By composing this with the specialization map\footnote{The specialization map is only partially defined, since not all elements of $\cO_q[G]$ can be specialized at any $q\in(0, \infty) \setminus\{1\}$. We will ignore this detail in the introduction.} $\cO_q[G] \to \cO[G_q]$, we obtain for every value $q\in(0, \infty) \setminus\{1\}$ a representation $\pi_q$ of $\cO_q[G]$ on the Hilbert space $\mathsf{H}=\ell^2(\mathbb{N})^{\otimes l} \otimes L^2(T)$, where $l$ is the length of the longest word in the Weyl group of $K$ and $T$ is the maximal torus of $K$.
Using the theory of crystal bases, we can easily define a subring $\cO_q^{\bfA_0}[G]$ of the $\mathbb{C}(q)$-algebra $\cO_q[G]$, consisting of elements that can be ``specialized at $q = 0$'' (see for instance \cite[\S6]{Iglesias:bitableaux} for this construction, where it is denoted $\widetilde{\mathcal{F}}$).
We begin by proving the following analytic result, which is probably known to experts.
\begin{theorem}
For any $u \in \cO_q^{\bfA_0}[G]$, the one-parameter family of operators $\pi_q(u)$ admits a well-defined norm limit $\pi_0(u)$ as $q \to 0$.
\end{theorem}
However, an important subtlety arises when we introduce the $*$-struc\-ture. One can define a $*$-structure on the $\mathbb{C}(q)$-algebra $\cO_q[G]$ which specializes to the usual $*$-structure on $\cO[G_q]$ when $q \in (0, \infty) \setminus\{1\}$, and which defines the real form $\cO[K_q]$. Unfortunately, the ${\mathbf{A}_0}$-subalgebra $\cO_q^{\bfA_0}[G]$ is not stable under the $*$, see \cref{ex:SU2-star}, so the algebra $\cO_q^{\bfA_0}[G]/q\cO_q^{\bfA_0}[G]$ at $q=0$ does not inherit a $*$-structure.
To resolve this issue, we define a new ${\mathbf{A}_0}$-algebra $\cO_q^{\bfA_0}[K] \subset \cO_q[G]$ which is stable under the $*$. Essentially, $\cO_q^{\bfA_0}[K]$ is just the ${\mathbf{A}_0}$-algebra generated by $\cO_q^{\bfA_0}[G]$ and $\cO_q^{\bfA_0}[G]^*$, although for technical reasons we define it slightly differently, see \cref{def:OqAOK}.
The limit $\pi_0$ of the Soibelman representations extends to $\cO_q^{\bfA_0}[K]$, and this
allows us to define a $*$-subalgebra $\cO[K_0]=\pi_0(\cO_q^{\bfA_0}[K])$ of $\mathcal{B}(\mathsf{H})$.
Our main task is to investigate the structure of this $*$-algebra.
For this, a major role is played by crystal basis theory, as follows. Given a dominant integral weight $\lambda \in \bP^+$, we write $V(\lambda)$ for the simple $\cU_q(\lie{g})$-module of highest weight $\lambda$, and $\mathcal{B}(\lambda)$ for the associated crystal. Recall that if $\mu, \mu' \in \bP^+$ are dominant integral weights with $\mu + \mu'= \lambda$, then there is a unique non-trivial morphism of crystals $\iota_{\mu, \mu'}: \mathcal{B}(\lambda) \to \mathcal{B}(\mu) \otimes \mathcal{B}(\mu')$. The image of this inclusion is called the \emph{Cartan component} of the tensor product $\mathcal{B}(\mu) \otimes \mathcal{B}(\mu')$, namely the unique irreducible component of highest weight $\mu + \mu'$.
\begin{definition}
Let $\lambda, \mu \in \bP^+$ with $\mu \leq \lambda$. The \emph{$\mu$-right end} of a crystal element $b \in \mathcal{B}(\lambda)$ is the element $\mathsf{R}_\mu(b) = b'' \in \mathcal{B}(\mu)$ determined by $\iota_{\lambda-\mu,\mu}: b \mapsto b'\otimes b''$.
If ${\boldsymbol{\Pi}} = (\varpi_1, \cdots, \varpi_r)$ is the family of fundamental weights of $K$ and $\lambda\geq\rho =\sum_i\varpi_i$, then we write
\[
\mathsf{R}_{\boldsymbol{\Pi}}(b) = \big(\mathsf{R}_{\varpi_1}(b), \cdots, \mathsf{R}_{\varpi_r}(b) \big)
\]
for the family of \emph{fundamental right ends} of $b \in \mathcal{B}(\lambda)$.
\end{definition}
We can now define the higher-rank graph associated to the crystal limit of $\cO[K_q]$. We identify the set of dominant integral weights $\bP^+$ with the monoid $\mathbb{N}^r$ by identifying $(n_i) \in \mathbb{N}^r$ with $\sum_i n_i \varpi_i$.
\begin{theorem}
\label{prop:intro-hgraph}
We can define an $r$-graph $\Lambda_{\lie{g}}$ as follows. The vertex set is
\[
\hgraphstd^0 = \{ \mathsf{R}_{\boldsymbol{\Pi}}(b) \mid b \in \mathcal{B}(\rho) \}
\]
namely the fundamental right ends of elements of $\mathcal{B}(\rho)$.
The paths are given by pairs $(v, b) \in \hgraphstd^0 \times \mathcal{B}(\lambda)$ where $v = \mathsf{R}_{\boldsymbol{\Pi}}(c)$ is such that $c \otimes b$ is in the Cartan component of $\mathcal{B}(\rho) \otimes \mathcal{B}(\lambda)$. The range and source maps are given by
\begin{align*}
& \mathsf{s}(v,b) = v,
&& \mathsf{r}(v,b) = \mathsf{R}_{\boldsymbol{\Pi}}(c\otimes b).
\end{align*}
\end{theorem}
\begin{remark}
It is part of the proposition that the above definitions depend only on the vertex $v\in\hgraphstd^0$ and not on the choice of crystal element $c\in\mathcal{B}(\rho)$ which represents it.
\end{remark}
Our main theorem is the following. We write $\mathrm{KP}_\CC(\hgraphstd)$ for the Kumjian-Pask algebra of $\Lambda_{\lie{g}}$ in the sense of \cite{Aranda:Kumjian-Pask}, and $C^*(\Lambda_{\lie{g}})$ for the higher rank graph $C^*$-algebra in the sense of \cite{KumPas}.
\begin{theorem}
Let $K$ be a compact, connected, simply connected semisimple Lie group, with complexified Lie algebra $\mathfrak{g}=\mathfrak{k}_\mathbb{C}$, and let $\Lambda_{\lie{g}}$ be the higher rank graph from \cref{prop:intro-hgraph}. There is an isomorphism of $*$-algebras $\cO[K_0]\cong\mathrm{KP}_\CC(\hgraphstd)$. Its $C^*$-closure is $C(K_0)\cong C^*(\Lambda_{\lie{g}})$.
\end{theorem}
In fact, we obtain a much more general result, see \cref{thm:Soibelman_faithful}. Firstly, we may allow $K$ to be non-simply connected. Secondly, we consider the quantized coordinate ring of the canonical torus bundle $Y_S \to X_S$ over any generalized flag variety for $K$ associated to a set $S\subseteq{\boldsymbol{\Delta}}$ of simple roots, see \cref{sec:flag_manifolds} for the notation. The coordinate ring $\mathcal{O}[Y_S]$ is generated by matrix coefficients for simple $\mathfrak{g}$-modules with highest weights in a submonoid $\bP^+_{K,S} \subseteq \bP^+$. We write $\mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ for the set of dominant weights which generate $\bP^+_{K,S}$. The quantized coordinate rings of $Y_S$ and $X_S$ admit analytic limits at $q=0$.
We prove that by replacing ${\boldsymbol{\Pi}}$ by $\mathsf{C}$ in the definition of $\Lambda_{\lie{g}}$, we obtain another higher-rank graph $\Lambda_{\lie{g}, \Cset}$ and show that we have an isomorphism $\mathcal{O}[Y_{S,0}] \cong \mathrm{KP}_\CC(\hgraph)$. The gauge-invariant subalgebra is then $\mathcal{O}[X_{S,0}]\cong \mathrm{KP}_\CC(\hgraph)_0$. These isomorphisms extend naturally to the $C^*$-algebras.
The results can be summarized in the following way.
\begin{theorem}
\label{thm:continuous_field}
Let $K$ be a compact connected semisimple Lie group, and let $Y_S$ be the canonical torus bundle over the generalized flag variety $X_S$ associated to a set $S\subset{\boldsymbol{\Delta}}$ of simple roots.
There is a continuous field of $C^*$-algebras $C(Y_{S,\bullet})$ over $[0,\infty]$ whose fibres are
\begin{equation*}
C(Y_{S,q}) \cong
\begin{cases}
C(Y_{S}), & q=1,\\
C(Y_{S,q}), & q\in(0, \infty) \setminus\{1\}\\
C^*(\Lambda_{\lie{g}, \Cset}), &q\in\{0,\infty\}.
\end{cases}
\end{equation*}
where $\Lambda_{\lie{g}, \Cset}$ is the higher rank graph described above (and spelled out in \cref{thm:hgraph}).
\end{theorem}
\begin{remark}
The $C^*$-algebras $C(K_q)$ are known to be abstractly isomorphic for all $q\in(0, \infty) \setminus\{1\}$, see \cite{Giselsson}. It is not yet known if they are also isomorphic to the graph algebra $C(K_0)$, other than for the cases $K=\mathrm{SU}(2)$ and $\mathrm{SU}(3)$, the latter due again to Giselsson \cite{Giselsson}, and a few examples in the case of homogeneous spaces, due to Hong and Szymanski \cite{HonSzy:spheres}.
\end{remark}
\subsection{Structure of the paper}
In \cref{sec:background} we review some background material on quantized enveloping algebras and crystal bases.
In \cref{sec:coordinate_ring_def} we review some standard material on quantized coordinate rings, as well as defining the ${\mathbf{A}_0}$-form $\cO_q^{\bfA_0}[K]$ of $\cO_q[K]$.
In \cref{sec:coordinate_rings} we show that the elements of $\cO_q^{\bfA_0}[K]$ admit analytic limits in terms of the Soibelman representation. This allows us to define the $*$-algebra $\cO[K_0]$, which is our main object of study.
In \cref{sec:Cartan-braiding} we introduce the notion of Cartan braiding, which is the crystal limit of a certain rescaled version of the braiding in the category of finite-dimensional $\cU_q(\lie{g})$-modules.
This allows us to obtain various relations holding in $\cO[K_0]$.
In \cref{sec:properties-Cartan} we study the Cartan braiding in more detail: we show that it satisfies the hexagon and braid relations, and gives a partial action of the symmetric group on tensor products.
We also introduce the notion of right end of a crystal.
In \cref{sec:k-graph_algebra} we discuss higher-rank graph and their corresponding algebras. We prove, using crystal bases, that one can associate a higher-rank graph $\Lambda_{\lie{g}}$ of rank $r$ to any complex semisimple Lie algebra $\mathfrak{g}$ of rank $r$ (in fact, we prove this in a more general setting).
In \cref{sec:crystal-algebra} we introduce the crystal algebra $\cA_{\lie{k}}$ as a useful tool to study the relation between $\cO[K_0]$ and $\mathrm{KP}_\CC(\hgraphstd)$, the $*$-algebra associated to the higher-rank graph $\Lambda_{\lie{g}}$.
The main result here is that we have surjective $*$-homomorphisms $\mathrm{KP}_\CC(\hgraphstd) \to \cA_{\lie{k}}$ and $\cA_{\lie{k}} \to \cO[K_0]$.
In \cref{sec:crystal-limit} we show that the three $*$-algebras
$\mathrm{KP}_\CC(\hgraphstd)$, $\cA_{\lie{k}}$ and $\cO[K_0]$ are all $*$-isomorphic, which gives our main result about the structure of the crystal limit $\cO[K_0]$.
Finally in \cref{sec:further-properties} we discuss some further properties of the higher-rank graphs $\Lambda_{\lie{g}}$, mainly related to the role of the Weyl groups, as well as discussing some explicit examples.
\subsection*{Acknowledgements}
We would like to thank Olof Giselsson for sharing with us the example of $\mathrm{SU}_q(3)$ as a $2$-graph algebra, as well as comparing his own approach with ours.
We would also like to thank Sergey Neshveyev for discussions concerning continuous fields of $C^*$-algebras in the context of compact quantum groups.
\section{Background material}
\label{sec:background}
\subsection{Ground fields and specialization}
As mentioned before, we will consider the quantized enveloping algebra $\cU_q(\lie{g})$ and quantized coordinate ring $\cO_q[G]$ in two settings: 1) over the field of rational functions $\mathbb{C}(q)$ with $q$ an indeterminate; 2) over $\mathbb{C}$ with $q \in (0, \infty) \setminus\{1\}$.
We write $\mathbb{C}[q]$ for the polynomial ring in $q$, so that $\mathbb{C}(q)$ is its field of fractions. We consider the three subrings
\begin{align}
\nonumber
{\mathbf{A}_0} &:= \{ g / h : g, h \in \mathbb{C}[q], \ h(0) \neq 0 \}, \\
\label{eq:base_rings}
\mathbf{A}_\infty &:= \{ g / h : g, h \in \mathbb{C}[q], \ h(\infty) \neq 0 \},\\
\nonumber
\mathbf{A} &:= \mathbb{C}[q,q^{-1}].
\end{align}
For every fixed $q\in[0,\infty]$, there is a partially defined \emph{specialization map}
\(
\ev_{q} : \mathbb{C}(q) \to \mathbb{C}
\)
given by evaluation at $q$. We apologize for the reuse of $q$ as both formal parameter and specialized value $q\in[0,\infty]$. The rings ${\mathbf{A}_0}$, $\mathbf{A}_\infty$, $\mathbf{A}$ contain the elements which can be evaluated at $0$, at $\infty$ and at all $q\in\mathbb{C}^\times$, respectively.
\subsection{Quantized enveloping algebras}
\label{sec:Uqg}
Let $\mathfrak{g}$ be a complex semisimple Lie algebra and fix a Cartan subalgebra $\mathfrak{h}$. Write $\mathfrak{k}$ for the compact real form of $\mathfrak{g}$ and put $\mathfrak{t} = \mathfrak{k} \cap \mathfrak{h}$. The connected, simply connected Lie groups associated to $\mathfrak{g}$ and $\mathfrak{k}$ are denoted by $G$ and $K$. The maximal torus of $K$ is $T \cong \mathbb{T}^r$, where $r$ denotes the rank.
We write ${\boldsymbol{\Phi}}$ for the set of roots of $\mathfrak{g}$ and
${\boldsymbol{\Delta}} = \{\alpha_1, \cdots, \alpha_r\}$ for a choice of simple roots.
The root lattice will be denoted by $\bQ$ and the lattice of integral weights by $\bP$, generated by the fundamental weights ${\boldsymbol{\Pi}} = \{\varpi_1, \cdots, \varpi_r\}$. The abelian semigroup $\bP^+ = \mathbb{N} \cdot {\boldsymbol{\Pi}} \cong \mathbb{N}^r$ of dominant weights will play an important role in the higher-rank graph to be defined later.
We put $\rho=\sum_{i=1}^r \varpi_i$ as usual.
We write $(\slot,\slot)$ for the invariant bilinear form on $\mathfrak{h}^*$ such that the short roots $\alpha$ satisfy $(\alpha,\alpha)=2$.
The quantized enveloping algebra $\cU_q(\lie{g})$ is the Hopf algebra with generators $E_i$, $F_i$ and $K_i^{\pm1}$ for $i = 1, \cdots, r$, and with the standard algebra relations that can be found for instance in \cite[\S6.1.2]{KliSch}, or \cite[\S{}4.3]{Jantzen} %
with $K_i=K_{\alpha_i}$.
Again, we may consider this either over the field $\mathbb{C}(q)$ or over $\mathbb{C}$ with $q\in(0,\infty)\setminus\{1\}$ a fixed parameter.
There are different possible choices for the coproduct.
The one we consider is more or less standard in the context of crystal bases\footnote{For the operator algebraist, it is equivalent to the coproduct from \cite{NesTus:book}, but the opposite of the coproduct from \cite{KliSch, VoiYun:CQG}.}
and is given by
\begin{equation}
\label{eq:coproduct}
\Delta(K_i) = K_i \otimes K_i, \quad
\Delta(E_i) = E_i \otimes K_i^{-1} + 1 \otimes E_i, \quad
\Delta(F_i) = F_i \otimes 1 + K_i \otimes F_i.
\end{equation}
We write $\cU_q(\lie{h})$ for the Hopf subalgebra generated by the $K_i^{\pm1}$.
We also consider a $*$-structure on $\cU_q(\lie{g})$ which corresponds to the compact real form $\mathfrak{k}$ of $\mathfrak{g}$.
It acts on the generators by
\begin{align}
\label{eq:star-structure}
K_i^* = K_i, &&
E_i^* = q_i F_i K_i^{-1}, &&
F_i^* = q_i^{-1} K_i E_i,
\end{align}
and $q^* = q$ when we work over $\mathbb{C}(q)$. Here $q_i = q^{(\alpha_i, \alpha_i) / 2}$.
We write $\cU_q(\lie{k})$ for $\cU_q(\lie{g})$ equipped with this $*$-structure.
We note that, since we are putting $q^* = q$, the specialization map at $q\in\mathbb{C}^\times$ is a $*$-morphism only when $q$ is real.
\begin{remark}
We warn the reader that this is not the most common choice for the $*$-structure in the operator algebraic picture. It is a convenient choice when working with crystal bases because the formulas \eqref{eq:star-structure} coincide with those of the standard anti-automorphism of $\cU_q(\lie{g})$ called $\tau_1$ in \cite[\S9.20 (3)]{Jantzen}, except that our involution $*$ is extended $\mathbb{C}$-antilinearly.
\end{remark}
We write $V(\lambda)$ for the irreducible finite-dimensional integrable (\emph{i.e.}, type 1) $\cU_q(\lie{g})$-module of highest weight $\lambda \in \bP^+$. Again, this can be constructed either in the algebraic setting as a module over $\mathbb{C}(q)$, or in the analytic setting as a $\mathbb{C}$-vector space. If necessary, we will denote the latter by $V(\lambda)_q$ with $q\in(0, \infty) \setminus\{1\}$ a fixed parameter, but generally we will suppress the subscript and interpret the $\cU_q(\lie{g})$-module $V(\lambda)$ according to context. Once again, there is a partially defined specialization map $\ev_q:V(\lambda) \to V(\lambda)_q$ for each $q\in(0, \infty) \setminus\{1\}$.
There is a unique $\mathbb{C}(q)$-valued inner product on $V(\lambda)$ such that the $\cU_q(\lie{g})$-action is a $*$-representation and the highest weight vector $v_\lambda$ has norm $1$, see \cite[Lemma 9.20 c)]{Jantzen}. For convenience, we will refer to this as the \emph{standard inner product} on $V(\lambda)$.
\subsection{Crystal bases}
In this section we briefly recall various facts associated with Kashiwara's theory of crystal bases \cite{Kashiwara:crystal1, Kashiwara:crystal2}.
Our main references are the textbooks \cite{Hong-Kang} and \cite{Jantzen}.
We need the Kashiwara operators $\tilde{E}_i$ and $\tilde{F}_i$, which are algebraic renormalizations of $E_i$ and $F_i$ with well-defined limits at $q = 0$.
They play a similar role to the operator phases of $E_i$ and $F_i$ in the $C^*$-algebraic world.
To define them, we first introduce the divided powers
\begin{align*}
E_i^{(k)} & := E_i^k/[k]_{q_i}!, &
F_i^{(k)} & := F_i^k/[k]_{q_i}!,
\end{align*}
where $[k]_q := \frac{q^k - q^{-k}}{q - q^{-1}}$ and $[k]_q! := [k]_q [k-1]_q \cdots [1]_q$.
The Kashiwara operators are defined by imposing that, for any weight vector $u_0$ in an integrable $\cU_q(\lie{g})$-module $V$ with $E_iu_0=0$, we have
\begin{align*}
\tilde{E}_i :F_i^{(k)}u_0 &\mapsto F_i^{(k-1)}u_0, &
\tilde{F}_i :F_i^{(k)}u_0 &\mapsto F_i^{(k+1)}u_0,
\end{align*}
with the convention $F_i^{(-1)} u_0 = 0$.
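For example (a standard $\mathfrak{sl}_2$ computation, included here for the reader's convenience), the module $V(\lambda)$ with $\lambda \in \mathbb{N}$ has basis $v_k = F^{(k)} v_\lambda$, $k = 0, \cdots, \lambda$, on which
\begin{align*}
F v_k &= [k+1]_q \, v_{k+1}, &
E v_k &= [\lambda - k + 1]_q \, v_{k-1}, &
\tilde{F} v_k &= v_{k+1}, &
\tilde{E} v_k &= v_{k-1},
\end{align*}
with the convention $v_{-1} = v_{\lambda+1} = 0$. The Kashiwara operators thus strip off the quantum-integer factors, which in general blow up as $q \to 0$; this is what makes the localization at $q = 0$ possible.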
We can now define crystal bases.
\begin{definition}
A \emph{crystal lattice} in $V$ is an ${\mathbf{A}_0}$-submodule $\mathcal{L}$ of $V$ such that:
\begin{enumerate}
\item $\mathcal{L}$ is finitely generated over ${\mathbf{A}_0}$ and generates $V$ as a vector space over $\mathbb{C}(q)$,
\item $\mathcal{L} = \bigoplus_{\mu \in \bP} \mathcal{L}_\mu$, where $\mathcal{L}_\mu = \mathcal{L} \cap V_\mu$,
\item $\tilde{E}_i \mathcal{L} \subset \mathcal{L}$ and $\tilde{F}_i \mathcal{L} \subset \mathcal{L}$ for all $i$.
\end{enumerate}
\end{definition}
\begin{definition}
A \emph{crystal basis} of $V$ is a pair $(\mathcal{L}, \mathcal{B})$, where $\mathcal{L}$ is a crystal lattice and $\mathcal{B}$ is a $\mathbb{C}$-basis of the quotient $\mathcal{L} / q \mathcal{L}$ such that:
\begin{enumerate}
\item $\mathcal{B} = \bigcup_\mu \mathcal{B}_\mu$, where $\mathcal{B}_\mu = \mathcal{B} \cap (\mathcal{L}_\mu / q \mathcal{L}_\mu)$,
\item $\tilde{E}_i \mathcal{B} \subset \mathcal{B} \sqcup \{0\}$ and $\tilde{F}_i \mathcal{B} \subset \mathcal{B} \sqcup \{0\}$ for all $i$,
\item for any $b, b^\prime \in \mathcal{B}$ and $i$ we have $b = \tilde{E}_i b^\prime \Longleftrightarrow b^\prime = \tilde{F}_i b$.
\end{enumerate}
\end{definition}
In this way, we replace the action of $\cU_q(\lie{g})$ on a finite-dimensional module $V$ by a purely combinatorial action of the Kashiwara operators $\tilde{E}_i,\tilde{F}_i$ on the finite set $\mathcal{B}\sqcup\{0\}$.
Kashiwara proved that every finite-dimen\-sion\-al integrable $\cU_q(\lie{g})$-module admits a crystal basis, which is unique up to isomorphism. We denote the crystal basis of the simple module $V(\lambda)$ by $(\mathcal{L}(\lambda), \mathcal{B}(\lambda))$.
The crystal basis is orthonormal in the following sense, compare \cite[Lemma 5.1.6]{Hong-Kang}, \cite[Theorem 9.25]{Jantzen}.
\begin{lemma}
\label{lem:orthogonality}
Fix $\lambda \in \bP^+$. Let $\{v_1, \cdots, v_d\} \subset \mathcal{L}(\lambda)$ be a family of lifts for the crystal basis $\mathcal{B}(\lambda) = \{b_1, \cdots, b_d\}$ of $V(\lambda)$. Let $\ip{\slot,\slot}$ be the standard inner product on $V(\lambda)$. Then $\ip{v_a,v_b} \in \delta_{ab} + q {\mathbf{A}_0}$.
As a consequence, for any $u,v\in\mathcal{L}(\lambda)$ we have $\ip{u,v} \in {\mathbf{A}_0}$.
\end{lemma}
From an analytic point of view, \cref{lem:orthogonality} means that for any $u, v \in \mathcal{L}(\lambda)$, the limit of the specialized inner products
\[
\lim_{q\to 0} \ev_q\ip{u,v}
\]
exists. Note that $\ev_q\ip{u,v}$ does not necessarily exist for all $q>0$, but since $\ip{u,v}\in{\mathbf{A}_0}$, it at least exists in some neighbourhood of $0$.
We will use similar observations frequently in this work, and as above will use the notation $\lim_{q\to0}$ when strictly speaking we mean $\lim_{q \to 0^+}$.
\subsection{Tensor products}
A major feature of crystal bases is the simplicity of their tensor products, and this will play a major role in what follows.
Let $V_1$ and $V_2$ be finite-dimensional integrable $\cU_q(\lie{g})$-modules with crystal bases $(\mathcal{L}_1, \mathcal{B}_1)$ and $(\mathcal{L}_2, \mathcal{B}_2)$ respectively.
Then $(\mathcal{L}_1 \otimes_{{\mathbf{A}_0}} \mathcal{L}_2, \mathcal{B}_1 \times \mathcal{B}_2)$ is a crystal basis of $V_1 \otimes V_2$, see \cite[Theorem 4.4.1]{Hong-Kang}.
We will be lazy and write $\mathcal{L}_1\otimes\mathcal{L}_2$ to mean the crystal lattice $\mathcal{L}_1\otimes_{\mathbf{A}_0}\mathcal{L}_2$.
To describe the action of the Kashiwara operators we first define
\[
\varepsilon_i(b) := \max \left\{ k \in \mathbb{N}_0 : \tilde{E}_i^k b \neq 0 \right\}, \quad
\varphi_i(b) := \max \left\{ k \in \mathbb{N}_0 : \tilde{F}_i^k b \neq 0 \right\}.
\]
Then the action on the tensor product is given by
\begin{align}
\label{eq:tensor_E}
\tilde{E}_i (b \otimes b') &= \begin{cases}
\tilde{E}_i b \otimes b' & \varphi_i(b) \geq \varepsilon_i(b') \\
b \otimes \tilde{E}_i b' & \varphi_i(b) < \varepsilon_i(b')
\end{cases},
\\
\label{eq:tensor_F}
\tilde{F}_i (b \otimes b') &= \begin{cases}
\tilde{F}_i b \otimes b' & \varphi_i(b) > \varepsilon_i(b') \\
b \otimes \tilde{F}_i b' & \varphi_i(b) \leq \varepsilon_i(b')
\end{cases}.
\end{align}
Here, as is customary, we denote the element $(b_1, b_2) \in \mathcal{B}_1 \times \mathcal{B}_2$ by $b_1 \otimes b_2$, and we use the conventions $b_1 \otimes 0 = 0 \otimes b_2 = 0$.
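These rules are purely combinatorial and easy to implement. The following \texttt{Python} sketch (our illustration, not taken from any crystal-basis library) encodes the $\mathfrak{sl}_2$ case, where $\mathcal{B}(m) = \{0, \cdots, m\}$ with $\tilde{E} \colon k \mapsto k-1$, $\tilde{F} \colon k \mapsto k+1$, $\varepsilon(k) = k$ and $\varphi(k) = m - k$, and decomposes $\mathcal{B}(1) \otimes \mathcal{B}(1)$ into connected components by locating the elements killed by $\tilde{E}$.
\begin{verbatim}
# sl_2 crystals: B(m) = {0,...,m}; Python's None plays the role of 0
def Etilde(b, m1, m2):
    b1, b2 = b
    if m1 - b1 >= b2:                 # phi(b1) >= eps(b2): act on left
        return (b1 - 1, b2) if b1 > 0 else None
    return (b1, b2 - 1)               # here b2 > m1 - b1 >= 0

def Ftilde(b, m1, m2):
    b1, b2 = b
    if m1 - b1 > b2:                  # phi(b1) > eps(b2): act on left
        return (b1 + 1, b2)           # b1 < m1 is automatic here
    return (b1, b2 + 1) if b2 < m2 else None

m1 = m2 = 1
for b in [(i, j) for i in range(m1 + 1) for j in range(m2 + 1)]:
    if Etilde(b, m1, m2) is None:     # highest-weight element
        comp = [b]
        while (b := Ftilde(b, m1, m2)) is not None:
            comp.append(b)
        print(comp)
\end{verbatim}
The output lists the components $\{(0,0),(1,0),(1,1)\}$ and $\{(0,1)\}$, recovering the familiar decomposition $V(1) \otimes V(1) \cong V(2) \oplus V(0)$ at the crystal level.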
\section{The quantized coordinate ring}
\label{sec:coordinate_ring_def}
\subsection{Definitions}
Let $V$ be a finite-dimensional integrable $\cU_q(\lie{g})$-module over $\mathbb{C}(q)$ and let $V^*=\Hom_{\mathbb{C}(q)}(V,\mathbb{C}(q))$ be the dual module.
A \emph{matrix coefficient} for the module $V$ is a $\mathbb{C}(q)$-linear map $c^V_{f, v}$ on $\cU_q(\lie{g})$ of the form
\begin{equation}
\label{eq:matrix_coefficient}
c^V_{f, v}(X) := f(X v), \quad v \in V,\ f \in V^*.
\end{equation}
The \emph{quantized coordinate ring} $\cO_q[G]$ is the subspace of $\Hom_{\mathbb{C}(q)}(\cU_q(\lie{g}),\mathbb{C}(q))$ spanned by the matrix coefficients of finite-dimensional integrable modules.
This is a Hopf algebra with operations obtained from those of $\cU_q(\lie{g})$ by duality. Here, we use the algebraists' convention of a non-skew pairing:
\begin{align*}
&(\Delta(X), a\otimes b) = (X,ab),
&(X\otimes Y, \Delta(a)) = (XY, a).
\end{align*}
Explicitly, this corresponds to the matrix coproduct on matrix coefficients:
\[
\Delta(c^V_{f,v}) = \sum_i c^V_{f,v_i} \otimes c^V_{f^i,v},
\]
where $\{v_i\}$ and $\{f^i\}$ are any basis and dual basis for $V$ and $V^*$, respectively.
We denote the left and right regular representations of $\cU_q(\lie{g})$ on $\cO_q[G]$ by
\begin{align}
\label{eq:regular_repns}
X\triangleright c^V_{f,v} &= c^V_{f,Xv}, &
c^V_{f,v} \triangleleft X &= c^V_{f\circ X,v}.
\end{align}
We also have a version of the quantized coordinate ring defined over $\mathbb{C}$ for a fixed complex parameter $q \in (0, \infty) \setminus\{1\}$. This algebra will be denoted $\cO[G_q]$, with apologies for the subtle difference in notation. For every $q \in (0, \infty) \setminus\{1\}$ we have a partially defined specialization map
\[
\ev_q : \cO_q[G] \to \cO[G_q],
\]
induced by evaluation at $q$.
In \cite{Kashiwara:global} Kashiwara introduced a crystal lattice for $\cO_q[G]$ consisting of finite sums of matrix coefficients $c^V_{f,v}$ where $v$ belongs to a crystal lattice $\mathcal{L}$ for $V$ and $f$ belongs to $\mathcal{L}^* = \Hom_{\mathbf{A}_0}(\mathcal{L},{\mathbf{A}_0}) \subset V^*$.
We denote this ${\mathbf{A}_0}$-form by $\cO_q^{\bfA_0}[G]$.
This algebra was also studied by Iglesias in \cite{Iglesias:bitableaux}, where he uses the notation $\widetilde{\mathcal{F}}$.
\subsection{Star structure and compact integral form}
\label{sec:compact_form}
The $*$-structure \eqref{eq:star-structure} on $\cU_q(\lie{g})$ induces a corresponding $*$-structure on $\cO_q[G]$ and on each $\cO[G_q]$ by the formula $(X,a^*) = \overline{(S(X)^*,a)}$.
When equipped with this $*$-structure, we denote the $*$-algebras $\cO_q[G]$ and $\cO[G_q]$ by $\cO_q[K]$ and $\cO[K_q]$, respectively. Again, we caution that $q^* = q$, and so the specialization map \linebreak $\cO_q[K]\to\cO[K_q]$ is a $*$-morphism only when $q$ is real.
As mentioned in the introduction, the ${\mathbf{A}_0}$-form $\cO_q^{\bfA_0}[G]$ is not stable under the $*$. This can be seen in the following example.
\begin{example}
\label{ex:SU2-star}
Let $V$ be the fundamental representation of $\mathcal{U}_q(\mathfrak{sl}_2)$ over $\mathbb{C}(q)$. Let $v_1$ be the highest weight vector for $V$ and $v_2 = F v_1$. Let $\{f^1,f^2\}\in V^*$ be the dual basis for $V^*$. The subspace of $\cO_q^{\bfA_0}[G]$ corresponding to the fundamental representation is spanned by the matrix coefficients $u^V_{ij} = c^V_{f^i,v_j}$ with $i,j \in \{1,2\}$.
In this particular case, $\{v_1,v_2\}$ is an orthonormal basis for $V$, so upon specialization at $q\in(0, \infty) \setminus\{1\}$, the matrix coefficients $u^V_{ij}$ coincide with the traditional generators for the operator algebraists' $*$-algebra $\cO[K_q]$, which in Woronowicz's notation \cite{Woronowicz:SUq2} are denoted
\begin{align*}
\alpha &= u^V_{11} , &
-q\gamma^* &= u^V_{12} , \\
\gamma &= u^V_{21} , &
\alpha^* &= u^V_{22} .
\end{align*}
We see that $(u^V_{21})^* = -q^{-1}u^V_{12}$, which does not belong to $\cO_q^{\bfA_0}[G]$.
\end{example}
Because of this,
we cannot simply equip the algebra $\cO_q^{\bfA_0}[G]$ with the $*$ operation. Instead, we proceed as follows.
For each simple module $V(\lambda)$ fix a lift of the crystal basis to a basis of weight vectors $\{v_i\}_i \subset \mathcal{L}(\lambda)$, with dual basis $\{f^i\}_i \subset V(\lambda)^*$.
We impose that $v_1 = v_\lambda$ is the highest weight vector of $V(\lambda)$, and so $f^1 = f^{-\lambda}$ is the lowest weight element.
With this notation, we define the matrix coefficients
\begin{equation}
\label{eq:genf_genv}
\mathsf{f}^\lambda_i := c^{V(\lambda)}_{f^i, v_\lambda}, \quad
\mathsf{v}^\lambda_i := S(c^{V(\lambda)}_{f^{-\lambda}, v_i}).
\end{equation}
We also note that $\mathsf{v}^\lambda_i = c^{V(\lambda)^*}_{\tilde{v}_i, f^{-\lambda}}$, where $\{\tilde{v}_i\}_i \subset V(\lambda)^{**}$ is the dual basis to $\{f^i\}_i$.
Following Joseph \cite[\S9.1.6]{Joseph:book}, let $\cO_q[G/N^+]$ and $\cO_q[G/N^-]$ be the $\mathbb{C}(q)$-subalgebras of $\cO_q[G]$ generated by the matrix coefficients $\mathsf{f}^\lambda_i$ and $\mathsf{v}^\lambda_i$, respectively (for all $\lambda\in\bP^+$ and $i=1, \cdots, \dim V(\lambda)$). Joseph uses the notation $R_q[G/N^+]$ and $R_q[G/N^-]$.
The multiplication map $\cO_q[G/N^+] \otimes_{\mathbb{C}(q)} \cO_q[G/N^-] \to \cO_q[G]$ is surjective, see \cite[Proposition 9.2.2]{Joseph:book}.
We now define ${\mathbf{A}_0}$-forms of these subalgebras.
\begin{definition}
\label{def:OqAOK}
We define:
\begin{itemize}
\item $\cO^{\mathbf{A}_0}_q[G/N^+]$ as the ${\mathbf{A}_0}$-subalgebra generated by the elements $\mathsf{f}^\lambda_i$,
\item $\cO^{\mathbf{A}_0}_q[G/N^-]$ as the ${\mathbf{A}_0}$-subalgebra generated by the elements $\mathsf{v}^\lambda_i$,
\item $\cO^{\mathbf{A}_0}_q[K]$ as the ${\mathbf{A}_0}$-subalgebra generated by the elements $\mathsf{f}^\lambda_i$ and $\mathsf{v}^\lambda_i$.
\end{itemize}
\end{definition}
Thus, while $\cO_q[K]$ and $\cO_q[G]$ denote the same $\mathbb{C}(q)$-algebra with or without the $*$-structure, we stress that $\cO_q^{\bfA_0}[K]$ and $\cO_q^{\bfA_0}[G]$ are not the same ${\mathbf{A}_0}$-subalgebra. They represent different $q=0$ limits of the families of algebras $\cO[K_q]=\cO[G_q]$.
The notation $\cO^{\mathbf{A}_0}_q[K]$ suggests that this algebra is closed under the $*$-structure, which is not immediately obvious from its definition.
However this is true, as shown in the next result.
\begin{proposition}
\label{prop:v_f_adjoints}
We have $(\cO^{\mathbf{A}_0}_q[G/N^+])^* = \cO^{\mathbf{A}_0}_q[G/N^-]$. Moreover
\[
\begin{split}
(\mathsf{f}^\lambda_i)^* & \equiv \mathsf{v}^\lambda_i \mod q \cO^{\mathbf{A}_0}_q[G/N^-], \\
(\mathsf{v}^\lambda_i)^* & \equiv \mathsf{f}^\lambda_i \mod q \cO^{\mathbf{A}_0}_q[G/N^+].
\end{split}
\]
\end{proposition}
\begin{proof}
Let $\{v_i\}_i$ be the lift of the crystal basis of $V(\lambda)$ as above.
Let $(\cdot, \cdot)$ be the unique inner product on $V(\lambda)$ invariant under $*$ such that $(v_1, v_1) = 1$.
Let $G^i_j = (v_i, v_j)$ be the Gram matrix for the basis $\{v_i\}_i$.
Some linear algebra shows that we have the identity
\[
(c^{V(\lambda)}_{f^i, v_j})^* = \sum_{k, l} (G^{-1})^l_i G^j_k S(c^{V(\lambda)}_{f^k, v_l}).
\]
Now consider the case of $\mathsf{f}^\lambda_i = c^{V(\lambda)}_{f^i, v_1}$.
We have $G^1_k = (v_1, v_k) = \delta_{1 k}$, since different weight spaces are orthogonal under the inner product and $(v_1, v_1) = 1$. Then we get
\[
(\mathsf{f}^\lambda_i)^* = \sum_l (G^{-1})^l_i \mathsf{v}^\lambda_l.
\]
Recall from \cref{lem:orthogonality} that the basis $\{v_i\}_i$ becomes orthonormal at $q = 0$, which implies that $G^i_j$ and $(G^{-1})^i_j$ are equal to $\delta_{i j}$ modulo terms in $q {\mathbf{A}_0}$.
Thus we obtain the desired result for $(\mathsf{f}^\lambda_i)^*$. The result for $(\mathsf{v}^\lambda_i)^*$ is proven similarly, and it follows that $(\cO^{\mathbf{A}_0}_q[G/N^+])^* = \cO^{\mathbf{A}_0}_q[G/N^-]$.
\end{proof}
\begin{remark}
It is not clear from the definition above whether $\cO_q^{\bfA_0}[G] \subset \cO_q^{\bfA_0}[K]$, although we expect this to be the case. Answering this question would require a more careful study of the crystal lattice structure. Since we don't need this property here, we won't discuss it further.
\end{remark}
\subsection{The gauge action}
\label{sec:gauge_action}
For the moment, we continue to suppose that $K$ is simply connected, with rank $r$. That is, we have $T \cong \mathbb{T}^r$ for the maximal torus and $\bP \cong \hat{T} \cong \mathbb{Z}^r$ for the lattice of integral weights.
The quantized coordinate ring $\cO_q[K]$ is $\mathbb{Z}^r$-graded as follows.
\begin{definition}
\label{def:Zr-grading}
\label{def:gauge_action}
For any $\mu \in \bP$, we write $\cO_q[K]_{\mu}$ for the span of all matrix coefficients $c^V_{f, v}$ where $v \in V$ is a vector of weight $\mu$, so that $\cO_q[K] = \bigoplus_{\mu \in \bP} \cO_q[K]_\mu$ is a $\bP$-graded algebra.
\end{definition}
The $\bP$-grading corresponds to the gauge action of the torus subgroup $T$,
\begin{equation}
z\cdot c^V_{f,v} = z^\mu c^V_{f,v} \quad \text{for } c^V_{f,v}\in\cO_q[K]_\mu,
\end{equation}
where $z^\mu = e^{(\mu,\log(z))}$ denotes the evaluation at $z\in T$ of the unitary character associated to $\mu\in\bP$. Geometrically, this corresponds to the realization of $K$ as a principal $T$-bundle over the full flag variety $X=K/T$, and this language is extended by analogy to the quantum case. The gauge-invariant subalgebra of $\cO_q[K]$ is then the quantized coordinate ring of the flag variety.
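For instance, for $K=\mathrm{SU}(2)$, with the conventions of \cref{ex:SU2-star} and the identification $\bP\cong\mathbb{Z}$, the generators $\alpha$ and $\gamma$ lie in $\cO_q[K]_{1}$ while $\alpha^*$ and $\gamma^*$ lie in $\cO_q[K]_{-1}$, so the gauge action of $z\in T\cong\mathbb{T}$ reads $z\cdot\alpha = z\alpha$, $z\cdot\gamma = z\gamma$, $z\cdot\alpha^* = z^{-1}\alpha^*$ and $z\cdot\gamma^* = z^{-1}\gamma^*$.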
\subsection{Flag manifolds and non-simply connected groups}
\label{sec:flag_manifolds}
We finish this preliminary section with the structure theory of the quantized coordinate rings of general flag varieties. This material could be skipped on a first reading. We follow the conventions of Stokman \cite{Stokman:quantum_orbit_method}.
Let $\mathfrak{b}_+$ denote the Borel subalgebra of $\mathfrak{g}$ generated by $\mathfrak{h}$ and the elements $E_1,\cdots,E_r$. The standard parabolic subalgebras are indexed by subsets of simple roots $S\subseteq{\boldsymbol{\Delta}}$. Specifically, we denote by $\mathfrak{p}_S \supseteq \mathfrak{b}_+$ the standard parabolic which contains $F_i$ if and only if $\alpha_i\in S$. The Levi factor $\mathfrak{l}_S$ of $\mathfrak{p}_S$ is generated by $\mathfrak{h}$ and those $E_i,F_i$ with $\alpha_i\in S$.
The associated Lie subgroups of $G$ will always be denoted by the corresponding uppercase letters. We write $X_S = G/P_S$ for the corresponding flag variety.
Passing to the compact form, we have $X_S = K/K_S$ where $\mathfrak{k}_S = \mathfrak{k} \cap \mathfrak{p}_S$. The Lie subalgebra $\mathfrak{k}_S$ decomposes as
\[
\mathfrak{k}_S = \mathfrak{k}^0_S \oplus \mathfrak{z}_S,
\]
where $\mathfrak{z}_S$ is the centre of $\mathfrak{k}_S$ and $\mathfrak{k}^0_S$ is semisimple. Then $Y_S = K/K^0_S$ is a principal torus bundle over the flag variety $X_S$ with fibres $Z_S \cong \mathbb{T}^N$, where $N=r-|S|$.
For instance, when $S=\emptyset$ we have $K_\emptyset = T$ and $K^0_\emptyset=\{1\}$, so the torus bundle $Y_\emptyset\to X_\emptyset$ is the principal bundle $K\to K/T$ with fibres $Z_\emptyset = T\cong\mathbb{T}^r$. More generally, the torus group $Z_S$ identifies canonically with the dual torus of the abelian group $\bP_{S^c}\subseteq \bP$ generated by the fundamental weights $\varpi_i$ with $\alpha_i\notin S$.
All of the above can be quantized.
We shall not spell out all the details here, but refer directly to \cite{Stokman:quantum_orbit_method}. Following our above notation, we write $\cO[X_{S,q}]$ and $\cO[Y_{S,q}]$ for what Stokman \cite{Stokman:quantum_orbit_method} calls $\mathbb{C}_q[U/K_S]$ and $\mathbb{C}_q[U/K^0_S]$ in Definition 2.3(a) and Section 4, namely
\begin{align*}
\cO[Y_{S,q}]
& = \{ a\in\cO[K_q] \mid X \triangleright a = \epsilon(X)a \text{ for all } X\in \mathcal{U}_q(\mathfrak{k}^0_S)\},\\
\cO[X_{S,q}]
& = \{ a\in\cO[K_q] \mid X \triangleright a = \epsilon(X)a \text{ for all } X\in \mathcal{U}_q(\mathfrak{k}_S)\}.
\end{align*}
The following is due to Stokman \cite{Stokman:quantum_orbit_method}.
We retain the notation $\mathsf{f}^\lambda_i = c^{V(\lambda)}_{f^i, v^\lambda_1}$, $\mathsf{v}^\lambda_i = c^{V(\lambda)^*}_{\tilde{v}_i, f^1_\lambda}$ for the generators of $\cO_q[G/N^+]$ and $\cO_q[G/N^-]$ from Equation \eqref{eq:genf_genv}.
We denote the specialized versions of these elements by the same symbols, for simplicity.
\begin{theorem}
\label{thm:flag_generators}
Let $\bP_S^+$ denote the set of dominant weights which are non-negative integral linear combinations of the fundamental weights $\varpi_k$ with $\alpha_k \in {\boldsymbol{\Delta}} \setminus S$.
The quantized coordinate ring $\cO[Y_{S,q}]$ of the torus bundle $Y_S\to X_S$ is generated as an algebra by the matrix coefficients
\[
\left\{ \mathsf{f}^\lambda_i, \mathsf{v}^\lambda_i \mid \lambda\in\bP^+_S \right\}.
\]
This algebra is stable under the gauge action of $T$ defined in \cref{sec:gauge_action}, and the gauge-invariant subalgebra is $\cO[X_{S,q}]$.
\end{theorem}
\begin{proof}
In fact, Stokman \cite[Theorem 4.1]{Stokman:quantum_orbit_method} proves that $\cO[Y_{S,q}]$ is generated by the elements $\mathsf{f}^{\varpi_k}_i$ and $\mathsf{v}^{\varpi_k}_i$ with $\alpha_k \notin S$, so our first claim is in fact weaker. The second claim follows immediately upon observing that $\cO[X_{S,q}]$ consists of those elements of $\cO[Y_{S,q}]$ which are invariant under the action of $\cU_q(\lie{h})$.
\end{proof}
\begin{remark}
The algebra $\cO[Y_{S,q}]$ is already invariant under the action of $K_i$ for $\alpha_i\in S$, so the gauge action in \cref{thm:flag_generators} is determined entirely by the action of the generators $K_i$ with $\alpha_i\notin S$. Thus the relevant gauge action on $\cO[Y_{S,q}]$ reduces to an action of $\mathbb{T}^N$ with $N=r-|S|$.
\end{remark}
Next we consider the case of a connected but not simply connected compact semisimple Lie group $K$.
Let $\mathfrak{k}$ be the Lie algebra of $K$. Let $\widetilde{K}$ be the universal cover of $K$, and $\bQ$, $\bP$ be the root and weight lattices of $\mathfrak{k}$. Then there is a lattice $\bP_K\subset\mathfrak{h}^*$ with $\bQ \subseteq \bP_K \subseteq \bP$ such that the irreducible representations of $K$ are precisely those of the simply connected group $\widetilde{K}$ with highest weights in $\bP_K^+ = \bP_K \cap \bP^+$. As a consequence we have
\begin{equation}
\label{eq:OqKtilde}
\cO_q[K] = \Vect \left\{\left. c^{V(\lambda)}_{f, v} \right| \lambda\in\bP_K^+,~ f\in V(\lambda)^*, ~ v\in V(\lambda) \right\} \quad \subseteq \cO_q[\Ktilde],
\end{equation}
as a $*$-subalgebra.
We can then define generalized flag varieties $X_S=K/K_S$ and their torus bundles $Y_S=K/K^0_S$ for the non-simply-connected group $K$. Combining the above with Stokman's result, \cref{thm:flag_generators}, we get the following.
\begin{theorem}
\label{thm:flag_generators2}
Let $K$ be a compact semisimple Lie group, not necessarily simply connected. The quantized coordinate ring $\cO[Y_{S,q}]$ of the torus bundle $Y_S\to X_S$ is generated by the matrix coefficients
\[
\left\{ \mathsf{f}^\lambda_i, \mathsf{v}^\lambda_i \mid \lambda\in\bP^+_{K,S} \right\},
\]
where $\bP_{K,S}^+ := \bP_K^+ \cap \bP_S^+$.
The gauge-invariant subalgebra of $\cO[Y_{S,q}]$ is $\cO[X_{S,q}]$.
\end{theorem}
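For example, for $K=\mathrm{SO}(3)$ we have $\bP_K=\bQ$, so by \eqref{eq:OqKtilde} the algebra $\cO_q[\mathrm{SO}(3)]\subseteq\cO_q[\mathrm{SU}(2)]$ is spanned by the matrix coefficients of the odd-dimensional simple modules $V(m)$ with $m$ even, in the notation of \cref{sec:SUq2_limit} below.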
\section{The analytic limit of the quantized coordinate ring}
\label{sec:coordinate_rings}
\subsection{The analytic limit for \texorpdfstring{$SU_q(2)$}{SUq(2)}}
\label{sec:SUq2_limit}
We identify the integral weights of $\mathfrak{sl}(2)$ with the integers, $\bP=\mathbb{Z}$, so that $V(m)$ is the irreducible integral representation of dimension $m+1$. Let%
\footnote{Unfortunately, this notation clashes with the common convention of using $v^\lambda_1$ for the highest weight vector in $V(\lambda)$, which we have followed elsewhere in this article when $K\neq\mathrm{SU}(2)$.}
$v^m_0$ denote a highest weight vector of $V(m)$, and put $v^m_k = F^{(k)}v^m_0$.
The ${\mathbf{A}_0}$-span of these vectors is a crystal lattice $\mathcal{L}(m)$ and their images $b^m_k\in\mathcal{L}(m)/q\mathcal{L}(m)$ define a crystal basis with crystal $\mathcal{B}(m) = \{b^m_k \mid k = 0, \cdots, m\}$.
Let $\{f_m^k \mid k = 0, \cdots, m\}$ denote the basis of $V(m)^*$ dual to $\{v^m_k \mid k = 0, \cdots, m\}$.
We write $u^m_{ij} = c^{V(m)}_{f_m^i,v^m_j}$ for the associated matrix coefficients. For the matrix coefficients of the fundamental representation we will use the operator algebraists' notation from \cite{Woronowicz:SUq2}, that is
\begin{align*}
\alpha &= u^1_{00} , &
-q\gamma^* &= u^1_{01} , \\
\gamma &= u^1_{10} , &
\alpha^* &= u^1_{11} .
\end{align*}
Note that the basis $\{v^m_k\}$ for $V(m)$ is generally only orthogonal, but in the case $m=1$ the basis $\{v^1_0,v^1_1\}$ is also orthonormal, so we do indeed recover Woronowicz's generators.
They satisfy the following relations:
\[
\alpha \gamma = q \gamma \alpha, \quad
\alpha \gamma^* = q \gamma^* \alpha, \quad
\gamma \gamma^* = \gamma^* \gamma, \quad
\alpha^* \alpha + \gamma^* \gamma = 1, \quad
\alpha \alpha^* + q^2 \gamma \gamma^* = 1.
\]
\begin{definition}[Woronowicz \cite{Woronowicz:SUq2}, Vaksman-Soibelman \cite{VakSoi:SUq2}]
\label{def:SU2-rep}
Fix $q \in (0, \infty) \setminus\{1\}$. The \emph{standard representation} of the quantized coordinate ring $\mathcal{O}[\mathrm{SU}_q(2)]$ is the $*$-representation $\tilde\pi_q$ on $\ell^2(\mathbb{N})$, with orthonormal basis $\{e_0, e_1, \cdots\}$, determined by
\begin{align*}
\tilde\pi_q(\alpha)e_n &= \sqrt{1-q^{2n}} e_{n-1}, &
\tilde\pi_q(\gamma)e_n &= q^ne_n, &
&(n\geq0).
\end{align*}
We will use the same notation $\tilde\pi_q$ for the partially defined representation of the algebraists' quantized coordinate ring $\tilde\pi_q\circ\ev_q:\mathcal{O}_q[\mathrm{SU}(2)] \to \mathcal{B}(\ell^2(\mathbb{N}))$.
\end{definition}
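As a sanity check, the above formulas and relations can be tested numerically by truncating $\ell^2(\mathbb{N})$ to a finite-dimensional subspace. The following minimal Python sketch (purely illustrative and not part of the formal development; the truncation size, the value of $q$, and the use of \texttt{numpy} are our choices) verifies the relations displayed before \cref{def:SU2-rep}. Since $\tilde\pi_q(\gamma)$ is a real diagonal matrix, the relations involving $\gamma^*$ reduce to the ones checked, and only $\alpha\alpha^* + q^2\gamma\gamma^* = 1$ is distorted by the truncation, in its last row and column.
\begin{verbatim}
import numpy as np

N, q = 40, 0.3  # illustrative truncation size and deformation parameter

# Truncated standard representation on span{e_0, ..., e_{N-1}}:
#   alpha e_n = sqrt(1 - q^(2n)) e_{n-1},   gamma e_n = q^n e_n.
alpha = np.zeros((N, N))
for n in range(1, N):
    alpha[n - 1, n] = np.sqrt(1 - q ** (2 * n))
gamma = np.diag(q ** np.arange(N, dtype=float))

I = np.eye(N)
checks = {
    "alpha gamma = q gamma alpha": alpha @ gamma - q * gamma @ alpha,
    "alpha* alpha + gamma* gamma = 1":
        alpha.T @ alpha + gamma.T @ gamma - I,
    # exact only away from the truncation edge:
    "alpha alpha* + q^2 gamma gamma* = 1":
        (alpha @ alpha.T + q ** 2 * gamma @ gamma.T - I)[:N - 1, :N - 1],
}
for name, err in checks.items():
    print(name, np.abs(err).max())  # all of the order of machine precision
\end{verbatim}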
If $V$ is any finite-dimensional integrable $\cU_q(\lie{g})$-module over the ground field $\mathbb{C}(q)$ then any $v\in V$ can be specialized at all but finitely many $q\in(0,\infty)$, and likewise for any $f\in V^*$. Therefore,
for any given element $u\in\cO_q[K]$, the representation $\tilde\pi_q(u)$ is well-defined for all sufficiently small $q > 0$, so it makes sense to ask about the existence of the limit of $\tilde\pi_q(u)$ as $q\to0$. The following result is essentially known, compare \cite{Woronowicz:SUq2, VakSoi:SUq2, HonSzy:spheres}.
\begin{theorem}
\label{thm:SL2_limit}
For any $u\in \mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SL}(2)]$, the one-parameter family of operators $\tilde\pi_q(u)\in\mathcal{B}(\ell^2\mathbb{N})$ admits a norm limit as $q\to0$, which we denote by $\tilde\pi_0(u)$.
Explicitly, if $m\in\mathbb{N}$ and $i,j \in \{0, \cdots, m\}$, then
\begin{align*}
\tilde\pi_0(u^m_{ij}) =
\begin{cases}
T^{j} P_0 (T^{*})^{m-i}, & \text{if } i>j \\
T^{i} (T^{*})^{m-i} , & \text{if } i=j \\
0, & \text{if } i<j
\end{cases}
\end{align*}
where $T$ is the right-shift operator and $P_0$ is the orthogonal projection onto $e_0$.
\end{theorem}
\begin{proof}
For $m=0$, we have $\tilde\pi_0(u^0_{00})=1$, and for $m=1$, the formulas in Definition \ref{def:SU2-rep} give the desired limits as $q\to0$:
\begin{align*}
\tilde\pi_0(u^1_{00}) & = T^*, &
\tilde\pi_0(u^1_{01}) & = 0,\\
\tilde\pi_0(u^1_{10}) & = P_0, &
\tilde\pi_0(u^1_{11}) & = T.
\end{align*}
Since the fundamental matrix coefficients generate $\mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SL}(2)]$ as an $\mathbf{A}_0$-algebra, the existence of the limit for all elements of $\mathcal{O}_q^{{\mathbf{A}_0}}[\mathrm{SL}(2)]$ follows immediately.
It remains to check the stated formula when $m>1$.
There is a unique inclusion of $\cU_q(\lie{g})$-modules $\iota_m:V(m) \hookrightarrow V(1)^{\otimes m}$ which sends the highest weight vector $v^m_0$ to $v^1_0\otimes\cdots\otimes v^1_0$. Equation \eqref{eq:tensor_F} shows that the Kashiwara operator $\tilde{F}$ acts on $\mathcal{B}(1)^{\otimes m}$ by
\[
\tilde{F}: (b^1_1)^{\otimes k} \otimes (b^1_0)^{\otimes (m-k)} \mapsto
(b^1_1)^{\otimes (k+1)} \otimes (b^1_0)^{\otimes (m-k-1)},
\]
for $k = 0, \cdots, m - 1$, so it follows that
\[
\iota_m(v^m_k) = (v^1_1)^{\otimes k} \otimes (v^1_0)^{\otimes (m-k)} \mod q\mathcal{L}(1)^{\otimes m}.
\]
Note that for every $i,j \in \{0,\ldots, m\}$ and every $a,b\in\mathbb{N}$ we have
\[
\big( (f^1_1)^{\otimes i} \otimes (f^1_0)^{\otimes (m-i)} , \tilde{E}^a\tilde{F}^b (v^1_1)^{\otimes j} \otimes (v^1_0)^{\otimes (m-j)} \big)
= \big( f^i_m , \tilde{E}^a\tilde{F}^b v^m_j \big) \mod q{\mathbf{A}_0},
\]
which implies that
\begin{align*}
u^m_{ij}
& = \begin{cases}
(u^1_{11})^{j} (u^1_{10})^{i-j} (u^1_{00})^{m-i} , & \text{if } i > j \\
(u^1_{11})^{i} (u^1_{00})^{m-i} , & \text{if } i = j \\
(u^1_{11})^{i} (u^1_{01})^{j-i} (u^1_{00})^{m-j} , & \text{if } i < j
\end{cases}
\quad\mod q\mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SL}(2)].
\end{align*}
Now we can use the result for $m=1$ to deduce the general formula.
\end{proof}
This result immediately extends to our compact ${\mathbf{A}_0}$-form $\mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SU}(2)]$ from \cref{def:OqAOK}, as follows.
\begin{corollary}
\label{cor:SU2_limit}
For any $u\in \mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SU}(2)]$, the one-parameter family of operators $\tilde\pi_q(u)\in\mathcal{B}(\ell^2\mathbb{N})$ admits a norm limit as $q\to0$, which we denote by $\tilde\pi_0(u)$.
\end{corollary}
\begin{proof}
By \cref{thm:SL2_limit}, for every generator $\mathsf{f}^\lambda_i$ of $\mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SL}(2)/N^+]$, the operators $\tilde\pi_q(\mathsf{f}^\lambda_i)$ admit a norm-limit as $q\to0$. Since the $\tilde\pi_q$ are $*$-re\-pre\-sent\-ations, \cref{prop:v_f_adjoints} shows that the same is true of the generators $\mathsf{v}^\lambda_i$ of $\mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SL}(2)/N^-]$. The result follows.
\end{proof}
Note, for instance, that the matrix coefficient $\gamma^*=(u^1_{10})^* = -q^{-1}u^1_{01}$ does not belong to the ${\mathbf{A}_0}$-form $\widetilde{\mathcal{F}}=\mathcal{O}_q^{\mathbf{A}_0}[\mathrm{SL}(2)]$, but nonetheless it does admit an analytic limit at $q=0$, namely $\lim_{q\to0} \tilde\pi_q(\gamma^*) = P_0$.
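The norm convergence asserted in \cref{thm:SL2_limit} and \cref{cor:SU2_limit} is equally easy to observe numerically. The following sketch (again ours, with illustrative parameter values and a finite truncation as above) exhibits the limits $\tilde\pi_q(\alpha)\to T^*$ and $\tilde\pi_q(\gamma)=\tilde\pi_q(\gamma^*)\to P_0$; on the truncation the two errors are exactly $1-\sqrt{1-q^2}$ and $q$, respectively.
\begin{verbatim}
import numpy as np

def pi_q(q, N):
    """Truncation of the standard representation to span{e_0,...,e_{N-1}}."""
    alpha = np.zeros((N, N))
    for n in range(1, N):
        alpha[n - 1, n] = np.sqrt(1 - q ** (2 * n))
    gamma = np.diag(q ** np.arange(N, dtype=float))
    return alpha, gamma

N = 60
T_star = np.eye(N, k=1)              # T* e_n = e_{n-1}: the limit of alpha
P0 = np.zeros((N, N)); P0[0, 0] = 1  # projection onto e_0: limit of gamma

for q in (0.5, 0.1, 0.01, 0.001):
    alpha, gamma = pi_q(q, N)
    print(q,
          np.linalg.norm(alpha - T_star, 2),  # = 1 - sqrt(1 - q^2) -> 0
          np.linalg.norm(gamma - P0, 2))      # = q -> 0
\end{verbatim}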
\subsection{The analytic limit in higher rank}
Next, we generalize the previous result to higher rank.
Let $K$ be a connected, simply connected compact semisimple Lie group. Let $W$ denote its Weyl group.
For every $\alpha_i \in {\boldsymbol{\Delta}}$ there is a Hopf $*$-morphism $\iota_i:\mathcal{U}_{q_i}(\mathfrak{su}(2)) \hookrightarrow \cU_q(\lie{k})$ sending $E$, $F$, $K$ to $E_i$, $F_i$, $K_i$. The dual map $\Res_{S_i}^{K_q}:=\iota_i^*:\cO_q[K] \twoheadrightarrow \mathcal{O}_{q_i}[\mathrm{SU}(2)] $ is called the \emph{restriction to the quantum subgroup $S_i\cong \mathrm{SU}_{q_i}(2)$} associated to the simple root $\alpha_i$. It specializes at $q\in(0, \infty) \setminus\{1\}$ to a $*$-morphism $\Res_{S_i}^{K_q}:\cO[K_q] \twoheadrightarrow \mathcal{O}[\mathrm{SU}_{q_i}(2)] $.
This allows us to define a $*$-representation for each $\alpha_i\in{\boldsymbol{\Delta}}$ and each $q \in (0, \infty) \setminus\{1\}$ by
\begin{equation}
\label{eq:pi_i}
\tilde\pi_{q, i} := \tilde\pi_q \circ \Res_{S_i}^{K_q}: \cO[K_q] \to \mathcal{B}(\ell^2(\mathbb{N})).
\end{equation}
As previously, we will use the same notation for the partially defined representation of the $\mathbb{C}(q)$-algebra $\cO_q[K]$ obtained by first specializing at $q$.
We also have a restriction morphism to the maximal torus subgroup $\res^{K_q}_T : \cO[K_q] \to \cO[T]$ for each $q \in (0, \infty) \setminus\{1\}$. Explicitly,
\[
\res^{K_q}_T : c^V_{f^i,v_j} \mapsto \delta_{ij} c^V_{f^i,v_i},
\]
where $\{v_i\}$ is a basis of weight vectors, $\{f^i\}$ a dual basis, and the right-hand side is understood as a matrix coefficient for $T\subset K$.
Composing this with the multiplication representation $\mathsf{M}:\cO[T]\to\mathcal{B}(L^2(T))$ yields the $*$-representation
\begin{equation}
\label{eq:pi_T}
\chi = \mathsf{M} \circ\res^{K_q}_T : \cO[K_q] \to \mathcal{B}(L^2(T)),
\end{equation}
as well as its partially defined analogue on $\cO_q[K]$.
\begin{definition}[Soibelman \cite{Soibelman}]
\label{def:big_cell_rep}
Fix a reduced decomposition $w_0 = s_{i_1} \cdots s_{i_l}$ of the longest element of $W$. We put $\mathsf{H} = \ell^2(\mathbb{N})^{\otimes l} \otimes L^2(T)$.
For $q \in (0, \infty) \setminus\{1\}$, we define the \emph{Soibelman representation} to be
\begin{align*}
\pi_q := (\tilde\pi_{q, i_1} \otimes \tilde\pi_{q, i_2} \otimes \cdots \otimes \tilde\pi_{q, i_l} \otimes \chi) \circ \Delta^{(l)} : \cO[K_q] &\to \mathcal{B}(\mathsf{H}).
\end{align*}
Again, we use the same notation for the partially defined representation of $\cO_q[K]$.
\end{definition}
\begin{remark}
Soibelman \cite{Soibelman} defined a family of irreducible $*$-represent\-ations indexed by the symplectic leaves of $K$. The representation in \cref{def:big_cell_rep} corresponds to the direct integral over all symplectic leaves of maximal dimension. It is faithful but not irreducible. One can obtain an irreducible representation by replacing the multiplication representation of the torus $\chi$ by any character $\chi_t = \ev_{t}\circ\res^{K_q}_T$ where $\ev_t$ denotes evaluation at a point $t\in T$. For a nice account of Soibelman's work, and generalizations, see \cite{NesTus:functions}.
\end{remark}
\begin{theorem}
\label{thm:G_limit}
For any $u \in \cO_q^{\bfA_0}[G]$, the one-parameter family of operators $\pi_q(u)$ admits a norm-limit as $q \to 0$, which we denote by $\pi_0(u)$.
\end{theorem}
\begin{proof}
Firstly, observe that $\Delta$ maps $\cO_q^{\bfA_0}[G]$ into $\cO_q^{\bfA_0}[G]\otimes_{\mathbf{A}_0}\cO_q^{\bfA_0}[G]$.
Next, note that a crystal lattice $\mathcal{L}$ for a $\cU_q(\lie{k})$-module $V$ is also a $\mathcal{U}_{q_i}(\mathfrak{su}(2))$-crystal lattice for the restriction of the module $V$ to $S_i$ for every $i = 1, \cdots, r$. Therefore the maps $\Res_{S_i}^{K_q}:\cO_q[K] \to \mathcal{O}_{q_i}[\mathrm{SU}(2)]$ restrict to morphisms $\cO_q^{\bfA_0}[G] \to \mathcal{O}_{q_i}^{\mathbf{A}_0}[\mathrm{SL}(2)]$. The result now follows from Theorem \ref{thm:SL2_limit} and the definition of the Soibelman representation.
\end{proof}
By the same argument as in Corollary \ref{cor:SU2_limit}, we obtain limits for our compact ${\mathbf{A}_0}$-form $\cO_q^{\bfA_0}[K]$.
\begin{corollary}
\label{cor:K_limit}
For any $u \in \cO_q^{\bfA_0}[K]$, the one-parameter family of operators $\pi_q(u)$ admits a norm-limit as $q \to 0$, which we denote by $\pi_0(u)$.
\end{corollary}
\begin{definition}
\label{def:OK0}
We put $\cO[K_0]:=\pi_0(\cO_q^{\bfA_0}[K])$ and refer to this $*$-algebra as the \emph{crystal limit} of $\cO[K_q]$.
\end{definition}
We make a similar definition for the $q=0$ limit of the quantized coordinate rings of flag varieties, following \cref{thm:flag_generators2}. Note that $\cO[K_0]$ inherits a gauge action of the torus $T\cong\mathbb{T}^r$ from that of $\cO_q[K]$.
\begin{definition}
\label{def:OXO}
With notation as in \cref{sec:flag_manifolds}, let $K$ be a compact connected semisimple Lie group, not necessarily simply connected, let $S\subset{\boldsymbol{\Delta}}$ be a set of simple roots, and let $Y_S$ be the corresponding principal torus bundle over the flag variety $X_S$. We define $\cO[Y_{S,0}]$ to be the subalgebra of $\cO[K_0]$ generated by the elements $\pi_0(\mathsf{f}^\lambda_i)$ and $\pi_0(\mathsf{v}^\lambda_i)$ with $\lambda\in\bP^+_{K,S}$. The gauge-invariant subalgebra is denoted by $\cO[X_{S,0}]$.
\end{definition}
The remainder of the paper will be dedicated to describing the structure of the crystal limit $\cO[K_0]$, as well as the subalgebras $\cO[Y_{S,0}] \subset \cO[K_0]$ as above.
\section{The Cartan braiding and the crystal limit}
\label{sec:Cartan-braiding}
\subsection{Braiding and commutation relations}
At this point, we will be \linebreak obliged to enlarge our base field slightly in order to work with expressions of the form $q^{(\lambda,\lambda')}$ where $\lambda,\lambda'\in\bP$ are any integral weights. Let $L\in\mathbb{N}$ be the smallest positive integer such that $(\bP,\bP)\subseteq \frac{1}{L}\mathbb{Z}$. Then we work over $\mathbb{C}(s)$ and put $q=s^L$. Likewise we redefine the rings ${\mathbf{A}_0}$, $\mathbf{A}_\infty$ and $\mathbf{A}$ with the parameter $s$ in place of $q$. This has essentially no effect on the crystal theory. Rather than rewrite expressions in terms of $s$, we will write $q^k$ where $k\in\frac{1}{L}\mathbb{Z}$.
We denote by $\hat{R}_{V, W}: V \otimes W \to W \otimes V$ the braiding corresponding to the finite-dimensional integrable $\cU_q(\lie{g})$-modules $V$ and $W$.
It is a $\cU_q(\lie{g})$-module isomorphism which generalizes the classical flip map.
The braiding is not quite unique; the choice we make can be characterized uniquely by
\begin{equation}
\label{eq:R-matrix_convention}
\begin{split}
\hat{R}_{V, W}(v \otimes w) &= q^{-(\wt(v), \wt(w))} w \otimes v + \sum_i w_i \otimes v_i,\\
&
\text{
where $\wt(w_i) <\wt(w)$ and $\wt(v_i) > \wt(v)$ for every $i$.
}
\end{split}
\end{equation}
Here, $\wt$ denotes the weight of a weight vector.
In particular, in the case $V = V(\lambda)$ and with $v_\lambda$ a highest weight vector, we get $\hat{R}_{V, V}(v_\lambda \otimes v_\lambda) = q^{-(\lambda, \lambda)} v_\lambda \otimes v_\lambda$.
The braiding can be used to give commutation relations for the elements of $\cO_q[G]$.
Let $V$ and $W$ be $\cU_q(\lie{g})$-modules with bases $\{v_i\}_i$ and $\{w_i\}_i$ and denote by $\{f^i\}_i$ and $\{g^i\}_i$ the dual bases. Let us introduce the coefficients of $\hat{R}_{V,W}$ by
\[
\hat{R}_{V, W}(v_i \otimes w_j) = \sum_{k, l} (\hat{R}_{V, W})_{i j}^{k l} w_k \otimes v_l.
\]
Then, using the fact that the braiding satisfies $\hat{R}_{V, W} \Delta(X) = \Delta(X) \hat{R}_{V, W}$ for any $X \in \cU_q(\lie{g})$, it is easy to derive the relations
\begin{equation}
\label{eq:OqG-relation}
\begin{split}
c^V_{f^i, v_k} c^W_{g^j, w_l} & = \sum_{a, b, c, d} (\hat{R}_{V, W}^{-1})^{i j}_{a b} (\hat{R}_{V, W})^{c d}_{k l} c^W_{g^a, w_c} c^V_{f^b, v_d} \\
& = \sum_{a, b, c, d} (\hat{R}_{W, V})^{i j}_{a b} (\hat{R}_{W, V}^{-1})^{c d}_{k l} c^W_{g^a, w_c} c^V_{f^b, v_d}.
\end{split}
\end{equation}
\subsection{The Cartan braiding}
Before discussing the crystal limit of the braiding operators, let us make some observations about tensor products of irreducible crystals.
\begin{definition}
\label{def:Cartan_component}
A finite-dimensional integrable $\cU_q(\lie{g})$-module $V$ will be called a \emph{product of irreducibles} if it is of the form $V(\lambda_1)\otimes\cdots\otimes V(\lambda_n)$ for some $\lambda_1, \cdots, \lambda_n \in \bP^+$. Such a module contains a unique irreducible submodule of highest weight $\lambda = \sum_i \lambda_i$, which we call the \emph{Cartan component}. We refer to $\lambda$ as the \emph{highest weight} of $V$ (although strictly speaking it is the largest among the highest weights of all irreducible submodules of $V$).
The same terminology will be used to refer to the associated \emph{Cartan component} of a product of irreducible crystals.
If $\mathcal{B} = \mathcal{B}(\lambda_1) \otimes \cdots \otimes \mathcal{B}(\lambda_n)$ is a product of irreducible crystals, we denote by $\eta:\mathcal{B}\sqcup\{0\}\to\{0,1\}$ the indicator function of its Cartan component:
\[
\eta(b)=
\begin{cases}
1 & \text{if $b$ is in the Cartan component}, \\
0 &\text{otherwise}.
\end{cases}
\]
\end{definition}
The following well-known property of the Cartan component will occasionally be useful (we include the easy proof for completeness).
\begin{lemma}
\label{lem:tensor_fact}
Let $b_{\lambda}$ and $b_{w_0\lambda}$ denote the highest and lowest weight elements, respectively, of the irreducible crystal $\mathcal{B}(\lambda)$. Then the tensor $c\otimes b_{\lambda'}$ belongs to the Cartan component of $\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda')$ for every $c\in\mathcal{B}(\lambda)$. Similarly, $b_{w_0\lambda}\otimes c'$ belongs to the Cartan component for every $c'\in\mathcal{B}(\lambda')$.
\end{lemma}
\begin{proof}
The first statement follows by repeatedly applying the Kashiwara operators $\tilde{F}_i$ to the highest weight element $b_\lambda \otimes b_{\lambda'}$ and using the tensor product formula \eqref{eq:tensor_F}. Similarly, the second follows by applying the Kashiwara operators $\tilde{E}_i$ to the lowest weight element $b_{w_0\lambda} \otimes b_{w_0\lambda'}$.
\end{proof}
We now return to the braiding.
Consider two simple modules $V(\lambda)$ and $V(\lambda')$.
We have crystal lattices of $V(\lambda)\otimes V(\lambda')$ and $V(\lambda')\otimes V(\lambda)$ given by
\[
\cL_{V(\lambda) \otimes V(\lambda')} := \cL(\lambda) \otimes_{{\mathbf{A}_0}} \cL(\lambda^\prime), \quad
\cL_{V(\lambda') \otimes V(\lambda)} := \cL(\lambda^\prime) \otimes_{{\mathbf{A}_0}} \cL(\lambda).
\]
The braiding $\hat{R}_{V(\lambda), V(\lambda')}$ does not typically map $\cL_{V(\lambda)\otimes V(\lambda')}$ into $\cL_{V(\lambda')\otimes V(\lambda)}$ because of the negative powers of $q$ in \eqref{eq:R-matrix_convention}.
This deficiency can be remedied by multiplying by an appropriate power of $q$, as we show in the next theorem.
\begin{theorem}
\label{thm:braiding_limit}
Given any simple modules $V(\lambda)$ and $V(\lambda')$ we have
\[
q^{(\lambda, \lambda')} \hat{R}_{V(\lambda), V(\lambda')}(\mathcal{L}_{V(\lambda) \otimes V(\lambda')}) \subseteq \mathcal{L}_{V(\lambda') \otimes V(\lambda)}.
\]
Moreover, the map $q^{(\lambda, \lambda')} \hat{R}_{V(\lambda), V(\lambda')}$ induces a morphism of crystals
\[
\braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda')}:\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda') \to \mathcal{B}(\lambda') \otimes \mathcal{B}(\lambda).
\]
It is an isomorphism between the Cartan components of the two crystals, and is zero on all other components.
\end{theorem}
\begin{proof}
It is a consequence of the tensor product rule \eqref{eq:tensor_E} for $\tilde{E}_i$ that every highest weight element of the crystal $\mathcal{B}(\lambda) \otimes \mathcal{B}(\lambda')$ is of the form $b_\lambda \otimes b'$, where $b_\lambda$ is the highest weight element of $\mathcal{B}(\lambda)$, and $b'$ is some element of $\mathcal{B}(\lambda')$. Let $v_\lambda \in \mathcal{L}(\lambda)$ and $w \in \mathcal{L}(\lambda')$ be lifts of $b_\lambda$ and $b'$, respectively, to weight vectors. Write $\mu$ for the weight of $w$.
Let us write $V = V(\lambda)$ and $W = V(\lambda')$ for convenience.
By the nature of the braiding described in \eqref{eq:R-matrix_convention}, we have
\begin{equation}
\label{eq:rescaled_braiding1}
q^{(\lambda, \lambda')} \hat{R}_{V,W}(v_\lambda \otimes w) = q^{(\lambda, \lambda') - (\lambda, \mu)} w \otimes v_\lambda.
\end{equation}
Since $\lambda \in \bP^+$ and $\lambda' - \mu \in \bQ^+ = \mathbb{N} \cdot {\boldsymbol{\Delta}}$, we have $(\lambda,\lambda'-\mu)\geq0$. Therefore, the right-hand side of \eqref{eq:rescaled_braiding1} lies in $\mathcal{L}_{W\otimes V}$, and moreover lies in $q\mathcal{L}_{W\otimes V}$ unless $w$ is the highest weight vector of $W$.
The images of the various highest weight elements $v_\lambda\otimes w$ above, under repeated action of the Kashiwara operators $\tilde{F}_i$, generate the crystal lattice $\mathcal{L}_{V\otimes W}$ as an ${\mathbf{A}_0}$-module. Since $\hat{R}_{V,W}$ is a morphism of $\cU_q(\lie{g})$-modules, we get
\[
q^{(\lambda, \lambda')} \hat{R}_{V, W}(\tilde{F}(v_\lambda \otimes w)) =
q^{(\lambda, \lambda') - (\lambda, \mu)} \tilde{F} (w \otimes v_\lambda)
\in \mathcal{L}_{W \otimes V}
\]
for any word $\tilde{F} = \tilde{F}_{i_1} \cdots \tilde{F}_{i_n}$ in the Kashiwara operators $\tilde{F}_i$.
This proves that the map $q^{(\lambda,\lambda')} \hat{R}_{V,W}$ respects the crystal lattices and induces a map of crystals.
For the final claim, note that the crystal element $b'\otimes b_\lambda$ associated to the right hand side of \eqref{eq:rescaled_braiding1} always belongs to the Cartan component by \linebreak \cref{lem:tensor_fact}. Since the left-hand side is a highest-weight element by assumption, it belongs to the Cartan component only if $w$ is of weight $\lambda'$. It follows that the crystal morphism $\braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda')}$ must vanish on all non-Cartan components. On the Cartan component, \eqref{eq:rescaled_braiding1} gives \linebreak $q^{(\lambda,\lambda')}\hat{R}_{V,W}(v_\lambda\otimes v_{\lambda'}) = v_{\lambda'}\otimes v_\lambda$, from which we conclude that $\braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda')}$ is an isomorphism here.
\end{proof}
\begin{definition}
\label{def:Cartan-braiding}
Let $V$ and $V'$ be products of irreducible modules with highest weights $\lambda$ and $\lambda'$, respectively, and let $(\mathcal{L}, \mathcal{B})$ and $(\mathcal{L}', \mathcal{B}')$ be their crystal bases. The \emph{Cartan braiding} is the morphism of crystals
\[
\braid = \braid_{\mathcal{B}, \mathcal{B}'}: \mathcal{B}\otimes \mathcal{B}' \to \mathcal{B}' \otimes \mathcal{B}
\]
induced by the morphism of crystal lattices $q^{(\lambda,\lambda')} \hat{R}_{V, V'}: \mathcal{L}\otimes_{\mathbf{A}_0}\mathcal{L}' \to \mathcal{L}'\otimes_{\mathbf{A}_0}\mathcal{L}$.
\end{definition}
The use of the word braiding here is a bit misleading. Firstly, braidings are supposed to be isomorphisms, while our Cartan braiding is not, see \cref{thm:braiding_limit}. More importantly, the Cartan braiding $\braid_{\mathcal{B},\mathcal{B}'}$ is only defined when $\mathcal{B}$ and $\mathcal{B}'$ are products of irreducible crystals, not on general direct sums of irreducibles. This is because we need a well-defined notion of highest weights $\lambda$ and $\lambda'$ for $\mathcal{B}$ and $\mathcal{B}'$ in order to define it, see below.
If $V$ and $V'$ are irreducible modules, then the Cartan braiding is the crystal morphism described in \cref{thm:braiding_limit}, that is, it is an isomorphism between the Cartan components of $\mathcal{B} \otimes \mathcal{B}'$ and $\mathcal{B}' \otimes \mathcal{B}$, and zero on all other irreducible components.
If $V$ and $V'$ are merely products of irreducibles, then we need a few remarks to justify that the Cartan braiding is well-defined. If $\mu, \mu' \in \bP^+$ are the highest weights of any components of $V$ and $V'$ respectively, then we have $\lambda - \mu$ and $\lambda' - \mu'$ in $\mathbb{N} \cdot {\boldsymbol{\Delta}}$. Therefore
\[
(\lambda, \lambda') = (\lambda - \mu, \lambda') + (\mu, \lambda' - \mu') + (\mu, \mu') \geq (\mu, \mu'),
\]
so $q^{(\lambda, \lambda')} \hat{R}_{V(\mu),V(\mu')} = q^k q^{(\mu, \mu')} \hat{R}_{V(\mu), V(\mu')}$ for some $k \geq 0$.
Now \cref{thm:braiding_limit} shows that $q^{(\lambda, \lambda')} \hat{R}_{V, V'}: \mathcal{L} \otimes_{\mathbf{A}_0} \mathcal{L}' \to \mathcal{L}'\otimes_{\mathbf{A}_0} \mathcal{L}$ is well defined and descends to a morphism of crystals.
Note though that in this case, it is possible for $\braid_{\mathcal{B}, \mathcal{B}'}$ to be non-zero on components other than the Cartan component, as the following example shows.
\begin{example}
Let $\mathfrak{g} = \mathfrak{sl}_4$, and consider $\mathcal{B} = \mathcal{B}(\varpi_1)$ and $\mathcal{B}' = \mathcal{B}(\varpi_2)\otimes\mathcal{B}(\varpi_3)$, where $\varpi_1,\varpi_2,\varpi_3$ are the fundamental weights. Then $\mathcal{B}'\cong \mathcal{B}(\varpi_2+\varpi_3)\oplus\mathcal{B}(\varpi_1)$. One can calculate
\[
(\varpi_1,\varpi_2+\varpi_3) = {\frac34} = (\varpi_1,\varpi_1).
\]
It follows that $\braid_{\mathcal{B},\mathcal{B}'}$ is non-zero on two distinct components in the triple product $\mathcal{B}(\varpi_1)\otimes\mathcal{B}(\varpi_2) \otimes\mathcal{B}(\varpi_3)$.
On the other hand, if we consider the map $\id_{\mathcal{B}(\varpi_1)}\otimes\braid_{\mathcal{B}(\varpi_2),\mathcal{B}(\varpi_3)}$, this will kill $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_1) \subset \mathcal{B}(\varpi_1) \otimes (\mathcal{B}(\varpi_2)\otimes\mathcal{B}(\varpi_3))$. In fact, we will show that the Cartan component of any tensor product of irreducibles is characterized by the non-vanishing of every possible action of the Cartan braidings. See \cref{thm:longest_word} below for a precise statement.
\end{example}
The Cartan braiding is very easy to describe in one particular situation.
\begin{example}
When $V = V' = V(\lambda)$, the Cartan braiding is the ``projection'' onto the component of highest weight $2\lambda$, \emph{i.e.},
\[
\braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda)} (b\otimes b') =
\begin{cases}
b\otimes b' & \text{if } \eta(b\otimes b')=1,\\
0 & \text{otherwise.}
\end{cases}
\]
\end{example}
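For instance, for $\mathfrak{g} = \mathfrak{sl}_2$ and $\lambda = \varpi_1$, write $\mathcal{B}(\varpi_1) = \{b_1, b_2\}$ with $b_1$ the highest weight element. The Cartan component of $\mathcal{B}(\varpi_1)\otimes\mathcal{B}(\varpi_1) \cong \mathcal{B}(2\varpi_1)\sqcup\mathcal{B}(0)$ consists of $b_1\otimes b_1$, $b_2\otimes b_1$ and $b_2\otimes b_2$, while the remaining element $b_1\otimes b_2$ constitutes the trivial component; thus $\braid_{\mathcal{B}(\varpi_1),\mathcal{B}(\varpi_1)}$ fixes the first three tensors and kills $b_1\otimes b_2$.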
Throughout this paper we will consider the running example of $K = \mathrm{SU}(3)$, with the aim of rederiving the result of Giselsson mentioned in the introduction. We begin by determining the Cartan braiding in this case.
\begin{example}
\label{ex:braiding-sl3}
For $\mathfrak{g} = \mathfrak{sl}_3$, we denote the crystal graphs of $\mathcal{B}(\varpi_1)$ and $\mathcal{B}(\varpi_2)$ by
\begin{center}
\begin{tikzcd}[column sep=2em]
a_1 \arrow[r, "1"] & a_2 \arrow[r, "2"] & a_3 & & b_1 \arrow[r, "2"] & b_2 \arrow[r, "1"] & b_3
\end{tikzcd}
\end{center}
Using the tensor product rule for the Kashiwara operators, we find that the crystal graphs of $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_1)$ and $\mathcal{B}(\varpi_2) \otimes \mathcal{B}(\varpi_2)$ are given by
\begin{center}
\begin{tikzcd}[column sep=1.5em]
a_1 \otimes a_1 \arrow[r, "1"] & a_2 \otimes a_1 \arrow[r, "2"] \arrow[d, "1"] & a_3 \otimes a_1 \arrow[d, "1"] & & b_1 \otimes b_1 \arrow[r, "2"] & b_2 \otimes b_1 \arrow[r, "1"] \arrow[d, "2"] & b_3 \otimes b_1 \arrow[d, "2"] \\
a_1 \otimes a_2 \arrow[d, "2"] & a_2 \otimes a_2 \arrow[r, "2"] & a_3 \otimes a_2 \arrow[d, "2"] & & b_1 \otimes b_2 \arrow[d, "1"] & b_2 \otimes b_2 \arrow[r, "1"] & b_3 \otimes b_2 \arrow[d, "1"] \\
a_1 \otimes a_3 \arrow[r, "1"] & a_2 \otimes a_3 & a_3 \otimes a_3 & & b_1 \otimes b_3 \arrow[r, "2"] & b_2 \otimes b_3 & b_3 \otimes b_3
\end{tikzcd}
\end{center}
Similarly, for the products $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2)$ and $\mathcal{B}(\varpi_2) \otimes \mathcal{B}(\varpi_1)$ we have
\begin{center}
\begin{tikzcd}[column sep=1.5em]
a_1 \otimes b_1 \arrow[r, "1"] \arrow[d, "2"] & a_2 \otimes b_1 \arrow[r, "2"] & a_3 \otimes b_1 \arrow[d, "2"] & & b_1 \otimes a_1 \arrow[r, "2"] \arrow[d, "1"] & b_2 \otimes a_1 \arrow[r, "1"] & b_3 \otimes a_1 \arrow[d, "1"] \\
a_1 \otimes b_2 \arrow[r, "1"] & a_2 \otimes b_2 \arrow[d, "1"] & a_3 \otimes b_2 \arrow[d, "1"] & & b_1 \otimes a_2 \arrow[r, "2"] & b_2 \otimes a_2 \arrow[d, "2"] & b_3 \otimes a_2 \arrow[d, "2"] \\
a_1 \otimes b_3 & a_2 \otimes b_3 \arrow[r, "2"] & a_3 \otimes b_3 & & b_1 \otimes a_3 & b_2 \otimes a_3 \arrow[r, "1"] & b_3 \otimes a_3
\end{tikzcd}
\end{center}
Using \cref{thm:braiding_limit}, we deduce from the latter two diagrams that the Cartan braiding $\braid = \braid_{\mathcal{B}(\varpi_1), \mathcal{B}(\varpi_2)}$ is given by
\begin{equation}
\label{eq:sl3-braiding}
\begin{array}{lll}
\braid(a_1 \otimes b_1) = b_1 \otimes a_1, &
\braid(a_2 \otimes b_1) = b_1 \otimes a_2, &
\braid(a_3 \otimes b_1) = b_2 \otimes a_2, \\
\braid(a_1 \otimes b_2) = b_2 \otimes a_1, &
\braid(a_2 \otimes b_2) = b_3 \otimes a_1, &
\braid(a_3 \otimes b_2) = b_2 \otimes a_3, \\
\braid(a_1 \otimes b_3) = 0, &
\braid(a_2 \otimes b_3) = b_3 \otimes a_2, &
\braid(a_3 \otimes b_3) = b_3 \otimes a_3.
\end{array}
\end{equation}
Note that this is different from the flip map, even on the Cartan component.
\end{example}
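The table \eqref{eq:sl3-braiding} can also be recomputed mechanically: one generates the Cartan components of the two tensor products from their highest weight elements and matches the elements reached by the same words in the Kashiwara operators. The following minimal Python sketch (ours, purely illustrative) does exactly this. It hard-codes the tensor product rule $\tilde{F}_i(b\otimes b') = \tilde{F}_ib\otimes b'$ if $\varphi_i(b)>\varepsilon_i(b')$ and $b\otimes\tilde{F}_ib'$ otherwise, which is the convention reproducing the crystal graphs displayed above.
\begin{verbatim}
from collections import deque

# Crystal graphs of B(w1) and B(w2) for sl3, in the notation above:
#   a1 -1-> a2 -2-> a3,   b1 -2-> b2 -1-> b3.
F_edges = {('a1', 1): 'a2', ('a2', 2): 'a3',
           ('b1', 2): 'b2', ('b2', 1): 'b3'}
E_edges = {(t, i): s for (s, i), t in F_edges.items()}

def phi(b, i):  # phi_i(b); for these crystals always 0 or 1
    return 1 if (b, i) in F_edges else 0

def eps(b, i):  # eps_i(b); likewise 0 or 1
    return 1 if (b, i) in E_edges else 0

def F_tensor(pair, i):
    """Kashiwara's tensor product rule for F_i on b (x) b'; None means 0."""
    b, bp = pair
    if phi(b, i) > eps(bp, i):
        return (F_edges[(b, i)], bp)
    return (b, F_edges[(bp, i)]) if (bp, i) in F_edges else None

# Match the Cartan components of B(w1)(x)B(w2) and B(w2)(x)B(w1) by
# applying the same F-words to both highest weight elements.
sigma = {('a1', 'b1'): ('b1', 'a1')}
queue = deque([('a1', 'b1')])
while queue:
    x = queue.popleft()
    for i in (1, 2):
        xp = F_tensor(x, i)
        if xp is not None and xp not in sigma:
            sigma[xp] = F_tensor(sigma[x], i)  # non-zero on the Cartan part
            queue.append(xp)

for x in sorted(sigma):
    print(x, '->', sigma[x])
\end{verbatim}
Running this prints the eight non-zero values of \eqref{eq:sl3-braiding}; the remaining element $a_1\otimes b_3$ spans the trivial component and is killed by $\braid$.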
\subsection{Crystal limit of the commutation relations}
As previously, we fix a weight basis $\{v^\lambda_i\}_i\subset \mathcal{L}(\lambda)$ lifting the crystal basis $\{b^\lambda_i\}_i$ of $\mathcal{B}(\lambda)$, and let $\{f^i_\lambda\}_i \subset V(\lambda)^*$ be the dual basis. We will usually suppress the highest weight $\lambda$ in the notation and write $v_i$ and $f^i$.
We make the convention that $v^\lambda_1$ is the highest weight vector, and hence $f_\lambda^1$ is the lowest weight vector.
We also denote by $\{\tilde{v}^\lambda_i\}_i \subset V(\lambda)^{**}$ the dual basis to $\{f_\lambda^i\}_i$.
We recall the notation for the generators of $\cO_q^{\bfA_0}[K]$ from \cref{sec:compact_form},
\begin{equation}
\label{eq:generators}
\mathsf{f}^\lambda_i := c^{V(\lambda)}_{f^i, v_1}, \quad
\mathsf{v}^\lambda_i := c^{V(\lambda)^*}_{\tilde{v}_i, f^1}.
\end{equation}
\begin{proposition}
\label{prop:fxf}
Let $\lambda, \lambda' \in \bP^+$ and let $\phi_{\lambda,\lambda'}:\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda') \to \mathcal{B}(\lambda+\lambda')$ be the unique non-trivial crystal morphism. If $b^\lambda_i \otimes b^{\lambda'}_j$ is in the Cartan component, with $\phi_{\lambda,\lambda'}(b^\lambda_i \otimes b^{\lambda'}_j) = b^{\lambda+\lambda'}_m $, then we have
\begin{align}
\label{eq:fxf}
\mathsf{f}^\lambda_i \mathsf{f}^{\lambda'}_j &\equiv \mathsf{f}^{\lambda+\lambda'}_m \mod{q \cO_q^{\bfA_0}[K]}, \\
\label{eq:vxv}
\mathsf{v}^{\lambda'}_j \mathsf{v}^\lambda_i &\equiv \mathsf{v}^{\lambda+\lambda'}_m \mod{q \cO_q^{\bfA_0}[K]}.
\end{align}
Otherwise $\mathsf{f}^\lambda_i \mathsf{f}^{\lambda'}_j\equiv 0$ and $\mathsf{v}^{\lambda'}_j \mathsf{v}^\lambda_i \equiv 0 \pmod {q \cO_q^{\bfA_0}[K]}$.
\end{proposition}
\begin{proof}
We have $\mathsf{f}^\lambda_i \mathsf{f}^{\lambda'}_j = c^{V(\lambda)\otimes V(\lambda')}_{f^i_\lambda \otimes f^j_{\lambda'}, v^\lambda_1\otimes v^{\lambda'}_1}$.
Let us write $\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda') = \bigsqcup_n \mathcal{B}_n$ for the decomposition of the tensor product crystal into irreducible components, with $\mathcal{B}_1$ being the Cartan component. Let $V_n$ and $\mathcal{L}_n$ be the corresponding irreducible components of $V(\lambda)\otimes V(\lambda')$ and $\mathcal{L}(\lambda)\otimes_{\mathbf{A}_0} \mathcal{L}(\lambda')$.
The vector $v^\lambda_1\otimes v^{\lambda'}_1$ is the highest weight vector for the Cartan component $\mathcal{B}_1$.
If $b^\lambda_i \otimes b^{\lambda'}_j$ is contained in some component $\mathcal{B}_n$ other than the Cartan component, then $v^\lambda_i\otimes v^{\lambda'}_j \in \mathcal{L}_n + q (\mathcal{L}(\lambda)\otimes_{{\mathbf{A}_0}}\mathcal{L}(\lambda'))$, and likewise $f^i_\lambda\otimes f^j_{\lambda'}\in \mathcal{L}_n^* + q (\mathcal{L}(\lambda)^*\otimes_{{\mathbf{A}_0}}\mathcal{L}(\lambda')^*) $. Therefore $\mathsf{f}^\lambda_i \mathsf{f}^{\lambda'}_j \in q \cO_q^{\bfA_0}[K]$.
On the other hand, if $b^\lambda_i \otimes b^{\lambda'}_j$ is contained in the Cartan component, then $f^i_\lambda\otimes f^j_{\lambda'} \equiv f^m_{\lambda+\lambda'} \pmod{q(\mathcal{L}(\lambda)^*\otimes_{{\mathbf{A}_0}}\mathcal{L}(\lambda')^*)}$, where we are identifying $f^m_{\lambda+\lambda'}$ with its image in $\mathcal{L}_1^*\cong\mathcal{L}(\lambda+\lambda')^*$. The relation \eqref{eq:fxf} follows, and by applying the involution $*$ and \cref{prop:v_f_adjoints} we obtain \eqref{eq:vxv}.
\end{proof}
We note that \cref{prop:fxf} implies the exchange relation $\mathsf{f}^\lambda_i \mathsf{f}^{\lambda'}_j \equiv \mathsf{f}^{\lambda'}_k \mathsf{f}^\lambda_l$ modulo $q \cO_q^{\bfA_0}[K]$ when $b^\lambda_i\otimes b^{\lambda'}_j$ is in the Cartan component and \linebreak $\braid_{\mathcal{B}(\lambda), \mathcal{B}(\lambda')}(b^\lambda_i\otimes b^{\lambda'}_j) = b^{\lambda'}_k \otimes b^{\lambda}_l$.
The following proposition gives more precision, as well as giving exchange relations for the $\mathsf{f}^\lambda_i$ with the $\mathsf{v}^{\lambda'}_j$.
\begin{proposition}
\label{prop:exchange-relations}
We have the commutation relations
\[
\begin{split}
\mathsf{f}^\lambda_i \mathsf{f}^{\lambda^\prime}_j & = \sum_{k, l} q^{(\lambda, \lambda^\prime)} \left( \hat{R}_{V(\lambda^\prime), V(\lambda)} \right)^{i j}_{k l} \mathsf{f}^{\lambda^\prime}_k \mathsf{f}^\lambda_l, \\
\mathsf{v}^\lambda_i \mathsf{v}^{\lambda^\prime}_j & = \sum_{k, l} q^{(\lambda, \lambda^\prime)} \left( \hat{R}_{V(\lambda^\prime), V(\lambda)} \right)^{l k}_{j i} \mathsf{v}^{\lambda^\prime}_k \mathsf{v}^\lambda_l, \\
\mathsf{f}^\lambda_i \mathsf{v}^{\lambda^\prime}_j & = \sum_{k, l} q^{(\lambda, \lambda^\prime)} \left( \hat{R}_{V(\lambda), V(\lambda^\prime)} \right)^{k i}_{l j} \mathsf{v}^{\lambda^\prime}_k \mathsf{f}^\lambda_l.
\end{split}
\]
Moreover we have $\sum_i \mathsf{v}^\lambda_i \mathsf{f}^\lambda_i = 1$.
\end{proposition}
\begin{proof}
These can be derived from the commutation relations \eqref{eq:OqG-relation} which hold in $\cO_q[G]$.
Since $v_1$ is the highest weight vector we have \linebreak $(\hat{R}_{V(\lambda^\prime), V(\lambda)}^{-1})^{c d}_{1 1} = q^{(\lambda, \lambda^\prime)} \delta^c_1 \delta^d_1$, which gives the relation for $\mathsf{f}^\lambda_i \mathsf{f}^{\lambda'}_j$ in the form
\[
c^{V(\lambda)}_{f^i, v_1} c^{V(\lambda^\prime)}_{f^j, v_1} = \sum_{a, b} q^{(\lambda, \lambda^\prime)} (\hat{R}_{V(\lambda^\prime), V(\lambda)})^{i j}_{a b} c^{V(\lambda^\prime)}_{f^a, v_1} c^{V(\lambda)}_{f^b, v_1}.
\]
Similarly, since $f^1$ is the lowest weight vector we have $(\hat{R}_{V(\lambda^\prime)^*, V(\lambda)^*}^{-1})^{c d}_{1 1} = q^{(\lambda, \lambda^\prime)} \delta^c_1 \delta^d_1$, which gives the relation for $\mathsf{v}^\lambda_i \mathsf{v}^{\lambda'}_j$ in the form
\[
c^{V(\lambda)^*}_{\tilde{v}_i, f^1} c^{V(\lambda^\prime)^*}_{\tilde{v}_j, f^1} = \sum_{a, b} q^{(\lambda, \lambda^\prime)} (\hat{R}_{V(\lambda^\prime)^*, V(\lambda)^*})^{i j}_{a b} c^{V(\lambda^\prime)^*}_{\tilde{v}_a, f^1} c^{V(\lambda)^*}_{\tilde{v}_b, f^1}.
\]
It can be rewritten in terms of $\hat{R}_{V(\lambda^\prime), V(\lambda)}$ by using the identities
\begin{equation}
\label{eq:braid-coefficients-identities}
(\hat{R}_{V^*, W})_{i j}^{k l} = (\hat{R}_{V, W}^{-1})_{j l}^{i k}, \quad
(\hat{R}_{V, W^*}^{-1})_{i j}^{k l} = (\hat{R}_{V, W})_{j l}^{i k}.
\end{equation}
These are valid for any $\cU_q(\lie{g})$-modules $V, W$ (with dual bases for $V^*, W^*$) and follow from the fact that the finite-dimensional $\cU_q(\lie{g})$-modules form a braided monoidal category (see \cite[Lemma A.1]{Matassa:kahler} for a proof). They imply that
\[
(\hat{R}_{V(\lambda^\prime)^*, V(\lambda)^*})^{i j}_{a b} = (\hat{R}_{V(\lambda^\prime), V(\lambda)})^{b a}_{j i},
\]
which gives the relation for $\mathsf{v}^\lambda_i \mathsf{v}^{\lambda'}_j$ as stated.
Proceeding as above, we have $(\hat{R}_{V(\lambda), V(\lambda^\prime)})^{c d}_{1 1} = q^{(\lambda, \lambda^\prime)} \delta^c_1 \delta^d_1$ and this gives the relation for $\mathsf{f}^\lambda_i \mathsf{v}^{\lambda^\prime}_j$ in the form
\[
c^{V(\lambda)}_{f^i, v_1} c^{V(\lambda^\prime)^*}_{\tilde{v}_j, f^1} = \sum_{a, b} q^{(\lambda, \lambda^\prime)} (\hat{R}_{V(\lambda), V(\lambda^\prime)^*}^{-1})^{i j}_{a b} c^{V(\lambda^\prime)^*}_{\tilde{v}_a, f^1} c^{V(\lambda)}_{f^b, v_1}.
\]
It can be rewritten using the identity $(\hat{R}_{V(\lambda), V(\lambda^\prime)^*}^{-1})^{i j}_{a b} = (\hat{R}_{V(\lambda), V(\lambda^\prime)})^{a i}_{b j}$ from \eqref{eq:braid-coefficients-identities}, which gives the relation as stated.
Finally we want to show that $\sum_i \mathsf{v}^\lambda_i \mathsf{f}^\lambda_i = 1$.
It is not difficult to show that $\sum_i \mathsf{v}^\lambda_i \mathsf{f}^\lambda_i$ is invariant under the right action of $\cU_q(\lie{g})$, which means that $\sum_i \mathsf{v}^\lambda_i \mathsf{f}^\lambda_i = c \cdot 1$ for some $c \in \mathbb{C}(q)$.
Applying the counit to the left-hand side we have
\[
\sum_i \varepsilon(\mathsf{v}^\lambda_i \mathsf{f}^\lambda_i) = \sum_i \varepsilon(c^{V(\lambda)^*}_{\tilde{v}_i, f^1}) \varepsilon(c^{V(\lambda)}_{f^i, v_1}) = \sum_i \tilde{v}_i(f^1) f^i(v_1) = 1.
\]
Therefore $c = 1$, which gives the claimed relation.
\end{proof}
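For example, take $K=\mathrm{SU}(2)$ and $\lambda = 1$ the fundamental weight, with the notation of \cref{ex:SU2-star}. Then $\mathsf{f}^1_1 = \alpha$ and $\mathsf{f}^1_2 = \gamma$, and using that the antipode of a unitary matrix corepresentation satisfies $S(u^V_{ij}) = (u^V_{ji})^*$, one checks that $\mathsf{v}^1_1 = \alpha^*$ and $\mathsf{v}^1_2 = \gamma^*$. The last relation of \cref{prop:exchange-relations} then reads $\alpha^*\alpha + \gamma^*\gamma = 1$, recovering one of Woronowicz's defining relations from \cref{sec:SUq2_limit}.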
Note that the third exchange relation from \cref{prop:exchange-relations} yields the following factorization for our compact ${\mathbf{A}_0}$-form $\cO^{\mathbf{A}_0}_q[K]$.
\begin{corollary}
We have $\cO^{\mathbf{A}_0}_q[K] = \cO^{\mathbf{A}_0}_q[G/N^-] \cdot \cO^{\mathbf{A}_0}_q[G/N^+]$.
\end{corollary}
The key consequence of \cref{prop:exchange-relations} is that, thanks to \cref{thm:braiding_limit}, the stated commutation relations also make sense when we specialize at $q = 0$. This allows us to deduce a set of commutation relations for the analytic limits $\pi_0(\mathsf{f}^\lambda_i)$ and $\pi_0(\mathsf{v}^\lambda_i)$ in $\cO[K_0]$.
Combining all the above results, we get relations at $q=0$ as follows.
\begin{theorem}
\label{thm:pi0_relations}
The following relations hold in $\mathcal{O}[K_0] = \pi_0(\cO_q^{\bfA_0}[K])$.
\begin{enumerate}
\item[1)] For any $\lambda, \lambda' \in \bP^+$ and any $i,j$ we have
\begin{align}
\label{eq:pi0_relations1}
\pi_0(\mathsf{f}^\lambda_i) \pi_0(\mathsf{f}^{\lambda^\prime}_j)
&= \eta(b^\lambda_i\otimes b^{\lambda'}_j) \pi_0(\mathsf{f}^{\lambda+\lambda'}_m) ,
\\
\label{eq:pi0_relations2}
\pi_0(\mathsf{v}^{\lambda'}_j) \pi_0(\mathsf{v}^{\lambda}_i)
&= \eta(b^\lambda_i\otimes b^{\lambda'}_j) \pi_0(\mathsf{v}^{\lambda+\lambda'}_m) ,
\end{align}
where we write $b^\lambda_i\otimes b^{\lambda'}_j \mapsto b^{\lambda+\lambda'}_m$ under the unique surjective crystal morphism $\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda')\to\mathcal{B}(\lambda+\lambda')$.
\item[2)] For any $\lambda, \lambda' \in \bP^+$ and any $i, j$ we have
\begin{align}
\label{eq:pi0_relations3}
\pi_0(\mathsf{f}^\lambda_i) \pi_0(\mathsf{v}^{\lambda^\prime}_j) &=
\sum_{k, l}
\pi_0(\mathsf{v}^{\lambda^\prime}_k) \pi_0(\mathsf{f}^\lambda_l),
\end{align}
where the sum is over all pairs $(k,l)$ such that $\braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda')}(b^\lambda_l \otimes b^{\lambda^\prime}_j) = b^{\lambda^\prime}_k \otimes b^\lambda_i$.
\item[3)] For any $\lambda \in \bP^+$ we have
\begin{equation}
\sum_i \pi_0(\mathsf{v}^\lambda_i) \pi_0(\mathsf{f}^\lambda_i) = 1.
\end{equation}
\item[4)]
For any $\lambda \in \bP^+$ and any $i$ we have $\pi_0(\mathsf{f}^\lambda_i)^* = \pi_0(\mathsf{v}^\lambda_i)$.
\end{enumerate}
\end{theorem}
\begin{proof}
1) and 4) follow from \cref{prop:fxf} and \cref{prop:v_f_adjoints}, while 3) follows from \cref{prop:exchange-relations}.
The relation 2) follows from the third relation of \cref{prop:exchange-relations} using the following consequence of \cref{thm:braiding_limit}:
\[
\lim_{q \to 0} q^{(\lambda, \lambda')} \left( \hat{R}_{V(\lambda), V(\lambda')} \right)_{i j}^{k l} =
\begin{cases}
1 & \text{if } \braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda')}(b^\lambda_i \otimes b^{\lambda'}_j) = b^{\lambda'}_k \otimes b^{\lambda}_l, \\
0 & \text{otherwise.}
\end{cases} \qedhere
\]
\end{proof}
Ultimately, we will show that \cref{thm:pi0_relations} gives a complete set of generators and relations for $\mathcal{O}[K_0]$.
\begin{remark}
\label{rmk:exchange_relations}
Note that relations \eqref{eq:pi0_relations1} and \eqref{eq:pi0_relations2} imply the exchange relations
\begin{align}
\label{eq:pi0_relations1b}
\pi_0(\mathsf{f}^\lambda_i) \pi_0(\mathsf{f}^{\lambda^\prime}_j) &= \pi_0(\mathsf{f}^{\lambda^\prime}_k) \pi_0(\mathsf{f}^\lambda_l),
\\
\label{eq:pi0_relations2b}
\pi_0(\mathsf{v}^{\lambda'}_j) \pi_0(\mathsf{v}^{\lambda}_i)
&= \pi_0(\mathsf{v}^{\lambda}_l) \pi_0(\mathsf{v}^{\lambda'}_k),
\end{align}
whenever $\braid_{\mathcal{B}(\lambda),\mathcal{B}(\lambda')}(b^\lambda_i\otimes b^{\lambda'}_j) = b^{\lambda'}_k \otimes b^\lambda_l$. These also follow from the $q \to 0$ limit of the relations in \cref{prop:exchange-relations}.
\end{remark}
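To illustrate, take $K = \mathrm{SU}(2)$ and $\lambda = \lambda' = 1$, and work for simplicity in the standard representation of \cref{sec:SUq2_limit}, where the same congruences modulo $q\cO_q^{\bfA_0}[K]$ apply and where $\tilde\pi_0(\mathsf{f}^1_1) = \tilde\pi_0(\alpha) = T^*$ and $\tilde\pi_0(\mathsf{f}^1_2) = \tilde\pi_0(\gamma) = P_0$ by \cref{thm:SL2_limit}. The element $b^1_1 \otimes b^1_2$ lies outside the Cartan component of $\mathcal{B}(1)\otimes\mathcal{B}(1)$, matching $T^* P_0 = 0$, while $b^1_2 \otimes b^1_1$ lies inside it and maps to the middle element of $\mathcal{B}(2)$, matching $\tilde\pi_0(\gamma)\tilde\pi_0(\alpha) = P_0 T^* = \tilde\pi_0(u^2_{10})$ in the zero-indexed notation of \cref{thm:SL2_limit}.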
\section{Properties of the Cartan braiding}
\label{sec:properties-Cartan}
\subsection{Hexagon and braid relations}
The Cartan braiding of \cref{def:Cartan-braiding} inherits properties from the braiding operators $\hat{R}_{V,V'}$ as follows.
\begin{theorem}
Let $V$, $V'$ and $V''$ be products of irreducibles with crystal bases $(\mathcal{L}, \mathcal{B})$, $(\mathcal{L}', \mathcal{B}')$ and $(\mathcal{L}'', \mathcal{B}'')$, respectively. Then the Cartan braiding maps satisfy the hexagon relations
\begin{align}
\braid_{\mathcal{B},\mathcal{B}'\otimes\mathcal{B}''} &= (\id_{\mathcal{B}'}\otimes\braid_{\mathcal{B},\mathcal{B}''})(\braid_{\mathcal{B},\mathcal{B}'}\otimes\id_{\mathcal{B}''}),
\label{eq:crystal_hexagon1} \\
\braid_{\mathcal{B}\otimes\mathcal{B}',\mathcal{B}''} &= (\braid_{\mathcal{B},\mathcal{B}''}\otimes\id_{\mathcal{B}'})(\id_{\mathcal{B}}\otimes\braid_{\mathcal{B}',\mathcal{B}''}),
\label{eq:crystal_hexagon2}
\end{align}
and the braid relation
\begin{align}
(\braid_{\mathcal{B}',\mathcal{B}''} \otimes \id_{\mathcal{B}}) &(\id_{\mathcal{B}'}\otimes \braid_{\mathcal{B},\mathcal{B}''}) (\braid_{\mathcal{B},\mathcal{B}'} \otimes \id_{\mathcal{B}''}) \nonumber \\
&= (\id_{\mathcal{B}''} \otimes \braid_{\mathcal{B},\mathcal{B}'}) (\braid_{\mathcal{B},\mathcal{B}''}\otimes\id_{\mathcal{B}'})(\id_\mathcal{B}\otimes\braid_{\mathcal{B}',\mathcal{B}''}).
\label{eq:Cartan_braiding}
\end{align}
\end{theorem}
\begin{proof}
Let $\lambda$, $\lambda'$ and $\lambda''$ be the highest weights of $V$, $V'$ and $V''$, with crystals $\mathcal{B}$, $\mathcal{B}'$ and $\mathcal{B}''$, respectively. Then the highest weight of $\mathcal{B}' \otimes \mathcal{B}''$ is $\lambda' + \lambda''$. Using the hexagon relation for $\hat{R}$, we get
\begin{align*}
q^{(\lambda, \lambda' + \lambda'')} \hat{R}_{V, V' \otimes V''}
& = q^{(\lambda, \lambda'')} q^{(\lambda, \lambda')} (\id_{V'} \otimes \hat{R}_{V, V''}) (\hat{R}_{V, V'} \otimes \id_{V''}).
\end{align*}
This specializes at $q = 0$ by \cref{thm:braiding_limit}, and the specialization induces the hexagon relation \eqref{eq:crystal_hexagon1}. The other hexagon relation is proven similarly, and the braid relation follows by an analogous calculation.
\end{proof}
\subsection{Partial action of the symmetric group}
Let $\mathcal{B}_1, \cdots, \mathcal{B}_n$ be products of irreducible crystals, see \cref{def:Cartan_component}. For long tensor products of the form $\mathcal{B}_1\otimes\cdots\otimes\mathcal{B}_n$, we will denote the action of the crystal braiding on two successive terms by $\braid_i = \id_{\mathcal{B}_1} \otimes \cdots \otimes \braid_{\mathcal{B}_i,\mathcal{B}_{i+1}} \otimes \cdots \otimes \id_{\mathcal{B}_n}$, so that
\[
\braid_i : \mathcal{B}_1\otimes\cdots\otimes\mathcal{B}_i\otimes\mathcal{B}_{i+1}\otimes \cdots \otimes \mathcal{B}_n \to \mathcal{B}_1\otimes\cdots\otimes\mathcal{B}_{i+1}\otimes\mathcal{B}_{i}\otimes \cdots \otimes \mathcal{B}_n.
\]
\begin{proposition}
Let $\mathcal{B}_1, \cdots, \mathcal{B}_n$ be irreducible crystals. Let $s\in S_n$ be a permutation and let $s = s_{i_1} \cdots s_{i_k}$ be a realization of $s$ as a reduced word in the transpositions $s_i=(i\;\;i\!+\!1)$. Then the crystal morphism
\[
\braid_s := \braid_{i_1} \circ \cdots \circ \braid_{i_k} : \mathcal{B}_1\otimes \cdots \otimes \mathcal{B}_n \to \mathcal{B}_{s(1)} \otimes \cdots \otimes \mathcal{B}_{s(n)}
\]
is independent of the choice of reduced word representing $s$.
\end{proposition}
\begin{proof}
Thanks to the braid relation \eqref{eq:Cartan_braiding}, together with the obvious commutation $\braid_i\braid_j = \braid_j\braid_i$ for $|i-j|\geq2$, this follows from the well-known fact that any two reduced words for $s$ can be linked by repeated application of the braid relations $s_is_{i+1}s_i = s_{i+1}s_is_{i+1}$ and $s_is_j = s_js_i$ for $|i-j|\geq2$.
\end{proof}
Note that the relation $s_i^2=\id$ is not necessary in the above proof. This is important since $\braid_i^2\neq\id$, although $\braid_i^2$ is the identity on the Cartan component. As a consequence, the map $s\mapsto \braid_s$ doesn't define an action of $S_n$, but it does define an action between the Cartan components of the permuted tensor products, thanks to the following theorem.
\begin{theorem}
\label{thm:longest_word}
Let $\mathcal{B}_1, \cdots, \mathcal{B}_n$ be irreducible crystals and let $b_i \in \mathcal{B}_i$ for each $i$. The following are equivalent:
\begin{enumerate}
\item[(i)] $b_1\otimes\cdots\otimes b_n$ is in the Cartan component of $\mathcal{B}_1\otimes\cdots\otimes\mathcal{B}_n$,
\item[(ii)] $\braid_s(b_1\otimes \cdots\otimes b_n) \neq 0$ for every $s\in S_n$,
\item[(iii)] $\braid_{s_0}(b_1\otimes\cdots\otimes b_n) \neq 0$, where $s_0 = \scriptstyle \begin{pmatrix} 1 & 2 & \cdots & n \\ n & n - 1 & \cdots & 1 \end{pmatrix}$ is the longest permutation.
\end{enumerate}
If $\mathcal{B}_1, \cdots, \mathcal{B}_n$ are products of irreducibles, the equivalence holds if we add to (ii) and (iii) the condition that each $b_i$ be in the Cartan component of $\mathcal{B}_i$.
\end{theorem}
\begin{proof}
To begin with, assume that all the $\mathcal{B}_i$ are irreducible.
For (i) $\Rightarrow$ (ii), note that if $b_1\otimes \cdots \otimes b_n$ is in the Cartan component of $\mathcal{B}_1\otimes\cdots\otimes\mathcal{B}_n$, then necessarily $b_i\otimes b_{i+1}$ is in the Cartan component of $\mathcal{B}_i\otimes\mathcal{B}_{i+1}$ for every $i$. Therefore $\braid_{\mathcal{B}_i, \mathcal{B}_{i + 1}}(b_i \otimes b_{i + 1})$ is non-zero and, since it corresponds to the same element of the Cartan component as $b_i\otimes b_{i+1}$, we have that $\braid_i(b_1 \otimes \cdots \otimes b_n)$ again belongs to the Cartan component.
Inductively, any repeated composition of the $\braid_i$ is non-zero.
The implication (ii) $\Rightarrow$ (iii) is obvious. To prove (iii) $\Rightarrow$ (i), we work inductively on $n$. When $n=1$ or $2$, the claim is immediate. Suppose it is true for $n-1$. If $\braid_{s_0}(b_1\otimes\cdots\otimes b_n) \neq 0$, write
\begin{equation}
\label{eq:s0_words}
s_0 = s_0' (s_{n-1} s_{n-2} \cdots s_1) = (s_{n-1} s_{n-2} \cdots s_1) s_0'',
\end{equation} where
\[
s_0' = \scriptstyle \begin{pmatrix} 1&2&\cdots& n-1&n\\n-1&n-2&\cdots&1&n \end{pmatrix}, \quad
s_0'' = \scriptstyle \begin{pmatrix} 1&2&\cdots& n-1&n\\1 & n & \cdots&3&2 \end{pmatrix}
\]
are the permutations reversing the first $n-1$ and last $n-1$ entries, respectively. Upon writing $s_0'$ and $s_0''$ as reduced words, both expressions in \eqref{eq:s0_words} give a reduced expression for $s_0$, so we have
\begin{multline*}
\braid_{s_0'}(\braid_{n-1} \circ \cdots \circ \braid_1(b_1 \otimes \cdots \otimes b_n)) \\
= \braid_{n-1} \circ \cdots \circ \braid_1(\braid_{s_0''}(b_1 \otimes \cdots \otimes b_n))
= \braid_{s_0}(b_1 \otimes \cdots \otimes b_n) \neq 0.
\end{multline*}
In particular, $\braid_{s_0''}(b_1 \otimes \cdots \otimes b_n) \neq 0$, so by the inductive hypothesis, $b_2 \otimes \cdots \otimes b_n$ is in the Cartan component of $\mathcal{B}_2 \otimes \cdots \otimes \mathcal{B}_n$. But also $\braid_{n-1} \circ \cdots \circ \braid_1(b_1 \otimes \cdots \otimes b_n) \neq 0$, and repeated application of the hexagon relation \eqref{eq:crystal_hexagon1} shows that
\[
\braid_{n-1} \circ \braid_{n-2} \circ \cdots \circ \braid_1 = \braid_{\mathcal{B}_1,(\mathcal{B}_2 \otimes \cdots \otimes \mathcal{B}_n)},
\]
so $b_1\otimes(b_2 \otimes \cdots \otimes b_n)$ is necessarily in the Cartan component of $\mathcal{B}_1 \otimes (\mathcal{B}_2 \otimes \cdots \otimes \mathcal{B}_n)$.
This completes the proof for $\mathcal{B}_i$ irreducible. The general case where the $\mathcal{B}_i$ are products of irreducibles follows readily from this by naturality of the braiding, which implies that we can restrict our attention to the Cartan components in each $\mathcal{B}_i$.
\end{proof}
\subsection{Left and right ends}
We now make a definition that is going to play a major role in the rest of this work, namely the notions of left and right ends of a crystal element.
\begin{definition}
Let $\lambda, \mu \in \bP^+$ with $\lambda \geq \mu$.
Considering the crystal $\mathcal{B}(\lambda)$, the \emph{left end} and \emph{right end} with respect to $\mu$ are the set-theoretic maps
\begin{align*}
&\mathsf{L}_\mu: \mathcal{B}(\lambda) \to \mathcal{B}(\mu), &
&\mathsf{R}_\mu: \mathcal{B}(\lambda) \to \mathcal{B}(\mu),
\end{align*}
defined as follows. There exist unique injective morphisms of crystals
\begin{align*}
&\mathcal{B}(\lambda) \to \mathcal{B}(\mu) \otimes \mathcal{B}(\lambda - \mu), &
&\mathcal{B}(\lambda) \to \mathcal{B}(\lambda - \mu) \otimes \mathcal{B}(\mu),
\end{align*}
and we define $\mathsf{L}_\mu(b)$ and $\mathsf{R}_\mu(b)$ to be the unique elements of $\mathcal{B}(\mu)$ such that
\begin{align*}
&b \mapsto \mathsf{L}_\mu(b) \otimes b^\prime, &
&b \mapsto b^{\prime \prime} \otimes \mathsf{R}_\mu(b),
\end{align*}
for some $b',b'' \in \mathcal{B}(\lambda - \mu)$.
More generally, let $\mathcal{B} = \mathcal{B}(\lambda_1) \otimes \cdots \otimes \mathcal{B}(\lambda_n)$ be a product of irreducibles of highest weight $\lambda=\sum_i\lambda_i$.
If $b \in \mathcal{B}$ is in the Cartan component we define $\mathsf{L}_\mu(b)$ and $\mathsf{R}_\mu(b)$ as above, under the identification of the Cartan component of $\mathcal{B}$ with $\mathcal{B}(\lambda)$.
Otherwise $\mathsf{L}_\mu(b) = \mathsf{R}_\mu(b) = 0$.
\end{definition}
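Two basic examples are worth recording. For $\mu = \lambda$, the two inclusions are the canonical identifications $\mathcal{B}(\lambda) \cong \mathcal{B}(\lambda) \otimes \mathcal{B}(0)$ and $\mathcal{B}(\lambda) \cong \mathcal{B}(0) \otimes \mathcal{B}(\lambda)$, so that $\mathsf{L}_\lambda = \mathsf{R}_\lambda = \id$. Moreover, since the inclusions send the highest weight element $b_\lambda \in \mathcal{B}(\lambda)$ to a tensor product of highest weight elements, we have
\[
\mathsf{L}_\mu(b_\lambda) = b_\mu = \mathsf{R}_\mu(b_\lambda)
\]
for every $\mu \leq \lambda$, where $b_\mu \in \mathcal{B}(\mu)$ denotes the highest weight element.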
We will also consider the right ends with respect to a \emph{family} of dominant weights, and in particular for the family of fundamental weights ${\boldsymbol{\Pi}}=(\varpi_1, \cdots, \varpi_r)$.
\begin{definition}
Let $\mathsf{S} = (\mu_1, \cdots, \mu_n)$ be an $n$-tuple of dominant weights.
Let $\mathcal{B}$ be a product of irreducible crystals of highest weight $\lambda$ with $\lambda \geq \mu_i$ for all $i$.
Then the \emph{$\mathsf{S}$-left end} and \emph{$\mathsf{S}$-right end} maps are the set-theoretic maps
\[
\mathsf{L}_\mathsf{S}, \mathsf{R}_\mathsf{S}: \mathcal{B} \to
\left( \mathcal{B}(\mu_1) \times \cdots \times \mathcal{B}(\mu_n) \right) \sqcup \{0\}
\]
defined by
\begin{align*}
&\mathsf{L}_\mathsf{S}(b) := \left( \mathsf{L}_{\mu_1}(b), \cdots, \mathsf{L}_{\mu_n}(b) \right), &
&\mathsf{R}_\mathsf{S}(b) := \left( \mathsf{R}_{\mu_1}(b), \cdots, \mathsf{R}_{\mu_n}(b) \right).
\end{align*}
\end{definition}
\begin{remark}
According to the usual conventions on crystal bases, the notations $\mathcal{B}(\mu_1) \otimes \cdots \otimes \mathcal{B}(\mu_n)$ and $\mathcal{B}(\mu_1) \times \cdots \times \mathcal{B}(\mu_n)$ denote the same object.
We use the former when we take into account the additional crystal structure, but the latter when regarded simply as a set.
This is the notation we use for left and right ends, since $\mathsf{L}_\mathsf{S}$ and $\mathsf{R}_\mathsf{S}$ are only set-theoretic maps.
\end{remark}
The rest of this section is devoted to the properties of the right end maps. We focus on the right ends since these play the main role in our setup, but similar results can of course be obtained for the left end maps.
\begin{proposition}
\label{prop:rightend-sigma}
Let $\lambda_1, \cdots, \lambda_n \in \bP^+$ be dominant weights. An element $b_1 \otimes \cdots \otimes b_n$ belongs to the Cartan component of $\mathcal{B}(\lambda_1) \otimes \cdots \otimes \mathcal{B}(\lambda_n)$ if and only if we have
\begin{equation}
\label{eq:rightend-sigma}
\braid_{n - 1} \circ \cdots \circ \braid_k (b_1 \otimes \cdots \otimes b_n) \neq 0
\end{equation}
for every $k\in\{1, \cdots, n - 1\}$.
In this case, for each $k \in \{1, \cdots, n\}$, the right end $\mathsf{R}_{\lambda_k}(b_1 \otimes \cdots \otimes b_n)$ is equal to the rightmost tensor factor of \eqref{eq:rightend-sigma}; for $k = n$ the composition in \eqref{eq:rightend-sigma} is empty, and the right end is $b_n$ itself.
\end{proposition}
\begin{proof}
For $n=1$ the statement is trivial, so suppose it is true for tensor products of length $n-1$. The necessity of \eqref{eq:rightend-sigma} is obvious from \cref{thm:longest_word}, so it remains to check sufficiency.
By the inductive hypothesis, $b_2\otimes\cdots \otimes b_n$ is in the Cartan component of $\mathcal{B}(\lambda_2)\otimes\cdots\otimes\mathcal{B}(\lambda_n)$, so by \cref{thm:longest_word} we have $\braid_{s_0'}(b_2\otimes\cdots\otimes b_n) \neq 0$, where $s_0'$ denotes the longest word of $S_{n-1}$, acting here on the last $n-1$ tensor factors. The longest word $s_0$ of $S_n$ admits the length-additive factorization $s_0 = (s_{n-1} \cdots s_1)(\id \times s_0')$, so that $\braid_{s_0} = \braid_{n-1}\circ\cdots\circ\braid_1\circ(\id\otimes\braid_{s_0'})$. Since $b_2\otimes\cdots\otimes b_n$ lies in the Cartan component, on which the crystal braidings act as the canonical identifications, $(\id\otimes\braid_{s_0'})^2$ fixes $b_1\otimes\cdots\otimes b_n$, and therefore
\[
\braid_{s_0}\big((\id\otimes \braid_{s_0'})(b_1\otimes\cdots\otimes b_n)\big) = \braid_{n - 1} \circ \cdots \circ \braid_1 (b_1\otimes\cdots\otimes b_n) \neq 0
\]
by \eqref{eq:rightend-sigma}.
Therefore, by \cref{thm:longest_word}, $(\id\otimes \braid_{s_0'})(b_1\otimes\cdots\otimes b_n)$ is in the Cartan component of $\mathcal{B}(\lambda_1)\otimes\mathcal{B}(\lambda_n)\otimes\cdots\otimes\mathcal{B}(\lambda_2)$, and hence so is $b_1\otimes\cdots\otimes b_n$, since $\id\otimes\braid_{s_0'}$ identifies the Cartan components of the two products. This proves the first statement.
Now the map from \eqref{eq:rightend-sigma} gives a morphism of crystals
\[
\braid_{n - 1} \circ \cdots \circ \braid_k: \mathcal{B}(\lambda_1) \otimes \cdots \otimes \mathcal{B}(\lambda_n) \to \mathcal{B}(\lambda_1) \otimes \cdots \otimes \widehat{\mathcal{B}(\lambda_k)} \otimes \cdots \otimes \mathcal{B}(\lambda_n) \otimes \mathcal{B}(\lambda_k)
\]
with the hat denoting omission, which gives the second statement.
\end{proof}
\begin{remark}
We briefly mention how this connects to the theory of set-theoretic solutions of the Yang-Baxter equation, as in \cite{EtiSchSol}.
Writing the Cartan braiding as $\braid(b \otimes b^\prime) = \mathsf{f}_b(b^\prime) \otimes \mathsf{g}_{b^\prime}(b)$, \cref{prop:rightend-sigma} can be recast in the form
\[
\mathsf{R}_{\lambda_k}(b_1\otimes\cdots\otimes b_n) = \mathsf{g}_{b_n} \circ \cdots \circ \mathsf{g}_{b_{k + 1}}(b_k), \quad
k = 1, \cdots, n.
\]
This gives the components of the map $J_n$ introduced in \cite[Proposition 1.7]{EtiSchSol} (if we work with left ends instead of right ends).
\end{remark}
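Concretely, for $n = 2$ the above formula reads $\mathsf{R}_{\lambda_1}(b_1 \otimes b_2) = \mathsf{g}_{b_2}(b_1)$, the right tensor factor of $\braid(b_1 \otimes b_2)$, while for $k = n$ the composition is empty and $\mathsf{R}_{\lambda_n}(b_1 \otimes \cdots \otimes b_n) = b_n$.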
Next, we discuss how right ends behave with respect to tensor products.
\begin{proposition}
\label{prop:right_end_independence}
Let $\lambda, \lambda', \mu \in \bP^+$
and let $b\otimes b'$ be in the Cartan component of $\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda')$.
\begin{enumerate}
\item[1)] If $\lambda'\geq\mu$ then $\mathsf{R}_\mu(b\otimes b') = \mathsf{R}_\mu(b')$.
\item[2)] If $\lambda\geq\mu$ then $\mathsf{R}_\mu(b\otimes b') = \mathsf{R}_\mu(\mathsf{R}_\mu(b)\otimes b')$. Consequently, if $\lambda''\geq\mu$ and $c\in\mathcal{B}(\lambda'')$ has the same $\mu$-right end as $b\in\mathcal{B}(\lambda)$, then $\mathsf{R}_\mu(b\otimes b')=\mathsf{R}_\mu(c\otimes b')$.
\end{enumerate}
\end{proposition}
\begin{proof}
1) is almost immediate from the definitions. For 2), consider the inclusion of the Cartan component $\iota: \mathcal{B}(\lambda) \to \mathcal{B}(\lambda-\mu)\otimes\mathcal{B}(\mu)$. We have
\[
\xymatrix{
b\otimes b' \ar@{|->}[r]^-{\iota \otimes \id} &
b''\otimes \mathsf{R}_\mu(b) \otimes b' \ar@{|->}[r]^-{\braid_2} &
b'' \otimes b''' \otimes \mathsf{R}_\mu(\mathsf{R}_\mu(b)\otimes b'),
}
\]
for some $b''\in\mathcal{B}(\lambda-\mu)$ and $b'''\in\mathcal{B}(\lambda')$. The result then follows from \cref{prop:rightend-sigma}. The final statement is now immediate.
\end{proof}
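In particular, taking $\mu = \lambda'$ in part (1) gives $\mathsf{R}_{\lambda'}(b \otimes b') = b'$: the right end with respect to the full weight of the rightmost factor simply reads off that factor. This is visible, for instance, in the second components of the table in \cref{ex:graph-su3} below, where $\mathsf{R}_{\varpi_2}(a_i \otimes b_j) = b_j$ throughout.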
The next result shows that we obtain the same set of non-zero $\mathsf{S}$-right ends from any appropriate crystal.
\begin{proposition}
\label{prop:set_of_ends}
Let $\mathsf{S} = (\mu_1, \cdots, \mu_n)$ be a family of dominant weights. Let $\mathcal{B}$ be a product of irreducible crystals. Then the set of (non-zero) $\mathsf{S}$-right ends of elements of $\mathcal{B}$, namely
\begin{equation}
\label{eq:right_end_set}
\{ \mathsf{R}_\mathsf{S}(b) \mid b\in \mathcal{B}\} \setminus \{0\}
\quad
\subseteq \prod_{i=1}^n\mathcal{B}(\mu_i),
\end{equation}
is independent of the choice of $\mathcal{B}$, as long as the highest weight $\lambda$ of $\mathcal{B}$ satisfies $\lambda\geq\mu_i$ for every $i$.
\end{proposition}
\begin{proof}
We may assume without loss of generality that $\mathcal{B}=\mathcal{B}(\lambda)$ is irreducible, with $\lambda\geq\mu_i$ for all $i$, since if $\mathcal{B}$ has highest weight $\lambda$ then by definition the right ends of $\mathcal{B}$ are the same as those of $\mathcal{B}(\lambda)$.
Now suppose $\lambda'\geq\lambda$. \cref{prop:right_end_independence}(1) shows that the $\mu_i$-right ends of $\mathcal{B}(\lambda)$ are the same as the $\mu_i$-right ends of $\mathcal{B}(\lambda'-\lambda)\otimes\mathcal{B}(\lambda)$, which by definition are the same as the $\mu_i$-right ends of $\mathcal{B}(\lambda')$.
To complete the proof, just note that for any two dominant weights $\lambda_1,\lambda_2$ there is a dominant weight $\lambda'$ which is greater than both of them, for instance $\lambda' = \lambda_1 + \lambda_2$.
\end{proof}
\section{The higher-rank graph algebra}
\label{sec:k-graph_algebra}
\subsection{Higher-rank graphs}
We begin by recalling the definition of a \linebreak $k$-graph from \cite{KumPas}. For this, we need to regard the abelian monoid $\mathbb{N}^k$ as a category with one object, in which composition is given by addition.
\begin{definition}
A \emph{higher-rank graph of rank $k$} (or $k$-graph) is a small category $\Lambda$ equipped with a functor $\mathsf{d}: \Lambda \to \mathbb{N}^k$ satisfying the \emph{factorization property}: for every $e \in \Lambda$, and any pair of multi-indices $m, n \in \mathbb{N}^k$ with $\mathsf{d}(e) = m + n$, there are unique elements $e_1, e_2 \in \Lambda$ with $\mathsf{d}(e_1) = m$, $\mathsf{d}(e_2) = n$ such that $e = e_1 \cdot e_2$.
\end{definition}
The elements of $\Lambda$ are called \emph{paths}, the indices $\{ 1, \cdots, k \}$ are referred to as \emph{colours}, and $\mathsf{d}(e)$ is called the \emph{degree} or \emph{coloured length} of $e$. We write $\Lambda^n$ for the set of paths of length $n\in\mathbb{N}^k$. Paths of length $0$ are called \emph{vertices}, naturally enough. Paths of length $\delta_i = (0, \cdots, 1, \cdots, 0)$, with $1$ in the $i$th slot, are called \emph{edges of colour $i$}. We use the notation $\Lambda^{\neq0}$ for the set of paths of non-zero length.
The factorization property implies that for every path $e\in\Lambda$, there are unique vertices $\mathsf{r}(e)$ and $\mathsf{s}(e)$ in $\Lambda^0$, with the property that
\[
e = \mathsf{r}(e) \cdot e \cdot \mathsf{s}(e).
\]
This defines the \emph{range} and \emph{source} maps $\mathsf{r},\mathsf{s}: \Lambda \to \Lambda^0$.
Given a vertex $v \in \Lambda^0$ and a multi-index $n \in \mathbb{N}^k$, we write
\begin{align*}
& v \Lambda^n := \{ e \in \Lambda^n: \mathsf{r}(e) = v \}, &
& \Lambda^n v := \{ e \in \Lambda^n: \mathsf{s}(e) = v \}.
\end{align*}
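For example, a $1$-graph is the same thing as the path category of an ordinary directed graph: the vertices are the paths of degree $0$, the edges are the paths of degree $1$, and the factorization property encodes the unique decomposition of a path of length $m + n$ into a path of length $m$ composed with a path of length $n$.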
It is common to impose the following additional hypotheses on a higher-rank graph, which will apply in our case, see \cref{sec:graph-properties}.
\begin{definition}
\label{def:row_finite_and_no_sources}
A $k$-graph $\Lambda$ is \emph{row-finite} if $v\Lambda^n$ is finite for every $v\in\Lambda^0$ and $n\in \mathbb{N}^k$. It has \emph{no sources} if $v\Lambda^n\neq\emptyset$ for every $v\in\Lambda^0$ and $n \in \mathbb{N}^k$, and \emph{no sinks} if $\Lambda^nv\neq\emptyset$ for every $v\in\Lambda^0$ and $n \in \mathbb{N}^k$.
\end{definition}
\subsection{Higher-rank graph algebras}
Higher-rank graph algebras were first defined as $C^*$-algebras, by Kumjian and Pask \cite{KumPas}. We will work initially with the algebraic version, which is due to Aranda Pino, Clark, an Huef and Raeburn \cite{Aranda:Kumjian-Pask}, although we will modify their definition slightly to take advantage of the fact that we only work over the field $\mathbb{C}$ rather than an arbitrary commutative ring with unit.
\begin{definition}
\label{def:graph-algebra}
Let $\Lambda$ be a row-finite $k$-graph without sources.
The \linebreak \emph{Kumjian-Pask algebra} $\mathrm{KP}_\mathbb{C}(\Lambda)$ is the $*$-algebra over $\mathbb{C}$ generated by the elements $\{ p_v \}_{v \in \Lambda^0}$ and $\{ s_e \}_{e \in \Lambda^{\neq 0}}$ with the following relations:
\begin{enumerate}
\item[(KP1)] $\{ p_v \}_{v \in \Lambda^0}$ is a set of mutually orthogonal projections, that is $p_v^* = p_v = p_v^2$ and $p_v p_{v^\prime} = \delta_{v, v^\prime} p_v$,
\item[(KP2)] for all $e, e^\prime \in \Lambda^{\neq 0}$ with $\mathsf{s}(e) = \mathsf{r}(e^\prime)$ we have
\[
s_e s_{e^\prime} = s_{e \cdot e^\prime}, \quad
p_{\mathsf{r}(e)} s_e = s_e = s_e p_{\mathsf{s}(e)},
\]
\item[(KP3)] for all $e, e^\prime \in \Lambda^{\neq 0}$ with $\mathsf{d}(e) = \mathsf{d}(e^\prime)$ we have
\[
s_e^* s_{e'} = \delta_{e, e^\prime} p_{\mathsf{s}(e)},
\]
\item[(KP4)] for all $v \in \Lambda^0$ and $n \in \mathbb{N}^k \backslash \{0\}$ we have
\[
p_v = \sum_{e \in v \Lambda^n} s_e s_e^*.
\]
\end{enumerate}
\end{definition}
\begin{remark}
The definition in \cite{Aranda:Kumjian-Pask} is given in terms of paths $e \in \Lambda^{\neq 0}$ and ghost paths $e^* \in G(\Lambda^{\neq 0})$, corresponding to generators $s_e$ and $s_{e^*}$.
When working over $\mathbb{C}$ there is a $*$-structure uniquely defined by $p_v^* = p_v$ and $s_e^* = s_{e^*}$, so we prefer to give the definition in this way.
\end{remark}
For the definition of the $C^*$-algebra of a higher-rank graph, see \cite{KumPas}.
\subsection{Higher-rank graphs associated to quantum groups}
\label{sec:crystal_N-graph}
Let $K$ be a connected, simply connected compact semisimple Lie group of rank $r$ with Lie algebra $\mathfrak{k}$ and complexification $\mathfrak{g}=\mathfrak{k}_\mathbb{C}$.
Our goal in this section is to define a higher-rank graph $\Lambda_{\lie{g}}$ of rank $r$ associated to $\mathfrak{g}$.
In fact, we will make a more general construction, in anticipation of the results required for torus bundles over flag varieties, compare \cref{sec:flag_manifolds} and in particular \cref{thm:flag_generators2}.
Consider any $N$-tuple $\mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ of linearly independent dominant weights.
We refer to $\mathsf{C}$ as the set of \emph{colours} and we blur the distinction between $\mathsf{C}$ and the index set $\{1, \cdots, N\}$.
Then to any such choice of colours we will associate a higher-rank graph $\Lambda_{\lie{g}, \Cset}$ of rank $N$.
The basic case $\Lambda_{\lie{g}}$ is obtained when $\mathsf{C} = {\boldsymbol{\Pi}} = (\varpi_1, \cdots, \varpi_r)$ is the $r$-tuple of fundamental weights.
Write $\bP^+_\mathsf{C} := \mathbb{N} \cdot \mathsf{C}$ for the submonoid of $\bP^+$ generated by the colours $\mathsf{C}$. We identify $\bP^+_\mathsf{C}$ with the monoid $\mathbb{N}^N$ via the map
\begin{equation}
\label{eq:monoid}
\mathbb{N}^N \to \bP^+_\mathsf{C}, \qquad (n_1, \cdots, n_N) \mapsto \sum_{i = 1}^N n_i \vartheta_i.
\end{equation}
In this way, the degree of a path in our higher rank graph will be given by a weight in $\bP^+_\mathsf{C}$.
We also fix the dominant weight
\( \displaystyle
\rho_\mathsf{C} := \sum_{i = 1}^N \vartheta_i \in \bP^+_\mathsf{C}.
\)
\begin{definition}
\label{def:vertices_and_paths}
Let $\mathfrak{g}$ be a complex semisimple Lie algebra and fix a set of colours $\mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ as above.
We define the pair $(\hgraph^0, \Lambda_{\lie{g}, \Cset})$ as follows.
\begin{itemize}
\item The \emph{set of vertices} consists of the $\mathsf{C}$-right ends of $\mathcal{B}(\rho_\mathsf{C})$, that is
\[
\hgraph^0 := \left\{ \mathsf{R}_\mathsf{C}(b) \mid b \in \mathcal{B}(\rho_\mathsf{C}) \right\} \subset \mathcal{B}(\vartheta_1) \times \cdots \times \mathcal{B}(\vartheta_N).
\]
\item The \emph{set of paths} $\Lambda_{\lie{g}, \Cset}$ is the set of pairs $(v, b)$, where $v = (c_1, \cdots, c_N) \in \hgraph^0$ and $b \in \mathcal{B}(\lambda)$ for some $\lambda \in \bP^+_\mathsf{C}$ such that $c_i\otimes b$ is in the Cartan component of $\mathcal{B}(\vartheta_i)\otimes\mathcal{B}(\lambda)$ for every colour $i$.
\end{itemize}
We define the \emph{degree} of the path $e = (v, b)$ above to be $\mathsf{d}(e) = \lambda$, where we use the identification $\bP^+_\mathsf{C}\cong\mathbb{N}^N$ from \eqref{eq:monoid}.
\end{definition}
Some remarks are in order. Firstly, we can identify the vertex set $\hgraph^0$ with the set $\Lambda_{\lie{g}, \Cset}^0$ of paths of degree $0$ via the map $v\mapsto(v, b_0)$, where $b_0 \in \mathcal{B}(0)$ is the unique element of the trivial crystal.
Secondly, note that in the definition of the vertex set $\hgraph^0$, we could equally well use the set of $\mathsf{C}$-right ends of any crystal $\mathcal{B}(\mu)$ with highest weight $\mu \geq \rho_\mathsf{C}$, thanks to \cref{prop:set_of_ends}.
\begin{lemma}
\label{lem:right_end_of_a_path}
Let $v \in \hgraph^0$ and fix $c \in \mathcal{B}(\mu)$
with $\mathsf{R}_\mathsf{C}(c) = (c_1,\cdots,c_N)=v$, for some $\mu \in\bP^+_\mathsf{C}$ with $\mu \geq \rho_\mathsf{C}$. Let also $b\in\mathcal{B}(\lambda)$ for some $\lambda\in\bP^+_\mathsf{C}$. Then $(v,b)\in\Lambda_{\lie{g}, \Cset}$ if and only if $c\otimes b$ is in the Cartan component of $\mathcal{B}(\mu)\otimes\mathcal{B}(\lambda)$. In this case, for every $\vartheta_i\in\mathsf{C}$, we have $\mathsf{R}_{\vartheta_i}(c_i\otimes b) = \mathsf{R}_{\vartheta_i}(c\otimes b)$.
\end{lemma}
\begin{proof}
First, suppose that $c\otimes b$ is in the Cartan component of $\mathcal{B}(\mu) \otimes \mathcal{B}(\lambda)$. Then \cref{prop:right_end_independence}(2) shows that $\mathsf{R}_{\vartheta_i}(c_i \otimes b) = \mathsf{R}_{\vartheta_i}(c \otimes b) \neq 0$ for all $i$, so $(v,b) \in \Lambda_{\lie{g}, \Cset}$.
Conversely, suppose that $(v,b) \in \Lambda_{\lie{g}, \Cset}$, so that $c_i \otimes b$ is in the Cartan component of $\mathcal{B}(\vartheta_i)\otimes\mathcal{B}(\lambda)$ for every $i$.
We can write $\mu = \sum_{i=1}^N n_i \vartheta_i$ with $n_i > 0$ for every $i$, and we put $|\mu| = \sum_i n_i$. We have an inclusion of crystals
\[
\mathcal{B}(\mu) \hookrightarrow \mathcal{B}(\vartheta_1)^{\otimes n_1} \otimes \cdots \otimes \mathcal{B}(\vartheta_N)^{\otimes n_{N}}
\]
as the Cartan component on the right-hand side. Let us identify $c$ with its image $c\mapsto a_{1}\otimes\cdots\otimes a_{|\mu|}$ in the Cartan component. We can then identify $c\otimes b$ with its image $a_1 \otimes\cdots\otimes a_{|\mu|} \otimes b$ in $\mathcal{B}(\vartheta_1)^{\otimes n_1} \otimes \cdots \otimes \mathcal{B}(\vartheta_N)^{\otimes n_{N}} \otimes \mathcal{B}(\lambda)$. For each $k \in \{1, \cdots, |\mu|\}$, if the $k$-th factor in the tensor product above is $\mathcal{B}(\vartheta_i)$, then using \cref{prop:rightend-sigma} we have
\begin{align*}
\braid_{|\mu|} \circ & \cdots \circ \braid_k (a_1\otimes\cdots\otimes a_{|\mu|} \otimes b) \\
&= \braid_{|\mu|}(a_1\otimes\cdots\otimes a_{k-1} \otimes a_k' \otimes \cdots \otimes a'_{|\mu|-1}\otimes c_i \otimes b) \\
&= a_1\otimes\cdots\otimes a_{k-1} \otimes a_k' \otimes \cdots \otimes a'_{|\mu|-1}\otimes b' \otimes \mathsf{R}_{\vartheta_i}(c_i\otimes b) \neq 0
\end{align*}
for some $a'_k, \cdots, a'_{|\mu|-1}$ and some $b'$. By \cref{prop:rightend-sigma} again, we deduce that $c\otimes b$ is in the Cartan component. Moreover, $\mathsf{R}_{\vartheta_i}(c\otimes b) = \mathsf{R}_{\vartheta_i}(c_i\otimes b)$, which proves the final statement.
\end{proof}
In particular, \cref{lem:right_end_of_a_path} shows that $\mathsf{R}_\mathsf{C}(c\otimes b)$ depends only on the right end $v = \mathsf{R}_\mathsf{C}(c) \in \hgraph^0$ of $c$, and not on the choice of the element $c$ representing it. Therefore, the following definition makes sense.
\begin{definition}
\label{def:range_and_source}
We define the \emph{source} and \emph{range} maps $\mathsf{s}, \mathsf{r}: \Lambda_{\lie{g}, \Cset} \to \hgraph^0$ as follows.
Let $e = (v, b)$ be a path with $v = (c_1, \cdots, c_N)$. Then we define
\[
\mathsf{s}(e) := v, \qquad
\mathsf{r}(e) := \left( \mathsf{R}_{\vartheta_1}(c_1 \otimes b) , \cdots, \mathsf{R}_{\vartheta_N}(c_N \otimes b) \right).
\]
Equivalently, choosing $c \in \mathcal{B}(\rho_\mathsf{C})$ such that $\mathsf{R}_\mathsf{C}(c) = v$, we have $\mathsf{r}(e) = \mathsf{R}_\mathsf{C}(c \otimes b)$.
\end{definition}
As before, in the equality $\mathsf{r}(e) = \mathsf{R}_\mathsf{C}(c \otimes b)$ we can replace $c \in \mathcal{B}(\rho_\mathsf{C})$ with any crystal element $c \in \mathcal{B}(\mu)$, where $\mu \in \bP^+_\mathsf{C}$ and $\mu \geq \rho_\mathsf{C}$, such that $\mathsf{R}_\mathsf{C}(c) = v$.
\begin{theorem}
\label{thm:hgraph}
The set $\Lambda_{\lie{g}, \Cset}$ becomes a higher-rank graph of rank $N$ with:
\begin{itemize}
\item degree map $\mathsf{d}$ as in \cref{def:vertices_and_paths},
\item source and range maps as in \cref{def:range_and_source},
\item composition of paths $(v, b), (v', b') \in \Lambda_{\lie{g}, \Cset}$ with $\mathsf{r}(v, b) = \mathsf{s}(v', b')$ defined by
\begin{equation}
\label{eq:composition}
(v',b') \cdot (v,b) = (v, \phi(b \otimes b')),
\end{equation}
where $\phi: \mathcal{B}(\lambda) \otimes \mathcal{B}(\lambda') \to \mathcal{B}(\lambda + \lambda')$ denotes the projection onto the Cartan component.
\end{itemize}
\end{theorem}
\begin{proof}
First we need to show that the composition law is well-defined. Let $(v,b)$ and $(v',b')\in\Lambda_{\lie{g}, \Cset}$ with $\mathsf{r}(v,b) = \mathsf{s}(v',b')$. By \cref{lem:right_end_of_a_path}, this means that if we fix any $c \in \mathcal{B}(\rho_\mathsf{C})$ with $\mathsf{R}_\mathsf{C}(c) = v$, then $c\otimes b$ is in the Cartan component of $\mathcal{B}(\rho_\mathsf{C}) \otimes \mathcal{B}(\lambda)$ and $\mathsf{R}_\mathsf{C}(c \otimes b) = v'$. Again using \cref{lem:right_end_of_a_path}, we get that $c \otimes b \otimes b'$ is in the Cartan component of $\mathcal{B}(\rho_\mathsf{C}) \otimes \mathcal{B}(\lambda) \otimes \mathcal{B}(\lambda')$, and hence $c \otimes \phi(b \otimes b')$ is in the Cartan component of $\mathcal{B}(\rho_\mathsf{C}) \otimes \mathcal{B}(\lambda + \lambda')$. This proves that $(v, \phi(b\otimes b')) \in \Lambda_{\lie{g}, \Cset}$.
Associativity follows from the associativity of the tensor product of crystals. The identity arrows are the paths of length $0$, see the remarks immediately after \cref{def:vertices_and_paths}. Thus $\Lambda_{\lie{g}, \Cset}$ is a (small) category.
If $\mathsf{d}(v, b) = \lambda$ and $\mathsf{d}(v',b') = \lambda'$ then clearly $\mathsf{d}(v,\phi(b\otimes b')) = \lambda + \lambda'$. Conversely, if $\mathsf{d}(v,b'') = \lambda + \lambda'$, then $b''\in\mathcal{B}(\lambda+\lambda')$ and there is a unique element $b\otimes b'$ in the Cartan component of $\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda')$ with $\phi(b\otimes b')=b''$. Fix any $c\in\mathcal{B}(\rho_\mathsf{C})$ with $\mathsf{R}_\mathsf{C}(c)=v$. Since $(v,b'')\in\Lambda_{\lie{g}, \Cset}$, we have that $c\otimes b \otimes b'$ is in the Cartan component of $\mathcal{B}(\rho_\mathsf{C})\otimes\mathcal{B}(\lambda)\otimes\mathcal{B}(\lambda')$, hence both $(\mathsf{R}_\mathsf{C}(c),b)=(v,b)$ and $(\mathsf{R}_\mathsf{C}(c\otimes b),b')$ are in $\Lambda_{\lie{g}, \Cset}$. If we put $v'=\mathsf{R}_\mathsf{C}(c\otimes b)$ then $\mathsf{r}(v,b) = v'$, and we get a factorization $(v,b'') = (v',b') \cdot (v,b)$. This factorization is unique, by the uniqueness of $b$ and $b'$.
\end{proof}
We describe this structure in some detail in the case of our running example.
\begin{example}
\label{ex:graph-su3}
Consider $\mathfrak{g} = \mathfrak{sl}_3$ with the colours $\mathsf{C} = (\varpi_1, \varpi_2)$.
We want to determine the $2$-graph corresponding to this case.
We can identify the crystal $\mathcal{B}(\rho) = \mathcal{B}(\varpi_1 + \varpi_2)$ with the Cartan component of the product $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2)$, which consists of $8$ elements.
Then using the Cartan braiding \eqref{eq:sl3-braiding} from \cref{ex:braiding-sl3}, \cref{prop:rightend-sigma} gives
\[
\begin{array}{lll}
\mathsf{R}_\mathsf{C}(a_1 \otimes b_1) = (a_1, b_1), \quad &
\mathsf{R}_\mathsf{C}(a_2 \otimes b_1) = (a_2, b_1), \quad &
\mathsf{R}_\mathsf{C}(a_3 \otimes b_1) = (a_2, b_1), \\
\mathsf{R}_\mathsf{C}(a_1 \otimes b_2) = (a_1, b_2), &
\mathsf{R}_\mathsf{C}(a_2 \otimes b_2) = (a_1, b_2), &
\mathsf{R}_\mathsf{C}(a_3 \otimes b_2) = (a_3, b_2), \\
&
\mathsf{R}_\mathsf{C}(a_2 \otimes b_3) = (a_2, b_3), &
\mathsf{R}_\mathsf{C}(a_3 \otimes b_3) = (a_3, b_3).
\end{array}
\]
From these $8$ elements we obtain $6$ distinct right ends, namely
\begin{align*}
v_1 & = (a_1, b_1), &
v_2 & = (a_1, b_2), &
v_3 & = (a_2, b_1), \\
v_4 & = (a_2, b_3), &
v_5 & = (a_3, b_2), &
v_6 & = (a_3, b_3).
\end{align*}
Next, we determine the edges (paths of length $1$) of colour $\varpi_1$.
Recall that these consist of pairs $((a_i, b_j), a_k)$ such that $a_i \otimes a_k$ and $b_j \otimes a_k$ are in their respective Cartan components. Using the graphs from \cref{ex:braiding-sl3} we find
\begin{align*}
e_1 & = (v_1, a_1), &
e_2 & = (v_2, a_1), &
e_3 & = (v_3, a_1), &
e_4 & = (v_3, a_2), \\
e_5 & = (v_4, a_1), &
e_6 & = (v_4, a_2), &
e_7 & = (v_5, a_1), &
e_8 & = (v_5, a_2), \\
e_9 & = (v_5, a_3), &
e_{10} & = (v_6, a_1), &
e_{11} & = (v_6, a_2), &
e_{12} & = (v_6, a_3).
\end{align*}
The sources and ranges $(\mathsf{s}(e_i),\mathsf{r}(e_i))$ of these edges are given by
\begin{align*}
e_1 & : (v_1, v_1), &
e_2 & : (v_2, v_2), &
e_3 & : (v_3, v_1), &
e_4 & : (v_3, v_3), \\
e_5 & : (v_4, v_2), &
e_6 & : (v_4, v_4), &
e_7 & : (v_5, v_2), &
e_8 & : (v_5, v_3), \\
e_9 & : (v_5, v_5), &
e_{10} & : (v_6, v_2), &
e_{11} & : (v_6, v_4), &
e_{12} & : (v_6, v_6).
\end{align*}
This gives the following portion of the graph, which we depict in red
\begin{center}
\begin{tikzpicture}[
vertex/.style = {align=center, inner sep=2pt},
Rarr/.style = {->, red},
Barr/.style = {->, blue, dotted},
shadow/.style = {white, line width=3pt},
Rloop/.style = {->, red, out=165, in=195, loop},
Bloop/.style = {->, blue, out=15, in=-15, loop, dotted}
]
\node (v1) at ( 0, 0) [vertex] {$v_1$};
\node (v2) at (-2,-1) [vertex] {$v_2$};
\node (v3) at ( 2,-1) [vertex] {$v_3$};
\node (v4) at (-2,-2) [vertex] {$v_4$};
\node (v5) at ( 2,-2) [vertex] {$v_5$};
\node (v6) at ( 0,-3) [vertex] {$v_6$};
\draw [Rloop] (v1) edge (v1);
\draw [Rloop] (v2) edge (v2);
\draw [Rarr] (v3) edge (v1);
\draw [Rloop] (v3) edge (v3);
\draw [Rarr] (v4) edge (v2);
\draw [Rloop] (v4) edge (v4);
\draw [Rarr] (v5) edge (v2);
\draw [Rarr] (v5) edge (v3);
\draw [Rloop] (v5) edge (v5);
\draw [Rarr] (v6) edge (v2);
\draw [Rarr] (v6) edge (v4);
\draw [Rloop] (v6) edge (v6);
\end{tikzpicture}
\end{center}
Similar computations can be made for the colour $\varpi_2$, or alternatively we can employ the obvious symmetry coming from the Dynkin diagram automorphism. This leads to the $2$-graph presented in the introduction.
\end{example}
\subsection{Properties of the higher-rank graph}
\label{sec:graph-properties}
We now prove that our graphs satisfy the properties from \cref{def:row_finite_and_no_sources}.
\begin{proposition}
\label{prop:nice_graph}
The $N$-graph $\Lambda_{\lie{g}, \Cset}$ is row-finite and has no sources and no sinks.
\end{proposition}
\begin{proof}
Row-finiteness is clear: a path of degree $\lambda$ is a pair $(v',b)$ with $v'\in\hgraph^0$ and $b\in\mathcal{B}(\lambda)$, so $v\Lambda_{\lie{g}, \Cset}^\lambda \subseteq \hgraph^0\times\mathcal{B}(\lambda)$, which is finite.
Now let $v\in\hgraph^0$ and $\lambda\in\bP^+_\mathsf{C}$. Pick $c\in\mathcal{B}(\rho_\mathsf{C})$ with $\mathsf{R}_\mathsf{C}(c)=v$. Let $b_\lambda$ and $b_{w_0\lambda}$ denote the highest and lowest weight elements of $\mathcal{B}(\lambda)$, respectively. By \cref{lem:tensor_fact}, $c\otimes b_\lambda$ belongs to the Cartan component of $\mathcal{B}(\rho_\mathsf{C})\otimes\mathcal{B}(\lambda)$ and so $(v,b_\lambda)\in \Lambda_{\lie{g}, \Cset}^\lambda v$.
Likewise, $b_{w_0\lambda}\otimes c$ is in the Cartan component of $\mathcal{B}(\lambda)\otimes\mathcal{B}(\rho_\mathsf{C})$. Let $c'\otimes b = \braid_{\mathcal{B}(\lambda),\mathcal{B}(\rho_\mathsf{C})}(b_{w_0\lambda}\otimes c)$. Then $\mathsf{R}_\mathsf{C}(c'\otimes b) = \mathsf{R}_\mathsf{C}(b_{w_0\lambda}\otimes c) = \mathsf{R}_\mathsf{C}(c) = v$ by \cref{prop:right_end_independence}, so putting $v'=\mathsf{R}_\mathsf{C}(c')$ we have $(v',b)\in v\Lambda_{\lie{g}, \Cset}^\lambda$.
\end{proof}
The higher rank graph $\Lambda_{\lie{g}, \Cset}$ is compatible with a partial ordering on the vertices, as we now describe.
Firstly, every irreducible crystal $\mathcal{B}(\lambda)$ admits a partial ordering by declaring that $b\leq b'$ if and only if $b=\tilde{F} b'$, where $\tilde{F} = \tilde{F}_{i_1} \cdots \tilde{F}_{i_m}$ is some (possibly empty) product of the Kashiwara operators $\tilde{F}_i$. Then for $N$-tuples $v = ( b_1, \cdots, b_N )$ and $v^\prime = (b_1^\prime, \cdots, b_N^\prime)$ in $\hgraph^0$, we write $v\leq v'$ if and only if $b_i\leq b_i'$ for every $i$.
\begin{proposition}
Consider the partial order $\leq$ on $\hgraph^0$ defined above. Then:
\begin{enumerate}
\item[1)] for any $e \in \Lambda_{\lie{g}, \Cset}$ we have $\mathsf{s}(e) \leq \mathsf{r}(e)$,
\item[2)] there are unique maximal and minimal elements in $\hgraph^0$, namely
\begin{align*}
& v_{\mathrm{max}} =(b_{\vartheta_1}, \cdots, b_{\vartheta_N}), && v_{\mathrm{min}} = (b_{w_0\vartheta_1}, \cdots, b_{w_0\vartheta_N}),
\end{align*}
where $b_\lambda$ and $b_{w_0\lambda}$ denote the highest and lowest weight elements of $\mathcal{B}(\lambda)$, respectively.
\end{enumerate}
\end{proposition}
\begin{proof}
1) Consider a path $e = (v, b)$ with $b \in \mathcal{B}(\lambda)$.
Write $v = ( b_1, \cdots, b_N )$ for $\mathsf{s}(e)$ and $v'=(b_1', \cdots, b_N')$ for $\mathsf{r}(e)$. Fixing $i$, we have $b'_i = \mathsf{R}_{\vartheta_i}(b_i\otimes b)$, so by \cref{prop:rightend-sigma} we have $\braid_{\mathcal{B}(\vartheta_i),\mathcal{B}(\lambda)} : b_i\otimes b \mapsto c \otimes b'_i$ for some $c\in\mathcal{B}(\lambda)$. Considering the structure of the braiding operators $\hat{R}$ fixed in \eqref{eq:R-matrix_convention}, we obtain that $b_i\leq b_i'$. The result follows.
2) Consider any $\lambda, \mu \in \bP^+$ with $\lambda \geq \mu$.
The inclusion $\mathcal{B}(\lambda) \hookrightarrow \mathcal{B}(\lambda - \mu) \otimes \mathcal{B}(\mu)$ maps $b_\lambda$ to $b_{\lambda-\mu}\otimes b_\mu$, from which we see that $\mathsf{R}_\mu(b_\lambda)=b_\mu$. It follows that $\mathsf{R}_\mathsf{C}(b_{\rho_\mathsf{C}}) = (b_{\vartheta_1}, \cdots, b_{\vartheta_N}) \in \hgraph^0$, and this element is clearly maximal. The result for $v_{\mathrm{min}}$ is obtained in a similar way.
\end{proof}
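For instance, in \cref{ex:graph-su3}, if the labelling is such that $a_1$ and $b_1$ are the highest weight elements and $a_3$ and $b_3$ the lowest weight ones, then $v_{\mathrm{max}} = (a_1, b_1) = v_1$ and $v_{\mathrm{min}} = (a_3, b_3) = v_6$, and one checks directly on the list of edges given there that $\mathsf{s}(e_i) \leq \mathsf{r}(e_i)$ for every edge $e_i$ of colour $\varpi_1$.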
\section{The crystal algebra}
\label{sec:crystal-algebra}
\subsection{Definition of the algebra}
In this section we introduce an abstract $*$-algebra $\cA_{\lie{k}}$ which is universal for the generators and relations for the crystal limit $\cO[K_0]$ given in \cref{thm:pi0_relations}, so that we get a surjective $*$-homomorphism $\cA_{\lie{k}} \to \cO[K_0]$.
It provides a convenient bridge between $\cO[K_0]$ and the higher-rank graph algebra $\mathrm{KP}_\CC(\hgraphstd)$.
Ultimately, we are going to prove that these three algebras are all $*$-isomorphic.
As in \cref{sec:crystal_N-graph}, we will anticipate the case of flag varieties by fixing a linearly independent family of dominant weights $\mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ and writing $\bP^+_\mathsf{C} = \mathbb{N} \cdot \mathsf{C}$ for the monoid generated by the colours.
\begin{definition}
\label{def:crystal_algebra}
We define the \emph{crystal algebra} $\cA_{\lie{g}, \Cset}$ (with respect to the Lie algebra $\mathfrak{g}$ and the colours $\mathsf{C}$) to be the unital $\mathbb{C}$-algebra generated by elements $\{\mathsf{f}_b, \mathsf{v}_b \mid \lambda\in\bP^+_\mathsf{C}, b \in \mathcal{B}(\lambda)\}$ with the following relations:
\begin{enumerate}
\item for any $\lambda, \lambda' \in \bP^+_\mathsf{C}$ and $b \in \mathcal{B}(\lambda)$, $b' \in \mathcal{B}(\lambda')$,
\begin{align}
\label{eq:multiplication-rules}
&\mathsf{f}_b \mathsf{f}_{b^\prime} = \Cartan(b \otimes b^\prime) \mathsf{f}_{b^{\prime \prime}}, &
&\mathsf{v}_{b^\prime} \mathsf{v}_b = \Cartan(b \otimes b^\prime) \mathsf{v}_{b^{\prime \prime}},
\end{align}
where we write $b \otimes b^\prime \mapsto b^{\prime \prime}$ under the unique surjective crystal morphism $\mathcal{B}(\lambda) \otimes \mathcal{B}(\lambda^\prime) \to \mathcal{B}(\lambda + \lambda^\prime)$;
\item for any $\lambda, \lambda' \in \bP^+_\mathsf{C}$ and $b \in \mathcal{B}(\lambda)$, $b' \in \mathcal{B}(\lambda')$,
\begin{equation}
\label{eq:cross-relations}
\mathsf{f}_b \mathsf{v}_{b^\prime} = \sum_{(c, c')} \mathsf{v}_{c^\prime} \mathsf{f}_c,
\end{equation}
where the sum is over all pairs $(c, c') \in \mathcal{B}(\lambda) \times \mathcal{B}(\lambda')$ such that the condition $\braid(c \otimes b^\prime) = c^{\prime} \otimes b$ holds;
\item for any $\lambda \in \bP^+_\mathsf{C}$ we have
\begin{equation}
\label{eq:unitality}
\sum_{b \in \mathcal{B}(\lambda)} \mathsf{v}_b \mathsf{f}_b = 1.
\end{equation}
\end{enumerate}
For the fundamental weights $\mathsf{C} = {\boldsymbol{\Pi}}$ we simply write $\cA_{\lie{g}}$ instead of $\cA_{\lie{g}, \fundweights}$.
\end{definition}
\begin{remark}
Note that, by the first relation, we could reduce the set of generators to those $\mathsf{f}_b$, $\mathsf{v}_b$ with $b\in\mathcal{B}(\vartheta_i)$ for $\vartheta_i\in\mathsf{C}$.
Hence the algebra is finitely generated.
\end{remark}
As in Remark \ref{rmk:exchange_relations}, the relations \eqref{eq:multiplication-rules} imply the exchange relations
\begin{align}
\label{eq:exchange_relations}
& \mathsf{f}_b \mathsf{f}_{b'} = \mathsf{f}_{c'} \mathsf{f}_c, &
& \mathsf{v}_{b'} \mathsf{v}_b = \mathsf{v}_c \mathsf{v}_{c'},
\end{align}
whenever $b\otimes b'$ is in the Cartan component of $\mathcal{B}(\lambda) \otimes \mathcal{B}(\lambda')$ and $\braid(b \otimes b') = c'\otimes c$. On the other hand, $\mathsf{f}_b \mathsf{f}_{b'} = \mathsf{v}_{b'} \mathsf{v}_b = 0$ if $b\otimes b'$ is not in the Cartan component.
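For instance, in the $\mathfrak{sl}_3$ setting of \cref{ex:graph-su3}, the element $a_1 \otimes b_3$ is the unique element of $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2)$ lying outside the Cartan component (cf.\ the table there), so that $\mathsf{f}_{a_1} \mathsf{f}_{b_3} = \mathsf{v}_{b_3} \mathsf{v}_{a_1} = 0$ in $\cA_{\mathfrak{sl}_3}$.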
In the case $\lambda = \lambda'$, the relations \eqref{eq:exchange_relations} and \eqref{eq:cross-relations} simplify further: for any elements $b, b' \in \mathcal{B}(\lambda)$ we have
\begin{equation}
\label{eq:lambda-lambda_relations}
\begin{gathered}
\mathsf{f}_b \mathsf{f}_{b^\prime} = \Cartan(b \otimes b^\prime) \mathsf{f}_b \mathsf{f}_{b^\prime}, \qquad
\mathsf{v}_{b^\prime} \mathsf{v}_b = \Cartan(b \otimes b^\prime) \mathsf{v}_{b^\prime} \mathsf{v}_b, \\
\mathsf{f}_b \mathsf{v}_{b^\prime} = \delta_{b, b^\prime} \sum_{c \in \mathcal{B}(\lambda)} \Cartan(c \otimes b) \mathsf{v}_c \mathsf{f}_c.
\end{gathered}
\end{equation}
\begin{proposition}
We have a $*$-structure on $\cA_{\lie{g}, \Cset}$ given by $\mathsf{f}^*_b = \mathsf{v}_b$.
\end{proposition}
\begin{proof}
The only relation of $\cA_{\lie{g}, \Cset}$ which needs some checking is \eqref{eq:cross-relations}.
By the properties of the Cartan braiding $\braid$, the relation $\braid(b \otimes b') = c' \otimes c$ is equivalent to $\braid(c' \otimes c) = b \otimes b'$. Then we have
\[
\left(\sum_{(c, c')} \mathsf{v}_{c'} \mathsf{f}_{c}\right)^* = \sum_{(c, c')} \mathsf{v}_c \mathsf{f}_{c'},
\]
where on both sides the sum is over pairs $(c,c')$ as in \eqref{eq:cross-relations}. The result follows.
\end{proof}
We write $\cA_{\lie{k}, \Cset}$ for the crystal algebra $\cA_{\lie{g}, \Cset}$ equipped with this $*$-structure.
When $\mathsf{C} = {\boldsymbol{\Pi}}$ we simply write $\cA_{\lie{k}}$.
The following statement now follows immediately from \cref{thm:pi0_relations}.
\begin{proposition}
\label{prop:AC_universal_map}
There is a unique surjective morphism of $*$-algebras $\cA_{\lie{k}} \to \cO[K_0]$ defined on generators as follows: for any $\lambda \in \bP^+$ and any $b_i \in \mathcal{B}(\lambda)$ we have
\begin{equation*}
\mathsf{f}_{b_i} \mapsto \pi_0(\mathsf{f}_i^\lambda), \quad
\mathsf{v}_{b_i} \mapsto \pi_0(\mathsf{v}_i^\lambda),
\end{equation*}
where $\mathsf{f}_i^\lambda$ and $\mathsf{v}_i^\lambda$ are as in Equation \eqref{eq:genf_genv}.
With notation as in \cref{sec:flag_manifolds},
if $K$ is a compact connected semisimple Lie group, not necessarily simply connected, with complexified Lie algebra $\mathfrak{k}_\mathbb{C} = \mathfrak{g}$, $S \subset {\boldsymbol{\Delta}}$ is a set of simple roots, and $Y_S = K/K^0_S$ is the associated torus bundle over the flag manifold $X_S = K / K_S$, then the above morphism restricts to a surjective $*$-morphism $\cA_{\lie{k}, \Cset} \to \mathcal{O}[Y_{S,0}]$ where $\mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ is the family of linearly independent dominant weights which generate $\bP^+_{K,S}$.
\end{proposition}
The following notation is useful.
\begin{definition}
\label{def:f_f_b}
Given $b = b_1 \otimes \cdots \otimes b_n \in \mathcal{B}(\lambda_1) \otimes \cdots \otimes \mathcal{B}(\lambda_n)$, we write
\[
\mathsf{f}_b := \mathsf{f}_{b_1} \cdots \mathsf{f}_{b_n}, \quad
\mathsf{v}_b := \mathsf{v}_{b_n} \cdots \mathsf{v}_{b_1}.
\]
Note that $\mathsf{f}_b^* = \mathsf{v}_b$. We also adopt the useful convention $\mathsf{f}_0 = \mathsf{v}_0 := 0$.
\end{definition}
\begin{proposition}
\label{prop:commutation-relations}
Let $b \in \mathcal{B}(\lambda_1) \otimes \cdots \otimes \mathcal{B}(\lambda_n)$, and let $\phi:\mathcal{B}(\lambda_1)\otimes\cdots\otimes\mathcal{B}(\lambda_n) \to \mathcal{B}(\lambda_1+\cdots+\lambda_n)$ denote the projection onto the Cartan component. Then
\[
\mathsf{f}_b = \mathsf{f}_{\phi(b)}, \quad
\mathsf{v}_b = \mathsf{v}_{\phi(b)}.
\]
In particular $\mathsf{f}_b = \mathsf{v}_b = 0$ if $b$ is not in the Cartan component.
\end{proposition}
\begin{proof}
This follows by an inductive argument using the relations \eqref{eq:multiplication-rules}.
\end{proof}
\begin{remark}
\cref{prop:commutation-relations} implies the following generalization of \eqref{eq:exchange_relations}: with $b$ as above, we have $\mathsf{f}_b = \mathsf{f}_{\braid_s(b)}$ and $\mathsf{v}_b = \mathsf{v}_{\braid_s(b)}$ for every $s\in S_n$.
\end{remark}
\subsection{Projections associated to crystal elements}
Fix $\lambda\in\bP^+_\mathsf{C}$. For any $b \in \mathcal{B}(\lambda)$ we define
\[P_b := \mathsf{v}_b \mathsf{f}_b \in \cA_{\lie{k}, \Cset}.
\]
As we now show, for each fixed $\lambda \in \bP^+_\mathsf{C}$, $\{P_b\}_{b \in \mathcal{B}(\lambda)}$ is a set of mutually orthogonal projections summing to $1$.
\begin{proposition}
\label{prop:projections-sameweight}
For any $b, b^\prime \in \mathcal{B}(\lambda)$ we have
\[
P_b^* = P_b, \quad
P_b P_{b^\prime} = \delta_{b, b^\prime} P_b, \quad
\sum_{b \in \mathcal{B}(\lambda)} P_b = 1.
\]
\end{proposition}
\begin{proof}
The first relation follows from the $*$-structure $\mathsf{f}^*_b = \mathsf{v}_b$ and the third relation is just the unitality relation \eqref{eq:unitality}.
For the second we use the relations \eqref{eq:lambda-lambda_relations}.
They immediately imply $P_b P_{b^\prime} = 0$ for $b \neq b^\prime$, while for $b = b^\prime$ we compute
\[
\begin{split}
P_b^2 & = \mathsf{v}_b \mathsf{f}_b \mathsf{v}_b \mathsf{f}_b
= \sum_{c \in \mathcal{B}(\lambda)} \Cartan(c \otimes b) \mathsf{v}_b \mathsf{v}_c \mathsf{f}_c \mathsf{f}_b \\
& = \sum_{c \in \mathcal{B}(\lambda)} \mathsf{v}_b \mathsf{v}_c \mathsf{f}_c \mathsf{f}_b = \mathsf{v}_b \mathsf{f}_b = P_b.
\qedhere
\end{split}
\]
\end{proof}
Now consider arbitrary projections $P_b$ and $P_{b^\prime}$ with $b \in \mathcal{B}(\lambda)$, $b^\prime \in \mathcal{B}(\lambda^\prime)$. Our next goal is to show that they commute even when $\lambda \neq \lambda^\prime$.
First we need the following result, which will also be useful elsewhere.
\begin{lemma}
\label{lem:relationPv}
Let $b \in \mathcal{B}(\lambda)$ and $b^\prime \in \mathcal{B}(\lambda^\prime)$. Then we have
\[
P_b \mathsf{v}_{b^\prime} = \sum_{\substack{c \in \mathcal{B}(\lambda) \\ \mathsf{R}_\lambda(c \otimes b^\prime) = b}} \mathsf{v}_{b^\prime} P_c.
\]
\end{lemma}
\begin{proof}
Using the cross-relations \eqref{eq:cross-relations} we have
\[
P_b \mathsf{v}_{b^\prime} = \mathsf{v}_b \mathsf{f}_b \mathsf{v}_{b^\prime} = \sum_{(c, c')} \mathsf{v}_b \mathsf{v}_{c^\prime} \mathsf{f}_c,
\]
where the sum is over all pairs $(c, c') \in \mathcal{B}(\lambda) \times \mathcal{B}(\lambda')$ such that $\braid(c \otimes b^\prime) = c^\prime \otimes b$.
For such pairs we have the relation $\mathsf{v}_b \mathsf{v}_{c^\prime} = \mathsf{v}_{b^\prime} \mathsf{v}_c$. This gives
\[
P_b \mathsf{v}_{b^\prime} = \sum_c \mathsf{v}_{b^\prime} \mathsf{v}_c \mathsf{f}_c = \sum_c \mathsf{v}_{b^\prime} P_c.
\]
Finally note that the condition $\braid(c \otimes b^\prime) = c^\prime \otimes b$ is equivalent to $\mathsf{R}_\lambda(c \otimes b^\prime) = b$.
\end{proof}
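To illustrate the lemma in the running $\mathfrak{sl}_3$ example, take $b = a_2 \in \mathcal{B}(\varpi_1)$ and $b' = b_1 \in \mathcal{B}(\varpi_2)$. The table in \cref{ex:graph-su3} gives $\mathsf{R}_{\varpi_1}(a_2 \otimes b_1) = \mathsf{R}_{\varpi_1}(a_3 \otimes b_1) = a_2$, while $\mathsf{R}_{\varpi_1}(a_1 \otimes b_1) = a_1$, so in this case the lemma reads
\[
P_{a_2} \mathsf{v}_{b_1} = \mathsf{v}_{b_1} \left( P_{a_2} + P_{a_3} \right).
\]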
Now we can show that the various projections commute.
\begin{proposition}
\label{prop:projections-commute}
Let $b \in \mathcal{B}(\lambda)$ and $b^\prime \in \mathcal{B}(\lambda^\prime)$. Then $P_b P_{b^\prime} = P_{b^\prime} P_b$.
\end{proposition}
\begin{proof}
We need to show that $P_b P_{b^\prime} = P_b \mathsf{v}_{b^\prime} \mathsf{f}_{b^\prime}$ and $P_{b^\prime} P_b = \mathsf{v}_{b^\prime} \mathsf{f}_{b^\prime} P_b$ are equal.
Using the relation from \cref{lem:relationPv} and its adjoint we have
\[
P_b \mathsf{v}_{b^\prime} \mathsf{f}_{b^\prime} = \sum_{\substack{c \in \mathcal{B}(\lambda) \\ \mathsf{R}_\lambda(c \otimes b^\prime) = b}} \mathsf{v}_{b^\prime} P_c \mathsf{f}_{b^\prime} = \mathsf{v}_{b^\prime} \mathsf{f}_{b^\prime} P_b. \qedhere
\]
\end{proof}
\subsection{Projections associated to sets of crystal elements}
Knowing that the projections $P_b$ all commute, we can associate projections to $n$-tuples of crystal elements, as in the following definition.
\begin{definition}
\label{def:P_B}
Let $\mathsf{S} = (\mu_1, \cdots, \mu_n)$ be any family of dominant weights and let $B = (b_1, \cdots, b_n)$ be an $n$-tuple with $b_i \in \mathcal{B}(\mu_i)$ for each $i$. We define the projection
\[
P_B := P_{b_1} \cdots P_{b_n}.
\]
\end{definition}
We will be particularly interested in the case where $\mathsf{S} = \mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ is a linearly independent family of dominant weights as in \cref{sec:flag_manifolds}, and $B=v\in\hgraph^0$ is a vertex of the higher-rank graph of \cref{sec:crystal_N-graph}.
The next result generalizes \cref{lem:relationPv}.
\begin{lemma}
\label{lem:commutation-projection-v}
Let $\mathsf{S}$ be as above and let $B = (b_1, \cdots, b_n)$ with $b_i \in \mathcal{B}(\mu_i)$. Fix any $\lambda \in \bP^+$.
Then for any $b \in \mathcal{B}(\lambda)$ we have
\[
P_B \mathsf{v}_{b} = \sum_{B'} \mathsf{v}_{b} P_{B^\prime},
\]
where the sum is over all families $B' = ( b_1^\prime, \cdots, b_n^\prime )$ with $b_i^\prime \in \mathcal{B}(\mu_i)$ satisfying
\[
\mathsf{R}_{\mu_i}(b_i'\otimes b) = b_i
\]
for every $i = 1, \cdots, n$.
\end{lemma}
\begin{proof}
We have $P_B = P_{b_1} \cdots P_{b_n}$ and using \cref{lem:relationPv} we compute
\[
P_B \mathsf{v}_{b} = \sum_{b_1^\prime: \mathsf{R}_{\mu_1}(b_1^\prime \otimes b) = b_1} \cdots \sum_{b_n^\prime: \mathsf{R}_{\mu_n}(b_n^\prime \otimes b) = b_n} \mathsf{v}_{b} P_{b_1^\prime} \cdots P_{b_n^\prime}.
\]
This gives the result.
\end{proof}
We now show how to rewrite the projections $P_B$ in a normal form in terms of the generators $\mathsf{v}_b$ and $\mathsf{f}_b$.
\begin{proposition}
\label{prop:P_B_sum}
Let $\mathsf{S} = (\mu_1, \cdots, \mu_n)$ and write $\mathcal{B} = \mathcal{B}(\mu_1) \otimes \cdots \otimes \mathcal{B}(\mu_n)$. For any $B = (b_1, \cdots, b_n)$ with $b_i\in\mathcal{B}(\mu_i)$ for each $i$, we have
\[
P_B = \sum_{\substack{b\in\mathcal{B} \\ \mathsf{R}_\mathsf{S}(b) = B}} \mathsf{v}_b \mathsf{f}_b.
\]
\end{proposition}
\begin{proof}
We work by induction on the size $n$ of $\mathsf{S}$. For $n=1$, the statement reduces to the definition of $P_{b_1}$.
Now suppose it holds for all sets with $n - 1$ elements.
Write \linebreak $\mathsf{S}_* = (\mu_1, \cdots, \mu_{n - 1} )$, $\mathcal{B}_* = \mathcal{B}(\mu_1) \otimes \cdots \otimes \mathcal{B}(\mu_{n-1})$ and $B_* = ( b_1, \cdots, b_{n - 1} )$.
Using this notation and \cref{lem:commutation-projection-v} we obtain
\[
P_B = P_{B_*} \mathsf{v}_{b_n} \mathsf{f}_{b_n} = \sum_{B'_*} \mathsf{v}_{b_n} P_{B'_*} \mathsf{f}_{b_n},
\]
where the sum is over all $B'_* = (b'_1,\cdots,b'_{n-1})$ with $b'_i\in\mathcal{B}(\mu_i)$ satisfying $\mathsf{R}_{\mu_i}(b_i'\otimes b_n) = b_i$ for all $i=1,\cdots,n-1$.
By the inductive assumption we know that
\[
P_{B'_*} = \sum_{\substack{b'\in\mathcal{B}_* \\ \mathsf{R}_{\mathsf{S}_*}(b')=B'_*}} \mathsf{v}_{b'} \mathsf{f}_{b'},
\]
and hence
\[
P_B = \sum_{b'} \mathsf{v}_{b'\otimes b_n} \mathsf{f}_{b'\otimes b_n},
\]
where now the sum is over all $b' \in \mathcal{B}_*$ such that $\mathsf{R}_{\mu_i}(\mathsf{R}_{\mu_i}(b') \otimes b_n) = b_i$ for all $i=1,\cdots, n-1$. But by \cref{prop:right_end_independence} this is equivalent to $\mathsf{R}_\mathsf{S}(b'\otimes b_n) = B$, so we are done.
\end{proof}
\begin{corollary}
\label{cor:projection-rightend}
Let $\mathsf{C} = (\vartheta_1,\cdots,\vartheta_N)$ be a family of linearly independent dominant weights. Fix $\lambda\in\bP^+_\mathsf{C}$ with $\lambda\geq\rho_\mathsf{C}$. Then for every $v = (b_1,\cdots,b_N)$ with $b_i\in\mathcal{B}(\vartheta_i)$, we have
\[
P_v = \sum_{\substack{b\in \mathcal{B}(\lambda) \\ \mathsf{R}_\mathsf{C}(b)=v}} \mathsf{v}_b \mathsf{f}_b.
\]
In particular, we have $P_v = 0$ if $v\notin\hgraph^0$.
\end{corollary}
\begin{proof}
Fix $v = (b_1, \cdots, b_N)$ as in the statement. Write $\lambda = \sum_i n_i \vartheta_i$ and put $\mathsf{S} = (\vartheta_{i_1}, \cdots, \vartheta_{i_{|\lambda|}})$, where each $\vartheta_i \in \mathsf{C}$ appears with multiplicity $n_i$. Put also $\mathcal{B} = \mathcal{B}(\vartheta_{i_1}) \otimes \cdots \otimes \mathcal{B}(\vartheta_{i_{|\lambda|}})$ and $B=(b_{i_1},\cdots,b_{i_{|\lambda|}})$ with the same multiplicities.
Applying \cref{prop:P_B_sum}, and using the fact that $(P_{b_i})^{n_i} = P_{b_i}$, we get
\[
P_v = P_B = \sum_{\substack{\tilde{b}\in\mathcal{B} \\ \mathsf{R}_\mathsf{S}(\tilde{b})=B}} \mathsf{v}_{\tilde{b}} \mathsf{f}_{\tilde{b}}.
\]
But $\mathsf{R}_\mathsf{S}(\tilde{b}) \neq 0$ if and only if $\tilde{b}$ is in the Cartan component, in which case its image $b$ under the projection to $\mathcal{B}(\lambda)$ satisfies $\mathsf{R}_\mathsf{S}(b) = B$ if and only if $\mathsf{R}_\mathsf{C}(b) = v$. The result follows.
\end{proof}
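For instance, in \cref{ex:graph-su3} the vertex $v_3 = (a_2, b_1)$ is the right end of exactly two elements of $\mathcal{B}(\rho)$, namely those corresponding to $a_2 \otimes b_1$ and $a_3 \otimes b_1$, so the corollary with $\lambda = \rho$ gives
\[
P_{v_3} = \mathsf{v}_{b_1} \mathsf{v}_{a_2} \mathsf{f}_{a_2} \mathsf{f}_{b_1} + \mathsf{v}_{b_1} \mathsf{v}_{a_3} \mathsf{f}_{a_3} \mathsf{f}_{b_1},
\]
where we used \cref{def:f_f_b} and \cref{prop:commutation-relations} to express each term through the corresponding element of the Cartan component of $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2)$.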
In \cref{cor:projections_nonzero} we will show that all $P_v$ with $v\in\hgraph^0$ are non-zero.
\subsection{Graph algebra relations}
Let $\mathsf{C} = (\vartheta_1, \cdots, \vartheta_N)$ be a family of linearly independent dominant weights and consider the corresponding higher-rank graph $\Lambda_{\lie{g}, \Cset}$, as in \cref{def:vertices_and_paths}.
For any vertex $v = (b_1, \cdots, b_N) \in \hgraph^0$ with $b_i \in \mathcal{B}(\vartheta_i)$, we have defined the projection $P_v = P_{b_1} \cdots P_{b_N}$.
Next, given a path $e \in \Lambda_{\lie{g}, \Cset}$ of the form $e = (v, b)$, we define
\[
S_e := \mathsf{v}_b P_v, \quad
S_e^* := P_v \mathsf{f}_b.
\]
Our goal is to show that the elements $\{P_v, S_e\}$ satisfy the relations of a Kumjian-Pask algebra, as in \cref{def:graph-algebra}.
\begin{remark}
\label{rmk:not_a_path}
If $(v,b) \in \hgraph^0 \times \mathcal{B}(\lambda)$ does not define an element of $\Lambda_{\lie{g}, \Cset}$, then we get $\mathsf{v}_b P_v = 0$. To see this, note that if $v = (c_1, \cdots, c_N)$ then by hypothesis $c_i \otimes b$ is not in the Cartan component of $\mathcal{B}(\vartheta_i) \otimes \mathcal{B}(\lambda)$ for some $i$. Therefore $\mathsf{v}_b P_{c_i} = \mathsf{v}_b \mathsf{v}_{c_i} \mathsf{f}_{c_i} = 0$, and the claim follows.
\end{remark}
We begin with the condition (KP1) concerning the elements $P_v$.
\begin{lemma}
The set $\{P_v\}_{v \in \hgraph^0}$ consists of mutually orthogonal projections.
Moreover we have $\sum_{v \in \hgraph^0} P_v = 1$.
\end{lemma}
\begin{proof}
Given a vertex $v = (b_1, \cdots, b_N) \in \hgraph^0$, we have the corresponding projection $P_v = P_{b_1} \cdots P_{b_N}$.
It follows easily from the commutativity of the projections and \cref{prop:projections-sameweight} that each $P_v$ is a self-adjoint projection. In the same way one shows that they are mutually orthogonal, that is $P_v P_{v^\prime} = \delta_{v, v^\prime} P_v$.
It remains to show that $\sum_{v \in \hgraph^0} P_v = 1$. It follows from \cref{prop:projections-sameweight} that
\[
\sum_{b_1 \in \mathcal{B}(\vartheta_1)} \cdots \sum_{b_N \in \mathcal{B}(\vartheta_N)} P_{b_1} \cdots P_{b_N} = 1.
\]
By \cref{cor:projection-rightend} we have $P_{b_1} \cdots P_{b_N} = 0$ unless $(b_1, \cdots, b_N) = \mathsf{R}_\mathsf{C}(b_1^\prime \otimes \cdots \otimes b_N^\prime)$ for some $b_1^\prime \otimes \cdots \otimes b_N^\prime$ in the Cartan component of $\mathcal{B}(\vartheta_1) \otimes \cdots \otimes \mathcal{B}(\vartheta_N)$.
Since the latter can be identified with $\mathcal{B}(\rho_\mathsf{C})$ we obtain the result.
\end{proof}
Next we look at the condition (KP2), which we divide into two parts for convenience.
\begin{lemma}
\label{lem:vertex-edge-relations}
For any $e \in \Lambda_{\lie{g}, \Cset}$ we have
\[
P_{\mathsf{r}(e)} S_e = S_e = S_e P_{\mathsf{s}(e)}.
\]
\end{lemma}
\begin{proof}
Consider a path $e = (v, b)$ with range $\mathsf{r}(e) = (c_1, \cdots, c_N)$ where $c_i \in \mathcal{B}(\vartheta_i)$.
Then using \cref{lem:commutation-projection-v} we obtain
\[
P_{\mathsf{r}(e)} S_e = P_{\mathsf{r}(e)} \mathsf{v}_b P_v = \sum_{v'} \mathsf{v}_b P_{v^\prime} P_v,
\]
where the sum is over all $v' = (c_1', \cdots, c_N') \in \mathcal{B}(\vartheta_1) \times \cdots \times \mathcal{B}(\vartheta_N)$ satisfying the condition $\mathsf{R}_{\vartheta_i}(c_i' \otimes b) = c_i$ for $i = 1, \cdots, N$.
Note that $v' = v$ satisfies this condition, by definition of the range map.
Then using the relation $P_{v'} P_v = \delta_{v', v} P_v$ we obtain $P_{\mathsf{r}(e)} S_e = \mathsf{v}_b P_v = S_e$.
The second identity follows immediately from the fact that $\mathsf{s}(e) = v$, since we have $S_e P_{\mathsf{s}(e)} = \mathsf{v}_b P_v P_v = S_e$.
\end{proof}
\begin{lemma}
Let $e_1, e_2 \in \Lambda_{\lie{g}, \Cset}$ be composable paths. Then $S_{e_1 \cdot e_2} = S_{e_1} S_{e_2}$.
\end{lemma}
\begin{proof}
Write $e_i = (v_i, b_i)$ for $i = 1, 2$.
We have $\mathsf{s}(e_1) = v_1 = \mathsf{r}(e_2)$, since $e_1$ and $e_2$ are composable paths. Then we compute
\[
S_{e_1} S_{e_2} = \mathsf{v}_{b_1} P_{v_1} S_{e_2} = \mathsf{v}_{b_1} P_{\mathsf{r}(e_2)} S_{e_2} = \mathsf{v}_{b_1} S_{e_2},
\]
where we have used \cref{lem:vertex-edge-relations}. Then we obtain $S_{e_1} S_{e_2} = \mathsf{v}_{b_1} \mathsf{v}_{b_2} P_{v_2}$.
Recall that the composition of paths is given by $e_1 \cdot e_2 = (v_2, \phi(b_2 \otimes b_1))$, where $\phi: \mathcal{B}(\lambda) \otimes \mathcal{B}(\lambda') \to \mathcal{B}(\lambda + \lambda')$ denotes the projection onto the Cartan component.
Then $\mathsf{v}_{b_1} \mathsf{v}_{b_2} = \mathsf{v}_{b_2 \otimes b_1} = \mathsf{v}_{\phi(b_2\otimes b_1)}$ and we get $S_{e_1} S_{e_2} = S_{e_1 \cdot e_2}$.
\end{proof}
Next we consider the condition (KP3), concerning source projections.
\begin{lemma}
For any $e, e' \in \Lambda_{\lie{g}, \Cset}$ with $\mathsf{d}(e) = \mathsf{d}(e')$ we have
\[
S_e^* S_{e'} = \delta_{e, e'} P_{\mathsf{s}(e)}.
\]
\end{lemma}
\begin{proof}
Write $e = (v, b)$ and $e' = (v', b')$ with $\mathsf{d}(e) = \mathsf{d}(e') = \lambda$, that is $b, b' \in \mathcal{B}(\lambda)$.
By \eqref{eq:lambda-lambda_relations} we have $\mathsf{f}_b \mathsf{v}_{b'} = \delta_{b, b'} \sum_{c \in \mathcal{B}(\lambda)} \Cartan(c \otimes b') P_c$ and so
\begin{equation}
\label{eq:pathstar-path-relation}
S_e^* S_{e'} = P_v \mathsf{f}_b \mathsf{v}_{b'} P_{v'} = \delta_{b, b'} \sum_{c \in \mathcal{B}(\lambda)} \Cartan(c \otimes b') P_v P_c P_{v'}.
\end{equation}
Recall that the projections commute and we have $P_v P_{v'} = \delta_{v, v'} P_v$ for $v, v' \in \hgraph^0$.
Then we find that $S_e^* S_{e'} = \delta_{e, e'} \sum_{c \in \mathcal{B}(\lambda)} \Cartan(c \otimes b) P_c P_v$.
If $e\neq e'$ we are done, so it remains to deal with the case $e = e'$.
We work by induction on the total length $|\lambda| = \sum_i n_i$, where $\lambda = \sum_{i = 1}^N n_i \vartheta_i$. If $|\lambda| = 0$, then we have $S^*_e S_e = P_v^*P_v = P_v$, so the lemma holds. If $|\lambda| = 1$, say $\lambda = \vartheta_i$, write $v = (c_1, \cdots, c_N)$; since $P_c P_{c_i} = \delta_{c, c_i} P_c$ for $c \in \mathcal{B}(\vartheta_i)$ by \cref{prop:projections-sameweight}, the sum above collapses to $S_e^* S_e = \Cartan(c_i \otimes b) P_v = P_v$, where the last equality holds because $(v, b)$ is a path.
Now suppose the lemma holds for any $e'$ of degree $\lambda'$ with $1 \leq |\lambda'| < n$, where $n \geq 2$, and consider $e = e'' \cdot e'$ with $e''$ of length $1$ and $e'$ of length $n - 1$.
Note that $\mathsf{s}(e'') = \mathsf{r}(e')$ since they are composable. Then we compute
\[
\begin{split}
S_e^* S_e & = S_{e'}^* S_{e''}^* S_{e''} S_{e'} = S_{e'}^* P_{\mathsf{s}(e'')} S_{e'} = S_{e'}^* P_{\mathsf{r}(e')} S_{e'} \\
& = S_{e'}^* S_{e'} = P_{\mathsf{s}(e')} = P_{\mathsf{s}(e)},
\end{split}
\]
where we use the relation $P_{\mathsf{r}(e)} S_e = S_e$ from \cref{lem:vertex-edge-relations}.
\end{proof}
Finally we look at the condition (KP4), concerning range projections.
Recall that $v \Lambda_{\lie{g}, \Cset}^n$ denotes the set of paths of degree $n$ and range $v$.
\begin{lemma}
For any $v \in \hgraph^0$ and $n \in \mathbb{N}^N$ we have
\[
P_v = \sum_{e \in v \Lambda_{\lie{g}, \Cset}^n} S_e S_e^*.
\]
\end{lemma}
\begin{proof}
We identify $n \in \mathbb{N}^N$ with the corresponding dominant weight $\lambda = \sum_i n_i\vartheta_i$.
Then we consider the sum $\sum_e S_e S_e^*$ over all paths of degree $\lambda$ and range $v$.
Let $e = (v', b) \in v \Lambda_{\lie{g}, \Cset}^n$.
From \cref{cor:projection-rightend} we have
\[
S_e S_e^* = \mathsf{v}_b P_{v'} \mathsf{f}_b =
\sum_{\substack{b' \in \mathcal{B}(\rho_\mathsf{C}) \\ \mathsf{R}_\mathsf{C}(b') = v'}} \mathsf{v}_b \mathsf{v}_{b'} \mathsf{f}_{b'} \mathsf{f}_b.
\]
Summing this over all such $e = (v', b) \in v \Lambda_{\lie{g}, \Cset}^n$, we get
\[
\sum_{e = (v', b)\in v \Lambda_{\lie{g}, \Cset}^n} S_e S_e^*
= \sum_{\substack{b \in \mathcal{B}(\lambda),~ b' \in \mathcal{B}(\rho_\mathsf{C}) \\ \mathsf{R}_\mathsf{C}(b' \otimes b) = v}}
\mathsf{v}_b \mathsf{v}_{b'} \mathsf{f}_{b'} \mathsf{f}_b
\]
Recall that $\mathsf{f}_{b'} \mathsf{f}_b =0$ unless $b' \otimes b$ is in the Cartan component of $\mathcal{B}(\rho_\mathsf{C}) \otimes \mathcal{B}(\lambda)$, in which case it is equal to $\mathsf{f}_c$, where $c$ is the image of $b' \otimes b$ under the projection $\mathcal{B}(\rho_\mathsf{C}) \otimes \mathcal{B}(\lambda) \to \mathcal{B}(\lambda + \rho_\mathsf{C})$. Thus the above sum becomes
\[
\sum_{e = (v', b)\in v \Lambda_{\lie{g}, \Cset}^n} S_e S_e^*
= \sum_{\substack{c \in \mathcal{B}(\lambda + \rho_\mathsf{C}) \\ \mathsf{R}_\mathsf{C}(c) = v}} \mathsf{v}_c \mathsf{f}_c = P_v,
\]
again by \cref{cor:projection-rightend}.
\end{proof}
Collecting all the results above, we obtain the following.
\begin{proposition}
\label{prop:KP_universal_map}
There is a surjective $*$-homomorphism $\mathrm{KP}_\CC(\hgraph) \to \cA_{\lie{k}, \Cset}$, which on the generators is given by
\[
p_v \mapsto P_v, \quad
s_e \mapsto S_e.
\]
\end{proposition}
\begin{proof}
The existence of the $*$-homomorphism follows from the fact that the elements $P_v, S_e \in \cA_{\lie{k}, \Cset}$ satisfy the relations (KP1)--(KP4), as we have shown above.
To prove surjectivity, we must show that $\cA_{\lie{k}, \Cset}$ is generated by the elements $S_e$ with $e \in \Lambda_{\lie{g}, \Cset}$ and their adjoints.
Let $b \in \mathcal{B}(\lambda)$ for any $\lambda \in \bP^+_\mathsf{C}$. From \cref{lem:tensor_fact}, we have that $b_{w_0 \rho_\mathsf{C}} \otimes b$ is in the Cartan component of $\mathcal{B}(\rho_\mathsf{C}) \otimes \mathcal{B}(\lambda)$, so putting $v' = \mathsf{R}_\mathsf{C}(b_{w_0 \rho_\mathsf{C}})$, we have that $e = (v',b)$ defines a path in $\Lambda_{\lie{g}, \Cset}$. Therefore $S_e = \mathsf{v}_b P_{v'}$ is non-zero. Using \cref{prop:projections-sameweight} and \cref{rmk:not_a_path}, we get
\[
\mathsf{v}_b = \sum_{v \in \hgraph^0} \mathsf{v}_b P_v
= \sum_{\substack{v\in\hgraph^0 \\ (v, b) \in \Lambda_{\lie{g}, \Cset}}} S_{(v, b)}.
\]
Since $\cA_{\lie{k}, \Cset}$ is generated by the $\mathsf{v}_b$'s and their adjoints, the result follows.
\end{proof}
In the next section we will show that this map is an isomorphism.
\section{Crystal limits are higher-rank graph algebras}
\label{sec:crystal-limit}
The universal maps from \cref{prop:KP_universal_map} and \cref{prop:AC_universal_map} yield a pair of surjective $*$-homomorphisms
\begin{equation}
\label{eq:KPiso}
\mathrm{KP}_\CC(\hgraphstd) \to \cA_{\lie{k}} \to \cO[K_0].
\end{equation}
In this section, we will show that these maps are isomorphisms, as well as their restrictions to the subalgebras
\begin{equation}
\label{eq:KPCiso}
\mathrm{KP}_\CC(\hgraph) \to \cA_{\lie{k}, \Cset} \to \mathcal{O}[Y_{S,0}]
\end{equation}
corresponding to the principal torus bundle over any flag variety of a connected compact semisimple group $K$.
We use the same notation as in \cref{sec:compact_form}, namely, for any $\lambda\in\bP^+$ we fix a weight basis $\{v^\lambda_i\}_i$, sometimes denoted simply $\{v_i\}_i$, lifting the crystal $\mathcal{B}(\lambda)=\{b^\lambda_i\}_i$, with $v^\lambda_1$ being the highest weight vector, and we let $\{f^\lambda_i\}_i$ be the dual basis. We define the generating matrix coefficients $\mathsf{f}^\lambda_i$ and $\mathsf{v}^\lambda_i$ for $\cO_q^{\bfA_0}[K]$ as in \eqref{eq:genf_genv}.
\begin{lemma}
\label{lem:f_v_nonzero}
The image of every one of the generators $\mathsf{f}^\lambda_i$ and $\mathsf{v}^\lambda_i$ of $\cO_q^{\bfA_0}[K]$ from \cref{def:OqAOK} under the representation $\pi_0$ is non-zero.
\end{lemma}
\begin{proof}
Let $w_0 = s_{i_1} \cdots s_{i_l}$ be the chosen reduced decomposition of the \linebreak longest word of the Weyl group. The (partially defined) Soibelman representation
\[
\pi_q = (\tilde\pi_{i_1, q} \otimes \tilde\pi_{i_2, q} \otimes \cdots \otimes \tilde\pi_{i_l, q} \otimes \chi) \circ \Delta^{(l)}: \cO_q[K] \to \mathcal{B}(\mathsf{H})
\]
maps $\cO_q^{\bfA_0}[K]$ to $\mathcal{B}(\ell^2\mathbb{N})^{\otimes l}\otimes C(T)$.
Since $\mathsf{f}^\lambda_j = c^{V(\lambda)}_{f^j, v_1}$ we have
\[
\Delta^{(l)}(\mathsf{f}^\lambda_j) = \sum_{k_1, \cdots, k_l}
c^{V(\lambda)}_{f^j, v_{k_1}} \otimes c^{V(\lambda)}_{f^{k_1}, v_{k_2}} \otimes \cdots \otimes c^{V(\lambda)}_{f^{k_l}, v_1}.
\]
Let $\epsilon_T: C(T) \to \mathbb{C}$ denote the evaluation at the identity in $T$. Then
\begin{equation}
\label{eq:Soib_nonvanishing}
(\id^{\otimes l} \otimes \epsilon_T) \circ \pi_0 (\mathsf{f}^\lambda_j) = \sum_{k_1, \cdots, k_{l - 1}}
\tilde\pi_{i_1, 0}(c^{V(\lambda)}_{f^j, v_{k_1}}) \otimes \tilde\pi_{i_2, 0}(c^{V(\lambda)}_{f^{k_1}, v_{k_2}}) \otimes \cdots \otimes \tilde\pi_{i_l, 0}(c^{V(\lambda)}_{f^{k_{l-1}}, v_1}).
\end{equation}
With respect to the standard basis for $\ell^2(\mathbb{N})$, the operators $\tilde\pi_0(c^{V(\lambda)}_{f^i, v_j})$ have all coefficients equal to $0$ or $1$, see \cref{thm:SL2_limit}, so there can be no cancellation in the sum \eqref{eq:Soib_nonvanishing}. Therefore it suffices to check that at least one of the terms in the sum is non-zero.
For this, we use the notion of \emph{string patterns}, see for instance \cite[Chapter 11]{BumpSchilling}, which shows that if $b_\lambda\in\mathcal{B}(\lambda)$ is the highest weight element and $b_j\in\mathcal{B}(\lambda)$ is any other element, then
there exists an $l$-tuple $(a_1, \cdots, a_l)$ of non-negative integers such that $b_\lambda = \tilde{E}_{i_l}^{a_l} \cdots \tilde{E}_{i_1}^{a_1} b_j$.
Explicitly, these are given by $a_1 = \varepsilon_{i_1}(b_j)$, $a_2 = \varepsilon_{i_2}(\tilde{E}_{i_1}^{a_1} b_j)$, \emph{etc}.
Therefore $b_{j_m} := \tilde{E}_{i_m}^{a_m} \cdots \tilde{E}_{i_1}^{a_1} b_j$
for $m=1,\cdots,l$ is a highest weight crystal element for the restriction to $\mathcal{U}_{q_{i_m}}(\mathfrak{su}(2))$ associated to the simple root $\alpha_{i_m}$. Therefore, by \cref{thm:SL2_limit} we have
\begin{equation*}
\tilde\pi_{i_1,0}(c^{V(\lambda)}_{f^j,v_{j_1}}) \otimes \tilde\pi_{i_2,0}(c^{V(\lambda)}_{f^{j_1},v_{j_2}}) \otimes \cdots \otimes \tilde\pi_{i_l,0}(c^{V(\lambda)}_{f^{j_{l-1}},v_{j_l}}) \neq 0,
\end{equation*}
and so $\pi_0(\mathsf{f}^\lambda_j)\neq0$.
The result for $\pi_0(\mathsf{v}^\lambda_j)$ follows by taking adjoints.
\end{proof}
Now let $\Phi:\mathrm{KP}_\CC(\hgraphstd)\to\cO[K_0]$ denote the $*$-homomorphism of \eqref{eq:KPiso}.
\begin{corollary}
\label{cor:projections_nonzero}
For any $v \in \hgraph^0$ we have $\Phi(p_v) \neq 0$.
\end{corollary}
\begin{proof}
By \cref{cor:projection-rightend} and \cref{prop:AC_universal_map}, we have
\[
\Phi(p_v)=\sum_i \pi_0(\mathsf{f}^\lambda_{i})^* \pi_0(\mathsf{f}^\lambda_{i}),
\]
where the sum is over all $i\in\{1,\ldots,\dim V(\lambda)\}$ such that $\mathsf{R}_\mathsf{C}(b_i) = v$, with $\lambda = \rho_\mathsf{C}$. The result now follows from \cref{lem:f_v_nonzero}.
\end{proof}
We are now in a position to apply the gauge-invariant uniqueness theorem \cite{KumPas}, or more precisely the algebraic version due to Aranda Pino, Clark, an Huef and Raeburn \cite{Aranda:Kumjian-Pask}.
Recall that the Kumjian-Pask algebra $\mathrm{KP}_\CC(\hgraph)$ is equipped with a natural $\mathbb{Z}^r$-grading, see \cite[Theorem 3.4]{Aranda:Kumjian-Pask}. Explicitly, each generator $s_e$ is given degree $\mathsf{d}(e)$, $s_e^*$ is given degree $-\mathsf{d}(e)$ and the vertex projections $p_v$ are given degree $0$.
Meanwhile, the $\bP$-grading on $\cO_q[K]$ from \cref{def:Zr-grading} is such that $\mathsf{f}^\lambda_i = c^{V(\lambda)}_{f^i,v_1}$ has degree $\lambda$ and $\mathsf{v}^\lambda_i$ has degree $-\lambda$. Here, as previously, we are identifying $\mathbb{Z}^r$ with $\bP$ via $(n_1,\cdots,n_r) \mapsto \sum_i n_i\varpi_i$.
Moreover, the Soibelman representations $\pi_q:\cO_q[K] \to \mathcal{B}(\ell^2(\mathbb{N})^{\otimes l}) \otimes C(T) \subset \mathcal{B}(\mathsf{H})$ are all grading-preserving, in the following sense. By Pontrjagin duality, we have $\mathcal{O}[T] \cong \mathbb{C}[\bP]$, which is a $\bP$-graded algebra in the obvious way. Examining the formula for $\pi_q$ in \cref{def:big_cell_rep}, we see that $\pi_q:\cO_q[K]_\mu \mapsto \mathcal{B}(\ell^2(\mathbb{N})^{\otimes l}) \otimes \mathcal{O}[T]_\mu$.
Considering the limit as $q\to0$, this makes $\cO[K_0]$ into a $\bP$-graded algebra.
Now let $e=(v,b)\in\Lambda_{\lie{g}}$ with degree $\mathsf{d}(e)=\lambda$, so that $b=b^\lambda_i\in\mathcal{B}(\lambda)$ for some $i$. The image of $s_e$ in $\cA_{\lie{k}}$ is $S_e=\mathsf{v}^\lambda_i P_v$. Under the universal map $\cA_{\lie{k}}\to\cO[K_0]$, the element $\mathsf{v}^\lambda_i$ maps to $\pi_0(\mathsf{v}^\lambda_i)$, which has degree $-\lambda$, while the projection $P_v$ maps to an element of degree $0$. It follows that the $*$-homomorphism $\Phi:\mathrm{KP}_\CC(\hgraphstd)\to\cO[K_0]$ is grading-reversing.
Thus, equipping $\cO[K_0]$ with the opposite grading and taking into account \cref{cor:projections_nonzero}, we can immediately apply the graded-uniqueness theorem as appearing in \cite[Theorem 4.1]{Aranda:Kumjian-Pask}. We get the following result.
\begin{theorem}
\label{thm:Soibelman_faithful}
Let $K$ be a compact, connected, simply connected semisimple Lie group.
The map $\Phi: \mathrm{KP}_\CC(\hgraphstd) \to\mathcal{O}[K_0]$ is an isomorphism of $*$-algebras. Consequently, we have a $C^*$-isomorphism $C^*(\Lambda_{\lie{g}}) \cong C(K_0)$.
\end{theorem}
By restricting to the subalgebras spanned by elements of degree $\mu\in\bP^+_\mathsf{C}$, for the $N$-tuple of dominant weights $\mathsf{C} = (\vartheta_1,\cdots,\vartheta_N)$ described in \cref{sec:flag_manifolds}, we deduce the following more general result.
\begin{theorem}
\label{thm:Soibelman_faithful_flag}
Let $K$ be a compact, connected, semisimple Lie group, not necessarily simply connected.
Let $S \subseteq {\boldsymbol{\Delta}}$ be a set of simple roots, with $X_S$ the associated flag variety and $Y_S$ the torus bundle over $X_S$, as described in \cref{sec:flag_manifolds}. Let $\mathsf{C}$ be the $N$-tuple of dominant weights generating the submonoid $\bP^+_{K,S}$ from \cref{thm:flag_generators2}.
The map $\Phi$ above restricts to an isomorphism of $*$-algebras $\mathrm{KP}_\CC(\hgraph) \to \mathcal{O}[Y_{S,0}]$. The degree $0$ part $\mathrm{KP}_\CC(\hgraph)_0$ is isomorphic to $\mathcal{O}[X_{S,0}]$.
Taking $C^*$-completions gives an isomorphism $C^*(\Lambda_{\lie{g}, \Cset}) \cong C(Y_{S,0})$ with gauge-invariant subalgebra $(C^*(\Lambda_{\lie{g}, \Cset}))_0 \cong C(X_{S,0})$.
\end{theorem}
It is well-known that the family of $C^*$-algebras $C(K_q)$, with $q\in(0,\infty)$, forms a continuous field of $C^*$-algebras. The matrix coefficients $c^{V(\lambda)}_{f,v}$, with $v\in V(\lambda)$, $f\in V(\lambda)^*$,
form a generating family of continuous sections. It follows immediately from \cref{thm:Soibelman_faithful} that this continuous field can be extended to a continuous field over $[0,\infty)$, with fibre at $0$ being $\cO[K_0]\cong\mathrm{KP}_\CC(\hgraphstd)$. A generating set of continuous sections is given locally near $q=0$ by the matrix coefficients belonging to the compact ${\mathbf{A}_0}$-form $\cO_q^{\bfA_0}[K]$.
Moreover, it is well-known that we have an isomorphism $\cO[K_q]\cong\mathcal{O}[K_{q^{-1}}]$, see for instance \cite[Lemma 2.4.2]{NesTus:book}. Therefore, we can also extend the continuous field to $q=\infty$, with fibre $\mathcal{O}[K_\infty]\cong\cO[K_0]\cong\mathrm{KP}_\CC(\hgraphstd)$. Restricting these continuous fields to the fields of subalgebras generated by the matrix coefficients of appropriate simple modules, we obtain \cref{thm:continuous_field}.
\section{Further properties and examples}
\label{sec:further-properties}
We conclude with some additional remarks on the structure of the \linebreak higher-rank graphs $\Lambda_{\lie{g}, \Cset}$, in particular in relation to the role of the Weyl group.
First we need the following property of the Cartan braiding, which is a consequence of a similar property for the ordinary braiding $\hat{R}$.
\begin{lemma}
\label{lem:commutation-weyl}
Let $\mathcal{B}(\lambda)$ and $\mathcal{B}(\lambda')$ be irreducible crystals.
Then the condition
\[
\braid_{\mathcal{B}(\lambda), \mathcal{B}(\lambda')}(b \otimes b') = b' \otimes b
\]
is equivalent to $b \otimes b'$ being in the Cartan component and $(\wt(b), \wt(b')) = (\lambda, \lambda')$.
\end{lemma}
\begin{proof}
Consider $V = V(\lambda)$ and $W = V(\lambda')$.
Using the definition of the braiding from \eqref{eq:R-matrix_convention} we have $(\hat{R}_{V, W})_{i j}^{j i} = q^{-(\wt(v_i), \wt(w_j))}$ for any $v_i \in V$ and $w_j \in W$.
Taking into account that $(\wt(v_i), \wt(w_j)) \leq (\lambda, \lambda')$, we get
\begin{equation}
\label{eq:rescaled-weyl}
\lim_{q \to 0} q^{(\lambda, \lambda')} (\hat{R}_{V, W})_{i j}^{j i} =
\begin{cases}
1 & (\wt(v_i), \wt(w_j)) = (\lambda, \lambda'), \\
0 & \mathrm{otherwise}.
\end{cases}
\end{equation}
Now denote by $b_i \in \mathcal{B}(\lambda)$ and $c_j \in \mathcal{B}(\lambda')$ the crystal elements corresponding to $v_i$ and $w_j$.
According to \cref{thm:braiding_limit}, we have $\braid(b_i \otimes c_j) = 0$ if $b_i \otimes c_j$ is not in the Cartan component, otherwise $\braid(b_i \otimes c_j) = c_k \otimes b_l$ for some $k, l$.
Then \eqref{eq:rescaled-weyl} shows that, in the case $(\wt(v_i), \wt(w_j)) = (\lambda, \lambda')$, we have $\braid(b_i \otimes c_j) = c_j \otimes b_i$.
The converse also follows in a similar way from \eqref{eq:rescaled-weyl}.
\end{proof}
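For instance, for $\mathfrak{g} = \mathfrak{sl}_2$ and $\lambda = \lambda' = \varpi_1$, write $\mathcal{B}(\varpi_1) = \{b_+, b_-\}$. The simple tensors satisfying both conditions of the lemma are $b_+ \otimes b_+$ and $b_- \otimes b_-$: these are the extremal weight elements of the Cartan component $\mathcal{B}(2\varpi_1) \subset \mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_1)$, and they are indeed fixed by $\braid$.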
\begin{remark}
Suppose $b$ and $b'$ are such that $\wt(b) = w \lambda$ and $\wt(b') = w \lambda'$ for some $w \in W$, where $W$ denotes the Weyl group. Then $W$-invariance of the inner product gives $(\wt(b), \wt(b')) = (\lambda, \lambda')$.
\end{remark}
As usual, let $\mathsf{C}=(\vartheta_1,\ldots,\vartheta_N)$ be a family of dominant weights. For $w\in W$, let us write $b_w$ for the unique element of $\mathcal{B}(\rho_\mathsf{C})$ with extremal weight $w\rho_\mathsf{C}$.
\begin{proposition}
Consider the map
\[
W \to \hgraph^0, \quad
w \mapsto \mathsf{R}_\mathsf{C}(b_w).
\]
Write $W_{\rho_\mathsf{C}} \subset W$ for the stabilizer of the dominant weight $\rho_\mathsf{C}$.
\begin{enumerate}
\item[(1)] We have an embedding of $W / W_{\rho_\mathsf{C}}$ into $\hgraph^0$.
\item[(2)] When $\mathsf{C} = {\boldsymbol{\Pi}}$ this is an embedding of $W$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) We have a map $W / W_{\rho_\mathsf{C}} \to \hgraph^0$, since $w \rho_\mathsf{C} = w' \rho_\mathsf{C}$ if and only if $w^{-1} w' \in W_{\rho_\mathsf{C}}$.
Hence it suffices to show that $\mathsf{R}_\mathsf{C}(b_w) \neq \mathsf{R}_\mathsf{C}(b_{w'})$ whenever $b_w \neq b_{w'}$.
Identify $\mathcal{B}(\rho_\mathsf{C}) = \mathcal{B}(\vartheta_1 + \cdots + \vartheta_N)$ with the Cartan component of $\mathcal{B}(\vartheta_1) \otimes \cdots \otimes \mathcal{B}(\vartheta_N)$. Then $b_w \in \mathcal{B}(\rho_\mathsf{C})$ corresponds to a unique element $b_1 \otimes \cdots \otimes b_N \in \mathcal{B}(\vartheta_1) \otimes \cdots \otimes \mathcal{B}(\vartheta_N)$.
Since $w \rho_\mathsf{C} = w \vartheta_1 + \cdots + w \vartheta_N$ has multiplicity one, being in the Weyl group orbit of $\rho_\mathsf{C}$, we find that $\wt(b_i) = w \vartheta_i$ for $i = 1, \cdots, N$.
It follows that $(\wt(b_i), \wt(b_j)) = (\vartheta_i, \vartheta_j)$ for any $i$ and $j$.
According to \cref{prop:rightend-sigma}, the right end $\mathsf{R}_{\vartheta_k}(b_w)$ is equal to the rightmost factor of $\braid_{N - 1} \circ \cdots \circ \braid_k(b_1 \otimes \cdots \otimes b_N)$.
Then using \cref{lem:commutation-weyl} we obtain
\[
\braid_{N - 1} \circ \cdots \circ \braid_k(b_1 \otimes \cdots \otimes b_N) = b_1 \otimes \cdots \otimes \widehat{b_k} \otimes \cdots \otimes b_N \otimes b_k.
\]
It follows that $\mathsf{R}_{\vartheta_k}(b_w) = b_k$ for any $k = 1, \cdots, N$.
Since $b_1 \otimes \cdots \otimes b_N$ is the unique element corresponding to $b_w$, we find that $\mathsf{R}_\mathsf{C}(b_w) \neq \mathsf{R}_\mathsf{C}(b_{w'})$ when $b_w \neq b_{w'}$.
(2) In this case the stabilizer of $\rho = \varpi_1 + \cdots + \varpi_r$ is trivial.
\end{proof}
\begin{remark}
A bit less formally we could write $\mathsf{R}_\mathsf{C}(b_w) = b_w$, where on the left-hand side $b_w$ is identified with $b_1 \otimes \cdots \otimes b_N \in \mathcal{B}(\vartheta_1) \otimes \cdots \otimes \mathcal{B}(\vartheta_N)$ and on the right-hand side with the $N$-tuple $(b_1, \cdots, b_N)$.
\end{remark}
In the case of the $2$-graph for $K = SU(3)$ given in \cref{ex:graph-su3}, the number of vertices coincides with the number of elements of the Weyl group.
This can be seen to hold more generally for $SU(n)$, but it is not true in general.
\begin{example}
In \cref{ex:graph-su3} we saw that the vertices for $\mathfrak{g} = \mathfrak{sl}_3$ are given by
\begin{align*}
v_1 & = (a_1, b_1), &
v_2 & = (a_1, b_2), &
v_3 & = (a_2, b_1), \\
v_4 & = (a_2, b_3), &
v_5 & = (a_3, b_2), &
v_6 & = (a_3, b_3).
\end{align*}
These correspond to the $6$ elements of the Weyl group of $\mathfrak{sl}_3$, which is the symmetric group $S_3$.
More precisely, they correspond to the elements
\[
w_1 = 1, \quad
w_2 = s_2, \quad
w_3 = s_1, \quad
w_4 = s_1 s_2, \quad
w_5 = s_2 s_1, \quad
w_6 = s_1 s_2 s_1.
\]
This can be checked by computing their action on the weights of $a_i$ and $b_j$.
\end{example}
What is special about $\mathfrak{g} = \mathfrak{sl}_n$ is that every fundamental representation is \emph{minuscule}, that is, the Weyl group acts transitively on the weights of the representation.
To show that this does not always hold, we consider the example of $\mathfrak{g} = C_2$, the symplectic Lie algebra of rank two.
\begin{example}
We consider the Lie algebra $\mathfrak{g} = C_2$ of rank two.
The crystal graphs of the fundamental representations are
\[
\begin{split}
\mathcal{B}(\varpi_1) : \quad & a_1 \xrightarrow{1} a_2 \xrightarrow{2} a_3 \xrightarrow{1} a_4, \\
\mathcal{B}(\varpi_2) : \quad & b_1 \xrightarrow{2} b_2 \xrightarrow{1} b_3 \xrightarrow{1} b_4 \xrightarrow{2} b_5.
\end{split}
\]
Here $\mathcal{B}(\varpi_1)$ corresponds to a minuscule representation, while $\mathcal{B}(\varpi_2)$ does not.
Indeed, the weight $\wt(b_3) = 0$ is not in the orbit of $\varpi_2$ under the Weyl group.
The crystal graph of $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_1)$ is given by
\begin{center}
\begin{tikzcd}[column sep=1.5em, row sep=1em]
a_1 \otimes a_1 \arrow[r, "1"] & a_2 \otimes a_1 \arrow[r, "2"] \arrow[d, "1"] & a_3 \otimes a_1 \arrow[r, "1"] & a_4 \otimes a_1 \arrow[d, "1"] \\
a_1 \otimes a_2 \arrow[d, "2"] & a_2 \otimes a_2 \arrow[r, "2"] & a_3 \otimes a_2 \arrow[d, "2"] & a_4 \otimes a_2 \arrow[d, "2"] \\
a_1 \otimes a_3 \arrow[r, "1"] & a_2 \otimes a_3 \arrow[d, "1"] & a_3 \otimes a_3 \arrow[r, "1"] & a_4 \otimes a_3 \arrow[d, "1"] \\
a_1 \otimes a_4 & a_2 \otimes a_4 \arrow[r, "2"] & a_3 \otimes a_4 & a_4 \otimes a_4
\end{tikzcd}
\end{center}
with connected components corresponding to $\mathcal{B}(2 \varpi_1)$, $\mathcal{B}(\varpi_2)$ and $\mathcal{B}(0)$.
The crystal graph of $\mathcal{B}(\varpi_2) \otimes \mathcal{B}(\varpi_2)$ is given by
\begin{center}
\begin{tikzcd}[column sep=1.5em, row sep=1em]
b_1 \otimes b_1 \arrow[r, "2"] & b_2 \otimes b_1 \arrow[r, "1"] \arrow[d, "2"] & b_3 \otimes b_1 \arrow[r, "1"] \arrow[d, "2"] & b_4 \otimes b_1 \arrow[r, "2"] & b_5 \otimes b_1 \arrow[d, "2"] \\
b_1 \otimes b_2 \arrow[d, "1"] & b_2 \otimes b_2 \arrow[r, "1"] & b_3 \otimes b_2 \arrow[r, "1"] & b_4 \otimes b_2 \arrow[d, "1"] & b_5 \otimes b_2 \arrow[d, "1"] \\
b_1 \otimes b_3 \arrow[r, "2"] \arrow[d, "1"] & b_2 \otimes b_3 \arrow[r, "1"] & b_3 \otimes b_3 \arrow[d, "1"] & b_4 \otimes b_3 \arrow[r, "2"] \arrow[d, "1"] & b_5 \otimes b_3 \arrow[d, "1"] \\
b_1 \otimes b_4 \arrow[r, "2"] & b_2 \otimes b_4 \arrow[d, "2"] & b_3 \otimes b_4 \arrow[d, "2"] & b_4 \otimes b_4 \arrow[r, "2"] & b_5 \otimes b_4 \arrow[d, "2"] \\
b_1 \otimes b_5 & b_2 \otimes b_5 \arrow[r, "1"] & b_3 \otimes b_5 \arrow[r, "1"] & b_4 \otimes b_5 & b_5 \otimes b_5
\end{tikzcd}
\end{center}
with connected components $\mathcal{B}(2 \varpi_2)$, $\mathcal{B}(2 \varpi_1)$ and $\mathcal{B}(0)$.
Next, the crystal graph of $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2)$ is given by
\begin{center}
\begin{tikzcd}[column sep=1.5em, row sep=1em]
a_1 \otimes b_1 \arrow[r, "1"] \arrow[d, "2"] & a_2 \otimes b_1 \arrow[r, "2"] & a_3 \otimes b_1 \arrow[r, "1"] \arrow[d, "2"] & a_4 \otimes b_1 \arrow[d, "2"] \\
a_1 \otimes b_2 \arrow[r, "1"] & a_2 \otimes b_2 \arrow[d, "1"] & a_3 \otimes b_2 \arrow[r, "1"] & a_4 \otimes b_2 \arrow[d, "1"] \\
a_1 \otimes b_3 \arrow[d, "1"] & a_2 \otimes b_3 \arrow[r, "2"] \arrow[d, "1"] & a_3 \otimes b_3 \arrow[d, "1"] & a_4 \otimes b_3 \arrow[d, "1"] \\
a_1 \otimes b_4 \arrow[d, "2"] & a_2 \otimes b_4 \arrow[r, "2"] & a_3 \otimes b_4 \arrow[d, "2"] & a_4 \otimes b_4 \arrow[d, "2"] \\
a_1 \otimes b_5 \arrow[r, "1"] & a_2 \otimes b_5 & a_3 \otimes b_5 \arrow[r, "1"] & a_4 \otimes b_5 \\
\end{tikzcd}
\end{center}
and the crystal graph of $\mathcal{B}(\varpi_2) \otimes \mathcal{B}(\varpi_1)$ is given by
\begin{center}
\begin{tikzcd}[column sep=1.5em, row sep=1em]
b_1 \otimes a_1 \arrow[r, "2"] \arrow[d, "1"] & b_2 \otimes a_1 \arrow[r, "1"] & b_3 \otimes a_1 \arrow[r, "1"] & b_4 \otimes a_1 \arrow[r, "2"] \arrow[d, "1"] & b_5 \otimes a_1 \arrow[d, "1"] \\
b_1 \otimes a_2 \arrow[r, "2"] & b_2 \otimes a_2 \arrow[r, "1"] \arrow[d, "2"] & b_3 \otimes a_2 \arrow[d, "2"] & b_4 \otimes a_2 \arrow[r, "2"] & b_5 \otimes a_2 \arrow[d, "2"] \\
b_1 \otimes a_3 \arrow[d, "1"] & b_2 \otimes a_3 \arrow[r, "1"] & b_3 \otimes a_3 \arrow[r, "1"] & b_4 \otimes a_3 \arrow[d, "1"] & b_5 \otimes a_3 \arrow[d, "1"] \\
b_1 \otimes a_4 \arrow[r, "2"] & b_2 \otimes a_4 \arrow[r, "1"] & b_3 \otimes a_4 & b_4 \otimes a_4 \arrow[r, "2"] & b_5 \otimes a_4
\end{tikzcd}
\end{center}
They have connected components $\mathcal{B}(\varpi_1 + \varpi_2)$ and $\mathcal{B}(\varpi_1)$.
Examining the Cartan component of the latter two diagrams, we see that the Cartan braiding $\braid: \mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2) \to \mathcal{B}(\varpi_2) \otimes \mathcal{B}(\varpi_1)$ restricted to the Cartan component is given by
\begin{small}
\begin{align*}
\braid(a_1 \otimes b_1) & = b_1 \otimes a_1, &
\braid(a_1 \otimes b_2) & = b_2 \otimes a_1, &
\braid(a_2 \otimes b_1) & = b_1 \otimes a_2, &
\braid(a_2 \otimes b_2) & = b_3 \otimes a_1, \\
\braid(a_2 \otimes b_3) & = b_4 \otimes a_1, &
\braid(a_2 \otimes b_4) & = b_4 \otimes a_2, &
\braid(a_3 \otimes b_1) & = b_2 \otimes a_2, &
\braid(a_3 \otimes b_2) & = b_2 \otimes a_3, \\
\braid(a_3 \otimes b_3) & = b_5 \otimes a_1, &
\braid(a_3 \otimes b_4) & = b_5 \otimes a_2, &
\braid(a_3 \otimes b_5) & = b_5 \otimes a_3, &
\braid(a_4 \otimes b_1) & = b_3 \otimes a_2, \\
\braid(a_4 \otimes b_2) & = b_3 \otimes a_3, &
\braid(a_4 \otimes b_3) & = b_4 \otimes a_3, &
\braid(a_4 \otimes b_4) & = b_4 \otimes a_4, &
\braid(a_4 \otimes b_5) & = b_5 \otimes a_4.
\end{align*}
\end{small}
Choosing the colours $\mathsf{C} = (\varpi_1, \varpi_2)$, we compute the right ends
\begin{align*}
v_1 & = (a_1, b_1), &
v_2 & = (a_1, b_2), &
v_3 & = (a_2, b_1), &
v_4 & = (a_1, b_3), &
v_5 & = (a_2, b_4), \\
v_6 & = (a_3, b_2), &
v_7 & = (a_3, b_3), &
v_8 & = (a_3, b_5), &
v_9 & = (a_4, b_4), &
v_{10} & = (a_4, b_5).
\end{align*}
We see that $2$ of these vertices do not come from the Weyl group of $C_2$, since the Weyl group consists of $8$ elements.
The additional vertices are $v_4 = (a_1, b_3)$ and $v_7 = (a_3, b_3)$, both of which feature the non-extremal element $b_3 \in \mathcal{B}(\varpi_2)$.
Note that $a_1 \otimes b_3$ does not even belong to the Cartan component of $\mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_2)$.
We omit the computations for the edges and simply report the results in \cref{fig:graph-C2}, where we split the $2$-graph with respect to the two colours for readability.
Notice that the two vertices which do not come from the Weyl group, namely $v_4$ and $v_7$, do not have any loops.
\begin{figure}[h]
\centering
\begin{tikzpicture}[
vertex/.style = {align=center, inner sep=2pt},
Rarr/.style = {->, red},
Barr/.style = {->, blue, dashed},
Rloop/.style = {->, red, out=165, in=195, loop},
Bloop/.style = {->, blue, out=15, in=-15, loop, dashed}
]
\node (v1) at ( 0, 0) [vertex] {$v_1$};
\node (v2) at (-2,-1) [vertex] {$v_2$};
\node (v3) at ( 2,-1) [vertex] {$v_3$};
\node (v4) at ( 0,-2) [vertex] {$v_4$};
\node (v5) at (-2,-3) [vertex] {$v_5$};
\node (v6) at ( 2,-3) [vertex] {$v_6$};
\node (v7) at ( 0,-4) [vertex] {$v_7$};
\node (v8) at (-2,-5) [vertex] {$v_8$};
\node (v9) at ( 2,-5) [vertex] {$v_9$};
\node (v10) at ( 0,-6) [vertex] {$v_{10}$};
\draw [Rloop] (v1) edge (v1);
\draw [Rloop] (v2) edge (v2);
\draw [Rarr] (v3) edge (v1);
\draw [Rloop] (v3) edge (v3);
\draw [Rarr] (v4) edge (v2);
\draw [Rarr] (v5) edge (v4);
\draw [Rloop] (v5) edge (v5);
\draw [Rarr] (v6) edge[bend right=20] (v2);
\draw [Rarr] (v6) edge (v3);
\draw [Rloop] (v6) edge (v6);
\draw [Rarr] (v7) edge (v2);
\draw [Rarr] (v7) edge (v3);
\draw [Rarr] (v7) edge (v6);
\draw [Rarr] (v8) edge (v4);
\draw [Rarr] (v8) edge (v5);
\draw [Rloop] (v8) edge (v8);
\draw [Rarr] (v9) edge (v4);
\draw [Rarr] (v9) edge[bend left=20] (v5);
\draw [Rarr] (v9) edge (v7);
\draw [Rloop] (v9) edge (v9);
\draw [Rarr] (v10) edge[bend left] (v4);
\draw [Rarr] (v10) edge (v5);
\draw [Rarr] (v10) edge (v8);
\draw [Rloop] (v10) edge (v10);
\node (vb1) at (6, 0) [vertex] {$v_1$};
\node (vb2) at (4,-1) [vertex] {$v_2$};
\node (vb3) at (8,-1) [vertex] {$v_3$};
\node (vb4) at (6,-2) [vertex] {$v_4$};
\node (vb5) at (4,-3) [vertex] {$v_5$};
\node (vb6) at (8,-3) [vertex] {$v_6$};
\node (vb7) at (6,-4) [vertex] {$v_7$};
\node (vb8) at (4,-5) [vertex] {$v_8$};
\node (vb9) at (8,-5) [vertex] {$v_9$};
\node (vb10) at (6,-6) [vertex] {$v_{10}$};
\draw [Bloop] (vb1) edge (vb1);
\draw [Barr] (vb2) edge (vb1);
\draw [Bloop] (vb2) edge (vb2);
\draw [Bloop] (vb3) edge (vb3);
\draw [Barr] (vb4) edge (vb1);
\draw [Barr] (vb4) edge (vb2);
\draw [Barr] (vb5) edge[bend left=20] (vb3);
\draw [Barr] (vb5) edge (vb2);
\draw [Barr] (vb5) edge (vb4);
\draw [Bloop] (vb5) edge (vb5);
\draw [Barr] (vb6) edge (vb3);
\draw [Bloop] (vb6) edge (vb6);
\draw [Barr] (vb7) edge (vb3);
\draw [Barr] (vb7) edge (vb6);
\draw [Barr] (vb8) edge (vb3);
\draw [Barr] (vb8) edge[bend left=20] (vb6);
\draw [Barr] (vb8) edge (vb4);
\draw [Barr] (vb8) edge (vb5);
\draw [Bloop] (vb8) edge (vb8);
\draw [Barr] (vb9) edge[bend right=20] (vb3);
\draw [Barr] (vb9) edge (vb6);
\draw [Barr] (vb9) edge (vb7);
\draw [Bloop] (vb9) edge (vb9);
\draw [Barr] (vb10) edge (vb3);
\draw [Barr] (vb10) edge (vb6);
\draw [Barr] (vb10) edge (vb7);
\draw [Barr] (vb10) edge (vb9);
\draw [Bloop] (vb10) edge (vb10);
\end{tikzpicture}
\caption{The $2$-graph for $\mathfrak{g} = C_2$. On the left are the edges for the colour $\varpi_1$ and on the right for the colour $\varpi_2$.}
\label{fig:graph-C2}
\end{figure}
\end{example}
We conclude with one more example corresponding to a quantum homogeneous space, to connect with some existing literature.
\begin{example}
Consider the Lie algebra $\mathfrak{g} = \mathfrak{sl}_n$ and the single colour $\mathsf{C} = (\varpi_1)$.
Geometrically this example corresponds to $Y_S = S^{2 n - 1}$ being an odd-dimensional sphere and $X_S = \mathbb{C} P^{n - 1}$, with notation as in \cref{sec:flag_manifolds}.
In this case we have $\rho_\mathsf{C} = \varpi_1$, which obviously implies that $\mathsf{R}_\mathsf{C}(b) = b$ for every $b \in \mathcal{B}(\rho_\mathsf{C})$.
Therefore the vertex set of $\Lambda_{\lie{g}, \Cset}$ can be identified with the weights of the minuscule representation $V(\varpi_1)$.
Let us label the crystal basis by $\{b_1, \cdots, b_n\}$, where $b_{i + 1} = \tilde{F}_i b_i$.
It is easy to see that $b_i \otimes b_j$ is in the Cartan component if and only if $i \geq j$ (in fact, one can generalize this to any minuscule representation).
Therefore there is a single edge from $b_i$ to $b_j$ for every $i\geq j$.
This graph coincides with the one given in \cite[Theorem 4.4]{HonSzy:spheres} (up to switching sources and ranges of edges, to match up with our conventions).
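For example, for $n = 3$ the Cartan component $\mathcal{B}(2\varpi_1) \subset \mathcal{B}(\varpi_1) \otimes \mathcal{B}(\varpi_1)$ consists of the six elements $b_i \otimes b_j$ with $i \geq j$, in agreement with $\dim V(2\varpi_1) = \dim \mathrm{Sym}^2(\mathbb{C}^3) = 6$. The resulting graph for $n = 3$ is shown in \cref{fig:graph-projective}.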
\begin{figure}[h]
\centering
\begin{tikzpicture}[
vertex/.style = {align=center, inner sep=2pt},
Rarr/.style = {->, red},
Rloop/.style = {->, red, out=135, in=45, loop},
]
\node (v1) at (0,0) [vertex] {$v_1$};
\node (v2) at (2,0) [vertex] {$v_2$};
\node (v3) at (4,0) [vertex] {$v_3$};
\draw [Rloop] (v1) edge (v1);
\draw [Rloop] (v2) edge (v2);
\draw [Rloop] (v3) edge (v3);
\draw [Rarr] (v1) edge (v2);
\draw [Rarr] (v2) edge (v3);
\draw [Rarr] (v1) edge[bend right=20] (v3);
\end{tikzpicture}
\caption{The graph for the case $n = 3$, corresponding to the sphere $S^5$.}
\label{fig:graph-projective}
\end{figure}
\end{example}
\bibliographystyle{alpha}
Over recent years it has become apparent that massive black holes
lie at the centre of the majority of local galaxies. In addition,
correlations between the mass of these central black holes ($M_{\rm BH}$)
and host galaxy properties suggests that the evolution of
galaxies must be intimately related to the growth of their central
black holes. Galaxy properties which display a correlation with
$M_{\rm BH}$ include spheroid luminosity \cite{mag98}, spheroid mass
\cite{k+r95,fer02} and stellar velocity dispersion \cite{f+m00,geb00}.
In this paper we investigate the relationship between the mass of QSO
black holes and the mass of the dark matter halos that host them. Dark
matter halo masses derived from QSO clustering were calculated for
QSOs in the 2dF QSO Redshift Survey (2QZ; Croom et al. 2004) by Croom
et al. (2005). These were evaluated for QSOs in 10 redshift bins from
$z\sim0.5$ to $2.5$. We take the identical sample to that used by Croom
et al. (2005) and calculate average virial black hole masses for the
QSOs in each redshift bin. This is achieved by constructing a composite
spectrum for each redshift bin from all of the QSOs in that bin. The
average virial black hole masses are then estimated from the widths of
broad emission lines in the composite spectra.
In Section~\ref{sec:bhmass} we describe the virial black
hole mass estimators used in this paper, and Section~\ref{sec:d+a}
describes our data and analysis, including the construction of
composite spectra and the measurement of line widths.
In Section~\ref{sec:dhmass} we briefly review the clustering
measurements which are used to derive the host dark halo mass. Our
results are presented in Section~\ref{sec:res} and our
conclusions in Section~\ref{sec:conc}.
Throughout this paper we assume a
flat $(\Omega_{\rm m},\Omega_{\Lambda})=(0.3,0.7)$,
$H_{0}=70\,{\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$ cosmology.
\section{Black Hole Mass Estimates}\label{sec:bhmass}
At the basis of measuring QSO black hole masses is the virial theorem
such that the black hole mass $M_{\rm BH}\approx
rv^{2}/G$, where $v$ is the
velocity dispersion of material gravitationally bound to the black
hole at a distance $r$ from it. Direct measurements of $r$ and $v$ have
been taken, for relatively nearby systems, in a variety of ways. Most
notably reverberation mapping programs have succeeded in accurately
measuring $M_{\rm BH}$ for tens of local QSOs \cite{wpm99,kasp00,pet04}. This
technique requires careful long-term observations of the variability
in QSO spectra and can take years to produce results.
Therefore there has been considerable effort
put into finding quicker, simpler methods for estimating black hole
masses which can be extended to higher redshifts.
One result of reverberation studies of nearby QSOs was the discovery
of a strong correlation between the radius of the H$\beta$ emitting
region around AGN, $r_{{\rm H}\beta}$, and the continuum luminosity at
5100\,{\AA} \cite{kasp00,kasp05}. $v_{{\rm H}\beta}$ can be found by the
relation $v_{{\rm H}\beta}=f\cdot {\rm FWHM}({\rm H}\beta)$ where
FWHM(H$\beta$) is the measured full width at half maximum of the
H$\beta$ spectral line, and $f$ is a factor of order unity which depends
on the geometry of the broad line emitting region around the
AGN. Thus, using the width of the H$\beta$ line and the continuum
luminosity at 5100\,{\AA} single epoch estimates for $M_{\rm BH}$ can
be made \cite{kasp00,vest06}.
In higher redshift QSOs the H$\beta$ line is redshifted out
of the optical spectrum. At these higher redshifts UV lines can
be used to try and estimate $M_{\rm BH}$. Unfortunately, while the velocity
dispersions, $v_{\rm UV}$, can readily be measured there is no direct
way to measure the size of the emitting region $r_{\rm
UV}$. Hence these estimators must be calibrated with H$\beta$
measurements of the same object. These calibrations have been performed
for the Mg\,{\sc ii}\ and C\,{\sc iv}\ lines by McLure \& Jarvis (2002) and
Vestergaard (2002) respectively. The relation for Mg\,{\sc ii}\ was
then revised for luminous QSOs by McLure \& Dunlop (2004), and the
relation for C\,{\sc iv}\ was revised for an updated cosmology by Vestergaard
\& Peterson (2006). The
resulting $M_{\rm BH}$ estimators used throughout this paper are
\begin{equation} \label{equ_m+j}
\frac{M_{\rm BH}}{{\rm M}_{\odot}}=3.2\left(\frac{\lambda
L_{3000}}{10^{37}\,{\rm W}}\right)^{0.62}\left(\frac{{\rm FWHM(Mg\,{\sc
ii})}}{{\rm km\,s}^{-1}}\right)^{2}
\end{equation}
for the Mg\,{\sc ii}\ line and
\begin{equation} \label{equ_ves}
\frac{M_{\rm BH}}{{\rm M}_{\odot}}=4.6\left(\frac{\lambda
L_{1350}}{10^{37}\,{\rm W}}\right)^{0.53}\left(\frac{{\rm FWHM(C\,{\sc
iv})}}{{\rm km\,s}^{-1}}\right)^{2}
\end{equation}
for the C\,{\sc iv}\ line. Here $\lambda L$ denotes the continuum
luminosities at the specified wavelength and FWHM() corresponds to the
measured full width at half maximum of the spectral line.
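As a quick numerical check, the two estimators can be evaluated directly. The short Python sketch below (our own illustration; the function names are chosen for this paper only) reproduces, for example, the Mg\,{\sc ii}\ mass in the first row of table~\ref{tab:res}:
\begin{verbatim}
import numpy as np

def mbh_mgii(fwhm_kms, lamL3000_W):
    # M_BH/Msun = 3.2 (lambda L_3000 / 1e37 W)^0.62 * FWHM(MgII)^2
    return 3.2 * (lamL3000_W / 1e37)**0.62 * fwhm_kms**2

def mbh_civ(fwhm_kms, lamL1350_W):
    # M_BH/Msun = 4.6 (lambda L_1350 / 1e37 W)^0.53 * FWHM(CIV)^2
    return 4.6 * (lamL1350_W / 1e37)**0.53 * fwhm_kms**2

# first table row: FWHM(MgII) = 3546 km/s, log(lambda L_3000) = 37.379
print(np.log10(mbh_mgii(3546.0, 10**37.379)))   # ~= 7.8
\end{verbatim}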
\section{Data and Analysis} \label{sec:d+a}
We are concerned with finding mean black hole masses for QSOs in the
redshift bins shown in table~\ref{tab:res}. Since these predominantly
cover redshift ranges where the H$\beta$ line is shifted off the end
of the visible spectrum we exploit the methods of McLure \& Jarvis
(2002) and Vestergaard \& Peterson (2006) for calculating these
masses. To this
end we require measurements of Mg\,{\sc ii}\ and C\,{\sc iv}\ velocity
widths, as well as monochromatic continuum luminosities near these
lines (3000\,{\AA} and 1350\,{\AA} respectively). It should be noted
that both of these UV mass estimators exhibit considerable scatter.
It has been
suggested that this may be intrinsic, possibly due to geometric
considerations of the AGN \cite{m+d02,smit02}. However, in our
analysis we make use of composite spectra created by combining all of
the individual 2QZ spectra in the redshift bins ($\sim2000$ objects
per bin). Composite spectra have
several advantages. The high signal-to-noise they provide allows
high precision in the measurement of line widths, and combining many
spectra should average over intrinsic (e.g. geometric) variations.
In this section we briefly describe the 2QZ spectral data and our
method for constructing composite spectra from them. We then discuss
the process by which we measured the width of Mg\,{\sc ii}\ and C\,{\sc iv}\ lines in
the composites, and finally how we calculated the monochromatic
luminosities $\lambda L_{3000}$ and $\lambda L_{1350}$.
\subsection{QSO spectra}
All of the data in this paper come from the 2QZ and are described in detail
elsewhere \cite{2qz12}. The sample contains $>23\,000$ spectra
of QSOs in the magnitude range $18.25<b_{\rm J}<20.85$ observed with the
two-degree field instrument on the AAT.
Spectra cover the wavelength range
3700-7900\,{\AA} with a dispersion of 4.3\,{\AA}\,pixel$^{-1}$
and instrumental resolution of 9\,{\AA}. Spectra were typically
observed for 3300-3600\,s giving a median signal-to-noise ratio of
$\sim5.0$\,pixel$^{-1}$.
\subsection{Composite spectra}
The use of composite spectra is not a new technique (e.g. Francis et
al. 1991; Vanden Berk et al. 2001) and a full description of our
method for creating composites is given by Croom et al. (2002). Here
it is worth noting that 2QZ spectra are not flux
calibrated. Therefore, to make the composites the individual spectra
had to be normalised by fitting a polynomial continuum to regions
without emission features. We then divide by this fitted continuum to
uniformly normalise the spectra before combining. In doing so all
information on continuum slope and normalisation is lost while the
emission features remain intact. The normalised spectra were then
shifted to the rest frame and the composites were constructed as the
median of all contributing QSOs in each pixel (of width 1\,\AA).
Errors were determined by taking the 68 per cent semi-interquartile
range of individual QSO pixel values.
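Schematically, the construction of each composite can be summarised by the following Python sketch (a simplified illustration of the procedure described above, not the actual 2QZ pipeline):
\begin{verbatim}
import numpy as np

def make_composite(spectra, rest_grid):
    # spectra: list of (wavelength, flux, redshift) tuples, each flux
    # already divided by its fitted polynomial continuum
    stack = []
    for wave, flux, z in spectra:
        rest = wave / (1.0 + z)            # shift to the rest frame
        stack.append(np.interp(rest_grid, rest, flux,
                               left=np.nan, right=np.nan))
    stack = np.array(stack)
    comp = np.nanmedian(stack, axis=0)     # median in each 1 A pixel
    lo, hi = np.nanpercentile(stack, [16.0, 84.0], axis=0)
    err = 0.5 * (hi - lo)   # 68 per cent semi-interquartile range
    return comp, err
\end{verbatim}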
\subsection{Measurement of Line Widths} \label{ssec_lw}
Before we measured emission line widths from our composite spectra, we
had to correct for QSO iron emission. This correction was
performed iteratively. A smoothed template
of QSO iron emission \cite{ves01} and linear continuum were fitted to
the data in regions either side of the
emission line in question. The iron template and continuum were then
subtracted from the data, and a
single Gaussian profile was
fitted to the remaining line. In the case of C\,{\sc iv} two other
Gaussians were also fitted to the He\,{\sc ii}/O\,{\sc iii}]
feature just redwards of the line. The iron template and continuum
were then fitted to
the spectrum again, this time excluding any data within
the primary emission line region (defined as $\pm3\sigma$ of the
Gaussian fit
to the line) and in the case of C\,{\sc iv} with the two
other Gaussians (defining the He\,{\sc ii}/O\,{\sc iii}] flux)
subtracted from the data. This process was repeated
until the width of successive Gaussian fits to the line differed by less than
half their associated error. It is worth noting here that the Gaussian
we fit to the primary emission line is not used to measure its width
but only as a mask. Neither the
Mg\,{\sc ii}\ nor the C\,{\sc iv}\ line is well described by a single Gaussian;
instead, this allows us to define the parts of the spectrum
unaffected by the emission line to use in our subsequent iron/continuum
fit. Using this method we found that we could
accurately remove the local emission features around both of these
emission lines
without making preliminary assumptions as to their width
(see Fig.~\ref{fig_line-fits}).
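In outline the iteration proceeds as in the Python sketch below; here \texttt{fit\_iron\_continuum} and \texttt{fit\_gaussians} are hypothetical stand-ins for the template-plus-continuum and Gaussian fits described above:
\begin{verbatim}
import numpy as np

def iron_subtract(wave, flux, err, side_regions, max_iter=20):
    # fit_iron_continuum / fit_gaussians: hypothetical helpers for the
    # iron template + continuum fit and the Gaussian line fit
    mask = np.ones_like(wave, dtype=bool)  # initially use all pixels
    sigma_old = np.inf
    for _ in range(max_iter):
        model = fit_iron_continuum(wave, flux, err, side_regions, mask)
        resid = flux - model
        centre, sigma, dsigma = fit_gaussians(wave, resid, err)
        # exclude +-3 sigma of the line from the next template fit
        mask = np.abs(wave - centre) > 3.0 * sigma
        if abs(sigma - sigma_old) < 0.5 * dsigma:
            break                          # line width has converged
        sigma_old = sigma
    return resid
\end{verbatim}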
Once the iron emission had been subtracted from the spectra it was
possible to measure the width of the lines. Our method for
measuring line widths is similar to that used by Wang, Lu \&
Zhou~(1998) in that we model each line's profile with a set of
Gaussian components. We then take the FWHM of this model as our line
width. We found this process to be more robust than reading the FWHM
directly from the data which can be significantly affected by a single
pixel, and a better indicator of the true FWHM than attempting to
impose a profile on the line such as a single Gaussian or Lorentzian.
Since the Mg\,{\sc ii}\ line is symmetric we can accurately model
its profile with two Gaussians. During the fitting process we tie the
central wavelengths of the two Gaussians together, but all other
parameters (five in total: the central wavelength of the line and the
width and amplitude of each Gaussian component fit to the line) are
left free. The
asymmetry of the C\,{\sc iv}\ line requires that we
use three Gaussians to accurately model its profile with all
nine parameters left free in the fitting process. In each case
the multi-Gaussian models were fit to the data using
the {\sc mrqmin} routine \cite{press89}.
At each step in the above fitting we take into account the
propagation of errors through the process, and we reevaluate the error at each
pixel taking into account uncertainty in the iron and continuum fits.
The errors on the FWHM measurements could then be calculated
analytically from the covariance matrix of the multi-Gaussian fit.
Fig.~\ref{fig_line-fits}
illustrates the line fitting process
for both Mg\,{\sc ii}\ and a C\,{\sc iv}. The multiple Gaussian
components (two for Mg\,{\sc ii}\ and three for C\,{\sc iv}) provide accurate
models for the emission lines.
After each line width had been measured we corrected for the
resolution of the spectrograph (9\,\AA\ rest frame) by subtracting it
in quadrature from the measured line width.
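The width measurement itself can be sketched as follows, using {\sc scipy}'s \texttt{curve\_fit} (a Levenberg--Marquardt fitter, standing in for the {\sc mrqmin} routine) and, for brevity, leaving all Gaussian centres free rather than tying them as for Mg\,{\sc ii}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    # p = (c1, s1, a1, c2, s2, a2, ...): centre, width, amplitude
    y = np.zeros_like(x)
    for c, s, a in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - c) / s)**2)
    return y

def model_fwhm(wave, flux, err, p0, resolution=9.0):
    # pcov is the covariance matrix from which FWHM errors follow
    popt, pcov = curve_fit(gaussians, wave, flux, p0=p0, sigma=err)
    x = np.linspace(wave.min(), wave.max(), 10000)
    m = gaussians(x, *popt)
    above = x[m >= 0.5 * m.max()]   # FWHM read off the model
    fwhm = above.max() - above.min()
    # subtract the instrumental resolution in quadrature
    return np.sqrt(fwhm**2 - resolution**2)
\end{verbatim}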
\begin{figure*}
\centering
\centerline{\psfig{file=composite_z092to113_Mall_Mgii.ps,width=8.0cm,angle=-90}\hspace{0.5cm}\psfig{file=composite_z202to225_Mall_Civ.ps,width=8.0cm,angle=-90}}
\caption{Illustrations of the line fitting procedures for Mg\,{\sc ii}\
(left) and C\,{\sc iv}\ (right) in two of the composite spectra, note we
observe very little variation in the line profiles between
composites. In each case the top panel shows
the initial composite spectrum and the smoothed iron template fitted to
it. The lower panels show the spectrum after the iron emission was
subtracted from it. Also shown in the bottom panels are the
Gaussians used to model the line (dashed), as well as Gaussians used
to fit other local emission features (dotted) and the sum of these
for comparison with the composite (dash-dot; which follows
closely the solid line). Vertical lines in the bottom panels
indicate the regions outside of which the iron template
was fit to our data. 68\% interquartile errors for
each point in the composite spectra are shown in the upper panels
(these are hardly visible)
but are omitted from the lower for clarity. For each spectrum the
y axis denotes $F_{\lambda}$ in arbitrary units of flux density.}
\label{fig_line-fits}
\end{figure*}
In calibrating equations~\ref{equ_m+j} and~\ref{equ_ves} McLure \&
Dunlop and Vestergaard \& Peterson define their FWHMs in different ways. Since
$r$ in the virial equation
defines the radius of the broad-line region surrounding AGN McLure \&
Dunlop correct for narrow line Mg\,{\sc ii}\ emission, and
take the FWHM of only the the broad component of the line to calibrate
equation~\ref{equ_m+j}.
However, in their analysis of broad UV lines in QSO spectra Wills et
al.~(1993) found no evidence for a narrow contribution to these
lines. Therefore, Vestergaard \& Peterson (2006) measure the FWHM of
the entire C\,{\sc iv}\ line when calibrating equation~\ref{equ_ves}.
McLure \& Dunlop~(2004) model the Mg\,{\sc ii}\ line with a broad and narrow
component following a similar procedure to that which we have outlined above.
They then record the width of the broader component of
the line to use in their calculations. However, in their fitting they
impose the additional conditions that the velocity width of the
narrower component of the line be $<2000$\,km\,s$^{-1}$, and that the
equivalent width of the narrower component be less than one third the
equivalent width of the broad component. We do not apply these
constraints in our fitting process and find that, while the narrower
components all have equivalent width less than 1/3 that of the broad,
in all but one composite the width of the narrower Mg\,{\sc ii}\ component is
$>2000$\,km\,s$^{-1}$. If we were to add this additional constraint to
the fitting procedure it would degrade the quality of the line fits,
although we find this effect is small. Wills et al.~(1993) find no
evidence for a narrow line component in the Mg\,{\sc ii}\ line in QSO spectra
and we find no reason to subtract off this `narrower' contribution to
the line. We also note that the use of Gaussians in our fitting
process is somewhat arbitrary and hence it is difficult to assign a physical
meaning to each of the components individually. We thus use the FWHM
of the whole line in our calculations for Mg\,{\sc ii}\ (this may introduce
systematic errors to our calculations, see section~\ref{sec:err}).
Note that the asymmetry of the
C\,{\sc iv}\ line (bottom-right panel Fig.~\ref{fig_line-fits}) is of concern
to this analysis. Other investigations of UV QSO spectra have shown
that the C\,{\sc iv}\ line is often found to be blueshifted with respect to
lower ionisation lines such as Mg\,{\sc ii}\ (e.g. Marziani et al. 1996), and
Richards et
al.~(2002) suggested that this may imply the C\,{\sc iv}\ emitting region is
outflowing from the AGN and hence not be fully virialised. However, our
C\,{\sc iv}\ line profiles do not resemble those in
composite spectra constructed by Richards et al. for QSOs with
blueshifted C\,{\sc iv}. The asymmetry we observe in the C\,{\sc iv}\ line could be
caused by a range of possible physical processes including emission
and/or absorption by non-virialised gas, however, a full discussion of
this is beyond the scope of this investigation.
On the
other hand, we note that the original C\,{\sc iv}\ line (top-right
panel Fig.~\ref{fig_line-fits}) shows no obvious asymmetry. The
asymmetry then becomes pronounced after the iron template has been
subtracted from the data. Therefore, it is possible that the asymmetry
we observe may not be caused by any physical process in the
QSOs, but by bad iron subtraction. Vestergaard \& Wilkes (2001) note,
when creating the template, that iron emission in the vicinity of the
C\,{\sc iv}\ line can be difficult to isolate. The iron
template is based on spectra of the Seyfert~I galaxy
I~Zwicky~1, which shows considerable Si\,{\sc ii} emission
bluewards of the C\,{\sc iv}\ line. The C\,{\sc iv}\ line itself is unusually weak,
making it
difficult to deblend the carbon, silicon and iron
emission. Vestergaard \& Wilkes (2001) find no emission directly redwards of
the C\,{\sc iv}\ line, and hence the iron template is asymmetric about this
line. It is therefore unclear whether the asymmetry of the C\,{\sc iv}\ line
after the iron emission has been subtracted is real. We find no
evidence of Si\,{\sc ii} emission in any of our composites, and it
may be that this iron template is simply not applicable in this case.
We thus took note of the effect of the iron subtraction on our
data. We repeated our analysis but instead of fitting an iron
template along with a continuum to the data, we simply used a linear
continuum fit between the approximate extremities of the C\,{\sc iv}\ line
(1500\,{\AA} and 1600\,\AA). We found this gave us FWHMs which were
consistently smaller by a factor $\sim1.3$, however, the symmetry of
the line remained intact in this procedure. In the analysis that
follows we only use data measured after we had corrected for iron
emission. Equation~\ref{equ_ves} was calibrated with line widths
measured after correcting for iron emission, and we find that the
template does accurately fit our composite spectra outside the immediate C\,{\sc iv}\
line region.
\subsection{Luminosity Measurements}
The absolute magnitudes quoted in this paper ($M_{b_{\rm J}}$)
come from photographic observations in the $b_{\rm J}$ band with the UK
Schmidt. These are corrected for extinction by Galactic dust
\cite{sfd98}, and $K$-corrected using the values provided by
Cristiani~\&~Vio~(1990). To then calculate continuum luminosities at
the wavelengths desired we
made use of the Sloan Digital Sky Survey (SDSS) QSO composite
spectrum \cite{van01}. We first calculated the mean redshift and $b_{\rm J}$
luminosity of the QSOs in our composite which contributed to the spectral
line we were analysing. Then the SDSS composite was redshifted
to $\overline{z}$, and normalised to the measured luminosity in the $b_{\rm J}$
band by convolving with the UKST $b_{\rm J}$ response function. Then it was
possible to read off values for $L_{1350}$ and $L_{3000}$.
The magnitudes listed in the 2QZ are generally accurate to $0.1-0.2$
magnitudes. Correcting for dust and K-correcting could introduce
significant uncertainty into the values for $M_{b_{\rm J}}$. However, once
averaged over all $\sim2000$ QSOs in a redshift bin we assume the
error on $\overline{M}_{b_{\rm J}}$ will be negligible. On the other hand, the
extrapolation to $L_{1350}$ and $L_{3000}$ could be a significant
source of error as we discuss below.
\subsection{Errors} \label{sec:err}
There are three possible sources of error to our calculations for
$M_{\rm BH}$. These are errors in the luminosity values, errors in the line
width measurements and intrinsic scatter associated with
equations~\ref{equ_m+j} and~\ref{equ_ves}.
Equations~\ref{equ_m+j} and~\ref{equ_ves} are quoted as having errors
of 0.33\,dex and 0.36\,dex respectively \cite{m+d04,vest06}, and as we
shall see these large scatters dominate the random error in
$M_{\rm BH}$. It is worth noting here that since the quoted errors are rms
values and because we are dealing with large
numbers of objects ($\sim2000$
QSOs per redshift bin), it could be argued that the errors
should be reduced by the corresponding factor
($\sim1/\sqrt{2000}$). However, the large error in the
virial mass estimators is to a large extent due to the limited number
of AGN with reliable mass
estimates from reverberation mapping. Since the calibrations are
based only on a few tens of objects, we do not believe it to be
prudent to reduce the errors in our calculations because of the
large number of objects in our dataset.
The high signal-to-noise ratio of the composite spectra means that very precise
line widths can be measured. We calculate errors on these widths
analytically from the covariance matrix of the
fitting parameters described in section~\ref{ssec_lw}, and find them to
be negligible (see table~\ref{tab:res}). However, the process by which we
measure line widths could introduce significant systematic errors to our
calculations. One source of uncertainty is the nature of the iron template we
used and how it was fit to our data. We note above that the asymmetry
we observe in the C\,{\sc iv}\ line may be a product of the iron template we
are using, and we cannot be sure that using this iron template doesn't
introduce errors into the line width measurements. We find that if we
simply fit a linear continuum around the C\,{\sc iv}\ line our results are
reduced by a factor of $\sim1.3$. We therefore gauge that any problems with the
iron template will have an effect no greater than this on our line
width measurements, or a factor of $\sim1.7$ in our virial
black hole masses.
Another source of uncertainty in our line width determinations
is narrow line emission. None of the spectral lines analysed show
clear signs of narrow line emission. Their profiles are smooth and do
not exhibit the inflection characteristic of narrow line emission
superimposed on a broad line. Indeed other investigations of the UV lines in
QSO spectra have found no evidence that they contain narrow line cores
\cite{wills93}. However, as stated above McLure \&
Dunlop only consider the broader component of the Mg\,{\sc ii}\ line when
calibrating equation~\ref{equ_m+j} while we measure the FWHM of the
whole line. In doing so we may
introduce systematics to our calculations. We consider
the possible effect of this by recording the width of the
broader component of the modelled Mg\,{\sc ii}\ fit. We find that line widths
measured this way are
$\sim1.5$ times larger than those measured for the whole line which
translates to roughly a factor of 2 in $M_{\rm BH}$.
Extrapolating luminosities in the manner described above can often be
a large source of error in calculations, in particular when performed
on single objects. In our case, however, we are calculating average
luminosities for the $\sim2000$ QSOs in each redshift bin. Thus
using the flux calibrated SDSS composite, itself constructed from
$>2000$ QSO spectra in the redshift range we are investigating, should
introduce only small errors into the luminosity
calculations. The SDSS median QSO composite has a continuum described
by $F_{\lambda}\propto\lambda^{\alpha}$ with $\alpha=-1.54$ for
$\lambda<5000$\,{\AA} and $\alpha=-0.42$ for
$\lambda>5000$\,{\AA}. Power laws with $-2<\alpha<-1.5$ are used
throughout the literature to make extrapolations similar to those
we make. Therefore we also evaluate the monochromatic luminosities
assuming these power laws and define the error on our luminosity
values to be half their difference.
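For example, under a pure power law $F_{\lambda}\propto\lambda^{\alpha}$ we have $\lambda F_{\lambda}\propto\lambda^{\alpha+1}$, so the extrapolation and the adopted half-difference error reduce to a few lines of Python (the 4500\,{\AA} reference point and luminosity below are purely illustrative):
\begin{verbatim}
def extrapolate(lamL_ref, lam_ref, lam_target, alpha):
    # F_lambda ~ lambda^alpha  =>  lambda*F_lambda ~ lambda^(alpha+1)
    return lamL_ref * (lam_target / lam_ref)**(alpha + 1.0)

# illustrative only: lambda L at a 4500 A reference point
lamL_ref, lam_ref = 10**37.5, 4500.0
vals = [extrapolate(lamL_ref, lam_ref, 3000.0, a)
        for a in (-1.5, -2.0)]
err = 0.5 * abs(vals[0] - vals[1])   # half their difference
\end{verbatim}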
We calculate the errors on the virial mass estimates
taking into account the scatter
in the virial mass
estimators as well as errors in our luminosity calculations and line
width measurements. However, the final error on the mass estimate is
completely dominated by the uncertainty in the mass estimator.
We do not attempt to account
for possible sources of uncertainty from poor iron subtraction or narrow
line emission in the tabulated errors.
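For reference, since equation~\ref{equ_m+j} expresses $\log M_{\rm BH}$ as a linear combination of $\log\lambda L_{3000}$ and $\log{\rm FWHM}$, the combined random error takes the simple form (assuming the three contributions are independent)
\[
\sigma^{2}_{\log M_{\rm BH}} = \sigma^{2}_{\rm cal}
+ \left(0.62\,\sigma_{\log\lambda L}\right)^{2}
+ \left(2\,\sigma_{\log{\rm FWHM}}\right)^{2},
\]
where $\sigma_{\rm cal}=0.33$\,dex is the intrinsic scatter of the estimator; the analogous expression for equation~\ref{equ_ves} has coefficient 0.53 and $\sigma_{\rm cal}=0.36$\,dex.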
\section{Dark Halo Mass}\label{sec:dhmass}
Croom et al. (2005) used measurements of the clustering of QSOs from
the 2QZ to infer the mass of dark matter haloes that they inhabit.
They divide the 2QZ sample into 10 redshift intervals (see table
\ref{tab:res}) each containing $\sim2000$ QSOs and measure the
two-point correlation function (accounting for redshift-space
effects) on $<20~h^{-1}~{\rm Mpc} $ scales. They then compare this to the evolution
of mass clustering in a WMAP/2dF cosmology \cite{wmap,2dfgrspk02} to
determine the bias of QSOs as a function of redshift. Note that Croom
et al. (2005) use a slightly different cosmology in their
calculations: $(\Omega_{\rm m},\Omega_{\Lambda})=(0.27,0.73)$ and
$H_{0}=73\,{\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$, although we do not
expect these slight differences to have a significant affect on our
calculations.
Finally they
use the formalism developed by Mo \& White (1996) to relate bias and
dark matter halo mass, specifically using the relation for ellipsoidal
collapse given by Sheth, Mo \& Tormen (2001). This results in the
finding that 2QZ QSO hosts have approximately the same dark matter
halo mass as a function of redshift, with
$M_{\rm DH}=(3.0\pm1.6)\times10^{12}h^{-1}\,{\rm \rm M_{\odot}}$. Croom et al. (2005) then
use a range of partly theoretical relationships between $M_{\rm BH}$ and
$M_{\rm DH}$ to derive black hole masses for these QSOs (see Eqs 22-26 in
Croom et al 2005). In all cases the black hole masses are seen to
decrease toward lower redshift. This so-called {\it cosmic downsizing}
(see also Barger et al. 2005; Heckman et al. 2004) appears to be
driving QSO luminosity evolution at $z<2.5$. However, in order to
demonstrate this more conclusively, we have undertaken the analysis in
the present paper to determine more directly the black hole masses of
2QZ QSOs (building on the previous work of Corbett et al. 2003).
\section{Results} \label{sec:res}
Table~\ref{tab:res} displays the results of our
analysis and in Fig.\ref{fig_z-bhm}a we show the virial black hole
mass estimates as a function of redshift. A clear `step' is visible
where we switch from Mg\,{\sc ii}\ estimates to C\,{\sc iv}. In addition, for one
redshift bin we have both Mg\,{\sc ii}\ and C\,{\sc iv}\ present in the composite
spectra, and for this bin the calculated values of $M_{\rm BH}$ differ
by just over $1\sigma$. This disagreement is only marginally
significant but does raise questions over the calibrations of
equations~\ref{equ_m+j} and \ref{equ_ves}. However, since these two
relations are not inter-calibrated we should not be surprised
that we find some discrepancy. We also note that the
magnitude of this offset is consistent with the systematic errors
discussed in section~\ref{sec:err}.
Hereafter we use the weighted mean of the two mass
estimates for the redshift bin $z\in(1.50,1.66)$:
Log$(\frac{M_{\rm BH}}{\rm M_{\odot}})=8.7\pm0.35$.
We note here that the H$\beta$\ line is present in the composite spectrum
for redshift bin $z\in(0.30,0.68)$. We could therefore obtain a virial
black hole mass estimate for this bin from the H$\beta$\ line following a
calibration by
e.g. Vestergaard \& Peterson (2006). Comparing this with
the Mg\,{\sc ii}\ mass estimate for the same redshift bin could help tie
the calibrations together, and potentially clarify why we observe the
difference between the Mg\,{\sc ii}\ and C\,{\sc iv}\ estimators. However,
we found that procedures for measuring the width of the H$\beta$\ line
were not well defined in the literature. In particular other authors
have found evidence for a `very broad component' to the H$\beta$\ line
(e.g. Marziani 2003) which could have a large effect on our
measurements. In addition, because of the nature of composite spectra
the same set of individual QSO spectra does not contribute to the
H$\beta$\ and Mg\,{\sc ii}\ lines in a single composite (note this is equally true
for the C\,{\sc iv}\ and Mg\,{\sc ii}\ lines in composite $z\in(1.50,1.66)$). Due to
the above
considerations, and because we would only obtain a single point to compare
with our Mg\,{\sc ii}\ measurements, we do not make an estimate of the virial
black hole mass for the first redshift bin from the H$\beta$\ line.
It is interesting to note that the line widths in table~\ref{tab:res}
hardly vary between the composites. Given that
this is the case, any variation observed in $M_{\rm BH}$ must be due to the
luminosity. Thus the flux limits of the 2QZ $(18.25<b_{\rm J}<20.85)$ will
effectively impose limits on the masses we have calculated.
We can estimate these limits by calculating
the black hole mass as a function of redshift, given a source with
an average line width and an apparent magnitude at the upper and
lower limits of the survey.
We show these limits in Fig.\ref{fig_z-bhm}a and it appears that these
confine our average black hole masses to only a very small
range of possible values at any given redshift.
It is worth noting that the upper flux limit of the 2QZ does not
have a tremendous effect on our calculations. Croom et al. (2004)
extended the 2QZ to a $b_{\rm J}$ of 16 in the 6dF QSO Redshift survey (6QZ)
and found an extra $\sim320$ QSOs in roughly half the area of sky
as surveyed in the 2QZ. Considering the comparatively small numbers of these
bright QSOs, we do not believe they could contribute significantly to
our composite spectra. The lower flux limit, however, does clearly
affect our results. Thus we do not present Fig.\ref{fig_z-bhm}a as evidence
for evolution in black hole mass for the global QSO
population. Instead this shows average virial black hole mass
estimates for QSOs in the 2QZ between redshift 2.5 and 0.5.
\begin{table*}
\begin{center}
\caption{Each line represents measurements taken from one spectral
line (Note that the composite spectra for redshift bin
$z\in(1.50,1.66)$ had both the C\,{\sc iv}\ and Mg\,{\sc ii}\ lines visible).
For each line we give the average redshift and absolute $b_{\rm J}$
magnitude of QSOs contributing to the composite spectra at that line,
along with the absolute magnitude of the break in the QSO luminosity
function at that redshift $M_{b_{\rm J}}^{*}$ (Assuming the polynomial evolution
model of Croom et al. 2004). Note that the values for $\overline{z}$
and $\overline{M}_{b_{\rm J}}$ vary at each point in a single composite
spectrum because at each point there will be a different group of
spectra contributing to the composite. We also present the measured FWHMs for
the lines and the monochromatic luminosities near them
(3000\,{\AA} and 1350\,{\AA} for Mg\,{\sc ii}\ and C\,{\sc iv}\
respectively). We give the black hole mass calculated from the line
and derived Eddington ratios $(L/L_{\rm Edd})$ and finally the
dark halo masses calculated by Croom et al.~(2005).}
\label{tab:res}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{cccccccccc}
\hline \hline
$z$ interval & $\overline{z}$ & $\overline{M}_{b_{\rm J}}$ &
$M_{b_{\rm J}}^{*}$ & Spectral Line & FWHM ({km\,s$^{-1}$}) & Log($\frac{\lambda L}{\rm W}$) &
Log($\frac{M_{\rm BH}}{\rm M_{\odot}}$) & Log($L/L_{\rm Edd}$) & $\frac{M_{\rm DH}}{10^{12}\,\rm M_{\odot}}$ \\
\hline
0.30,0.68 & 0.556 & --22.28 & --23.30 & Mg\,{\sc ii} & $3546\pm19$ & $37.379\pm0.001$ & $7.8\pm0.33$ & --$0.71\pm0.33$ & $1.15^{+2.18}_{-0.94}$ \\
0.68,0.92 & 0.803 & --23.26 & --23.91 & Mg\,{\sc ii} & $3875\pm14$ & $37.747\pm0.017$ & $8.1\pm0.33$ & --$0.64\pm0.33$ & $2.94^{+3.07}_{-1.83}$ \\
0.92,1.13 & 1.028 & --23.85 & --24.39 & Mg\,{\sc ii} & $3878\pm15$ & $37.977\pm0.030$ & $8.3\pm0.33$ & --$0.56\pm0.33$ & $3.25^{+3.14}_{-1.93}$ \\
1.13,1.32 & 1.224 & --24.26 & --24.75 & Mg\,{\sc ii} & $3783\pm16$ & $38.127\pm0.040$ & $8.4\pm0.33$ & --$0.48\pm0.33$ & $8.11^{+4.08}_{-3.11}$ \\
1.32,1.50 & 1.414 & --24.57 & --25.04 & Mg\,{\sc ii} & $4104\pm18$ & $38.229\pm0.049$ & $8.5\pm0.33$ & --$0.50\pm0.33$ & $5.20^{+3.15}_{-2.28}$ \\
1.50,1.66 & 1.552 & --24.75 & --25.22 & Mg\,{\sc ii} & $3889\pm31$ & $38.272\pm0.055$ & $8.5\pm0.33$ & --$0.41\pm0.33$ & $2.89^{+2.27}_{-1.51}$ \\
1.50,1.66 & 1.585 & --24.80 & --25.26 & C\,{\sc iv} & $5438\pm58$ & $38.394\pm0.031$ & $8.9\pm0.36$ & --$0.79\pm0.36$ & $2.89^{+2.27}_{-1.51}$ \\
1.66,1.83 & 1.746 & --25.03 & --25.42 & C\,{\sc iv} & $5444\pm44$ & $38.472\pm0.024$ & $8.9\pm0.36$ & --$0.75\pm0.36$ & $1.62^{+1.75}_{-1.01}$ \\
1.83,2.02 & 1.919 & --25.25 & --25.56 & C\,{\sc iv} & $5687\pm40$ & $38.564\pm0.017$ & $9.0\pm0.36$ & --$0.75\pm0.36$ & $4.30^{+2.61}_{-1.89}$ \\
2.02,2.25 & 2.132 & --25.46 & --25.67 & C\,{\sc iv} & $5629\pm39$ & $38.609\pm0.010$ & $9.0\pm0.36$ & --$0.69\pm0.36$ & $6.28^{+3.10}_{-2.37}$ \\
2.25,2.90 & 2.445 & --25.83 & --25.71 & C\,{\sc iv} & $5707\pm39$ & $38.763\pm0.001$ & $9.1\pm0.36$ & --$0.64\pm0.36$ & $6.73^{+3.77}_{-2.80}$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\centering
\centerline{\psfig{file=fig_1.ps,width=8.0cm}}
\caption{(\emph{a}) The trend in virial $M_{\rm BH}$ estimates with
redshift. The
difference between the Mg\,{\sc ii} (triangles) and C\,{\sc iv}
(squares) results is well illustrated here. We also show
approximate limits on our black hole mass estimates as imposed by
the flux limits of the 2QZ assuming an average line width (dotted lines).
In addition $M_{\rm BH}(M_{\rm DH})$ estimates calculated assuming various
dark matter halo density profiles with no
evolution in the $M_{\rm BH}-M_{\rm DH}$ relation (solid lines) and evolution
$\propto(1+z)^{2.5}$ (dashed lines) are shown. For each model the
calculations assuming an
isothermal density profile give the lowest estimates for $M_{\rm
BH}$, the intermediate estimates are given by an NFW profile, and the
Seljak model gives the highest. Note that we do not present the
errors on these points for clarity.
(\emph{b})
We show our values for $M_{\rm BH}/M_{\rm DH}^{4/3}$ (squares) compared with the
predictions of Robertson~(2005) (dashed line). We also show our best
fit to the data (dotted line).}
\label{fig_z-bhm}
\end{figure}
\subsection{The $M_{\rm BH}-M_{\rm DH}$ relation}
Correlations between black hole mass and galactic properties locally imply
that massive black holes grow in parallel with the galaxies (and
presumably dark matter halos) in which they reside. Thus we expect
$M_{\rm BH}$ and $M_{\rm DH}$ to be related. Ferrarese~(2002)
proposed three such relations based on the local $M_{\rm BH}-\sigma$ relation
and different models for the dark matter halo density profile. In brief
the first assumes an isothermal dark matter profile, the second
assumes an NFW profile \cite{nfw97}, and the third assumes a profile
based on the weak lensing results of Seljak~(2002). In each of these
models the dark halo mass is calculated from an estimate for the
halo virial velocity $v_{\rm vir}$ which
cannot be directly measured for dark matter
halos. Hence Ferrarese extrapolates from the galaxy's circular velocity,
$v_{\rm c}$, to $v_{\rm vir}$ using the above density profiles.
This is a large extrapolation and the source
of the offset between the models. We combine the models
with two assumptions concerning the evolution of this relation (see
Wyithe \& Loeb~2005; WL05), namely that $M_{\rm BH}-M_{\rm DH}\propto(1+z)^{2.5}$ (dashed
lines in Fig.\ref{fig_z-bhm}a) and that $M_{\rm BH}-M_{\rm DH}$ is constant with $z$ (solid
lines in Fig.\ref{fig_z-bhm}a).
Table~\ref{tab_bh-models} summarises
the results of $\chi^{2}$ analysis comparing the black hole
masses derived from the dark halo mass, $M_{\rm BH}(M_{\rm DH})$, to our virial
black hole masses. That is, a comparison between our data points and
the solid/dashed lines in Fig.\ref{fig_z-bhm}a allowing for the
errors on the estimate of $M_{\rm BH}(M_{\rm DH})$ (which, for clarity, are not
shown in Fig. \ref{fig_z-bhm}a). The comparison was done in log-space
and (for non-symmetric errors) took the error on $M_{\rm BH}(M_{\rm DH})$ in the
direction of the virial $M_{\rm BH}$ estimates.
The errors are not normally distributed, and hence our results here
are a guide rather than being statistically robust. However, it
appears that the `$M_{\rm BH}-M_{\rm DH}$ constant' model with an isothermal
profile is rejected by the data, while the others are reasonably
acceptable (i.e. cannot be rejected at the 85\% level).
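A minimal sketch of this log-space $\chi^{2}$, with the asymmetric
error taken on the side facing the virial estimate (our reading of the
procedure, not the authors' code):
\begin{verbatim}
import numpy as np

def chi2_asymmetric(log_virial, log_model, err_lo, err_hi):
    """Log-space chi^2; err_lo/err_hi are the downward/upward
    1-sigma errors on log_model, and the error on the side facing
    the virial estimate is the one used."""
    err = np.where(log_virial >= log_model, err_hi, err_lo)
    return float(np.sum(((log_virial - log_model) / err) ** 2))

print(chi2_asymmetric(np.array([8.5]), np.array([8.2]),
                      np.array([0.2]), np.array([0.3])))   # -> 1.0
\end{verbatim}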
\begin{table}
\begin{center}
\caption{Comparisons of $\chi^{2}$ statistics for the different models
shown in Fig.~\ref{fig_z-bhm}a. Note that since errors on $M_{\rm DH}$
are not normally distributed this $\chi^{2}$ analysis is not
statistically robust. We present it here as a guide to how well each
model matches our data.}
\label{tab_bh-models}
\begin{tabular}{lccc}
\hline \hline
Model & Assumed dark matter & $\chi^{2}$ & P($>\chi^{2}$) \\
& halo density profile & & \\
\hline
$M_{\rm BH}-M_{\rm DH}$ & Isothermal & 24.1 & 0.01 \\
const. & NFW & 14.1 & 0.17 \\
& Seljak & 5.0 & 0.89 \\
$M_{\rm BH}-M_{\rm DH}$ & Isothermal & 5.4 & 0.86 \\
$\propto(1+z)^{2.5}$ & NFW & 6.7 & 0.75 \\
& Seljak & 13.4 & 0.20 \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
Recent hydrodynamical simulations of galaxy mergers have shown that
the local $M_{\rm BH}-\sigma$ relation, along with a startling array of
other QSO features, can be reproduced under the
condition that QSO energy feedback self-regulates the growth of
massive black holes (e.g. Di Matteo et al. 2005; Robertson et al. 2005;
Hopkins et al. 2005). These simulations predict an $M_{\rm BH}-\sigma$
relation of the form
\begin{equation}
{\rm log}\left(\frac{M_{\rm BH}}{\rm \rm M_{\odot}}\right)\approx 8.1 + 4.0\,{\rm
log}\left(\frac{\sigma}{200\,{\rm km\,s^{-1}}}\right) - 0.19\,{\rm
log}(1+z).
\label{eq:robertson}
\end{equation}
We take $\sigma \approx V_{\rm vir}$ (see Fig.~3 of Di Matteo et
al. 2005) where $V_{\rm vir}$ is the virial velocity in the
simulations and is directly related to the total galaxy mass by
$M_{\rm DH} \approx M_{\rm vir}=V_{\rm vir}^{3}/[10\,G\,H(z)]$. Assuming the
cosmological parameters specified in section~\ref{sec:intro} this gives us
a redshift dependent relationship between $M_{\rm BH}$ and $M_{\rm DH}$
characterised by
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
{\rm log}\left(\frac{M_{\rm BH}}{\rm \rm M_{\odot}}\right) & \approx & - 8.5 +
\frac{4}{3}{\rm log}\left(\frac{M_{\rm DH}}{\rm \rm M_{\odot}}\right) \nonumber\\
& & \hspace{0.5cm} - {\rm
log}\left(\frac{(0.7+0.3(1+z)^{3})^{\frac{2}{3}}}{(1+z)^{0.19}}\right).
\label{eq:rob2}
\end{eqnarray}}
We show this model's predictions
for $M_{\rm BH} / M_{\rm DH}^{4/3}$ as a function of redshift in
Fig.~\ref{fig_z-bhm}b along with our calculated values.
We find good agreement between the simulation predictions
and our values. Our best fit to the data is also shown which follows
$M_{\rm BH} / M_{\rm DH}^{4/3}\propto(1+z)^{2.5\pm1.8}$.
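For reference, equation~\ref{eq:rob2} can be evaluated directly; the
sketch below assumes $(\Omega_{\rm m},\Omega_{\Lambda})=(0.3,0.7)$, so
that $0.7+0.3(1+z)^{3}$ plays the role of $(H(z)/H_{0})^{2}$.
\begin{verbatim}
import numpy as np

def log_mbh_from_mdh(log_mdh, z):
    """Evaluate the redshift-dependent relation of Eq. (eq:rob2)."""
    ez2 = 0.7 + 0.3 * (1 + z) ** 3        # (H(z)/H0)^2, flat LCDM
    return (-8.5 + (4.0 / 3.0) * log_mdh
            - np.log10(ez2 ** (2.0 / 3.0) / (1 + z) ** 0.19))

print(log_mbh_from_mdh(12.5, 1.5))        # ~7.75
\end{verbatim}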
\begin{figure}
\centering
\centerline{\psfig{file=fig_2.ps,width=8.0cm}}
\caption{Shows our values for $M_{\rm BH}$ plotted against $M_{\rm DH}$ found by
Croom et al.~(2005). The size of the points are scaled by redshift (larger
points for higher $z$). The $M_{\rm BH}-M_{\rm DH}$ relation of
equation~\ref{eq:rob2} is also shown as the shaded region in
the plot, with the relation at $z=0$ defining the top of the region
and $z=2.5$ defining the bottom. The $z=0$ $M_{\rm BH}-M_{\rm DH}$ relations of
Ferrarese~(2002) are also shown (heavy lines) for the isothermal
(short dash), NFW (long dash) and Seljak (solid) density profiles.}
\label{fig_bhm-dhm}
\end{figure}
Fig.~\ref{fig_bhm-dhm} shows the relation between $M_{\rm BH}$ and
$M_{\rm DH}$. We observe a weak correlation between these values
significant at only the 85 per cent level (applying a Spearman rank
test). The weakness of this correlation is due in part to the limited
dynamic range of our averaged mass estimates.
We also plot
Eq.~\ref{eq:rob2} between $z=0.5$ and 2.5 as the
shaded region in the Figure, and the $z=0$ relations
of Ferrarese~(2002). We see that the relation of Robertson et
al. is in excellent agreement with our data. It is worth noting that
the normalisation of the simulated curve (in both
Fig.~\ref{fig_z-bhm}b and~\ref{fig_bhm-dhm}) comes only from the QSO
luminosity function. It is therefore encouraging that the simulation's
predictions for the $M_{\rm BH}-M_{\rm DH}$ relation match our measurements so
well. The Ferrarese relation
assuming a Seljak profile is also in good agreement with the data.
However, the Ferrarese relations assuming other dark matter profiles
appear to be in disagreement (particularly that with an isothermal
profile, as noted above).
Fitting a function of the form $M_{\rm BH}=AM_{\rm DH}^{1.82}$ to the data in
Fig.~\ref{fig_bhm-dhm}, we estimate the zero-point of the $M_{\rm BH}-M_{\rm DH}$
relation in our redshift range to be $M_{\rm BH}=10^{8.4\pm0.2}\,{\rm \rm M_{\odot}}$ at
$M_{\rm DH}=10^{12.5}\,{\rm \rm M_{\odot}}$. This estimate was obtained through
minimising $\chi^{2}(A)$ for the fitted function, and the confidence
interval represents
$\Delta\chi^{2}=1$ limits. The exponent in the fitted model was chosen
as that from the Ferrarese relation assuming a Seljak profile. As a check
we repeated the analysis with the most extreme exponents from the
various other models discussed in this paper and found that all resulted in
a zero-point within the confidence interval quoted.
By comparison the Ferrarese relations at $z=0$
give a zero-point of $M_{\rm BH}=10^{7.3}$, $10^{7.8}$ and
$10^{8.7}\,{\rm \rm M_{\odot}}$
at the same dark matter halo mass for isothermal, NFW and Seljak
profiles respectively. In agreement with the above,
the relation using the Seljak profile is the best match to the
data. Given the uncertainties in the evolution of the $M_{\rm BH}-M_{\rm DH}$
relation, this is not direct evidence for or against a specific dark
matter halo profile. We also note that our
zero-point for the $M_{\rm BH}-M_{\rm DH}$ relation is consistent with that found for a
small sample of QSOs (but with larger dynamic range) by Adelberger \&
Steidel (2005).
\subsection{Bias in the measured $M_{\rm BH}-M_{\rm DH}$ relation}
In addition to the random errors associated with the values of mean
black hole mass and dark halo mass that we deduce, we should also
consider whether there are any systematic biases that may arise in our
analysis of the $M_{\rm BH}-M_{\rm DH}$ relation. Biases due to the uncertainties
in the estimates of $M_{\rm BH}$ are discussed above. However, there is one
further issue that is particularly important if we wish to compare
with relations such as those of Ferrarese (2002) at low redshift.
Ferrarese draws her objects from a sample of local galaxies with
measured black hole masses. These are fairly evenly distributed over
a range of bulge (and hence inferred dark halo) masses. On the other
hand, our QSO sample is drawn from the population of active galaxies,
selected by luminosity. This tends to produce a Malmquist-type bias
towards larger $M_{\rm BH}$. If we consider that, at a given redshift,
objects will only be detected above a given $M_{\rm BH}$ (neglecting for
the moment variation in Eddington ratio), then the
mass function of dark matter halos (with more halos at low mass) and
any scatter in $M_{\rm BH}$ for a given $M_{\rm DH}$ will cause an excess of
objects above the fiducial $M_{\rm BH}-M_{\rm DH}$ relation (i.e. with greater
$M_{\rm BH}$). This leads to a bias of the observed mean value of $M_{\rm BH}$
above the true $M_{\rm BH}-M_{\rm DH}$ relation. Allowing there to also be
scatter in the $L/L_{\rm Edd}$ relation is
equivalent to moving the effective $M_{\rm BH}$ limit, and only moves the
observed zero-point of the $M_{\rm BH}-M_{\rm DH}$ relation parallel to the true
relation.
The size of the Malmquist-type bias depends on both the steepness of the
mass function and the amount of intrinsic scatter in the relationship
between black hole and dark halo mass. Both these quantities are
rather uncertain, but we can estimate the possible size of the effect
in the following way. First, we assume the same mass function of dark
halos that was assumed when deducing the clustering bias that led to
the inferred mean dark halo mass: namely a Sheth et al. (2001) mass
function with the cosmological parameters assumed in this paper.
Then, we assume that the black hole mass function may be generated from
the dark halo mass function by applying the Ferrarese relation, either
with no evolution or with WL05 evolution in the
mass relation, but with some scatter in that relation. Constructing a
Monte-Carlo simulation of the $M_{\rm BH}-M_{\rm DH}$ relation as outlined above,
and including a cutoff in black hole mass corresponding to the flux
limit of the 2QZ we can find the magnitude of this effect on our
results. Note that a cut in $M_{\rm BH}$ will also
cause us to overestimate the gradient of the relation. However, due
primarily to the small dynamic range in Fig.\ref{fig_bhm-dhm}, we
make no attempt to estimate the slope of the $M_{\rm BH}-M_{\rm DH}$ relation
in this work. We are concerned only with the zero-point of the
relation, which is also biased by the cut.
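The following minimal Monte-Carlo sketch (with toy numbers, not the
simulation used here) reproduces the qualitative effect: a steep halo
mass function plus lognormal scatter in $M_{\rm BH}$ at fixed $M_{\rm DH}$,
truncated by an effective $M_{\rm BH}$ limit, biases the mean observed
$M_{\rm BH}$ upwards.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
log_mdh = rng.uniform(11.0, 15.0, 500_000)    # candidate halo masses
alpha = -1.8                                  # toy mass-function slope
keep = rng.random(log_mdh.size) < 10 ** ((alpha + 1) * (log_mdh - 11.0))
log_mdh = log_mdh[keep]                       # dN/dlogM ~ M^(alpha+1)

def fiducial(lm):                             # toy relation, slope 4/3
    return 8.4 + (4.0 / 3.0) * (lm - 12.5)

for scatter in (np.log10(2), np.log10(4), 1.0):   # x2, x4, x10
    log_mbh = fiducial(log_mdh) + rng.normal(0.0, scatter, log_mdh.size)
    sel = log_mbh > 8.0                       # effective M_BH cut
    bias = (log_mbh[sel] - fiducial(log_mdh[sel])).mean()
    print(f"scatter {scatter:.2f} dex -> bias +{bias:.2f} dex")
\end{verbatim}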
We test three values of scatter in the $M_{\rm BH}-M_{\rm DH}$ relation. The
minimum scatter we adopt is the scatter between
black hole mass and bulge velocity dispersion or luminosity
inferred by Marconi et al (2004), namely a factor of 2 in black hole
mass. However, there is likely to be substantial
additional scatter in the relation between bulge mass and dark halo
mass: from Ferrarese (2002) we estimate the total scatter could be a
factor 4 in black hole mass although without direct measurements of
dark halo mass this is difficult to assess reliably. Table
\ref{tab:bias} shows the mean black hole masses after correcting our
measured values for this Malmquist-type bias in each redshift bin for
the two different evolutionary models and three different values of the
scatter in the $M_{\rm BH}-M_{\rm DH}$ relation, namely $\times2$, $\times4$ and
$\times10$. Although we estimate above that the scatter should be
between a factor of 2 and 4, the uncertainties are great enough that
it is worth considering the effect of a larger scatter
(i.e. $\times10$).
\begin{table}
\caption{
Estimates of Malmquist bias in the mean $M_{\rm BH}$ measured relative to
the true $M_{\rm BH}-M_{\rm DH}$ relation. For each redshift bin at
$\overline{z}$, we give the measured $M_{\rm BH}$ (column 2) and the
corrected values assuming the three values of intrinsic scatter about
the mass relation ($\times2$, $\times4$ and $\times10$), for each of
the two assumed values for evolution (columns 3 to 8). $M_{\rm BH}$ is
given as log$_{10}(M)$ in solar units. The final row gives the
zero-point calculated as described in the text. The error on these
zero-points are $\pm0.22$ dex.}
\label{tab:bias}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{cccccccc}
\hline \hline
& & \multicolumn{3}{c}{no evolution} & \multicolumn{3}{c}{WL05 evolution} \\
$\overline{z}$ & zero & $\times2$ & $\times4$ & $\times10$ &
$\times2$ & $\times4$ & $\times10$\\
& scatter & scatter & scatter & scatter & scatter & scatter & scatter \\
\hline
0.556 & 7.8 & 7.75 & 7.61 & 7.26 & 7.74 & 7.59 & 7.25 \\
0.803 & 8.1 & 8.04 & 7.90 & 7.53 & 8.05 & 7.90 & 7.56 \\
1.028 & 8.3 & 8.24 & 8.08 & 7.72 & 8.25 & 8.10 & 7.74 \\
1.224 & 8.4 & 8.35 & 8.19 & 7.81 & 8.35 & 8.18 & 7.84 \\
1.414 & 8.5 & 8.44 & 8.26 & 7.88 & 8.44 & 8.27 & 7.91 \\
1.552 & 8.5 & 8.44 & 8.26 & 7.88 & 8.44 & 8.28 & 7.91 \\
1.585 & 8.9 & 8.83 & 8.64 & 8.24 & 8.84 & 8.67 & 8.29 \\
1.746 & 8.9 & 8.83 & 8.64 & 8.23 & 8.85 & 8.68 & 8.30 \\
1.919 & 9.0 & 8.94 & 8.74 & 8.32 & 8.95 & 8.77 & 8.39 \\
2.132 & 9.0 & 8.94 & 8.72 & 8.28 & 8.94 & 8.75 & 8.35 \\
2.445 & 9.1 & 9.01 & 8.77 & 8.32 & 9.02 & 8.84 & 8.44 \\
\hline
z-p & 8.4 & 8.33 & 8.14 & 7.74 & 8.33 & 8.16 & 7.79 \\
\hline \hline
\end{tabular}
\end{table}
The biases for the two evolution assumptions are similar, with the WL05
evolution producing slightly less bias at high masses and high redshift,
as in this case the associated dark halo mass is lower for a given
black hole mass (and so the mass function is flatter).
For the minimum scatter assumption, the bias is $\sim 0.1$~dex.
For $\times4$ scatter, however, the bias increases to
$\sim0.25$~dex or greater. If we now consider the
zero-point for the $M_{\rm BH}-M_{\rm DH}$ relation derived above, we see that any
Malmquist bias will push the true zero-point to lower $M_{\rm BH}$.
We note that the true scatter in the $M_{\rm BH}-M_{\rm DH}$ is very poorly
constrained. If it were to be considerably higher than the above
values, then the Malmquist bias would also be higher. A scatter of a
factor of 10 in $M_{\rm BH}$ for a given $M_{\rm DH}$ will produce a bias of
$\sim0.6$~dex in the mean $M_{\rm BH}$ with respect to the true $M_{\rm BH}-M_{\rm DH}$
relation. This would make our data inconsistent with the
Robertson et al. (2005) model (Eq. \ref{eq:rob2}) and the
Ferrarese (2002) model assuming a Seljak (2002) dark matter halo density profile,
while giving better agreement with the NFW profile model in
particular. Hence a detailed comparison of the high-redshift relation
deduced here and the relation at lower redshifts found by Ferrarese
(2002) requires a better understanding of the amount of Malmquist bias
in the QSO measurements, and in particular of the amount of scatter on
the $M_{\rm BH}-M_{\rm DH}$ relation.
\subsection{The evolution in $M_{\rm BH}$ and $L/L_{\rm Edd}$}
Fig.~\ref{fig_z-bhm}a displays evidence for a trend in estimated black hole
mass with $z$ as we observe $M_{\rm BH}$ for our sample to drop by an
order of magnitude between redshift 2.5 and 0.5. The correlation
between $M_{\rm BH}$ and $z$ is significant at the 99\% level (via a
Spearman rank test; i.e. the probability that the null
hypothesis of no correlation is correct is $<1$\%) and the evolution
in $M_{\rm BH}$ is best characterised by $M_{\rm BH}\propto(1+z)^{3.9\pm 1.1}$.
However, due to the flux limits of the 2QZ we cannot use this to
infer black hole mass evolution in the global QSO population (see
section~\ref{sec:res}).
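The quoted power law corresponds to a straight-line fit in log space; a
minimal sketch using the values of Table~\ref{tab:res} (with the
weighted-mean mass for the overlap bin, placed at $z\simeq1.57$)
recovers it.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

z = np.array([0.556, 0.803, 1.028, 1.224, 1.414,
              1.57, 1.746, 1.919, 2.132, 2.445])
log_mbh = np.array([7.8, 8.1, 8.3, 8.4, 8.5, 8.7, 8.9, 9.0, 9.0, 9.1])
err = np.array([0.33] * 5 + [0.35] + [0.36] * 4)

model = lambda z, norm, gamma: norm + gamma * np.log10(1 + z)
popt, pcov = curve_fit(model, z, log_mbh, sigma=err, absolute_sigma=True)
print(f"gamma = {popt[1]:.1f} +/- {np.sqrt(pcov[1, 1]):.1f}")
# -> gamma ~ 3.9 +/- 1.1, as quoted in the text
\end{verbatim}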
In our sample the flux limit of the 2QZ and the luminosity evolution of
QSOs conspire to put $L^{*}$ at a similar apparent magnitude at every
redshift we sample. The mean luminosity of our sample (i.e. the third
column of Table \ref{tab:res}) scales as $(1+z)^{4}$ while
$L^{*}$ scales as $(1+z)^{3}$ (Table \ref{tab:res} lists the break in
the QSO luminosity function, $M_{b_{\rm J}}^{*}$, at the mean redshift of
each bin using the polynomial form of Croom et al. 2004), so
that the range of differences between $L$ and $L^{*}$ for our sample
is equivalent to 1 magnitude. We also note that the space density of
the QSOs in our redshift bins changes by only a factor of 2.7
\cite{2qz14} over our redshift range. However, the strong luminosity
dependence of the virial black hole mass
estimators, combined with this evolution in $L^{*}$, make untangling
luminosity and mass evolution difficult. Indeed the evolution we
observe in BH
mass is entirely due to the luminosity component of the virial mass
estimator as the velocity widths show no significant trend with
redshift. In principle QSOs in our sample could have shown evolution
in their emission line FWHM to alter the evolution of $M_{\rm BH}$, but
this is not seen. The best approach to investigate BH mass evolution
in the global QSO population would be to construct a sample with larger dynamic
range in magnitude \cite{rich05}. This would allow sources of the same
luminosity to be compared over a range of redshifts.
We therefore restrict our discussion of evolution to $L^{*}$
QSOs. Since the 2QZ samples a range of luminosities around $L^{*}$ at
all redshifts, we can create a subsample defined by a luminosity
interval around $L^{*}$ which is not affected by the flux limits of
the 2QZ. The range in magnitude of this sample is defined at the
bright end by the absolute magnitude of a QSO at low redshift with
an apparent magnitude of $b_{\rm J}=18.25$, and at the faint end
by the absolute magnitude of a source at high redshift with $b_{\rm J}=20.85$.
Note that at $z=0.3$ (the lowest redshift in our sample)
$M_{b_{\rm J}}^{*}=-22.59$ (calculated using the polynomial evolution model of
Croom et al. 2004). This corresponds to an
apparent magnitude of $b_{\rm J}=18.15$, brighter than the flux
limit of the 2QZ. Hence we define the boundaries of this sample by
the absolute magnitudes corresponding to $b_{\rm J}=18.25$ at $z=0.556$ and
$b_{\rm J}=20.85$ at $z=2.445$, i.e. at the mean redshift of the end
bins. Thus the data for the end bins are still slightly affected
by the magnitude limits of the 2QZ. This new sample is then described,
at all redshifts, as all QSOs with
\begin{equation}
-0.62 < M_{b_{\rm J}}-M_{b_{\rm J}}^{*}(z) < 0.75.
\end{equation}
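A minimal sketch of this selection, assuming a user-supplied break
magnitude $M_{b_{\rm J}}^{*}(z)$ (the Croom et al. 2004 polynomial is
not reproduced here; the linear stand-in below is a toy):
\begin{verbatim}
import numpy as np

def in_lstar_window(M_bj, z, Mstar_bj):
    """Keep QSOs with -0.62 < M_bJ - M*_bJ(z) < 0.75."""
    d = M_bj - Mstar_bj(z)
    return (d > -0.62) & (d < 0.75)

# toy linear stand-in for the break-magnitude evolution:
Mstar_toy = lambda z: -22.59 - 1.28 * (z - 0.3)
print(in_lstar_window(np.array([-24.0, -26.0]), 1.0, Mstar_toy))
# -> [ True False]
\end{verbatim}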
We repeat the analysis described in this paper on this new sample to
investigate the evolution of black hole mass in typical $L^{*}$
QSOs. Composite spectra were made for the same redshift intervals, and
virial black hole masses were estimated from
these. Table~\ref{tab:res2} shows a summary of these results and they
are plotted in Fig.\ref{fig_z-bhm_Lstar}a. We
find $(1+z)^{3.3\pm1.1}$ evolution in $M_{\rm BH}$, less
pronounced than in the whole sample although still marginally significant.
This shows that QSO samples at lower redshift are increasingly dominated
by lower mass BH, but as these are also lower-luminosity QSOs this cannot
directly be interpreted as evidence for anti-hierarchical ``downsizing''.
\begin{table}
\begin{center}
\caption{Summary of analysis on our sample of QSOs defined by a
constant magnitude interval around $M_{b_{\rm J}}^{*}$. Column~3 shows the
difference between the average magnitude of the QSOs in that bin and
$M_{b_{\rm J}}^{*}$. Note that for all but the final redshift bin this is
nearly constant; the final redshift bin (and the first) is affected
by the magnitude limits of the 2QZ.}
\label{tab:res2}
\begin{tabular}{ccccc}
\hline \hline
$\overline{z}$ & $\overline{M}_{b_{\rm J}}$ & $\overline{M}_{b_{\rm
J}} - M_{b_{\rm J}}^{*}$ & Log$(\frac{M_{\rm BH}}{\rm M_{\odot}})$ & Log$(L/L_{\rm Edd})$ \\
\hline
0.568 & -23.09 & 0.26 & $8.0\pm0.33$ & $-0.6\pm0.33$ \\
0.807 & -23.68 & 0.26 & $8.2\pm0.33$ & $-0.6\pm0.33$ \\
1.030 & -24.17 & 0.25 & $8.4\pm0.33$ & $-0.5\pm0.33$ \\
1.225 & -24.50 & 0.28 & $8.4\pm0.33$ & $-0.5\pm0.33$ \\
1.416 & -24.81 & 0.27 & $8.6\pm0.33$ & $-0.5\pm0.33$ \\
1.566 & -25.02 & 0.25 & $8.7\pm0.35$ & $-0.6\pm0.35$ \\
1.746 & -25.18 & 0.29 & $8.9\pm0.36$ & $-0.7\pm0.36$ \\
1.920 & -25.37 & 0.24 & $9.0\pm0.36$ & $-0.7\pm0.36$ \\
2.135 & -25.46 & 0.26 & $9.0\pm0.36$ & $-0.7\pm0.36$ \\
2.431 & -25.67 & 0.10 & $9.1\pm0.36$ & $-0.6\pm0.36$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\centerline{\psfig{file=fig_3.ps,width=8.0cm}}
\caption{(\emph{a}) Black hole masses as a function of
redshift for the sample of $L^*$ QSOs.
(\emph{b}) Eddington ratios calculated for the same sample. The
dashed line represents $L/L_{\rm Edd}=1$. In each case triangles and
squares represent measurements from the Mg\,{\sc ii}\ and C\,{\sc iv}\ lines
respectively.}
\label{fig_z-bhm_Lstar}
\end{figure}
To further elucidate what may drive QSO luminosity evolution we
calculate average Eddington ratios ($L/L_{\rm Edd}$) for the QSOs in our
redshift bins. In calculating the Eddington
ratios, bolometric luminosities ($L$) were found using the relation
derived by McLure \& Dunlop~(2004) for the $B$ band correcting
by $b_{\rm J}=B-0.06$ for a mean QSO $B-V=0.22$ \cite{c+v90}. The relation is
then
\begin{equation} \label{equ_mbj}
M_{b_{\rm J}}=-2.66\,{\rm log}(L)+79.42
\end{equation}
for $L$ in watts. $L_{\rm Edd}$ (also in watts) is given by
\begin{equation} \label{equ_ledd}
L_{\rm Edd}=10^{39.1}\left(\frac{M_{\rm BH}}{10^{8}\,{\rm \rm M_{\odot}}}\right).
\end{equation}
McLure \& Dunlop arrive at this relation from the bolometric
corrections of Elvis et al. (1994). Elvis et al. find an error on their $B$
band bolometric correction of $\sim35$\,\%, and all of their
sample give a correction within a factor of $\sim2$ of the mean. Even
taking this factor of two as the error on our
bolometric correction for a single object, when
averaged over the $\sim2000$ QSOs in a redshift bin this translates to
a fractional error of only
$2/\sqrt{2000}\sim0.04$. This error is small when compared with
those on our black hole mass estimates, and is ignored in the
following analysis. Any luminosity dependence in the bolometric
correction would introduce systematics into our results; however,
Richards et al. (2006) find no evidence for such a dependence.
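Putting equations~\ref{equ_mbj} and \ref{equ_ledd} together, the
Eddington-ratio calculation reduces to the following minimal sketch:
\begin{verbatim}
import numpy as np

def log_L_bol(M_bj):
    """Invert M_bJ = -2.66 log(L) + 79.42; L in watts."""
    return (79.42 - M_bj) / 2.66

def log_eddington_ratio(M_bj, log_mbh):
    log_ledd = 39.1 + (log_mbh - 8.0)      # L_Edd in watts
    return log_L_bol(M_bj) - log_ledd

# first row of the results table: M_bJ = -22.28, log M_BH = 7.8
print(round(log_eddington_ratio(-22.28, 7.8), 2))
# ~ -0.7 (the table gives -0.71 with the unrounded mass)
\end{verbatim}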
Calculated Eddington ratios for the $L^*$ QSOs are plotted in
Fig.~\ref{fig_z-bhm_Lstar}b. We
observe no significant trend in Eddington ratio with redshift, characterised
by $L/L_{\rm Edd}\propto (1+z)^{-0.4\pm 1.1}$, indicating that the most
luminous QSOs that are typically observed in samples have similar Eddington
ratios, irrespective of redshift. Overall, however, the mean rate of
accretion in the universe must have been higher at high $z$, because the
integrated luminosity arising from AGN was higher at high $z$ than at low,
whereas the integrated irreducible mass in black holes cannot have been
greater at high $z$ than at low (see also Miller, Percival \& Croom
2005). It seems then
that the process of selecting luminous QSOs results in an almost invariant
Eddington ratio for the most extreme objects at any epoch, although given
both the statistical and systematic uncertainties present in
Fig.~\ref{fig_z-bhm_Lstar}b it is not yet possible to rule out some
redshift variation of Eddington ratio for these objects.
Our results are consistent with no evolution in Eddington
ratios over the redshift range studied. Instead we must conclude that
the luminosity evolution of $L^*$ QSOs is driven, at least for the most
part, by a reduction in black hole mass for $z<2.5$. This said, the
confidence intervals on our evolution parameters are large and we
refrain from drawing any firm conclusions on evolution from these
data. The relative contributions of mass and Eddington
evolution are affected strongly in our data by the slope of the
radius-luminosity
relation used in the virial mass estimates
and the possibility of luminosity/redshift dependence in
velocity widths (which is small, see Corbett et al. 2003).
In addition the calibration of the virial mass estimators also has a
major effect on our results. These are not dependent on $z$ and hence
alterations to the calibrations would lead only to an offset in our
mass estimates. However, since each calibration applies to a
different range of redshifts, an offset to one of these calibrations
will significantly change our evolution results. We note that had we
performed the above analysis with older calibrations for the two mass
estimators we would have found considerably less evolution in $M_{\rm BH}$ over this
redshift range, and a correspondingly larger evolution in Eddington ratios.
\section{Conclusions} \label{sec:conc}
We make composite spectra of QSOs from the 2QZ to find average virial
black hole mass estimates for the QSOs in 10 redshift bins for which
Croom et al.~(2005) had already calculated $M_{\rm DH}$ via clustering
analysis. Comparing the black hole and dark halo masses we
find evidence for $\sim(1+z)^{2.5\pm1.8}$ evolution in the $M_{\rm BH}-M_{\rm DH}$
relation,
although large errors are such that we cannot exclude the possibility
of there being no evolution. We derive
the zero-point of the $M_{\rm BH}-M_{\rm DH}$ relation (averaged over redshift)
and find it to be $M_{\rm BH}=10^{8.4\pm0.2}\,{\rm \rm M_{\odot}}$ for a dark matter
halo of mass $M_{\rm DH}=10^{12.5}\,{\rm \rm M_{\odot}}$. This is most consistent
with a model using a Seljak (2002) dark matter profile (under the
assumption of no evolution in $M_{\rm BH}-M_{\rm DH}$), however, uncertainties are
such that we are unable to definitively distinguish which model is
preferred. We compare our measured $M_{\rm BH}-M_{\rm DH}$ relation to that
derived from hydrodynamical simulations of galaxy evolution
\cite{dsh05,rob05} and find good agreement.
We note that because QSOs are selected above a given luminosity this
will tend to select objects above a given $M_{\rm BH}$. This results in a
Malmquist-type bias such that the observed mean $M_{\rm BH}$ will lie above
the true $M_{\rm BH}-M_{\rm DH}$ relation. The level of bias is crucially
dependent on the amount of scatter in the $M_{\rm BH}-M_{\rm DH}$ relation.
We take a subsample of QSOs in a constant magnitude interval
around $M_{b_{\rm J}}^{*}$ and find significant evolution in their black
hole masses characterised by $M_{\rm BH}\propto(1+z)^{3.3\pm 1.1}$.
Comparing this to the observed lack of significant evolution
in Eddington ratio ($L/L_{\rm Edd}\propto(1+z)^{-0.4\pm1.1}$) we conclude
that luminosity evolution of $L^*$ QSOs is driven primarily by
decreasing black hole masses between redshifts 2.5 and 0.5. However,
the exact combination of evolution in $M_{\rm BH}$ and $L/L_{\rm Edd}$ is
dependent on the slope of the luminosity dependence in the virial mass
estimator and any luminosity/redshift dependence in the velocity width
of the QSO broad lines as well as the calibrations of the virial mass
estimators themselves. Considering this and potential sources of
systematic errors in our line width measurements, we find that our data are
still consistent
with a picture in which both reducing black hole masses and Eddington
ratios play an equal role in $L^*$ QSO luminosity evolution as observed in
other studies (e.g. Merloni 2004; Heckman et al. 2004).
To extend this work further, detailed analysis of samples with a
broader dynamic range will be required, including the analysis of QSO
clustering as a function of luminosity. This has started to be done
with small samples (e.g. Adelberger \& Steidel 2005), but will be
extended with new faint QSO surveys such as the 2dF-SDSS LRG and QSO
(2SLAQ) Survey (Richards et al. 2005).
\section*{Acknowledgements}
We warmly thank all the present and former staff of the
Anglo-Australian Observatory for their work in building and operating
both the 2dF and 6dF facilities. The 2QZ and 6QZ are based on
observations made with the Anglo-Australian Telescope and the UK
Schmidt Telescope.
We would also like to thank all of the good people at the University
of Sydney for their help, their advice and their on-going support.
\section{Conclusion}
\vspace{-2mm}
In this paper, we have proposed a deep coarse-to-fine model with high order context and guided filtering for semantic image segmentation.
At the coarse level, we directly model the influence of high-order patterns on the unary nodes to encode the relative interactions between them. We also introduce hidden global nodes to keep global and local predictions consistent.
At the fine level, instead of a bilateral-filtering-based CRF, we plug in guided filtering as one step of message passing in the mean field algorithm, making the boundary-delineation step $100\times$ faster.
We transfer two contemporary image classification models to the task of semantic image segmentation. Experiments on the Pascal VOC 2012 dataset show that our model outperforms the state of the art with appealing running speed, demonstrating that it can harness context information effectively for structural prediction and locate objects accurately.
\vspace{-5mm}
\begin{figure}[H]
\centering
\begin{tabular}{c@{~}c@{~}c@{~}|c@{~}c@{~}c@{~}}
\includegraphics[width=0.15\columnwidth]{figures/show/2008_006008_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2008_006008_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2008_006008.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_001630_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_001630_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_001630.png}\\
\includegraphics[width=0.15\columnwidth]{figures/show/2010_002251_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2010_002251_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2010_002251.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2009_002372_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2009_002372_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2009_002372.png}\\
\includegraphics[width=0.15\columnwidth]{figures/show/2007_008260_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_008260_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_008260.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_000491_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_000491_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_000491.png}\\
\includegraphics[width=0.15\columnwidth]{figures/show/2008_001546_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2008_001546_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2008_001546.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_005173_im.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_005173_gt.png}&
\includegraphics[width=0.15\columnwidth]{figures/show/2007_005173.png}\\
(a) Input & (b) Truth & (c) Prediction & (a) Input & (b) Truth & (c) Prediction
\end{tabular}
\caption{Some examples from Pascal VOC 2012 \emph{val} set.}
\label{figure:qulative}
\end{figure}
\section{Implementation}
We use the public Caffe \cite{jia2014caffe} framework for deep learning. Previous works have shown it is good practice to fine-tune classification networks for the segmentation task. We transfer two contemporary classification models (VGG16 and ResNet). When fine-tuning from the simplified VGG16\footnote{The simplified VGG16 originates from the publicly available version from DeepLab \url{http://ccvl.stat.ucla.edu/software/deeplab/}. The $4096 \times 7 \times 7 \times 512$ layer and $4096 \times 4096$ layer are sub-sampled to $1024 \times 3 \times 3 \times 512$ and $1024 \times 1024$, which leads to a much smaller model with faster speed.} \cite{simonyan2014very}, the weight decay parameter is set to $0.0005$, the momentum parameter is set to $0.99$ and the initial learning rate is set to $10^{-5}$ as we process only one image at each iteration, i.e., the mini-batch size is set to 1.
For ResNet, we use our own implementation. We trained the model following the same settings as the authors \cite{he2015deep}. Our own implementation has 56 layers and it gets a $6.81\%$ top-5 accuracy (standard 10-crop testing) on the ILSVRC 2012 \emph{val} set. The whole training process takes about 10 days on a 4-GPU architecture. We skip the sub-sampling operation in the $conv5\_1$ layer and modify the filters in the $conv5$ block by introducing zeros to increase their size, which is known as the `hole algorithm' \cite{chen2014semantic}. This operation yields a stride of 16 pixels. The weight decay parameter is set to $0.0001$, the momentum parameter is set to $0.9$ and the initial learning rate is set to $0.01$. The mini-batch size is set to 16; we found that the batch size influenced the convergence of the ResNet model, perhaps due to the batch normalization layers \cite{ioffe2015batch}. The momentum of batch normalization is set to 0.1, which means that the running mean and variance change by $10\%$ of their value at each batch.
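A minimal NumPy sketch of the idea behind the `hole algorithm' (not the
Caffe implementation): dilating a filter widens its receptive field
without adding parameters, compensating for the removed sub-sampling. A
1-D example:
\begin{verbatim}
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution with the filter w dilated by `dilation`
    (zeros inserted between taps); the receptive field grows from
    len(w) to dilation*(len(w)-1)+1 with no extra parameters."""
    k = len(w)
    span = dilation * (k - 1) + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
print(dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2))
# -> [ 6.  9. 12. 15. 18. 21.]
\end{verbatim}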
Scale jittering, color altering \cite{wu2015deep} and horizontal mirror images are adopted for data augmentation. For scale jittering in the training phase, every image is resized by a random ratio in the range $[0.5,2.0]$.
\section{Experiments}
\textbf{Datasets.} We test our model on the PASCAL VOC 2012 segmentation benchmark. It includes 20 categories plus background. The original \emph{train} set has 1464 images with pixel-wise labels. We also use the annotations from~\cite{hariharan2011semantic}, resulting in 10582 (augmented \emph{train} set), 1449 (\emph{val} set) and 1456 (\emph{test} set) images. The accuracy is evaluated by mean IoU scores. Since the \emph{test} set annotation of PASCAL VOC 2012 is not released, the result on the \emph{test} set is reported by the evaluation server\footnote{\url{http://host.robots.ox.ac.uk:8080}}.
To compare with the state of the art, we further exploit the large-scale dataset MS COCO \cite{lin2014microsoft}, which includes 123,287 images in its \emph{trainval} set with 80 categories and one background class, to pre-train the model. Each image comes with pixel-wise labels.
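For reference, the mean IoU metric used throughout is the standard
confusion-matrix definition; a minimal sketch (not the benchmark server
code):
\begin{verbatim}
import numpy as np

def mean_iou(gt, pred, n_cls=21, ignore=255):
    """gt, pred: integer label maps; VOC marks 'ignore' pixels 255."""
    m = gt != ignore
    hist = np.bincount(n_cls * gt[m] + pred[m],
                       minlength=n_cls ** 2).reshape(n_cls, n_cls)
    inter = np.diag(hist)
    union = hist.sum(0) + hist.sum(1) - inter
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return np.nanmean(iou)

gt = np.array([[0, 0, 1], [1, 255, 1]])
pred = np.array([[0, 1, 1], [1, 1, 0]])
print(mean_iou(gt, pred, n_cls=2))   # (1/3 + 2/4) / 2 ~ 0.417
\end{verbatim}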
\begin{figure}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1\columnwidth]{figures/figure4/1.pdf}
\captionof{figure}{Training curves.}
\label{figure:traincurves}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\vspace{-4.5cm}
\begin{tabular}{l|c}
Component & Time cost (ms) \\
\hline
Unary(VGG) & 44 \\
Unary(ResNet) & 35 \\
High order & 0.7 \\
Global nodes & 0.3 \\
Guidance CRF & 10 \\
\hline
Total(VGG) & 55 \\
Total(ResNet) & 46 \\
\end{tabular}
\vspace{1cm}
\captionof{table}{Inference time for a $500 \times 300$ color image. }
\label{table:time}
\end{minipage}
\end{figure}
\vspace{-10mm}
\subsection{Validation of the Model}
We conduct our evaluation of each component on the Pascal VOC 2012 \emph{val} set (1449 images), training on the augmented \emph{train} set (10582 images) and fine-tuning from the simplified VGG16. The detailed settings of \textbf{Unary}, \textbf{A}, \textbf{B} and \textbf{C} are given in the following list, and the performance in Table \ref{table:val} verifies the effectiveness of each component of our model.
We train for up to 24 epochs on the Pascal VOC 2012 augmented \emph{train} set and the training curves are shown in Figure \ref{figure:traincurves}, which shows clearly that each component of our proposed model leads to lower training error. The whole training costs about one day for each model on one modern GPU card.
\begin{enumerate}
\item \textbf{Unary}. We follow settings similar to DeepLab, fine-tuning from VGG16 on the Pascal VOC 2012 augmented \emph{train} set. We do not adopt the bilateral-filtering-based fully connected CRF as a post-processing step. This baseline gets a mean IoU of $66.6\%$.
\item \textbf{A}. The high order term stated in Equation \ref{equation:highorder} is added to encode context information. We run only one iteration for computational efficiency. It achieves a mean IoU of $68.2\%$, 1.6 points higher than the baseline. The context information benefits most categories except for ``sofa'' and ``dining table''.
\item \textbf{B}. This setting uses both the high order term and the global nodes, further boosting the performance by 3.0 points. It verifies that the high order term and the global nodes are helpful for making consistent estimations. Some categories, such as ``sofa'', ``horse'' and ``dining table'', enjoy great improvements.
\item \textbf{C}. This setting further adopts the guidance CRF for a sharp boundary and constitutes the full model (see the sketch after this list). We add the guidance CRF layer and re-train the whole network from the simplified VGG16 with the same number of training epochs. We run one iteration of mean field in the training stage for computational efficiency and three iterations for better convergence in the inference stage. The window size of the guided filtering is set to 50 by cross validation. It brings the mean IoU score to $73.3\%$.
\item \textbf{D}. In this setting, we use all components of our proposed model and pre-train the simplified VGG16 model on MS COCO, as widely adopted by other methods \cite{chen2014learning,zheng2015conditional,lin2015efficient,liu2015semantic}. We get a mean IoU score of $74.9\%$ on the \emph{val} set.
\item \textbf{E}. We further replace the simplified VGG16 with ResNet. All the other settings follow \textbf{D}. It gets a mean IoU score of $77.6\%$ on the \emph{val} set.
\end{enumerate}
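The guidance CRF step referenced in setting \textbf{C} can be sketched
as follows; this is our minimal reading of the inference, not the
released code, and scipy's uniform filter stands in for the guided
filter purely to keep the sketch self-contained:
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def guidance_crf(unary, weight=1.0, iters=3, radius=50):
    """unary: (L, H, W) class scores; one edge-aware filtering pass
    plays the role of message passing in each mean-field iteration.
    uniform_filter is only a stand-in for the guided filter."""
    q = softmax(unary)
    for _ in range(iters):
        msg = np.stack([uniform_filter(q[l], size=2 * radius + 1)
                        for l in range(q.shape[0])])
        q = softmax(unary + weight * msg)   # Potts-like compatibility
    return q

unary = np.random.randn(21, 60, 80)         # toy class scores
labels = guidance_crf(unary, radius=5).argmax(0)
\end{verbatim}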
\textbf{Time complexity.} All the code is optimized with CUDA and the time cost is measured on one modern GPU card. For a typical $300 \times 500$ color image, as shown in Table \ref{table:time}, it costs
about $55ms$ in total to compute the segmentation score map on the simplified VGG16.
The whole network fine-tuned from ResNet processes one image in only $46ms$ in total on a modern GPU card, while the unary layers cost $35ms$ and all the context CRF layers and the guidance CRF layers take about $11ms$, as shown in Table \ref{table:time}.
Our proposed context CRF costs little time while bringing large performance gains.
The bilateral-filtering-based fully connected CRF is widely used for sharp object boundaries in previous works. The fully connected CRF with a recently optimized implementation of fast bilateral filtering \cite{krahenbuhl2012efficient} takes about $1400ms$ for 10 mean field iterations on an Intel Xeon(R) CPU W3690. Our proposed guidance CRF costs only $10ms$ on one modern GPU card, making the process for a sharp object boundary more than $100\times$ faster.
\subsection{Comparisons with State-of-art}
As our model fine-tuned from ResNet performs best on the \emph{val} set, we submit the segmentation results on the Pascal VOC 2012 \emph{test} set from this model to the test server. In the test phase, we combine three scales $\{0.8,1.0,1.2\}$ and their horizontally flipped versions to get the predicted score map.
We quantitatively compare our proposed model with state-of-the-art models: Deeplab \cite{papandreou2015weakly}, CRF-RNN \cite{zheng2015conditional}, Deeplab-DT \cite{chen2015semantic}, DPN \cite{liu2015semantic} and Piecewise \cite{lin2015efficient}. CRF-RNN and DPN jointly train the filter-based CRF with the FCN. The other models adopt a bilateral-filter-based CRF as a post-processing step. All these models are trained on the same training data, e.g., the ImageNet 2012 \emph{train} set, the MS COCO \emph{trainval} set and the augmented Pascal VOC 2012 \emph{train} set, for fair comparison.
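The multi-scale and flip fusion used in the test phase can be sketched
as follows, assuming a hypothetical \texttt{net(image)} callable that
returns an $(L,H,W)$ score map:
\begin{verbatim}
import numpy as np
from skimage.transform import resize

def predict_multiscale(net, image, scales=(0.8, 1.0, 1.2)):
    """net: hypothetical callable, image -> (L, h, w) class scores."""
    h, w = image.shape[:2]
    acc = 0.0
    for s in scales:
        im = resize(image, (int(h * s), int(w * s)),
                    preserve_range=True)
        for flip in (False, True):
            x = im[:, ::-1] if flip else im
            score = net(x)
            if flip:
                score = score[:, :, ::-1]    # undo the mirror
            acc = acc + np.stack([resize(c, (h, w), preserve_range=True)
                                  for c in score])
    return acc.argmax(0)                     # fused label map
\end{verbatim}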
\begin{table}[H]
\small
\center
\caption{Results on Pascal VOC 2012 \emph{val} set (\%). Unary: on simplified VGG16. A: with context. B: with context and global. C: full. D: with MS COCO. E: on ResNet.}
\vspace{-4mm}
\begin{tabular}{p{2cm}|p{1cm}<{\centering} p{1cm}<{\centering} p{1cm}<{\centering} p{1cm}<{\centering} | p{1cm}<{\centering} p{1cm}<{\centering}}
& \textbf{Unary} & \textbf{A} & \textbf{B} & \textbf{C} & \textbf{D} & \textbf{E}\\
\hline
aeroplane & 80.7 & 83.8 & 81.3 & 84.4 & 84.1 & \textbf{89.3}\\
bicycle & 33.9 & 36.9 & 37.0 & 37.1 & 37.6 & \textbf{40.8}\\
bird & 77.4 & 82.0 & 82.6 & 85.4 & 88.8 & \textbf{86.6}\\
boat & 62.8 & 65.6 & 68.1 & 69.7 & 69.3 & \textbf{70.0}\\
bottle & 66.0 & 67.0 & 71.4 & 74.0 & 74.4 & \textbf{75.1}\\
bus & 82.7 & 84.8 & 87.0 & 87.9 & 92.0 & \textbf{94.5} \\
car & 77.3 & 79.6 & 83.2 & 84.3 & 86.2 & \textbf{88.1}\\
cat & 81.8 & 85.0 & 86.3 & 88.4 & 90.4 & \textbf{91.4}\\
chair & 30.5 & 31.4 & 36.2 & 38.3 & 38.5 & \textbf{44.5}\\
cow & 66.7 & 71.0 & 75.9 & 79.9 & 83.4 & \textbf{87.9}\\
dining table & 52.0 & 43.6 & 56.8 & 57.9 & 59.8 & \textbf{58.9}\\
dog & 73.4 & 78.2 & 81.3 & 83.7 & 82.5 & \textbf{84.6}\\
horse & 65.7 & 70.6 & 78.2 & 83.0 & 81.9 & \textbf{90.0}\\
motorbike & 71.9 & 73.1 & 76.6 & 77.0 & 82.2 & \textbf{86.9}\\
person & 79.5 & 80.1 & 80.5 & 82.2 & 82.5 & \textbf{86.1}\\
potted plant & 46.3 & 49.9 & 48.3 & 52.3 & 54.9 & \textbf{61.2}\\
sheep & 73.6 & 74.9 & 79.0 & 81.7 & 82.2 & \textbf{86.6}\\
sofa & 42.8 & 37.1 & 47.7 & 48.5 & \textbf{52.5} & 52.0\\
train & 77.7 & 78.1 & 80.5 & 82.6 & 88.3 & \textbf{86.8}\\
tv/monitor & 65.0 & 66.2 & 64.8 & 66.3 & 67.5 & \textbf{75.1}\\
\hline
Mean & 66.6 & 68.2 & 71.2 & 73.3 & 74.9 & \textbf{77.6}\\
\end{tabular}
\label{table:val}
\end{table}
\vspace{-16mm}
\begin{table}[H]
\small
\center
\caption{Results on Pascal VOC 2012 \emph{test} set (\%).}
\begin{tabular}{p{2cm}|p{1cm}<{\centering} p{1cm}<{\centering} p{1cm}<{\centering} p{1cm}<{\centering} p{1cm}<{\centering} | p{1cm}<{\centering}}
&\cite{papandreou2015weakly} & \cite{zheng2015conditional} & \cite{chen2015semantic} & \cite{liu2015semantic} & \cite{lin2015efficient} & \textbf{Ours} \\
\hline
aeroplane & 89.2 & 91.2 & 93.2 & 89.0 & 92.9 & \textbf{93.7} \\
bicycle & 46.7 & 56.2 & 41.7 & \textbf{61.6} & 39.6 & 39.5 \\
bird & 88.5 & 88.9 & 88.0 & 87.7 & 84.0 & \textbf{92.9} \\
boat & 63.5 & 68.0 & 61.7 & 66.8 & 67.9 & \textbf{68.4} \\
bottle & 68.4 & 70.7 & 74.9 & 74.7 & \textbf{75.3} & 73.5 \\
bus & 87.0 & 89.5 & 92.9 & 91.2 & 92.7 & \textbf{94.0} \\
car & 81.2 & 83.8 & 84.5 & 84.3 & 83.8 & \textbf{85.5} \\
cat & 86.3 & 87.2 & 90.4 & 87.6 & 90.1 & \textbf{92.8} \\
chair & 32.6 & 33.6 & 33.0 & 36.5 & \textbf{44.3} & 36.7 \\
cow & 80.7 & 81.0 & 82.8 & 86.3 & 85.5 & \textbf{86.8} \\
dining table & 62.4 & 66.4 & 63.2 & 66.1 & 64.9 & \textbf{68.2} \\
dog & 81.0 & 82.4 & 84.5 & 84.4 & \textbf{87.3} & 86.5\\
horse & 81.3 & 83.1 & 85.0 & 87.8 & 88.8 & \textbf{89.7} \\
motorbike & 84.3 & \textbf{87.8} & 87.2 & 85.6 & 84.5 & 85.9 \\
person & 82.1 & 82.3 & 85.7 & 85.4 & 85.5 & \textbf{87.6} \\
potted plant & 56.2 & 59.8 & 60.5 & 63.6 & \textbf{68.1} & 63.7 \\
sheep & 84.6 & 83.5 & 87.7 & 87.3 & \textbf{89.0} & 87.2 \\
sofa & 58.3 & 53.4 & 57.8 & 61.3 & \textbf{62.8} & 57.2 \\
train & 76.2 & 79.5 & 84.3 & 79.4 & 81.2 & \textbf{85.4} \\
tv/monitor & 67.2 & 71.1 & 68.2 & 66.4 & \textbf{71.4} & 70.9 \\
\hline
Mean & 73.9 & 75.9 & 76.3 & 77.5 & 77.8 & \textbf{78.1}\\
\end{tabular}
\label{table:test}
\end{table}
The results of the comparison on the Pascal VOC 2012 \emph{test} set are shown in Table \ref{table:test}. We achieve a mean IoU score of $78.1\%$ on this dataset, which outperforms all existing works. Our model performs best on more than half of the 20 categories. Piecewise \cite{lin2015efficient} uses multi-scale feature maps and fine-tunes the model from the complete version of VGG16, which runs much slower than the simplified VGG16. Besides, they model many pair-wise joint potentials and perform two mean field iterations in the inference stage. They also adopt the bilateral-filtering-based fully connected CRF for sharp predictions. In comparison, our model introduces the context CRF and the guidance CRF to encode context information and delineate the object boundary, which run much faster with higher overall performance. Some examples are shown in Figure \ref{figure:qulative}.
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/head/1.pdf}
\caption{Schematic visualization of our model. At the coarse level, the context CRF is performed on the coarse segmentation score map to encode context information. At the fine level, the guidance CRF is applied to delineate the object boundary so that it follows the edges in the input image.}
\label{figure:illustration}
\end{figure}
The task of semantic image segmentation is to assign a label to each pixel in an image. Compared with image classification, semantic image segmentation provides a position-aware semantic understanding of the image through a structural prediction framework.
Recent advances in semantic image segmentation mainly rely on Fully Convolutional Networks (FCN) and conditional random field (CRF).
During the past years, convolutional neural networks have made a series of breakthroughs on the task of image classification \cite{krizhevsky2012imagenet,ioffe2015batch,he2015deep}. Deep networks naturally integrate multi-level hierarchical features and a classifier through many stacked layers in an end-to-end fashion. FCN transfers recognition networks from image classification to semantic image segmentation by fine-tuning, in order to harness the learned deep feature representations \cite{long2014fully}.
Different from image classification, the task of segmentation needs to determine object position, shape and boundary, which relies on local contents. Pooling layers in convolutional neural networks tolerate object translation and deformation but decrease the ability to locate and separate objects from the neighboring context. Probabilistic graphical models are a natural choice, as assigning pixel-wise labels is a structural prediction task. In particular, the CRF has seen widespread success in semantic image segmentation \cite{russell2009associative,krahenbuhl2012efficient}. However, correctly optimizing a CRF requires exact inference in the learning stage, which costs too much time \cite{zheng2015conditional,lin2015efficient}. Instead of the explicit global probability representation in a CRF, we propose to use a series of classifiers to encode interactions between nodes. Our model resembles the error-correcting iterative decoding methods in \cite{ross2011learning,tu2008auto}. We propose an alternative view of the message passing stage in the mean field algorithm and update the marginal distribution by collecting messages from neighborhood regions. The message estimator is directly modeled with region features consisting of estimated labels and deep convolutional features.
Designing a strong feature representation is the key challenge in semantic image segmentation. Supervised deep learning representations, estimated label maps and low level image features are the most often used features for semantic image segmentation. Our contributions are mainly on the exploitation of context clues and low level image features, which are described in detail in Section \ref{section:context} and Section \ref{section:guidance} respectively. Therefore, in the following paragraphs, we review three kinds of commonly used features and related works.
\textbf{Local feature} plays the most important role to classify individual pixels in semantic image segmentation.
Recently, deep learning approaches such as FCN \cite{long2014fully} have achieved immense success in semantic image segmentation. The key insight is the strong learning capacity of extremely deep networks such as VGG16 \cite{simonyan2014very} and ResNet \cite{he2015deep} on large-scale training data such as ImageNet \cite{russakovsky2014imagenet}. Taking as input an image of arbitrary size, an FCN usually produces a much coarser resolution feature map because of sub-sampling. However, these sub-sampling layers are necessary to keep computation efficient and features invariant. Therefore it is necessary to apply some kind of image filtering for a clear and sharp object boundary.
\textbf{Context clue} represents the spatial relationship between category labels, which is important in structural prediction tasks.
It has been noted that context clues or high-order information play a vital role in object detection and semantic image segmentation. Context comes in a variety of forms. Through minimizing the Gibbs energy, the CRF is widely adopted for harnessing context clues to make structural predictions.
However, these models are quite limited by the time cost of the graph inference needed to obtain the derivative of the partition function in each update of gradient descent.
Recently, several methods to compute the derivative of the CRF partition function within a deep learning framework have been proposed.
For example, Chen \emph{et al.} \cite{chen2014learning} attempted to approximate the global distribution using the product of the marginal distributions of all cliques, different from the mean field algorithm, which only uses the unary marginal distributions to approximate the global distribution. Traditionally, the derivative of the partition function can also be computed by Gibbs sampling \cite{kirillov2015generic}.
However, even when simplifying the global distribution with only unary marginal distributions, graph inference is still not efficient, as too many iterations are needed for stochastic gradient descent learning in a convolutional neural network.
What's more, Lin \emph{et al.} \cite{lin2015efficient} have shown that piece-wise training was able to achieve better performance and faster convergence than pseudo-likelihood training throughout their experiments. These observations imply the difficulty of jointly training an FCN and a CRF.
Besides, though some recent works have explored the effectiveness of using fixed-pattern high-order cliques \cite{arnab2015higher,vineet2014filter}, the CRF is usually restricted to unary and pair-wise cliques.
Compared to the traditional CRF approach for structural prediction, auto-context \cite{tu2008auto} encodes the joint statistics by a series of classifiers based on the label context. For each classifier, the output of the previous classifier is used as a feature. Auto-context thus recursively selects and fuses context labels for structural prediction.
Another possibility for encoding context information is learning messages based on feature context \cite{lin2015deeply,ross2011learning}. These feature-context methods model the message estimator between each pair by stacking unary features, which is more similar to the traditional CRF as both rely on pair-wise message passing.
Label context methods are natural for encoding high-order clique potentials. Pixels with strong local feature clues often achieve high probabilities for their label and can pass this information to their correlated neighbors. Each pixel can then update its estimated label based on its local feature and neighborhood support.
Hierarchical label context \cite{munoz2010stacked} adopts a hierarchical super-pixel representation for coarse-to-fine prediction.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figures/flowchart/flowchart.pdf}
\caption{Model illustration. The coarse score map from FCN is fed into the context CRF component, where we use a convolutional block of two layers to model high order messages. The guidance CRF is then applied to refine the object boundary. The whole network is trained in an end-to-end fashion.}
\label{figure:flowchart}
\end{figure}
\textbf{Low level feature} describes low level image properties, such as image edges, texture and appearance homogeneity. Color histograms and gradient histograms are often used to obtain clear and sharp boundaries around objects.
Recently, bilateral-filtering based CRFs have been widely adopted for boundary localization. Combined with the strong recognition capacity of convolutional neural networks, bilateral-filter based CRFs have shown remarkable success in recovering sharp boundaries around objects. Though the brute force implementation of the bilateral filter is very slow, there are many fast versions using the techniques of down-sampling \cite{adams2010fast} or quantization \cite{yang2009real}.
Besides, \cite{liu2015semantic} proposed a filter similar to the bilateral filter which can be processed efficiently on a graphics processing unit through locally convolutional layers.
The guided filter is also an edge-preserving smoothing operator with better performance near edges. It can transfer the structure of the guidance image to the filtering output, which is exactly what we want for the coarse segmentation map. What's more, the guided filter has a fast linear time algorithm, regardless of the kernel size. We plug the guided filter in as the message passing step of a pairwise CRF and call it guidance CRF. This leads to both fast processing and high performance.
The main contributions of this paper are threefold.
\vspace{-2mm}
\begin{itemize}
\item We propose a jointly trained model with high order context and guided filtering for semantic image segmentation. The networks transfer the parameters from two contemporary classification models and are trained in an end-to-end fashion. It reaches an IOU score of $78.1\%$ and sets the new state-of-the-art on the Pascal VOC 2012 test set.
\item We provide a new method to optimize CRF by encoding context information to update local estimations and introducing global nodes to make the structural prediction globally consistent, which we call the context CRF. Experiments have verified the effectiveness of each component of our proposed model. Our proposed context CRF costs little time while bringing large performance gains.
\item We plug in the guided filtering as the message passing step of the guidance CRF and make the inference process for accurate boundaries $100\times$ faster compared to the traditional bilateral filtering based fully connected CRF.
\end{itemize}
\clearpage
\section{Framework}
Let $I \in \ve I$ denote one input image and let $\ve x \in \mathcal{X}$ be its ground truth segmentation label assignment in the dataset. Each pixel $i$ in the label assignment $\ve x = \{x_i,i=1,...,N\}$ takes a value from a pre-defined label set $\mathcal{L}=\{1,...,L\}$. Every label assignment $\ve x$ for image $I$ has a graph $G$ associated with it, where all pixels form the vertex set $\mathcal{V}$.
The conditional likelihood function for the image $I$ is
\begin{equation}
P(\ve x|I;\theta)=\frac{1}{Z(I;\theta)}\exp[-E(\ve x,I;\theta)],
\label{equation:P}
\end{equation}
where $E(\ve x,I;\theta)$ is the Gibbs energy function with parameter $\theta$. $Z(I;\theta)$ is the partition function conditioned on the image $I$ and the model parameters $\theta$, $Z(I;\theta)=\sum_{\ve x}\exp[-E(\ve x,I;\theta)].$
The energy function in our formulation is written as
\begin{equation}
E(\ve x, I;\theta) = E_{local}(\ve x,I;\theta) + E_{context}(\ve x;\theta) + E_{edge}(\ve x,I;\theta),
\end{equation}
where $E_{local}(\ve x,I;\theta)$ denotes the segmentation score map of FCN based on deep local feature, $E_{context}(\ve x;\theta)$ encodes the context clue to make structure prediction and $E_{edge}(\ve x,I;\theta)$ is designed to force the segmentation score map to follow the edges in the image.
The coarse segmentation score map of FCN has a lower resolution than the original input image. We argue that to encode context information it is unnecessary to make predictions at the original resolution. Therefore, we proceed in two steps, as shown in Figure \ref{figure:flowchart}. First, at the coarse level, we take the local potential and the context potential into consideration
\begin{equation}
E_u(\ve x,I;\theta) = E_{local}(\ve x,I;\theta) + E_{context}(\ve x;\theta) .
\end{equation}
Note that we need to decouple each $x_i$ in this step, i.e., compute the marginal potential with regard to each $x_i$. We solve this in the context CRF component.
After obtaining the marginal potentials, we up-sample the segmentation score map to the same size as the input image. Second, at the fine level, the total energy function is
\begin{equation}
E(\ve x,I;\theta) = E_u(\ve x,I;\theta) + E_{edge}(\ve x,I;\theta) ,
\label{equation:Euedge}
\end{equation}
where $E_u(\ve x,I;\theta)$ can be treated as a unary term since it has been expressed as a summation of marginal potentials. Combined with the edge potential, we can refine the segmentation score map and obtain a more accurate object boundary. This is solved in the guidance CRF component.
\subsection{Context Conditional Random Field at Coarse Level}
\label{section:context}
For a given image $I$, the FCN output is a segmentation score map and each pixel $i$ with label assignment $x_i$ has a unary potential $\phi_i(x_i,I_i;\theta)$ associated with it.
In our formulation, as done in \cite{vineet2014filter} and \cite{arnab2015higher}, we introduce $L$ hidden variables $\{y_l,l=1,...,L\}$ to describe the existence of categories in the image. Each hidden variable $y_l$ takes a value from $\{0,1\}$, where $y_l=1$ indicates that the $l$-th category appears in the image and $y_l=0$ that it does not.
The Gibbs energy of the label assignment $\ve x \in \mathcal{L}^N$ and $\ve y \in 2^L$ is
\begin{equation}
\begin{split}
&E_u(\ve x,\ve y,I;\theta)= \\
& \underbrace{\sum\limits_{i}\phi_i(x_i,I_i;\theta)+\sum\limits_l\phi_l(y_l,I;\theta)}_{local}+\underbrace{\sum\limits_{c}\psi_c(\ve x_c;\theta)+\sum_l\sum\limits_{i}{\psi_g(y_l,x_i;\theta)}}_{context},
\end{split}
\label{equation:E}
\end{equation}
where $\phi_i$ and $\phi_l$ are the singleton node potentials. $\phi_i(x_i,I_i;\theta)$ is the potential of assigning $x_i$ to pixel $i$ based on local appearance descriptor extracted from $I_i$. $\phi_l(y_l,I;\theta)$ describes the existence of the $l$-th category in the image based on global image descriptor extracted from the whole image $I$. $\psi_c$ is defined on the high order clique $c$. $\psi_g$ is designed for the global consistency between global prediction $\{y_l\}$ and pixel-wise label assignment $\ve x$. $i$ indexes the pixel position in the image. The two context terms are independent of image $I$.
Our goal is to estimate the marginal potentials to approximate $ E_u(\ve x,\ve y,I;\theta)$, which is
\begin{equation}
E_u(\ve x,\ve y,I;\theta) \approx \sum\limits_{i}\phi^u_i(x_i,I_i;\theta)+ \sum\limits_l\phi^u_l(y_l,I;\theta).
\end{equation}
Following derivations similar to those of the mean field algorithm \cite{koller2009probabilistic}, we obtain the solution shown in Algorithm \ref{algorithm:marginal}.
\vspace{-5mm}
\begin{algorithm}
\caption{Marginal potential}
\textbf{input:} FCN unary potential $\phi_i(x_i,I_i;\theta)$ and $\phi_l(y_l,I;\theta)$, maximum iterations K. \\
\textbf{initialize}: $\phi^u_i(x_i|I;\theta)=\phi_i(x_i,I_i;\theta)$, $\phi^u_l(y_l|I;\theta)=\phi_l(y_l,I;\theta)$, k = 0.\\
\textbf{while not converge and $k < K$}\\
\vspace{-6mm}
\begin{enumerate}
\begin{spacing}{1.2}
\item $\hat p(x_i|I;\theta) =\frac{1}{Z_i} \exp[-\phi_i^u(x_i;\theta)].$ \hfill $\triangleright$ Softmax
\item $\hat p(y_l|I;\theta) =\frac{1}{Z_l}\exp[-\phi^u_l(y_l;\theta)].$
\item $\phi_i^u(x_i|I;\theta)=\phi_i(x_i,I_i;\theta)-\sum_c\mathds{E}_{\hat p(\ve x_{c \backslash i})}[\psi_c(\ve x_{c\backslash i},x_i;\theta)]\\- \sum_i\sum_l\mathds{E}_{\hat p(y_l)}[{\psi_g(y_l,x_i;\theta)}].$ \hfill $\triangleright$ Message passing
\item $\phi^u_l(y_l|I;\theta)=\phi_l(y_l,I;\theta)-\sum_i\mathds{E}_{\hat p(x_i)}[{\psi_g(y_l,x_i;\theta)}].$
\item k = k + 1.
\end{spacing}
\end{enumerate}
\vspace{-6mm}
\textbf{end while}\\
\textbf{output:} marginal potential $\phi_i^u(x_i;\theta)$ and $\phi^u_l(y_l;\theta)$.
\label{algorithm:marginal}
\end{algorithm}
In Algorithm \ref{algorithm:marginal}, $\mathds{E}_{\hat p(x_i)}[{\psi_g(y_l,x_i;\theta)}]$ is the expectation of $\psi_g(y_l,x_i;\theta)$ over the estimated distribution $\hat p(x_i)$, and $\mathds{E}_{\hat p(y_l)}[{\psi_g(y_l,x_i;\theta)}]$ is the expectation of $\psi_g(y_l,x_i;\theta)$ over the estimated distribution $\hat p(y_l)$. These two terms can be treated as messages reflecting the mutual interactions between the local label prediction $x_i$ and the global label prediction $y_l$. $\mathds{E}_{\hat p(\ve x_{c \backslash i})}[\psi_c(\ve x_{c\backslash i},x_i;\theta)]$ is the expectation of $\psi_c(\ve x_{c\backslash i},x_i;\theta)$ over the estimated distribution $\hat p(\ve x_{c \backslash i})$, which represents the message passed from the high order clique $c$ to the local node $i$. In the following paragraphs, we show how to compute these three messages within convolutional neural networks.
\textbf{$\bullet$ Two messages between local and global nodes}. It is straightforward to get the closed form expressions by the definition of expectation
\begin{equation}
\left \{
\begin{split}
\mathds{E}_{\hat p(y_l)}[{\psi_g(y_l,x_i;\theta)}] &= \sum_{y_l}\hat p(y_l) \mu(y_l,x_i), \\
\mathds{E}_{\hat p(x_i)}[{\psi_g(y_l,x_i;\theta)}] &= \sum\limits_{x_i}\hat p(x_i) \mu(y_l,x_i).
\end{split}
\right.
\end{equation}
Here we define $\psi_g(y_l,x_i;\theta)= \mu(y_l,x_i)$ and initialize $\mu(y_l,x_i)=\mathds{1}[x_i=l \wedge y_l=1]$, which encourages $y_l$ and $x_i$ to take consistent labels. $\mu$ can be learned in the joint training framework.
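To make the two messages concrete, the following is a minimal Python sketch of their computation; the array layouts (\texttt{p\_x} of shape $(N,L)$, \texttt{p\_y} of shape $(L,2)$, and $\mu$ of shape $(L,2,L)$) are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Minimal sketch of the two local/global messages; array shapes are
# assumptions: p_x (N, L) pixel-label marginals, p_y (L, 2) marginals
# of the binary global variables, mu (L, 2, L) compatibility mu(y_l, x_i).
def local_global_messages(p_x, p_y, mu):
    # message to every pixel: sum_l E_{p(y_l)}[mu(y_l, x_i)], same for all i
    msg_to_x = np.broadcast_to(np.einsum('lv,lvx->x', p_y, mu), p_x.shape)
    # message to every global node y_l: sum_i E_{p(x_i)}[mu(y_l, x_i)]
    msg_to_y = np.einsum('nx,lvx->lv', p_x, mu)
    return msg_to_x, msg_to_y
\end{verbatim}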
\textbf{$\bullet$ Message from clique to node.} It is an $L$-dimensional vector encoding the information of the label distribution, for which it is difficult to obtain an analytical solution. Lin \emph{et al.} \cite{lin2015efficient} have tried to learn potential functions for each two-node clique, but the inference is much slower and it costs a lot of memory to store these joint potentials: each pair-wise clique requires $L^2$ outputs, and an $N$-node graph contains up to $N^2$ pair-wise cliques. It is even more difficult to learn a potential function for a high order clique $c$ with more than two nodes. However, high order cliques are important to make use of the context information and to learn object shapes.
Instead of calculating the marginalization with regard to $\hat p(\ve x_{c\backslash i})$, we propose to construct the convolutional neural networks and directly learn the messages. We place two convolutional layers on the estimated probability map $\hat p(\ve x_c)$ in each iteration to capture the high order pattern
\begin{equation}
\mathds{E}_{\hat p(\ve x_{c \backslash i})}[\psi_c(\ve x_{c\backslash i},x_i;\theta)] = U[\hat p(\ve x_{c}),x_i;\theta],
\label{equation:highorder}
\end{equation}
where $U[\hat p(\ve x_{c}),x_i;\theta]$ is a scalar describing the compatibility of $x_i$ with the high order clique assignment $\ve x_c$. It can also be treated as a new classifier purely based on the estimated probability map, which is independent of image features. As context information can come from objects far away, we set the size of the high order clique to be very large, almost half the image size.
Similar ideas can be found in the auto-context model \cite{tu2008auto}, which uses a series of classifiers to update the estimated probability label map. In each iteration, the classifier is trained on both local image features and the estimated label context output by the previous classifier. However, in that work the classifiers of each iteration are piece-wise trained with hand-crafted image features. Unlike their approach, we jointly train the classifier as well as the feature layers in convolutional networks. Besides, the classifier in our approach is designed to model the message passed from the high order clique $c$ to the node $i$; therefore, it is only based on label context and independent of local image features.
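For illustration, a minimal PyTorch sketch of this learned clique-to-node message in Equation \ref{equation:highorder} is given below; the hidden width, kernel size and dilation are assumptions, as our design only fixes the rough clique size.
\begin{verbatim}
import torch.nn as nn

# Minimal sketch of the learned clique-to-node message: two
# convolutional layers on the estimated probability map. The hidden
# width, kernel size and dilation are illustrative assumptions.
class ContextMessage(nn.Module):
    def __init__(self, num_classes, hidden=64, kernel=25, dilation=4):
        super().__init__()
        pad = dilation * (kernel - 1) // 2   # keep the spatial resolution
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, hidden, kernel, padding=pad,
                      dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_classes, 1),
        )

    def forward(self, p_hat):      # p_hat: (B, L, H, W) probability map
        return self.net(p_hat)     # message per pixel and label
\end{verbatim}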
\subsection{Guidance Conditional Random Field at Fine Level}
\label{section:guidance}
The FCN provides a strong feature representation and we have encoded the context information to make structural predictions in the previous section. However, due to the employment of max-pooling layers and sub-sampling operations, the output of the FCN is at a much lower resolution and yields a coarse segmentation map. In previous works, the fully connected CRF with low level image features, e.g., color and coordinates, has been successfully used to enhance the object localization accuracy.
Guided filtering is an edge-preserving technique with nice visual quality and fast speed \cite{he2010guided}. We propose to combine the pair-wise CRF with guided filtering and jointly tune the whole network to learn to align the segmentation results with the object boundaries.
The guided filtering in our guidance CRF takes two inputs: (1) the coarse segmentation score map $\phi^u$ to be filtered and (2) the original color image $I$. The filtering result is
\begin{equation}
g(x_i)=\sum\limits_j{w_{ij}(I)\phi^u_j(x_j)}.
\end{equation}
The weight $w_{ij}$ depends on the input color image $I$, which is used as the guidance image. Following derivations similar to those in \cite{he2010guided}, the expression of $w_{ij}$ is
\begin{equation}
w_{ij}=\frac{1}{|\omega|^2}\sum\limits_{k\in \omega_i,k\in \omega_j}{\bigg (1+ (I_i-\mu_k)^\top(\Sigma_k+\epsilon U)^{-1}(I_j-\mu_k) \bigg )},
\label{equation:w}
\end{equation}
where $\mu_k$ and $\Sigma_k$ are the mean and the $3 \times 3$ covariance matrix of image $I$ in the window $\omega_k$, $U$ is the $3 \times 3$ identity matrix and $|\omega|$ is the number of pixels in $\omega_k$. $\epsilon$ is a regularization parameter and we set it to 1 throughout our experiments.
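Note that the kernel $w_{ij}$ never needs to be formed explicitly: the filtering output can be computed in linear time with box filters \cite{he2010guided}. The following is a minimal sketch for a grayscale guidance image; the color case is analogous with per-window matrix inverses.
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal sketch of O(N) guided filtering with a grayscale guidance
# image; the color case is analogous with per-window matrix inverses.
def guided_filter(guide, src, radius, eps):
    size = 2 * radius + 1                       # box window size
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                  # per-window linear model
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
\end{verbatim}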
Now we introduce how to combine the pair-wise CRF with guided filtering. In the pairwise CRF model, according to Equation \ref{equation:Euedge}, the energy of a label assignment $\ve x$ is given by
\begin{equation}
E(\ve x)=\underbrace{\sum_i\phi^u_i(x_i)}_{unary}+\underbrace{\sum_{i<j}\psi_p(x_i,x_j,I_i,I_j)}_{edge},
\end{equation}
where the unary potential $\phi^u$ is the output of the context CRF. Note that we have dropped the potentials for the hidden variables $\ve y$, as they are not evaluated in our experiments. The pairwise potential $\psi_p$ in the fully connected CRF has the form
\begin{equation}
\psi_p(x_i,x_j,I_i,I_j)=\mu(x_i,x_j)k(I_i,I_j)
\end{equation}
where $\mu$ is the label compatibility function and the kernel $k(I_i,I_j)=w_{ij}$ as defined in Equation \ref{equation:w}. $\mu$ is initialized by the Potts model and is jointly learned during training of the whole network to take interactions between labels into account. A mean-field algorithm is used to approximate the maximum a posteriori solution, as shown in Algorithm \ref{algorithm:guided}.
\begin{algorithm}[H]
\caption{\small Guidance CRF - Training}
\textbf{Forward}
\textbf{input:} Guiding image $I$, segmentation score map $\phi^u$, compatibility matrix $\mu$, weight parameter $\lambda$
\vspace{-3mm}
\begin{enumerate}
\begin{spacing}{1.2}
\item $q(x_i)=\frac{1}{Z_i}\exp[-\phi^u_i(x_i)]$. \hfill $\triangleright$ Softmax
\item $g(x_i)=\sum_j{w_{ij}(I)q(x_j)}$ \hfill $\triangleright$ Guided filtering
\item $m(x_i)=\sum_{x_j}\mu(x_i,x_j)g(x_j)$ \hfill $\triangleright$ Compatibility transform
\item $\phi_i(x_i)=\phi^u_i(x_i) - \lambda m(x_i)$ \hfill $\triangleright$ Local update
\end{spacing}
\end{enumerate}
\vspace{-8mm}
\textbf{output:} marginal potential $\phi$
\vspace{-3mm}
\rule{1\textwidth}{0.1mm}
\vspace{-5mm}
\textbf{Backward}
\textbf{input:} Guidance image $I$, segmentation score map $\phi^u$, compatibility matrix $\mu$, gradient of marginal potential $\frac{\partial L}{\partial \phi}$, weight parameter $\lambda$
\vspace{-3mm}
\begin{enumerate}
\begin{spacing}{1.5}
\item $\frac{\partial L}{\partial \phi^u_i}(x_i) = \frac{\partial L}{\partial \phi_i}(x_i)$ ,$\frac{\partial L}{\partial m}(x_i) = -\lambda \frac{\partial L}{\partial \phi_i}(x_i)$
\item $\frac{\partial L}{\partial \mu}(l_1,l_2) = \frac{\partial L}{\partial m}(x_i) g(x_j)$, $\frac{\partial L}{\partial g}(x_i) = \frac{\partial L}{\partial m}(x_j) \mu(x_i,x_j) $
\item $\frac{\partial L}{\partial q}(x_i) =\sum_j w_{ij}(I) \frac{\partial L}{\partial g}(x_j)$
\item $\frac{\partial L}{\partial \phi^u_i}(x_i) = \frac{\partial L}{\partial \phi^u_i}(x_i) + \frac{\partial L}{\partial q}\frac{\partial q}{\partial \phi^u_i}(x_i)$
\end{spacing}
\end{enumerate}
\vspace{-8mm}
\textbf{output:}$\frac{\partial L}{\partial \phi^u}$, $\frac{\partial L}{\partial \mu}$
\label{algorithm:guided}
\end{algorithm}
\vspace{-5mm}
The forward pass in the training stage performs a softmax, a message passing step, a compatibility transform and a local update. As shown in the Forward part of Algorithm \ref{algorithm:guided}, all of these steps can be described by CNN layers. The parameters of the guided filter depend on the spatial and appearance information of the original image. Instead of being computed directly by convolutional layers, the message passing step can be executed as one guided filtering, which can be computed very efficiently.
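For illustration, a minimal sketch of one forward iteration, reusing the \texttt{guided\_filter} sketch from above, could look as follows; the filter radius is an assumption.
\begin{verbatim}
import numpy as np

# Minimal sketch of one forward iteration of the guidance CRF; phi_u is
# an (L, H, W) score map, guide an (H, W) grayscale guidance image,
# mu an (L, L) compatibility matrix. The radius is an assumption.
def guidance_crf_step(phi_u, guide, mu, lam, radius=8, eps=1.0):
    q = np.exp(-phi_u)
    q /= q.sum(axis=0, keepdims=True)            # 1. softmax
    g = np.stack([guided_filter(guide, q[l], radius, eps)
                  for l in range(q.shape[0])])   # 2. guided filtering
    m = np.einsum('kl,lhw->khw', mu, g)          # 3. compatibility transform
    return phi_u - lam * m                       # 4. local update
\end{verbatim}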
The back-propagation of the segmentation error differentials w.r.t.\ the inputs and network parameters of each layer is shown in the Backward part of Algorithm \ref{algorithm:guided}. It is straightforward to perform the back-propagation algorithm through the local update layer, the compatibility transform layer and the softmax layer.
For the message passing layer, the gradient w.r.t its input is
\begin{equation}
\frac{\partial L}{\partial g}(x_i) =\sum_j w_{ij}(I) \frac{\partial L}{\partial q}(x_j),
\end{equation}
which can also be calculated by performing guided filtering on the error differential map $\frac{\partial L}{\partial q}(x_j)$.
In the inference stage, as shown in \cite{he2015fast}, we down-sample (bilinear) the guidance image and the score map, compute the guidance parameters at the low resolution, up-sample (bilinear) the guidance parameters and obtain the filtering result. This operation accelerates this layer by more than $10\times$. We run three iterations in the inference stage.
With the marginal potentials, it is straightforward to obtain the marginal distribution $\hat p(x_i)=\frac{1}{Z_i}\exp[-\phi_i(x_i)]$. Given a training set $\{(I,\ve x),I\in \ve I,\ve x \in \mathcal{X}\}$, the target of the CRF optimization is to learn the parameters $\theta^*$ that maximize the posterior probability of the training data,
\begin{equation}
\theta^* = \argmin_{\theta}\Big[-\sum_I\sum_i \log \hat p(x_i|I;\theta)+\frac{\lambda}{2}||\theta||^2_2\Big]
\end{equation}
Here $I$ is a training image and $x_i$ is the ground truth segmentation label for pixel $i$ in this image; $\lambda$ is the weight decay parameter. The objective can be optimized by a standard stochastic gradient descent solver.
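In a deep learning framework, this objective is the pixel-wise negative log-likelihood plus weight decay; a minimal sketch, with $\hat p$ obtained as the softmax of the negated marginal potentials, is given below.
\begin{verbatim}
import torch.nn.functional as F

# Minimal sketch of the training objective: negative log-likelihood of
# the marginals plus an L2 penalty. phi: (N, L) marginal potentials,
# x: (N,) ground truth labels, params: iterable of model parameters.
def crf_objective(phi, x, params, lam):
    nll = F.cross_entropy(-phi, x, reduction='sum')  # p = softmax(-phi)
    l2 = sum((p ** 2).sum() for p in params)
    return nll + 0.5 * lam * l2
\end{verbatim}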
\section{Related works}
\copyrightnotice%
Nowadays, radar sensors are widely used for modern advanced driver assistance systems.
In contrast to other used sensor types such as camera and lidar, radar is known for its robustness regarding adverse weather conditions because of its comparatively large wavelength of about \SI{4}{mm} for a \SI{77}{GHz} radar.
Moreover, radar is able to directly measure the radial velocity of an object through the Doppler effect.
Other sensors usually need at least measurements from two time steps to provide a velocity estimation.
On the downside, radar data are sparse and prone to artifacts such as ghost targets.
Further challenges that radar has to deal with are noise, interference between radar sensors, measurement ambiguities, and multi-path propagation.
These challenging aspects in radar have gained more and more attention in literature in recent years.
In~\cite{kraus2020using}, Kraus et al.~present a method to segment radar targets into real and ghost objects.
However, only anomalies caused by multi-path propagation are considered, with a focus on certain related situations in their dataset.
In addition, they examined only vulnerable road users, such as bicyclists and pedestrians, for ghost object detection.
For the evaluation, a dual-sensor setup is used where measurements are combined and accumulated over \SI{200}{\milli\second}.
A similar task is approached in~\cite{roos2017ghost} where the authors propose a model-based detection algorithm for anomalies.
This method needs two radar sensors and is limited to the detection of multi-path-related anomalies which cause ghost objects.
In~\cite{prophet2019instantaneous}, a detection algorithm consisting of three consecutive steps using handcrafted features is presented in order to detect anomalies.
The used dataset is limited to targets whose Doppler velocity is within the unambiguously measurable range.
However, all of these works have at least one of the following restrictions: focusing solely on multi-path anomalies, accumulating multiple sensors and measurements over time, detecting anomalies at object level and not at target level, and using handcrafted features where expert knowledge is needed.
Although handcrafted features are generally easier to interpret, our radar sensor is a black box for which the required expert knowledge is missing.
\begin{figure}[!t]
\centerline{
\hspace{0.9cm}
\subfigure{
\includegraphics[width=0.75\linewidth]{resources/figures/eye_catcher_anomaly_green-red_cropped.png}
}
}
\centerline{
\subfigure{
\begin{tikzpicture}
\begin{axis}[
width=0.95*\linewidth,
grid=major,
grid style={dashed,gray!30},
xlabel= $y$ in \si{\meter},
ylabel= $x$ in \si{\meter},
x dir=reverse,
axis equal image,
xmin=-45,
xmax=45,
ymax=48,
ymin=-7.5,
colormap={CM}{
samples of colormap=(4 of radar_anomalies)},
colormap access=piecewise constant,
point meta min=0, point meta max=4,
legend columns = 3,
legend style={fill=white, fill opacity=0.8, draw opacity=1.0, text opacity=1.0, at={(0.925,0.1875),anchor=south}},
/tikz/every even column/.append style={column sep=0.3cm},
legend cell align={left}
] \VisualizeRadar{resources/figures/angular_anomaly_fc_25_with_sensor_calib_applied.csv}{
0={mark=*, black},
4={mark=*, blue},
2={mark=*, red},
1={mark=*, orange},
3={mark=*, black}
}
\addlegendentry{stationary}
\addlegendentry{moving}
\addlegendentry{anomalous}
\draw[line width=0.35mm, gray, rotate around={-16: (axis cs: -7.1, 25.5)}] (axis cs: -7.1, 25.5) rectangle (axis cs:-9.5, 31);
\end{axis}
\end{tikzpicture}
}
}
\vspace{-0.25cm}
\caption{Exemplary illustration of anomalous radar targets. Radar data are visualized below the corresponding camera image. The red points display the anomalous radar targets, which show a significant Doppler velocity, although no moving objects are perceivable. The ego-motion compensated Doppler velocity is visualized by the length of the arrows. The gray bounding box sketches the preceding vehicle.\label{fig:eye_catcher}}
\vspace{-0.5cm}
\end{figure}
In this work, we present a single-shot approach to identify anomalous radar targets using deep learning methods.
We consider the radar data of a single measurement cycle as a point cloud, which we call radar target list.
Each radar target consists of two spatial coordinates in combination with the ego-motion compensated and uncompensated Doppler velocity and the radar cross section (RCS) as input features.
In Fig.~\ref{fig:eye_catcher}, exemplary radar data including some anomalous radar targets are visualized.
Since radar targets are represented as point cloud, we leverage PointNets~\cite{PointNet,PointNet++} which are able to directly process point clouds.
We modify the PointNet++ architecture with a novel grouping variant which is tailor-made for the anomaly detection task and contributes to a multi-form grouping module.
Our main contributions are:
\begin{itemize}
\item a characterization of sensor-specific anomalies in radar in Section~\ref{section:sensor_setup},
\item a single-shot anomaly detector in radar data in Section~\ref{section:anomaly_detection},
\item a novel multi-form grouping module driven by radar anomaly characteristics in Section~\ref{section:mulit_form_group},
\item an extensive evaluation on real-world data in Section~\ref{section:experiments}.
\end{itemize}
\section{Sensor Setup} \label{section:sensor_setup}
In this work, we use the ARS 408-21 Long Range Radar \SI{77}{GHz} Premium (ARS 408-21) sensor, which is an industrial sensor developed by Continental AG.
The ARS~400 series of radar sensors was initially developed for the automotive industry and is widely used for automotive applications such as advanced driver assistance systems~\cite{weber2020automotive}.
As already mentioned, we consider radar data as a point cloud consisting of many radar targets.
Exactly these data are supplied by the interface of the ARS 408-21.
Thus, we consider the sensor as a black box because we are not able to get insights into the signal processing of the raw radar data.
After the non-transparent signal processing, the sensor outputs resolved radar reflections which we call radar targets.
Fig.~\ref{fig:considered_anomalies} provides a visualization of a radar measurement with five anomalous targets.
These targets are either highlighted in orange or red to distinguish two different kinds of anomalies.
To give the reader an intuition about what can be seen in the radar data, cars are marked with gray boxes.
However, these boxes are only meant for highlighting the radar targets on cars and do not claim to represent their exact bounding box.
The length of the arrows visualizes the ego-motion compensated Doppler velocity of the target.
For clarity purposes, arrows are only drawn for a compensated Doppler velocity greater than \SI{1}{m/s}.
Finally, the size of the points represents the RCS value.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}[
width=1.275\linewidth,
grid=major,
grid style={dashed,gray!30},
xlabel= $y$ in \si{\meter},
ylabel= $x$ in \si{\meter},
x dir=reverse,
axis equal image,
xmin=-25,
xmax=25,
ymax=51,
ymin=-9.9,
colormap={CM}{
samples of colormap=(4 of radar_anomalies)},
colormap access=piecewise constant,
point meta min=0, point meta max=4,
legend style={fill=white, fill opacity=0.8, draw opacity=1.0, text opacity=1.0, at={(0.835,0.1575),anchor=south}},
/tikz/column 2/.style={
column sep=0.3cm,
},
legend cell align={left},
legend columns = 2
]
\VisualizeRadar{resources/figures/more_anomalies_fc_1472_with_sensor_calib_applied.csv}{
0={mark=*, black},
1={mark=*, orange},
4={mark=*, blue},
2={mark=*, red},
3={mark=*, black}
}
\addlegendentry{stationary}
\addlegendentry{anomalous \uproman{1}}
\addlegendentry{moving}
\addlegendentry{anomalous \uproman{2}}
\draw[line width=0.35mm, gray, rotate around={0: (axis cs: 1, 24)}] (axis cs: 1, 24) rectangle (axis cs: -1.5, 29);
\draw[line width=0.35mm, gray, rotate around={0: (axis cs: 15, 39)}] (axis cs: 15, 39) rectangle (axis cs: 11.5, 45);
\draw[line width=0.35mm, gray, rotate around={0: (axis cs: 0.3, 36.2)}] (axis cs: 0.3, 36.2) rectangle (axis cs: -1.5, 42);
\draw[line width=0.35mm, gray, rotate around={0: (axis cs: 2, 43.5)}] (axis cs: 2, 43.5) rectangle (axis cs: 4, 46.5);
\end{axis}
\end{tikzpicture}
\vspace{-0.25cm}
\caption{Exemplary illustration of a radar measurement with several anomalous radar targets. Anomalies colored in red are probably related to errors in the direction of arrival estimation, whereas anomalies colored in orange are probably related to the multi-path propagation effect or possibly to Doppler velocity ambiguities. The gray bounding boxes sketch vehicles in the environment.\label{fig:considered_anomalies}}
\vspace{-0.5cm}
\end{figure}
Due to the unknown sensor-specific signal processing procedure, some of the anomalies considered here are presumably specific to the ARS 408-21. Hence, we assume that the sensor is not always able to filter out all anomalies in its signal processing. Moreover, some hypotheses about the causes of these sensor-specific anomalies are developed in the following.
The anomalies colored in red in Fig.~\ref{fig:considered_anomalies} are targets that are quite similar to car targets within the same range bin.
Here, the ego-motion compensated Doppler velocity may vary compared to car targets on the same range.
This is caused by the ego-motion compensation itself which takes the azimuth angle into account and, thus, motivates the additional consideration of the uncompensated Doppler velocity.
The high similarities in both the measured range and Doppler velocity of the anomalous targets lead to the hypothesis that the anomalies are caused by errors in the direction of arrival (DoA) estimation, also referred to as azimuth angle.
These errors may be related to measurement ambiguities and may be caused by the sensor design, e.g., antenna designs or beamforming.
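For reference, the compensation essentially removes the projection of the ego-velocity onto the line of sight to the target. The following is a minimal sketch, assuming straight-line ego-motion, a forward-facing sensor, and a negligible yaw rate; the sensor-internal processing may differ.
\begin{verbatim}
import numpy as np

# Minimal sketch of ego-motion compensation: a stationary target at
# azimuth phi appears with Doppler -v_ego*cos(phi), so adding this
# projection back yields roughly zero for stationary targets.
# Assumes straight-line ego motion and a forward-facing sensor.
def compensate_doppler(v_d, azimuth, v_ego):
    return v_d + v_ego * np.cos(azimuth)
\end{verbatim}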
Furthermore, anomalies may also be related to multi-path propagation effects. Multi-path propagation describes a phenomenon where the radar sensor receives echo signals which do not propagate along the direct line of sight between sensor and object. Instead, the propagation paths include reflections on other surfaces.
The anomalies colored in orange in Fig.~\ref{fig:considered_anomalies} are characterized by a high Doppler velocity surrounded by mostly stationary targets. These anomalies are probably related to multi-path propagation effects. In this case, a possible propagation path leading to the high measured Doppler velocity may include multiple bounces on the test vehicle. Due to the ego-motion of this vehicle, these bounces increase the frequency of the reflected wave and thus the measured Doppler velocity.
However, this is just one possible explanation of the anomaly and, based on the measurement visualized in Fig.~\ref{fig:considered_anomalies}, we can neither prove nor debunk this hypothesis.
This means that other influences are conceivable that lead to anomalies in radar data such as Doppler velocity ambiguities.
Throughout this work, we focus on these kinds of anomalies, since they can be comprehensibly identified based on a single radar point cloud using a camera image for verification purposes.
\section{Related Work} \label{section:related_work}
As already mentioned in Section~\ref{section:sensor_setup}, multiple effects have to be considered for our presented anomalies.
Hence, approaches proposed in literature are diverse, aiming at different stages of the processing chain for radar data.
In~\cite{roos2018enhancement} a novel signal processing algorithm for an enhancement of the ambiguity range of the Doppler velocity measurement is proposed.
In this way, anomalies resulting from measurement ambiguities are reduced.
However, this does not affect the anomalies caused by multi-path propagation effects, which are, e.g.,
analyzed in~\cite{kaman2018automotive, roos2017ghost}.
The authors of~\cite{roos2017ghost} additionally propose a model-based detection algorithm for the anomalies.
This algorithm relies on a comparison of the current motion state and the estimated bounding box of a target vehicle.
It is worth mentioning that the estimation of the motion state requires the usage of two radar sensors.
Moreover, to estimate the bounding box correctly the vehicle has to be represented by multiple radar targets.
Finally, the algorithm is limited to the detection of multi-path related anomalies that cause ghost objects and cannot detect single anomalous targets caused by factors such as measurement ambiguities.
Other approaches for anomaly detection involve the usage of machine learning techniques, especially of deep learning. As presented in~\cite{chalapathy2019deep}, these techniques have been applied to many different anomaly detection tasks.
However, most of the works focusing on the detection of radar anomalies consider this task as a semantic segmentation problem.
Hence, we adopt this approach for our investigation.
The authors of~\cite{prophet2019instantaneous} present a detection algorithm suitable for the usage of a single measurement of one radar sensor.
The algorithm is structured in three consecutive steps. First, moving targets are identified.
In the second step, handcrafted features for the identified moving targets are calculated. These features are then passed to a random forest classifier in the final step.
In contrast to the detection algorithm described in~\cite{roos2017ghost}, this algorithm is capable of detecting anomalies regardless of their causes.
Furthermore, it shows promising results on the task of classifying targets in the radar point cloud into infrastructure, real moving targets, and anomalies. However, it is necessary to note that the used dataset is limited to targets whose Doppler velocity is within the unambiguously measurable range. Hence, errors in the first algorithm step are limited to the rare case of purely tangential moving targets.
As a modification of this approach, \cite{garcia2019moving} presents a detection algorithm based on deep learning.
Instead of handcrafted features, the algorithm uses an occupancy grid map and a map of moving targets as input.
The latter incorporates the concept of the first stage of the algorithm presented in~\cite{prophet2019instantaneous}. Moreover, the random forest classifier is replaced by a convolutional neural network (CNN), which performs a segmentation of the input data. Considering only targets with a maximum longitudinal distance of \SI{30}{\meter}, the algorithm shows promising results. However, to use the radar point cloud in the algorithm described above, an occupancy grid map has to be calculated. As the authors of~\cite{schumann2018semantic} show, this step can be omitted. Instead, the task of semantic segmentation can be applied directly to radar point clouds using the PointNet++ architecture~\cite{PointNet++}.
In order to overcome the sparsity of radar measurements, the point clouds provided by multiple radar sensors are combined and accumulated over \SI{500}{\milli\second}. Using this accumulated point cloud as input, the PointNet++ segmentation architecture is used to classify different classes of road users.
In~\cite{chamseddine2021ghost}, the PointNet-architecture is used to detect ghost targets in 3D radar point clouds. In their work, ghost targets are referred to as multi-path reflections.
It is noteworthy that highly dense radar data are used, which consist of around $1000$ 3D points per measurement, as opposed to around $200$ 2D points from our used radar sensor.
Based on the contribution of~\cite{schumann2018semantic}, Kraus et~al. present in~\cite{kraus2020using} an application of the PointNet++ architecture to the task of segmenting radar targets into real and ghost objects.
Although this task is similar to our considered task, there are also fundamental differences.
The authors of~\cite{kraus2020using} use a dual-sensor setup, in which measurements are combined and accumulated over \SI{200}{\milli\second} to increase the density of the radar point cloud.
In addition to that, only vulnerable road users, such as bicyclists and pedestrians, and their corresponding ghost objects caused by multi-path propagation effects are considered.
In contrast to that, our detector is able to derive information about anomalies from only a single radar point cloud.
Moreover, we primarily consider vehicles and aim at detecting even single anomalous targets.
The causes of the anomalies are also not restricted to multi-path propagation effects.
To summarize this section, to the best of our knowledge, we propose the first anomaly detector in 2D sparse radar data using deep learning methods which combines all of the following aspects, i.e., single-shot detection, consideration of various kinds of anomalies, and approaching the task on a target level.
\section{Problem Statement} \label{section:problem_statement}
The goal of our proposed method is to detect anomalies in radar data.
In this work, anomalies are defined as radar targets with significant Doppler velocities which apparently do not correspond to real-world moving objects.
To identify these anomalous radar targets, the radar data are given as point clouds.
In detail, the radar point cloud $P$ consists of a set of five-dimensional points $P = \left\{p_i \in \mathbb{R}^5 | i = 1, \ldots, n \right\}$ with $n \in \mathbb{N}, n \leq 250$ the number of points per time step $k = 1, \ldots, m$.
Each point represents a radar target which is obtained by some sensor-processing of raw radar data.
Each target can be described by $p_i = \left(x, y, \tilde{v}_D, \sigma, v_D \right)$ where $(x,y)$ denotes the 2D-position, $\tilde{v}_D$ and $v_D$ the ego-motion compensated and uncompensated Doppler velocity, and $\sigma$ the RCS of the target.
For our anomaly detection task, radar data of only one single measurement cycle of one radar sensor are used.
This means that our radar data are augmented neither by accumulating measurements over time nor by using data from multiple sensors.
Moreover, the output of our anomaly detection method is a binary segmentation of the radar point cloud where each target is classified either as normal (non-anomalous) or as anomalous.
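For illustration, the following is a minimal sketch of this representation; the field names are illustrative, since the sensor interface itself is a black box.
\begin{verbatim}
import numpy as np

# Minimal sketch of the point-cloud representation defined above;
# column order follows p_i = (x, y, v_d_comp, rcs, v_d), names are
# illustrative. One measurement cycle holds at most 250 targets.
def as_point_cloud(targets):
    P = np.asarray(targets, dtype=np.float32).reshape(-1, 5)
    assert P.shape[0] <= 250, "at most 250 targets per measurement cycle"
    return P
\end{verbatim}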
\section{Anomaly Detection in Radar Data} \label{section:anomaly_detection}
In this section, we present our proposed method for the detection of anomalies in radar data.
After explaining the used architecture variants of PointNets, we focus on our developed multi-form grouping module.
\subsection{PointNets} \label{section:pointnets}
We investigate four different forms of PointNets~\cite{PointNet, PointNet++} for the previously described anomaly segmentation task. Namely, we evaluate one PointNet architecture and three PointNet++ variations. These variations differ in the grouping module used inside the set abstraction (SA) layer. Besides the single-scale grouping (SSG) and multi-scale grouping (MSG) modules proposed by Qi~et~al. in~\cite{PointNet++}, we developed and evaluated a so-called multi-form grouping (MFG) module which is introduced later in this section.
All of our PointNet models have been adapted for use with radar point clouds in terms of dimensions, i.e., two spatial and multiple feature dimensions. Moreover, we omit sampling in the first SA layer of our PointNet++ models because the radar measurements are already sparse.
\subsection{Multi-Form Grouping} \label{section:mulit_form_group}
\begin{figure}[t]
\centerline{
\subfigure[Circular grouping.]{
\begin{tikzpicture}
\begin{axis}[
width=0.95*\linewidth,
grid=major,
grid style={dashed,gray!30},
xlabel= $y$ in \si{\meter},
ylabel= $x$ in \si{\meter},
x dir=reverse,
axis equal image,
xmin=-40,
xmax=40,
ymax=48,
ymin=-7,
colormap={CM}{
samples of colormap=(4 of radar_anomalies)},
colormap access=piecewise constant,
point meta min=0, point meta max=4,
legend columns = 3,
legend style={fill=white, fill opacity=0.8, draw opacity=1.0, text opacity=1.0, at={(0.925,0.175),anchor=south}},
/tikz/every even column/.append style={column sep=0.3cm},
legend cell align={left}
] \VisualizeRadar{resources/figures/angular_anomaly_fc_25_with_sensor_calib_applied.csv}{
0={mark=*, black},
4={mark=*, blue},
2={mark=*, red},
3={mark=*, black},
1={mark=*, orange}
}
\DrawBox{line width=0.35mm, gray}{-19}{-6.5}{25}{2.5}{6.5}
\addplot[color=brown!80!black, only marks, style={mark=*, fill=brown!20!black}, mark size=20, fill opacity=0.025] coordinates {(23.07, 22.01)};
\addlegendentry{stationary}
\addlegendentry{moving}
\addlegendentry{anomalous}
\end{axis}
\end{tikzpicture}
\vspace{-0.25cm}
\label{fig:multi_form_grouping:circle}
}
}
\centerline{
\subfigure[Ring grouping.]{
\begin{tikzpicture}
\begin{axis}[
width=0.95*\linewidth,
grid=major,
grid style={dashed,gray!30},
xlabel= $y$ in \si{\meter},
ylabel= $x$ in \si{\meter},
x dir=reverse,
axis equal image,
xmin=-40,
xmax=40,
ymax=48,
ymin=-7,
colormap={CM}{
samples of colormap=(4 of radar_anomalies)},
colormap access=piecewise constant,
point meta min=0, point meta max=4,
legend columns = 3,
legend style={fill=white, fill opacity=0.8, draw opacity=1.0, text opacity=1.0, at={(0.925,0.175),anchor=south}},
/tikz/every even column/.append style={column sep=0.3cm},
legend cell align={left}
]
\begin{scope}[on background layer]
\clip (-40, 0) rectangle (80, 50);
\addplot[color=brown!80!black, only marks, style={mark=*, fill=brown!20!black}, mark size=76, fill opacity=0.025, clip mode=individual, forget plot] coordinates {(0, 0)};
\addplot[color=brown!80!black, only marks, style={mark=*, fill=white}, mark size=60, clip mode=individual, forget plot] coordinates {(0, 0)};
\end{scope} \VisualizeRadar{resources/figures/angular_anomaly_fc_25_with_sensor_calib_applied.csv}{
0={mark=*, black},
4={mark=*, blue},
2={mark=*, red},
3={mark=*, black},
1={mark=*, orange}
}
\DrawBox{line width=0.35mm, gray}{-19}{-6.5}{25}{2.5}{6.5}
\addlegendentry{stationary}
\addlegendentry{moving}
\addlegendentry{anomalous}
\end{axis}
\end{tikzpicture}
\vspace{-0.25cm}
\label{fig:multi_form_grouping:ring}
}
}
\caption{Illustration of multi-form grouping of a radar measurement with the hypothesis that the anomalous radar targets (red) originate from the preceding vehicle (gray bounding box).\label{fig:multi_form_grouping}}
\vspace{-0.25cm}
\end{figure}
The multi-form grouping module is mainly motivated by~\cite{komarichev2019cnn} and \cite{sheshappanavar2020anovellocal}.
In~\cite{komarichev2019cnn}, annular CNNs are proposed for 3D point clouds, where each point is utilized as the center of annular convolutions.
Moreover, in~\cite{sheshappanavar2020anovellocal} the authors investigate the benefits of using ellipsoids instead of ball spheres for queried regions in PointNet++.
Rather than using an ellipsoid, we propose the usage of a ring with the origin as center.
To the best of our knowledge, this approach has not been investigated for PointNet++ architectures so far.
The reason for our choice is indicated in Fig.~\ref{fig:multi_form_grouping:ring}, as some of the considered anomalies occur in a ring-shaped region around the sensor origin within the same range as car targets.
The ring grouping is exemplarily illustrated for the targets of the preceding vehicle in Fig.~\ref{fig:multi_form_grouping:ring}.
In contrast to elliptical and circular regions, the ring-shaped querying can be easily realized by filtering the range of each target, especially if the spatial information of the radar data is represented using polar coordinates $(r, \phi)$.
Besides that, a circular region that includes both the anomalous targets as well as the car targets covers a larger area than the corresponding ring, see Fig.~\ref{fig:multi_form_grouping:circle}.
Because of that, the ring contains fewer targets than the circle and is thus more memory efficient.
Nevertheless, we expect circular neighborhoods to be beneficial, e.g., for the kind of anomalies in Fig.~\ref{fig:considered_anomalies} which is assumed to be mainly related to the multi-path propagation effect.
Thus, we incorporate both querying forms into a single module, which leads to the naming of MFG.
In addition to that, our module also includes the idea of using the neighborhood information of multiple different scales for both querying forms, as introduced with MSG.
The components of the multi-form grouping module are depicted in Fig.~\ref{fig:multi_form_grouping}.
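To illustrate the two query forms of our module, the following is a minimal sketch in Cartesian coordinates; the radii and ring widths are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Minimal sketch of the two query forms of the MFG module. xy: (n, 2)
# target positions. Radii and ring widths are illustrative assumptions.
def ball_query(xy, center, radius):
    d = np.linalg.norm(xy - center, axis=1)
    return np.flatnonzero(d <= radius)

def ring_query(xy, r_center, half_width):
    r = np.linalg.norm(xy, axis=1)  # target range; trivial in polar coords
    return np.flatnonzero(np.abs(r - r_center) <= half_width)
\end{verbatim}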
\section{Dataset}\label{section:dataset}
For training and testing of our anomaly detection methods, we created a hand-labeled dataset consisting of real-world radar data. The dataset has been recorded with the test vehicle of Ulm University~\cite{kunz2015autonomous}. The test vehicle is equipped with three front-facing ARS 408-21 radar sensors. One sensor is mounted on the center of the front bumper; the other two radars are mounted on the front corners of the vehicle. Since our objective is to use solely data of a single radar sensor for the detection of anomalies, this enables us to effectively create three datasets, one for each radar sensor.
The recorded sequence has a length of approximately \SI{3.5}{\minute} and represents a drive along an urban scenario in Ulm, Germany. More precisely, the chosen route includes a roundabout and several intersections. The sequence contains measurements of all three front radar sensors and images of the front camera. The measurements include both the ego-motion compensated and uncompensated Doppler velocity, as well as information about the ego-motion itself.
Furthermore, we limit the maximum range of the considered targets to \SI{70}{\meter}. This is necessary to ensure a correct ground truth labeling of radar anomalies based on the available camera images. Moreover, we only label anomalous targets with a significant Doppler velocity, although some of the effects discussed in Section~\ref{section:sensor_setup}, e.g., multi-path propagation, also apply to stationary targets. This is mainly motivated by the sparsity of the radar point clouds, which makes it almost impossible to distinguish between stationary anomalies and, for instance, a correct measurement of the ground.
In addition, it should also be noted that anomalies represent only approximately \SI{2}{\percent} of the total radar targets. Hence, our dataset is highly unbalanced, which affects the training process. We additionally investigated the distribution of anomalies across the measurements of the dataset. Thereby, we observed that approximately \SI{75}{\percent} of the radar point clouds contain at least one anomaly. Nevertheless, we also include measurements without any anomalies in our dataset.
\section{Experiments} \label{section:experiments}
In this section, we evaluate the proposed architectures on our real-world dataset.
\subsection{Training} \label{section:training}
The training process has been performed solely using the major part of the data of the radar mounted at the front center of the vehicle.
This enables assessments of the generalization ability of our method when testing on data of the other two radars.
To obtain batches for the training process, we randomly duplicate some of the radar targets during training in order to pad all measurements in a batch to the same size of $250$ targets.
Furthermore, training of the models is performed with the Adam optimizer. We use a batch size of $48$ and learning rate scheduling. This schedule starts with a learning rate of $2 \times 10^{-4}$, which is halved every ten epochs. We train the models for $100$ epochs. Moreover, we apply data augmentation to avoid overfitting of our models. A challenging aspect for the training process is the unbalanced dataset. We address this problem by artificially increasing the number of anomalies in a radar measurement by combining them with the anomalies of consecutive measurements. To be more precise, for each measurement we consider the anomalies of the three measurements before and after. Each of these anomalies is inserted into the measurement with a probability of \SI{75}{\percent}. Both, the number of considered measurements and the probability are empirically chosen. Besides increasing the number of anomalies, we also scale the contribution of a target to the loss based on its class. More specifically, anomalies contribute nine times more to the loss than normal targets. This ratio is chosen empirically but was initially oriented at the ratio of normal targets and anomalies in the dataset. By increasing this ratio, we can increase the penalty of false negatives, i.e., non-detected anomalies. But it must be noted that this also decreases the penalty for false positives, which represent normal targets that have been classified as anomalies. Thus, a trade-off is necessary.
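A minimal sketch of two of these details, the duplication-based padding and the class-weighted loss, could look as follows; the tensor layouts are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Minimal sketch of two training details described above; tensor
# layouts are assumptions: points (n, 5), labels (n,) in {0, 1}.
def pad_by_duplication(points, labels, size=250):
    idx = torch.randint(points.shape[0], (size - points.shape[0],))
    return torch.cat([points, points[idx]]), torch.cat([labels, labels[idx]])

def weighted_loss(logits, labels):
    # anomalies (class 1) contribute nine times more than normal targets
    w = torch.tensor([1.0, 9.0], device=logits.device)
    return F.cross_entropy(logits, labels, weight=w)
\end{verbatim}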
\subsection{Quantitative Results} \label{section:quant_results}
For the evaluation of our anomaly detection method, the $F_1$ score is used as metric. The $F_1$ score is the harmonic mean between precision $P$ and recall $R$ and is defined by
\begin{equation}
F_1 = \frac{2 P R}{P + R} .
\end{equation}
\sisetup{detect-weight=true,detect-inline-weight=math}
\begin{table}[t!]
\caption{Results for the anomaly detection in radar data. Different models of the anomaly detection method are evaluated on our split dataset using $F_1$ score.}
\vspace{-0.25cm}
\label{tab:anomaly:other}
\begin{center}
\begin{tabular}{c|c|c|c|c}
\toprule
\diagbox[width=6.5em]{data}{model} & PointNet & \thead{PointNet++ \\ SSG} & \thead{PointNet++ \\ MSG} & \thead{PointNet++ \\ MFG} \\
\midrule
left & \SI{66.07}{\percent} & \SI{66.89}{\percent} & \SI{74.04}{\percent} & \bfseries{\SI{76.03}{\percent}}\\
right & \SI{77.01}{\percent} & \SI{73.12}{\percent} & \SI{81.97}{\percent} & \bfseries{\SI{82.21}{\percent}}\\
center & \SI{77.34}{\percent} & \SI{77.39}{\percent} & \SI{83.52}{\percent} & \bfseries{\SI{84.76}{\percent}} \\
\midrule
intersections & \SI{51.89}{\percent} & \SI{54.55}{\percent} & \SI{59.94}{\percent} & \bfseries{\SI{63.62}{\percent}} \\
\thead{without\\ intersections} & \SI{74.74}{\percent} & \SI{72.59}{\percent} & \SI{80.87}{\percent} & \bfseries{\SI{81.64}{\percent}} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.6cm}
\end{table}
Table~\ref{tab:anomaly:other} shows the results of the four variations of the anomaly detection method in radar data.
First, for evaluation purposes, we split the whole test dataset into data of the center, left, and right radar sensors. As already mentioned, we solely trained with data of the radar at the front center. For this reason, the test data of the center radar are rather small. For a more comprehensive evaluation and also to investigate the generalization ability, data of the left and right sensors are used for testing.
We notice that the PointNet model performs similarly to the PointNet++ SSG model, except on the right sensor data where the PointNet model is even better.
This indicates that the information gained from a single neighborhood by the SSG does not bring a notable benefit.
Nevertheless, the PointNet++ MSG and MFG models, which consider the neighborhood of multiple scales, outperform the PointNet model.
More precisely, the PointNet++ MFG model performs best. This indicates that our modification of MSG with adapted grouping regions, which leads to the MFG module, is beneficial.
Beyond that, we observe for all models that the $F_1$ score on data of the front left radar is lower than on data of the front right radar.
This might be caused by the fact that the front left radar sensor is slightly oriented towards oncoming traffic, whereas the front right radar sensor is rotated in the other direction (right-hand traffic).
As a result, an oncoming vehicle, which may cause anomalies, is visible longer for the front left radar.
Thereby, the number of anomalies is higher in the dataset of the front left and lower in the dataset of the front right radar sensor.
Consequently, the number of anomalies which are challenging for the models may also differ, leading to the differences observed in the results.
Second, we split the test data of the left and right radar in one part only with intersection scenarios and another part with everything else.
This is done because we hypothesize that the characteristics of the anomalies of our radar sensor in intersection scenarios, which are associated with slow ego-velocities, differ considerably.
On closer inspection, we observed that the ego-velocity affects the characteristics of the anomalies significantly which is possibly caused by a transition of range gates and thus velocity ambiguities.
The general comparison results of the four different models are still similar.
However, the results support our hypothesis because the performance on intersection data is significantly worse than on data without intersections.
This is caused by the underrepresentation of these scenarios in our dataset, which makes them more challenging for our networks.
\begin{table}
\caption{Mean inference time of the different models for processing the anomaly detection method.}
\vspace{-0.25cm}
\label{tab:anomaly:inference}
\begin{center}
\begin{tabular}{c|c|c|c|c}
\toprule
\diagbox[width=6.9em]{measure}{model} & PointNet & \thead{PointNet++\\ SSG} & \thead{PointNet++\\ MSG} & \thead{PointNet++\\ MFG} \\
\midrule
\thead{inference\\ time} & \bfseries \SI{1.6}{\milli\second} & \SI{13.4}{\milli\second} & \SI{26.7}{\milli\second} & \SI{23.4}{\milli\second}\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.5cm}
\end{table}
In Table~\ref{tab:anomaly:inference}, the inference times of the anomaly detection algorithms are illustrated.
The tests were performed on a Linux workstation with a single \textit{NVIDIA GeForce RTX 2070 SUPER} GPU.
It is worth mentioning that the computational performance was not the main focus of our work and can still be improved.
We observe that the improved performance of the PointNet++ MSG and PointNet++ MFG models is achieved at the cost of a much higher time complexity in comparison to the PointNet++ SSG model.
However, it is worth mentioning that the inference for our PointNet++ MFG model is about \SI{3}{\milli\second} faster than for the original PointNet++ MSG model.
This supports our claim that the ring-shaped query is more efficient than its circular pendant.
Moreover, it should be noted that the PointNet++ SSG model has a considerably higher inference time than the PointNet model, although both models achieve comparable $F_1$ scores.
\subsection{Qualitative Examples} \label{section:qual_examples}
\begin{figure*}[!t]
\centerline{
\subfigure{
\hspace*{0.775cm}
\includegraphics[width=0.335\textwidth]{resources/figures/cam_fl_1800_cropped.png}
}
\hfill
\subfigure{
\includegraphics[width=0.335\textwidth]{resources/figures/cam_fl_170_cropped.png}
}
}
\addtocounter{subfigure}{-2}
\centerline{
\subfigure[Exemplary illustration of a radar measurement near an intersection where our model classifies two anomalies with a low ego-motion compensated Doppler velocity incorrectly.]{
\begin{tikzpicture}
\begin{axis}[
grid=major,
grid style={dashed,gray!30},
xlabel= $y$ in \si{\meter},
ylabel= $x$ in \si{\meter},
x dir=reverse,
ytick={0, 20, 40, 60, 80},
yticklabels={0,20,40,60,80},
axis equal image,
xmin=-50,
xmax=50,
ymax=78,
ymin=-23,
colormap={CM}{
samples of colormap=(5 of radar_qualitative)},
colormap access=piecewise constant,
point meta min=0, point meta max=4,
legend style={fill=white, fill opacity=0.8, draw opacity=1.0, text opacity=1.0, at={(0.83,0.21),anchor=south}},
/tikz/every even column/.style={
column sep=0.3cm,
},
legend cell align={left},
legend columns = 2
] \VisualizeRadar{resources/figures/qualitative_mfg_fl_1800_with_sensor_calib_applied.csv}{
0={mark=*, black},
1={mark=*, green},
4={mark=*, blue},
3={mark=*, orange},
2={mark=*, red}
}
\DrawBox{line width=0.35mm, gray}{0}{4}{0}{3}{3};
\DrawBox{line width=0.35mm, gray}{0}{1.5}{23}{3}{6};
\DrawBox{line width=0.35mm, gray}{0}{5}{27.5}{3}{6};
\DrawBox{line width=0.35mm, gray}{0}{1.5}{36}{3}{7};
\DrawBox{line width=0.35mm, gray}{0}{4}{43}{3}{6};
\DrawBox{line width=0.35mm, gray}{0}{4}{52}{3}{8};
\DrawBox{line width=0.35mm, gray}{0}{3}{63}{2.5}{3};
\addlegendentry{TN (stationary)}
\addlegendentry{TP}
\addlegendentry{TN (moving)}
\addlegendentry{FN}
\end{axis}
\end{tikzpicture}
\label{fig:qualitative_examples:near_intersection}
}
\hfill
\subfigure[Exemplary illustration of a radar measurement with many correctly detected anomalies that are probably related to multi-path propagation involving the ego-vehicle.]{
\begin{tikzpicture}
\begin{axis}[
grid=major,
grid style={dashed,gray!30},
xlabel= $y$ in \si{\meter},
ylabel= $x$ in \si{\meter},
x dir=reverse,
ytick={0, 20, 40, 60, 80},
yticklabels={0,20,40,60,80},
axis equal image,
xmin=-50,
xmax=50,
ymax=78,
ymin=-23,
colormap={CM}{
samples of colormap=(5 of radar_qualitative)},
colormap access=piecewise constant,
point meta min=0, point meta max=4,
legend style={fill=white, fill opacity=0.8, draw opacity=1.0, text opacity=1.0, at={(0.83,0.21),anchor=south}},
/tikz/every even column/.style={
column sep=0.3cm,
},
legend cell align={left},
legend columns = 2
] \VisualizeRadar{resources/figures/qualitative_mfg_fl_170_with_sensor_calib_applied.csv}{
0={mark=*, black},
1={mark=*, green},
4={mark=*, blue},
2={mark=*, red},
3={mark=*, orange}
}
\addlegendentry{TN (stationary)}
\addlegendentry{TP}
\addlegendentry{TN (moving)}
\DrawBox{line width=0.35mm, gray}{25}{13}{34}{3}{5};
\DrawBox{line width=0.35mm, gray}{25}{20}{52}{3}{5};
\DrawBox{line width=0.35mm, gray}{50}{-12}{9}{3}{4};
\end{axis}
\end{tikzpicture}
\label{fig:qualitative_examples:good}
}
}
\caption{Qualitative examples of predictions of our anomaly detection method. On the left, a radar measurement with erroneous prediction is presented, i.e., false negatives (FN). On the right, a radar measurement is depicted where all anomalies are correctly detected, i.e., true positives (TP).\label{fig:qualitative_examples}}
\vspace{-0.25cm}
\end{figure*}
Qualitative examples of the anomaly detection prediction are visualized in Fig.~\ref{fig:qualitative_examples}.
An exemplary measurement which leads to wrong predictions is visualized in Fig.~\ref{fig:qualitative_examples:near_intersection}.
The vehicles highlighted in this measurement are driving towards an intersection and therefore slowing down.
Moreover, the measurement contains five anomalies.
Two of these anomalies have a significantly lower ego-motion compensated Doppler velocity than the remaining ones.
It is worth mentioning that the anomalies with low ego-motion compensated Doppler velocities are not detected, whereas our model correctly classifies the remaining ones as anomalies.
This also indicates that scenarios in which traffic slows down are challenging for our detector.
Thus, the degraded performance of our detector may be caused by the fact that scenarios in which both the ego-vehicle and the other road users move significantly slower are underrepresented in our dataset.
Nevertheless, the quantitative evaluation already showed that our model is capable of detecting the majority of the anomalies.
Accordingly, Fig.~\ref{fig:qualitative_examples:good} displays an exemplary measurement in which our model performs well.
It is worth mentioning that the anomalies, which are probably related to multi-path propagation effects involving the ego-vehicle, tend to cluster in this measurement.
As a consequence, the direct neighborhood of these anomalies contains more anomalous than normal targets.
Although this makes the detection of anomalies more challenging, our model detects all of these and the other anomalies correctly.
\subsection{Discussion} \label{section:discussion}
In general, the results of the anomaly detector in radar data are promising.
When taking inference time and $F_1$ score into account, the PointNet model offers a good balance between these two aspects and is also a suitable choice for systems with limited computational capacity.
The PointNet++ MFG model provides significantly higher performance which is accompanied by higher computational requirements.
However, the inference time of our PointNet++ MFG model is lower than that of the original MSG model.
Nevertheless, it is important to note that the used dataset is limited in several aspects.
Firstly, the amount of data and variety of situations is limited.
More precisely, only urban scenarios are covered in the dataset so far.
Besides, the three subdatasets, one for every sensor, are correlated because, although the data are from different sensors, they still cover the same driving sequence.
In addition, the variety of objects is limited, e.g., trucks are highly underrepresented.
However, these limitations can be overcome by extending the dataset.
Moreover, we restricted ourselves to using only a single radar measurement for the anomaly detection.
In this way, the anomalies can be neither detected using the temporal information of multiple consecutive measurements nor by the fusion of overlapping point clouds obtained from different radars.
However, this enables us to use a classical centralized fusion approach where temporal information is not taken into account until the tracking stage.
In addition, this approach makes it possible to combine our proposed anomaly detection with a single-shot object detector operating also on single radar measurements such as~\cite{griebel2019Car}.
\section{Conclusion} \label{section:conclusion}
In this work, we tackled the problem of anomaly detection in radar data using a single radar measurement.
We first described and defined the anomalies which we want to detect in our real-world data and hypothesized the reasons for these anomalies.
In doing so, we observed that approximately \SI{75}{\percent} of the radar point clouds in our dataset contain at least one anomaly.
For the anomaly detection, we used the PointNet architecture family as a base.
Thereby, we proposed a novel grouping algorithm for the PointNet++ architecture, the multi-form grouping.
In contrast to classical circular grouping, our approach takes the characteristics of anomalous radar targets into account.
This enables us to outperform the reference implementation with circular grouping both in terms of the $F_1$ score as well as the inference time.
Overall, our approach shows promising results for detecting anomalies in radar data.
In our future work, we aim to extend this approach to other kinds of anomalies occurring in radar measurements.
To this end, the dataset should be increased in size and various additional real-world scenarios should be included, e.g., non-urban scenarios with a speed limit higher than \SI{50}{\kilo\meter\per\hour}.
Besides that, the combination and interaction of our proposed anomaly detection with a single-shot object detector operating on radar measurements such as~\cite{griebel2019Car} should be further investigated.
Additionally, generative adversarial networks (GANs) seem suitable for further investigations of anomaly detection in radar data.
Finally, a comparison to other radar sensors regarding anomalies and their detection should be explored.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{s:i}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$, $N\geq2$ and let
$\Sig_k$ be a smooth closed submanifold of $\partial\O$ with dimension
$0\leq k\leq N-1$. Here $\Sig_0$ is a single point and
$\Sig_{N-1}=\partial\O$. For $\l\in{\mathbb R}$, consider the problem of finding
minimizers for the quotient:
\begin{equation}
\label{eq:mpqek} \m_{\l}(\Omega,\Sigma_k):= \inf_{u\in
H^{1}_{0}(\O)} ~\frac{\displaystyle\int_{\O}|\nabla
u|^2p~dx-\l\int_{\O}\d^{-2}|u|^2\eta~dx}
{\displaystyle\int_{\O}\d^{-2}|u|^2q~dx}~,
\end{equation}
where $\d(x):= \textrm{dist}(x,\Sig_k)$ is the distance function to
$\Sig_k$ and where the weights $p,q$ and $\eta$ satisfy
\begin{equation}\label{eq:weight} \textrm{$p,q\in C^2(\overline{\O})$,}\qquad
p,q>0\quad\textrm{ in $\overline{\O}$,}\qquad \eta>0\quad\textrm{ in
$\overline{\O}\setminus\Sig_k$,}\qquad \textrm{ $\eta\in Lip(\overline{\O})$}
\end{equation}
and
\begin{equation}\label{eq:min-pq}
\max_{\Sig_k}\frac{q}{p}=1,\qquad \textrm{ $\eta=0$}\qquad \textrm{ on $\Sig_k$ }.
\end{equation}
We put
\begin{equation}\label{eq:defIk}
I_{k}=\int_{\Sig_k}\frac{d\s}{\sqrt{1-\left(q(\s)/p(\s)\right)}},\quad
1\leq k\leq N-1\quad\textrm{ and }\quad I_0=\infty.
\end{equation}
It was shown by Brezis and Marcus in \cite{BM} that there exists $\l^*$ such that
if $\l>\l^*$ then
$\m_{\l}(\Omega,\Sigma_{N-1}) <\frac{1}{4}$ and it is attained, while for
$\l\leq\l^*$, $\m_{\l}(\Omega,\Sigma_{N-1}) =\frac{1}{4}$ and it is not
attained for any $\l<\l^*$.
${\l=\l^*}$ was studied by Brezis, Marcus and Shafrir in \cite{BMS}, where they
proved that $ \m_{\l^*}(\Omega,\Sigma_{N-1})$ admits a minimizer if and only if
$I_{N-1}<\infty$. The case where $k=0$ ($\Sig_0$ is reduced to a point
on the boundary) was treated by the first author in \cite{Fallccm}
and the same conclusions hold true.\\
Here we obtain the following
\begin{Theorem}\label{th:mulpqe} Let $\O$ be a smooth bounded
domain of ${\mathbb R}^N$, $N\geq3$ and let $\Sig_k\subset\partial\O$ be a closed
submanifold of dimension $k\in[1,N-2]$. Assume that the weight
functions $p,q$ and $\eta$ satisfy \eqref{eq:weight} and
\eqref{eq:min-pq}. Then, there exists
$\l^*=\l^*(p,q,\eta,\O,\Sig_k)$ such that
$$
\begin{array}{ll}
\displaystyle \m_{\l}(\Omega,\Sigma_k)=\frac{(N-k)^2}{4},\quad\forall\l\leq \l^*,\\
\displaystyle \m_{\l}(\Omega,\Sigma_k)<\frac{(N-k)^2}{4},\quad\forall\l> \l^*.
\end{array}
$$
The infimum $\m_{\l}(\Omega,\Sigma_k)$ is attained if $\l>\l^*$ and it is not attained when $\l< \l^*$.
\end{Theorem}
Concerning the critical case we get
\begin{Theorem}\label{th:crit}
Let $\l^*$ be given by Theorem \ref{th:mulpqe} and consider $I_k$ defined in \eqref{eq:defIk}. Then
$\m_{\l^*}(\Omega,\Sigma_k)$ is achieved if and only if $I_{k}<\infty $.
\end{Theorem}
By choosing $p=q\equiv1$ and $\eta=\d^2$, we obtain the following consequence of the above theorems.
\begin{Corollary}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$, $N\geq3$ and
$\Sig_k\subset\partial\O$ be a closed submanifold of dimension
$k\in\{1,\cdots,N-2\}$. For $\l\in{\mathbb R}$, put
$$
\nu_\l(\O,\Sig_k)=\inf_{u\in
H^{1}_{0}(\O)} ~\frac{\displaystyle\int_{\O}|\nabla
u|^2~dx-\l\int_{\O}|u|^2~dx}
{\displaystyle\int_{\O}\d^{-2}|u|^2~dx}~.
$$
Then, there exists $\bar{\l}=\bar{\l}(\O,\Sig_k)$ such that
$$
\begin{array}{ll}
\displaystyle \nu_{\l}(\Omega,\Sigma_k)=\frac{(N-k)^2}{4},\quad\forall\l\leq \bar{\l},\\
\displaystyle \nu_{\l}(\Omega,\Sigma_k)<\frac{(N-k)^2}{4},\quad\forall\l> \bar{\l}.
\end{array}
$$
Moreover $\nu_{\l}(\Omega,\Sigma_k) $ is attained if and only if $ \l> \bar{\l}$.
\end{Corollary}
The proof of the above theorems are mainly based on the
construction of appropriate sharp $H^1$-subsolution and $H^1$-supersolutions for the
corresponding operator
$$\mathcal{ L}_\l:=-\D
-\frac{(N-k)^2}{4}q\d^{-2}+\l\d^{-2}\eta $$
(with $p\equiv 1$).
These super-sub-solutions are perturbations of an approximate
``virtual" ground-state for the Hardy constant $ \frac{(N-k)^2}{4}$
near $\Sig_k$. For that we will consider the \textit{projection
distance} function $\tilde{\d}$ defined near $\Sig_k$ as
$$
\tilde \d(x):=\sqrt{|\mbox{dist}^{\partial\O}(\overline
x,\Sigma_k)|^2+|x-\overline x|^2},
$$
where $\overline x$ is the orthogonal projection of $x$ on $\partial\O$ and $\rm{dist}^{\partial\O}(\cdot,\Sig_k)$
is the geodesic distance to $\Sig_k$ on $\partial\O$ endowed with the induced metric.
While the distances $\d$ and $\tilde{\d}$ are equivalent, $\D\d$ and $\D\tilde{\d}$
differ and $\d$ does not, in general, provide the right approximate solution for $k\leq N-2$.
Letting $d_{\partial\O}=\textrm{dist}(\cdot,\partial\O)$, we have
$$
\tilde \d(x):=\sqrt{|\mbox{dist}^{\partial\O}(\overline
x,\Sigma_k)|^2+d_{\partial \O}(x)^2}.
$$
Our approximate virtual ground-state near $\Sig_k$ reads then as
\begin{equation}\label{eq:virtgs} x\mapsto d_{\partial\O}(x)\,\tilde \d^{
\frac{k-N}{2}}(x). \end{equation} In some appropriate Fermi coordinates
${y}=(y^1,y^2,\dots, y^{N-k}, y^{N-k+1},\dots, y^N)=(\tilde{y},\bar{y})\in{\mathbb R}^{N}$ with $\tilde{y}=(y^1,y^2,\dots, y^{N-k})\in{\mathbb R}^{N-k}$ (see next section for
precise definition), the function in \eqref{eq:virtgs} then becomes
$$
{y}\mapsto y^1|\tilde{y}|^{\frac{k-N}{2}}
$$
which is the ``virtual'' ground-state for the Hardy constant $ \frac{(N-k)^2}{4}$
in the flat case $\Sig_k= {\mathbb R}^k$ and $\O= {\mathbb R}^N$. We refer to Section \ref{s:pn} for more details about the constructions of the super-sub-solutions.\\
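For the reader's convenience, let us verify the last claim in the flat model; this short computation is not contained in the references above but follows by direct differentiation. Writing $u(y)=y^1|\tilde y|^{\a}$ with $\a=\frac{k-N}{2}$ and $r=|\tilde y|$, and noting that $u$ only depends on $\tilde y\in{\mathbb R}^{N-k}$, we get
$$
\D u=y^1\left(\a(\a-1)r^{\a-2}+\frac{N-k-1}{r}\,\a r^{\a-1}\right)+2\,\a r^{\a-2}y^1=\a(\a+N-k)\,y^1 r^{\a-2}.
$$
Since $\a(\a+N-k)=\frac{k-N}{2}\cdot\frac{N-k}{2}=-\frac{(N-k)^2}{4}$, it follows that
$$
-\D u=\frac{(N-k)^2}{4}\,|\tilde y|^{-2}\,u\quad\textrm{ in }{\mathbb R}^N_+,
$$
so $u$ attains the Hardy constant formally; note however that $|\n u|^2\sim |\tilde y|^{k-N}$ is not integrable near ${\mathbb R}^k$, which is why we call $u$ a ``virtual'' ground-state.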
The proof of the existence part in {Theorem} \ref{th:crit} is inspired by \cite{BMS}. It amounts to obtaining a uniform control
of a specific minimizing sequence for $ \m_{\l^*}(\Omega,\Sigma_k) $ near $\Sig_k$ via the $H^1$-supersolution constructed.\\
We mention that the existence and non-existence of extremals for
\eqref{eq:mpqek} and related problems were studied in
\cite{AS,CaMuPRSE,CaMuUMI,C,Fall,FaMu,FaMu1,NaC,Na,PT} and some
references therein. We would like to mention that some of the results in this paper might be
of interest in the study of semilinear equations with a Hardy potential singular
at a submanifold of the boundary. We refer to \cite{Fall-ne-sl, BMR1, BMR2},
where existence and nonexistence for semilinear problems were studied via the method
of super/sub-solutions.
\section{Preliminaries and Notations}\label{s:pn}
In this section we collect some notations and conventions we are
going to use throughout the paper.
Let ${\mathcal U}$ be an open subset of ${\mathbb R}^N$, $N\geq 3$, with boundary
$\mathcal{M}:=\partial{\mathcal U}$ a smooth closed hypersurface of ${{\mathbb R}^N}$. Assume that
$\mathcal{M}$ contains a smooth closed submanifold $\Sigma_k$ of dimension
$1\le k\le N-2$. In the following, for $x\in{\mathbb R}^N$, we let $d(x)$ be
the distance function of $\mathcal{M}$ and $\delta (x)$ the distance
function of $\Sigma_k$.
We denote by $N_\mathcal{M}$ the unit normal vector field of $\mathcal{M}$ pointed into ${\mathcal U}$.\\
Given $P\in\Sig_k$, the tangent
space $T_P \mathcal{M}$ of $\mathcal{M}$ at $P$ splits as
$$
T_P \mathcal{M}=T_P \Sigma_k\oplus N_P \Sigma_k,
$$
where $T_P\Sigma_k$ is the tangent space of $\Sigma_k$ and $N_P\Sigma_k$ stands for the normal space of $T_P\Sigma_k$ at $P$.
We assume that these subspaces are spanned by the orthonormal bases $\big(E_a\big)_{a=N-k+1,\cdots,N}$ and $\big(E_i\big)_{i=2,\cdots,N-k}$, respectively.
We will assume that $N_\mathcal{M}(P)=E_1$.
A neighborhood of $P$ in $\Sig_k$ can be parameterized via the map
$$
\bar y\mapsto f^P(\bar y)=\textrm{Exp}^{\Sigma_k}_P( \sum_{a=N-k+1}^{N}y^a E_a),
$$
where, $\bar{y}=(y^{N-k+1},\cdots,y^N)$ and where $\textrm{Exp}_P^{\Sigma_k}$
is the exponential map at
$P$ in $\Sigma_k$ endowed with
the metric induced by $\mathcal{M}$. Next we extend $(E_i)_{i=2,\cdots,N-k}$ to an orthonormal frame $(X_i)_{i=2,\cdots,N-k}$ in a neighborhood of $P$.
We can therefore define the parameterization of a neighborhood of $P$ in $\mathcal{M}$ via the mapping
$$
(\breve{y},\bar y)\mapsto h^P_{\mathcal{M}}(\breve{y},\bar y):=\textrm{Exp}^{\mathcal{M}}_{f^P(\bar
y)}\left(\sum_{i=2}^{N-k} y^iX_i\right),
$$
with $ \breve{y}=(y^{2},\cdots,y^{N-k})$
and $\textrm{Exp}_Q^\mathcal{M}$ is the exponential map at $Q$ in $\mathcal{M}$ endowed with
the metric induced by ${\mathbb R}^N$.
We now have a parameterization of a neighborhood of $P$ in ${\mathbb R}^N$ defined via the above {Fermi coordinates} by the map
$$
y=(y^1,\breve{y},\bar y)\mapsto F^P_{\mathcal{M}}(y^1,\breve{y},\bar y)=h^P_{\mathcal{M}}(\breve{y},\bar y)+y^1 N_\mathcal{M}(h^P_{\mathcal{M}}(\breve{y},\bar y)).
$$
Next we denote by $g$ the metric induced by $F^P_{\mathcal{M}} $ whose components are defined by
$$g_{\a\b}(y)={\langle}\partial_\a F^P_{\mathcal{M}}(y),\partial_\b F^P_{\mathcal{M}}(y){\rangle}.$$
Then we have the following expansions (see for instance \cite{FaMah})
\begin{equation}\label{eq:metexp}
\begin{array}{lll}
g_{11}(y)=1\\
g_{1\b}(y)=0,\quad\quad\quad\quad\quad\quad\textrm{ for } \b=2,\cdots,N\\
g_{\a\b}(y)=\d_{\a\b}+{\mathcal O}(|\tilde{y}|),\quad\textrm{ for } \a,\b=2,\cdots,N,
\end{array}
\end{equation}
where $\tilde{y}=(y^1,\breve{y})$ and ${\mathcal O}(r^m)$ is a smooth function in the variable $y$ which is uniformly bounded by
a constant (depending only on $\mathcal{M}$ and $\Sig_k$) times $r^m$.
In accordance with the above coordinates, we will consider the ``half''-geodesic neighborhood contained in ${\mathcal U}$ around
$\Sigma_k$ of radius $\rho$
\begin{equation}\label{eq:geodtub}
{\mathcal U}_{\rho}(\Sigma_k) := \{ x \in {\mathcal U}: \quad \tilde{\d}(x)<\rho \},
\end{equation}
with $\tilde \d $ is the projection distance function given by
$$
\tilde \d(x):=\sqrt{|\mbox{dist}^{\mathcal{M}}(\overline
x,\Sigma_k)|^2+|x-\overline x|^2},
$$
where $\overline x$ is the orthogonal projection of $x$ on $\mathcal{M}$ and $\rm{dist}^{\mathcal{M}}(\cdot,\Sig_k)$
is the geodesic distance to $\Sig_k$ on $\mathcal{M}$ with the induced metric.
Observe that
\begin{equation}\label{eq:tidFptiy}
\tilde \d(F^P_\mathcal{M}(y))=|\tilde y|,
\end{equation}
where $\tilde y=(y^1,\breve{y})$.
We also
define $\sigma(\overline x)$ to be the orthogonal projection of $\overline x$ on $\Sigma_k$ within $\mathcal{M}$.
Letting
$$
\hat \delta(\overline x):=\mbox{dist}^{\mathcal{M}}(\overline x,\Sigma_k),
$$
one has
$$
\overline x=\textrm{Exp}_{\sigma(\overline x)}^\mathcal{M}(\hat\d\,\n\hat\d)\quad \hbox{or
equivalently }\quad \sigma(\overline x)=\textrm{Exp}_{\overline x}^\mathcal{M}(-\hat\d\,\n\hat\d).
$$
Next we observe that
\begin{equation}\label{eq:td-hd}
\tilde{\d}(x)=\sqrt{\hat{\d}^2(\bar{x})+d^2(x)}.
\end{equation}
In addition it can be easily checked via the implicit function theorem that there exists a positive constant
$\b_0=\b_0(\Sig_k,\O)$ such that $\tilde{\d}\in C^\infty({\mathcal U}_{\b_0}(\Sig_k))$.
It is clear that for
$\rho$ sufficiently small, there exists a finite number of Lipschitz
open sets $(T_i)_{1\le i\le N_0}$ such that
$$
T_i\cap T_j=\emptyset \quad \hbox{for }\,i\ne j\quad \hbox{and}\quad
{\mathcal U}_\rho(\Sig_k)=\bigcup_{i=1}^{N_0}\overline{ T_i}.
$$
We may assume that each $T_i$ is chosen, using the above coordinates, so that
$$
T_i=F^{p_i}_{\mathcal{M}}(B^{N-k}_+(0,\rho)\times D_i)\quad\hbox{with }\; p_i\in \Sigma_k,
$$
where the $D_i$'s are Lipschitz disjoint open sets of ${\mathbb R}^k$ such that
$$
\bigcup_{i=1}^{N_0} \overline{f^{p_i} (D_i)}=\Sig_k.
$$
In the above setting we have
\begin{Lemma} \label{lemddelta} As $\tilde{\d}\to0$, the following expansions hold
\begin{enumerate}
\item $\d^2=\tilde{\d}^2(1+O(\tilde{\d}))$,
\item $\n \tilde{\d}\cdot\n d=\displaystyle\frac{d}{\tilde{\d}}$,
\item $|\n\tilde{\d}|=1+O(\tilde{\d}),$
\item $\Delta \tilde{\delta }=\frac{N-k-1}{\tilde{\delta}}+O(1)$,
\end{enumerate}
where $O(r^m)$ is a function for which there exists a constant $C=C(\mathcal{M},\Sig_k)$ such that
$$
|O(r^m)|\leq C r^m.
$$
\end{Lemma}
\noindent{{\bf Proof. }}
\begin{enumerate}
\item Let $P\in \Sig_k$. With an abuse of notation, we write $x(y)= F^P_\mathcal{M}(y)$ and we set
$$
\vartheta( y):=\frac12\delta^2 (x({y})).
$$
The function $\vartheta$ is
smooth in a small neighborhood of the origin in ${\mathbb R}^{N}$ and Taylor
expansion yields
\begin{eqnarray}
\vartheta( y)&=&\vartheta(0,\bar{y})+\nabla\vartheta(0,\bar{y})[\tilde y]+\frac12\nabla^2\vartheta(0,\bar{y})[\tilde y,\tilde y]+{\mathcal O}(|\tilde y|^3)\nonumber\\
&=&\label{eq:vartzyb}\frac12\nabla^2\vartheta(0,\bar{y})[\tilde y,\tilde y]+{\mathcal O}(|\tilde y|^3) .
\end{eqnarray}
Here we have used the fact that $x(0,\bar{y} )\in \Sig_k$, so that $ \d(x(0,\bar{y}))=0$, and that $\vartheta\geq0$ attains its minimum at $\tilde{y}=0$, so that $\nabla\vartheta(0,\bar{y})=0$.
We write
$$
\nabla^2\vartheta(0,\bar{y})[\tilde y,\tilde
y]=\sum_{i,l=1}^{N-k}\Lambda_{il}y^iy^l,
$$
with
\begin{eqnarray*}
\Lambda_{il} &:=&\frac{\partial^2 \vartheta}{\partial y^i\partial y^l}/_{ \tilde{y}=0}\\
&=& \frac{\partial}{\partial y^l}\bigg(\frac{\partial }{\partial x^j} \big(\frac12 \delta^2\big)(x)\,\frac{\partial x^j}{\partial y^i} \bigg)/_{
\tilde{y}=0}\\
&=&\frac{\partial^2}{\partial x^j\partial x^s}\big(\frac12
\delta^2\big)(x)\frac{\partial {x^j}}{\partial y^i}\frac{\partial x^s}{\partial y^l}/_{
\tilde{y}=0}+\frac{\partial }{\partial x^j}\big(\frac12\delta^2\big)(x)\frac{\partial^2x^j}{\partial y^i\partial y^l}/_{
\tilde{y}=0}.
\end{eqnarray*}
Now using the fact that
$$
\frac{\partial x^s}{\partial y^l}/_{ \tilde{y}=0}=g_{ls}=\delta_{ls}\quad
\textrm{and}\quad\frac{\partial }{\partial x^j}(\delta^2)(x)/_{
\tilde{y}=0}=0,
$$
we obtain
\begin{eqnarray*}
\Lambda_{il} y^i y^l&=&y^i y^s\,\frac{\partial^2}{\partial x^i\partial x^s}(\frac12
\delta^2)(x)/_{ \tilde{y}=0} \\
&=& |\tilde y|^2,
\end{eqnarray*}
where we have used the fact that the matrix $\left(\frac{\partial^2}{\partial x^i\partial x^s}(\frac12
\delta^2)(x)/_{ \tilde{y}=0} \right)_{1\leq i,s\leq N}$ is the matrix of the orthogonal projection onto the normal space of $T_{f^P(\bar{y})}\Sig_k$.
Hence using \eqref{eq:vartzyb}, we get
$$
\delta^2 (x({y}))=|\tilde y|^2 +{\mathcal O}(|\tilde y|^3).
$$
This together with \eqref{eq:tidFptiy} prove the first expansion.
\item Thanks to \eqref{eq:tidFptiy} and \eqref{eq:metexp}, we infer that
$$
\n \tilde{\d}\cdot\n d(x(y))= \frac{\partial \tilde{\d}( x(y))}{\partial y^1}=\frac{y^1}{|\tilde{y}|}=\frac{d(x(y))}{\tilde{\d}(x(y))}
$$
as desired.
\item We observe that
$$
|\n \tilde{\d}|^2(x(y))=\frac{\partial \tilde{\d}}{\partial x^\t}\frac{ \partial \tilde{\d}}{\partial x^\t} (x(y)) =g^{\a \b}(y)\frac{\partial \tilde{\d}(x(y))}{\partial y^\a}\frac{\partial \tilde{\d}(x(y))}{\partial y^\b},
$$
where $(g^{\a\b})_{\a,\b=1,\dots,N} $ is the inverse of the matrix $(g_{\a\b})_{\a,\b=1,\dots,N} $.
Therefore using \eqref{eq:tidFptiy} and \eqref{eq:metexp}, we get the result.
\item Finally using the expansion of the Laplace-Beltrami operator $\D_g$, see Lemma 3.3 in \cite{mm}, applied to \eqref{eq:tidFptiy}, we get
the last estimate.
\end{enumerate}
\hfill {$\square$}\goodbreak \medskip
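As a simple illustration, in the flat model ${\mathcal U}={\mathbb R}^N_+$, $\mathcal{M}=\partial{\mathbb R}^N_+$ and $\Sig_k={\mathbb R}^k\subset\partial{\mathbb R}^N_+$, all the expansions of Lemma \ref{lemddelta} are exact: in the coordinates above one has $\d=\tilde{\d}=|\tilde y|$ and $d=y^1$, and therefore $\n\tilde{\d}\cdot\n d=\frac{y^1}{|\tilde y|}=\frac{d}{\tilde{\d}}$, $|\n\tilde{\d}|=1$ and $\D\tilde{\d}=\frac{N-k-1}{\tilde{\d}}$.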
In this -- and only this -- section, let $q:\overline{{\mathcal U}} \to {\mathbb R}$
be such that \begin{equation}\label{eq:q} q\in C^2(\overline{{\mathcal U}})\quad\textrm{ and
}\quad q\leq 1\quad\textrm{ on } \Sig_k. \end{equation} For $M,a\in{\mathbb R}$, we
consider the function \begin{equation}\label{eq:pert-gst}
W_{a,M,q}(x)=X_a(\tilde{\delta}(x))\,e^{Md(x)}\,d(x)\,\tilde{\delta}(x)^{\alpha(x)},
\end{equation} where
$$
X_a(t)=(-\log(t))^a,\qquad 0<t<1, $$
and
$$
\alpha(x)=\frac{k-N}{2}+\frac{N-k}{2}\sqrt{1-q(\s(\bar{x}))+\tilde{\d}(x)}.
$$
In the above setting, the following useful result holds.
\begin{Lemma}\label{LapFinalExp}
As $\d\to 0$, we have
\begin{eqnarray*}
\Delta W_{a,M,q}&=& - \frac{(N-k)^2}{4}\,q\,\delta^{-2} \,W_{a,M,q}-{2\,a\,\sqrt{\tilde\alpha}}\,X_{-1}(\d)\,\delta^{-2}\,W_{a,M,q}
\\
&+& {a(a-1)} \,X_{-2}(\d)\,\delta^{-2}\,W_{a,M,q}+\frac{h+2M}{d}\,W_{a,M,q}+O(|\log(\delta)|\,\delta^{-\frac32})\,W_{a,M,q},\nonumber
\end{eqnarray*}
where $\tilde{\alpha}(x)=\frac{(N-k)^2}{4}\left(1- q(\sigma (\overline x))+\tilde{\delta} (x)\right) $ and $h=\D d$. Here the lower order term satisfies
$$
|O(r)|\leq C |r|,
$$
where $C$ is a positive constant only depending on $a,M,\Sig_k,{\mathcal U}$ and $\|q\|_{C^2({\mathcal U})}$.
\end{Lemma}
\noindent{{\bf Proof. }}
We put $s=\frac{(N-k)^2}{4} $.
Let $w=\tilde{\delta}(x)^{\alpha(x)} $ then the following formula can be easily verified
\begin{equation}\label{eq:1}
\D w=w\bigg( \D \log(w)+|\nabla\log(w)|^2 \bigg).
\end{equation}
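Indeed, since $\n w=w\,\n\log(w)$, we have
$$
\D w={\rm div}\big(w\,\n \log(w)\big)=\n w\cdot\n\log(w)+w\,\D\log(w)=w\Big(|\n\log(w)|^2+\D\log(w)\Big),
$$
which is precisely \eqref{eq:1}.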
Since
$$
\log(w)=\alpha\log(\tilde{\delta}),
$$
we get
\begin{equation}\label{eq:2}
\D \log(w)=\D \alpha\log(\tilde{\delta})+2\nabla\alpha\cdot \nabla
(\log(\tilde{\delta}))+\alpha\D \log(\tilde{\delta}).
\end{equation}
We have
\begin{equation}\label{eq:3}
\D\alpha=\D\sqrt{\tilde \alpha}=\sqrt{\tilde
\alpha}\,\left(\frac12 \D\log(\tilde \alpha) +\frac14|\nabla
\log(\tilde \alpha)|^2 \right),
\end{equation}
$$
\nabla\log(\tilde\alpha)=\frac{\nabla\tilde\alpha}{\tilde\alpha}=\frac{-s\nabla(q\circ\sigma)+s\nabla\tilde{\delta}}{\tilde\a}
$$
and using the formula \eqref{eq:1}, we obtain
\begin{eqnarray*}
\D\log(\tilde\alpha)&=&\frac{\D\tilde\alpha}{\tilde\alpha} -\frac{|\nabla\tilde\alpha|^2}{\tilde\alpha^2}\\
&=& \frac{-s\D(q\circ\sigma)+s\D\tilde{\delta}}{\tilde\alpha} -\frac{s^2|\nabla(q\circ\sigma)|^2+s^2|\nabla\tilde{\delta}|^2}
{\tilde\alpha^2}+2s^2\frac{\nabla(q\circ\sigma)\cdot\nabla\tilde{\delta}}{\tilde\alpha^2}.
\end{eqnarray*}
Putting the above in \eqref{eq:3}, we deduce that
\begin{equation}\label{eq:4}
\D\alpha =\frac{1}{2\sqrt{\tilde\alpha}} \bigg( -s\D (q\circ\sigma)+s\D\tilde{\delta}-
\frac12\frac{s^2|\nabla(q\circ\sigma)|^2+s^2|\nabla\tilde{\delta}|^2-2s^2\nabla(q\circ\sigma)\cdot\nabla\tilde{\delta}}{\tilde\alpha}\bigg).
\end{equation}
Using Lemma \ref{lemddelta} and
the fact that $q$ is in $C^2(\overline{{\mathcal U}})$,
together with \eqref{eq:4} we get
\begin{equation}\label{eq:5}
\D\alpha= O({\tilde{\delta}^{-\frac32}}).
\end{equation}
On the other hand
$$
\nabla
\alpha=\nabla\sqrt{\tilde\alpha}=\frac12\frac{\nabla\tilde\alpha}{\sqrt{\tilde\alpha}}=-\frac{s}{2\sqrt{\tilde\alpha}}\nabla(q\circ\sigma)+
\frac{s}{2}\frac{\nabla\tilde{\delta}}{\sqrt{\tilde\alpha}}
$$
so that
$$
\nabla \alpha\cdot \nabla
\tilde{\delta}=-\frac{s}{2\sqrt{\tilde\alpha}}\nabla(q\circ\sigma)\cdot
\nabla \tilde{\delta}+
\frac{s}{2}\frac{|\nabla\tilde{\delta}|^2}{\sqrt{\tilde\alpha}}=O(\tilde{\d}^{-\frac12})
$$
and from which we deduce that
\begin{equation}\label{eq:6}
\nabla\alpha\cdot \nabla\log(\tilde{\delta}) = \frac{1}{\tilde{\delta}} \nabla\alpha\cdot \nabla\tilde{\delta}
=
O(\tilde{\d}^{-\frac32}).
\end{equation}
By Lemma \ref{lemddelta} we have that
$$
\alpha\D\log(\tilde{\delta})=\alpha\,\frac{N-k-2}{\tilde{\delta}^2}\,(1+O(\tilde{\delta})).
$$
Taking back the above estimate together with \eqref{eq:6} and \eqref{eq:5} in \eqref{eq:2}, we get
\begin{equation}\label{eq:7}
\D\log(w) = \alpha\,\frac{N-k-2}{\tilde{\delta}^2}\,(1+O(\tilde{\delta}))
+O(|\log(\tilde{\d})|\tilde{\d}^{-\frac32}).
\end{equation}
We also have
$$
\nabla(\log(w))=\nabla(\alpha \log(\tilde{\delta}))=\alpha
\frac{\nabla\tilde{\delta}}{\tilde{\delta}}+\log(\tilde{\delta})\nabla \alpha
$$
and thus
$$
|\nabla(\log(w))|^2=\frac{\alpha^2}{\tilde{\delta}^2}+\frac{2\alpha\log(\tilde{\delta})}{\tilde{\delta}}\,\nabla\tilde{\delta}\cdot\nabla
\alpha+|\log(\tilde{\delta})|^2|\nabla \alpha|^2=\frac{\alpha^2}{\tilde{\delta}^2}+ O(|\log(\tilde{\d})|\tilde{\d}^{-\frac32}).
$$
Putting this together with \eqref{eq:7} in \eqref{eq:1}, we conclude that
\begin{equation}\label{eq:8}
\frac{ \D w }{w}=
\alpha\,\frac{N-k-2}{\tilde{\delta}^2}+\frac{\alpha^2}{\tilde{\delta}^2}+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32}).
\end{equation}
Now we define the function
$$
v(x):=d(x)\,w(x),
$$
where we recall that $d$ is the distance function to the boundary of ${\mathcal U}$.
It is clear that
\begin{equation}\label{eq:9}
\D v= w\D d+d\D w+2\nabla d\cdot \nabla w.
\end{equation}
Notice that
$$
\nabla w=w\,\nabla
\log(w)=w\,\left(\log(\tilde{\delta})\nabla\alpha+\alpha\frac{\nabla
\tilde{\delta}}{\tilde{\delta}}\right)
$$
and so
\begin{equation}\label{eq:10}
\nabla d\cdot\nabla w=w\,\left(\log(\tilde{\delta})\nabla d
\cdot\nabla\alpha+\frac{\alpha}{\tilde{\delta}}\nabla d\cdot\nabla
\tilde{\delta}\right).
\end{equation}
Recall the second assertion of Lemma \ref{lemddelta} that we rewrite as
\begin{equation}\label{eq:11}
\nabla d\cdot\nabla \tilde{\delta}=\frac{d}{\tilde{\delta}}.
\end{equation}
Therefore
\begin{equation}\label{eq:12}
\nabla d \cdot\nabla\alpha=\nabla
d\cdot\left(-\frac{s}{2\sqrt{\tilde
\alpha}}\nabla(q\circ\sigma)+\frac{s}{2}\frac{\nabla\tilde{\delta}}{\sqrt{\tilde
\alpha}} \right)=\frac{s}{2\sqrt{\tilde
\alpha}}\frac{d}{\tilde{\delta}}-\frac{s}{2\sqrt{\tilde
\alpha}}\nabla d\cdot\nabla(q\circ\sigma).
\end{equation}
Notice that if $x$ is in a neighborhood of some point $P\in \Sig_k$ one has
$$
\nabla d\cdot\nabla(q\circ\sigma)(x)=\frac{\partial}{\partial
y^1}q(\sigma(\overline x))=\frac{\partial}{\partial y^1}q( f^P(\overline y))=0.
$$
This with \eqref{eq:12} and \eqref{eq:11} in \eqref{eq:10} give
\begin{eqnarray}\label{eq:13}
\nabla d\cdot\nabla w&=&w\,\left(O(\tilde{\delta}^{-\frac32}|\log(\tilde{\delta})|)\,d+\frac{\alpha}{\tilde{\delta}^2}\,
d \right)\nonumber \\
&=& v\,\left(O(\tilde{\delta}^{-\frac32}|\log(\tilde{\delta})|)+\frac{\alpha}{\tilde{\delta}^2}\right).
\end{eqnarray}
From \eqref{eq:8}, \eqref{eq:9} and \eqref{eq:13} (recalling the expression of $\a$ above), we get immediately
\begin{eqnarray}\label{eq:14}
\D v&=&\left(
\alpha\,\frac{N-k}{\tilde{\delta}^2}+\frac{\alpha^2}{\tilde{\delta}^2}\right)\,v+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\,v+
\frac{h}{d}\,v \nonumber\\
&=&\left(- \frac{(N-k)^2}{4}
\frac{q(x)}{\tilde{\delta}^2}+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\right)\,v+
\frac{h}{d}\,v,
\end{eqnarray}
where $h=\D d$. Here we have used the fact that $|q(x)-q(\s(\bar{x}))|\leq C \tilde{\d }(x)$ for $x$ in a neighborhood of $\Sig_k$.\\
Recall that
$$
W_{a,M,q}(x)=X_a(\tilde{\delta}(x))\,e^{Md(x)}\,v(x), \quad \hbox{ with }\quad
X_a(\tilde{\delta}(x)):=(-\log(\tilde{\delta}(x)))^a,
$$
where $M$ and $a$ are two real numbers. We have
\begin{eqnarray*}
\D W_{a,M,q} = X_a(\tilde{\delta})\,\D (e^{Md}\,v)+2\nabla X_a(\tilde{\delta})\cdot\nabla (e^{Md}\,v)+e^{Md}\,v\,\D X_a(\tilde{\delta})
\end{eqnarray*}
and thus
\begin{equation}\label{eq:15}
\begin{array}{lll}
\D W_{a,M,q}
&= &X_a(\tilde{\delta})e^{Md}\,\D
v+X_a(\tilde{\delta}) \D (e^{Md})\, v+2X_a(\tilde{\delta})\n v\cdot \nabla(e^{Md})\\
&\,\,&+\,2\nabla X_a(\tilde{\delta})\cdot\left( v\,\nabla (e^{Md})+e^{Md}\nabla v\right)+e^{Md}\,v\,\D
X_a(\tilde{\delta}).
\end{array}
\end{equation}
We shall estimate the above expression term by term.\\
First we have from \eqref{eq:14}
\begin{equation}\label{eq:141}
X_a(\tilde{\delta})e^{Md}\,\D v= - \frac{(N-k)^2}{4}
\frac{q}{\tilde{\delta}^2}\, W_{a,M,q} +
\frac{h}{d}\, W_{a,M,q} +O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\, W_{a,M,q}.
\end{equation}
It is plain that
\begin{equation}\label{eq:17}
X_a(\tilde{\delta})\,\D (e^{Md})\,v=O(1)
\,W_{a,M,q}.
\end{equation}
It is clear that
\begin{equation}\label{eq:nv}
\nabla v= w\,\nabla d+d\,\nabla w=w\,\nabla
d+d\,\left(\log(\tilde{\delta})\,\nabla\alpha+\alpha \frac{\nabla
\tilde{\delta}}{\tilde{\delta}}\right)\, w.
\end{equation}
From which and \eqref{eq:11} we get
\begin{eqnarray}\label{eq:16}
X_a(\tilde{\delta})\,\nabla v\cdot \nabla(e^{Md}) &=& M\,X_a(\tilde{\delta})\,e^{Md}\,w\left\{ |\nabla d|^2+d\, \left(\log(\tilde{\delta})\,\nabla d\cdot \nabla\alpha+
\frac{\alpha }{\tilde{\delta}}
\nabla\tilde{\delta}\cdot\nabla d\right)\right\}\nonumber \\
&=&M\,X_a(\tilde{\delta})\,e^{Md}\,w\left\{
1+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac12})\,d+O(\tilde{\delta}^{-1})\,d\right\}\nonumber\\
&=& W_{a,M,q} \,\left\{ \frac{M}{d}+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-1})\right\}.
\end{eqnarray}
Observe that
$$
\nabla(X_a(\tilde{\delta}))=-a\,\frac{\nabla \tilde{\delta}}{\tilde{\delta}}
X_{a-1}(\tilde{\delta}).
$$
This with \eqref{eq:nv} and \eqref{eq:11} imply that
\begin{equation}\label{eq:18}
\nabla X_a(\tilde{\delta})\cdot\left( v\,\nabla (e^{Md})+e^{Md}\nabla
v\right)=
-\frac{a(\alpha+1)}{\tilde{\delta}^2}\,X_{-1}\,W_{a,M,q}+O(|\log(\tilde{\delta})|\tilde{\delta}^{-\frac32})\,W_{a,M,q}.
\end{equation}
By Lemma \ref{lemddelta}, we have
$$
\D(X_a(\tilde{\delta}))=\frac{a}{\tilde{\delta}^2}X_{a-1}(\tilde{\delta})\{2+k-N+O(\tilde{\delta})\}+\frac{a(a-1)}{\tilde{\delta}^2}X_{a-2}(\tilde{\delta}).
$$
Therefore we obtain
\begin{equation}\label{eq:19}
e^{Md}v \D(X_a(\tilde{\delta}))=\frac{a}{\tilde{\delta}^2} \{2+k-N+O(\tilde{\delta})\}\,X_{-1}\,W_{a,M,q}+ \frac{a(a-1)}{\tilde{\delta}^2}X_{-2} \,W_{a,M,q}.
\end{equation}
Collecting \eqref{eq:141}, \eqref{eq:17}, \eqref{eq:16}, \eqref{eq:18} and \eqref{eq:19} in the expression \eqref{eq:15},
we get
as $\tilde{\d}\to 0$
\begin{eqnarray*}
\Delta W_{a,M,q}&=& - \frac{(N-k)^2}{4}\,q\,\tilde{\delta}^{-2} \,W_{a,M,q}-2\,a\,\sqrt{\tilde{\alpha}}\,X_{-1}(\tilde{\d})\,\tilde{\delta}^{-2}\,W_{a,M,q}
\\
&+& {a(a-1)} \,X_{-2}(\tilde{\d})\,\tilde{\delta}^{-2}\,W_{a,M,q}+\frac{h+2M}{d}\,W_{a,M,q}
+O(|\log(\tilde{\delta})|\,\tilde{\delta}^{-\frac32})\,W_{a,M,q}.\nonumber
\end{eqnarray*}
The conclusion of the lemma follows at once from the first assertion of Lemma \ref{lemddelta}.
\hfill {$\square$}\goodbreak \medskip
\subsection{Construction of a subsolution}
For $\l\in{\mathbb R}$ and $\eta\in Lip(\overline{{\mathcal U}})$ with $\eta=0$ on $\Sig_k$, we define the operator
\begin{equation}\label{eq:calL_l}
\mathcal{L}_\l:=
-\D -\frac{(N-k)^2}{4}\,q\,\delta^{-2}+\l\, \eta\,\delta^{-2},
\end{equation}
where $q$ is as in \eqref{eq:q}.
We have the following lemma
\begin{Lemma} \label{le:lowerbound}
There exist two positive constants $M_0,\beta_0$ such that for all
$\beta\in\,(0,\beta_0)$ the function
$V_\e:=W_{-1,M_0,q}+W_{0,M_0,q-\e}$ (see \eqref{eq:pert-gst})
satisfies
\begin{equation}\label{eq:subsolution}
\mathcal{L}_\l V_\e\le 0 \quad \textrm{ in } {\mathcal U}_\b,\quad\hbox{ for all }\; \e\in[0,1).
\end{equation}
Moreover $V_\e\in H^1({\mathcal U}_\beta)$ for any $\e\in(0,1)$ and in addition
\begin{equation}\label{eq:Iq}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\d^2}\,dx\geq C \int_{\Sigma_k} \frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma.
\end{equation}
\end{Lemma}
\noindent{{\bf Proof. }} Let $\beta_1$ be a positive small real number so that $d$ is
smooth in ${\mathcal U}_{\b_1}$. We choose
$$
M_0= \max_{x\in \overline{\mathcal U}_{\b_1}}|h(x)|+1.
$$
Using this and Lemma \ref{LapFinalExp}, for some $\b\in(0,\b_1)$, we have
\begin{equation}\label{eq:LaM0}
\mathcal{L}_\l W_{-1,M_0,q} \le \left(-2\delta^{-2} \,X_{-2}+C|\log(\delta)|\,\delta^{-\frac32}+|\l|\eta \d^{-2}\right)\,W_{-1,M_0,q}\quad
\textrm{ in } {\mathcal U}_\b. \end{equation}
Using the
fact that the function $\eta$ vanishes on
$\Sigma_k$ (this implies in particular that $|\eta|\le C \delta$ in
${\mathcal U}_\b$), we have
$$
\mathcal{L}_\l(W_{-1,M_0,q})\le -\delta^{-2} \,X_{-2}\,W_{-1,M_0,q}= -\delta^{-2} \,X_{-3}\,W_{0,M_0,q}\quad \textrm{ in }{\mathcal U}_\b,
$$
for $\b$ sufficiently small. Again by Lemma \ref{LapFinalExp}, and
similar arguments as above, we have \begin{equation}\label{eq:LaMqep}
\mathcal{L}_\l W_{0,M_0,q-\e} \le C|\log(\delta)|\,\delta^{-\frac32}\,W_{0,M_0,q-\e}\leq C|\log(\delta)|\,\delta^{-\frac32}\,W_{0,M_0,q}\quad\textrm{ in }{\mathcal U}_{\b},
\end{equation}
for any $\e\in [0,1)$. Therefore we get
$$
\mathcal{L}_\l \left(W_{-1,M_0,q}+W_{0,M_0,q-\e} \right)\leq 0\quad \textrm{ in }{\mathcal U}_{\b},
$$
if $\b$ is small. This proves \eqref{eq:subsolution}.\\
The proof of the fact that
$W_{a,M_0,q}\in H^1({\mathcal U}_\beta)$, for any $a<-\frac{1}{2}$ and $ W_{0,M_0,q-\e}\in H^1({\mathcal U}_\beta) $, for $\e>0$ can be easily checked using polar coordinates
(by assuming without any loss of generality that $M_0=0$ and $q\equiv 1$), we therefore skip it. \\
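To indicate why these exponents are sharp, note that, arguing exactly as in the computation below, one gets for $M_0=0$ and $q\equiv1$
$$
\int_{{\mathcal U}_\b}|\n W_{a,0,1}|^2\,dx\leq C\int_0^{\b}r^{-1}\,|\log r|^{2a}\,dr,
$$
which is finite precisely when $a<-\frac12$; for $W_{0,M_0,q-\e}$ the parameter $\e>0$ raises the exponent of $r$ to at least $-1+(N-k)\sqrt{\e}>-1$, which again yields a convergent integral.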
We now prove the last statement of the theorem.
Using Lemma \ref{lemddelta}, we have
\begin{eqnarray*}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\d^2}\,dx
&\ge& \int_{{\mathcal U}_\b}\frac{W_{0,M_0,q}^2}{\d^2}\,dx\\
&\ge &C\,\int_{{\mathcal U}_\b(\Sig_k)}d^2(x)\tilde{\delta}(x)^{2\a(x)-2}\,dx\\
&\ge& C\sum_{i=1}^{N_0}\,\int_{T_i}d^2(x)\tilde{\delta}(x)^{2\a(x)-2}\,dx\\
&=&C\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{2\a(F^{p_i}_\mathcal{M}(y))-2}\,
|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\,dy\\
&\ge& C\,\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{k-N-2+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\, \,|\tilde y|^{(N-k)\sqrt{|\tilde{y}|}}\,dy.
\end{eqnarray*}
Here we used the fact that $|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\ge C$ and the elementary inequality $\sqrt{1-q+\tilde{\d}}\leq\sqrt{1-q}+\sqrt{\tilde{\d}}$. Observe that
$$
|\tilde y|^{(N-k)\sqrt{|\tilde{y}|}}\ge C >0
\quad \hbox{as }\, |\tilde y| \to 0.
$$
Using polar coordinates, the above integral becomes
\begin{eqnarray*}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\d^2}\,dx &\ge&
C\,\sum_{i=1}^{N_0}\int_{D_i}\int_{S^{N-k-1}_+}\left(\frac{y^1}{|\tilde
y|}\right)^2\,d
\theta\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\,dr\,d\bar y
\\
&\ge & C\,\sum_{i=1}^{N_0}\int_{D_i}\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\,dr\,|\textrm{Jac}(f^{p_i})|(\bar y)\,d\bar y.
\end{eqnarray*}
We therefore obtain
\begin{eqnarray*}
\int_{{\mathcal U}_\b}\frac{V_{0}^2}{\d^2}\,dx
&\geq & C\,\int_{\Sig_k}\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(\s)}}\,dr\,d\s\\
&\geq & C\,\int_{\Sig_k}\frac{1}{\sqrt{1-q(\s)}}\,d\s.
\end{eqnarray*}
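Here we used the explicit value of the radial integral: for $q(\s)<1$,
$$
\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(\s)}}\,dr=\frac{\b^{(N-k)\sqrt{1-q(\s)}}}{(N-k)\sqrt{1-q(\s)}}\geq \frac{c}{\sqrt{1-q(\s)}},
$$
where $c>0$ can be chosen uniformly in $\s$ because $q$ is bounded on $\Sig_k$ (and both sides are simultaneously infinite when $q(\s)=1$).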
This concludes the proof of the lemma.
\hfill {$\square$}\goodbreak \medskip
\subsection{Construction of a supersolution}
In this subsection we provide a supersolution for the operator $\mathcal{ L}_\l$ defined in \eqref{eq:calL_l}. We prove
\begin{Lemma} \label{le:upperbound}
There exist constants $\beta_0>0$,
$M_{1}<0,$ $M_0>0$ (the constant $M_0$ is as in Lemma \ref{le:lowerbound}) such that for
all $\beta\in\,(0,\beta_0)$ the function $U:=W_{0,M_1,q}-W_{-1,M_0,q}$ is positive in ${\mathcal U}_\b$ and
satisfies
\begin{equation}\label{eq:supsolution}
\mathcal{L}_\l U \geq 0 \quad \textrm{ in } {\mathcal U}_\b.
\end{equation}
Moreover $U\in H^1({\mathcal U}_\beta)$
provided
\begin{equation}\label{eq:Iql}
\int_{\Sigma_k} \frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma <+\infty.
\end{equation}
\end{Lemma}
\noindent{{\bf Proof. }}
We consider $\b_1$ as in the beginning of the proof of Lemma \ref{le:lowerbound} and we define
\begin{equation}\label{eq:M1}
M_1=-\frac12\,\max_{x\in\overline{\mathcal U}_{\beta_1}}|h(x)|-1.
\end{equation}
Since $$ U(x)=(e^{M_1 d(x)}-e^{M_0d(x)}X_{-1}(\tilde{\d}(x)))d(x)\tilde{\d}(x)^{\a(x)},$$ it follows that $U>0$ in ${\mathcal U}_\b$ for $\b>0$ sufficiently small.
By \eqref{eq:M1} and Lemma \ref{LapFinalExp}, we get
\begin{eqnarray*}
\mathcal{L}_\l W_{0,M_1,q} \ge \left(-C|\log(\delta)|\,\delta^{-\frac32}-|\l|\eta \d^{-2}\right)\,W_{0,M_1,q}.
\end{eqnarray*}
Using \eqref{eq:LaM0} we have
$$
\mathcal{L}_\l (- W_{-1,M_0,q})\geq
\left(2\d^{-2}X_{-2}-C|\log(\delta)|\,\delta^{-\frac32}-|\l|\eta \d^{-2}\right)\, W_{-1,M_0,q}.
$$
Taking the sum of the two above inequalities and using that $|\eta|\leq C\d$ in ${\mathcal U}_{\b}$, we obtain
$$
\mathcal{L}_\l U\geq0 \quad\textrm{ in }{\mathcal U}_\b,
$$
provided $\b$ is sufficiently small.
Hence we readily get \eqref{eq:supsolution}.\\
Our next task is to prove that $U\in H^1({\mathcal U}_\b)$ provided \eqref{eq:Iql} holds, to do so it is enough to show that $W_{0,M_1,q} \in H^1({\mathcal U}_\b)$ provided \eqref{eq:Iql} holds.\\
We argue as in the proof of Lemma \ref{le:lowerbound}. We have
\begin{eqnarray*}
\int_{{\mathcal U}_\b}|\nabla W_{0,M_1,q}|^2 &\le & C\int_{{\mathcal U}_\b}d^2(x)\tilde{\delta}(x)^{2\a(x)-2}\,dx\\
&\leq& C\sum_{i=1}^{N_0}\int_{B^{N-k}_+(0,\b)\times
D_i}d^2(F^{p_i}_\mathcal{M}(y))\tilde{\delta}(F^{p_i}_\mathcal{M}(y))^{2\a(F^{p_i}_\mathcal{M}(y))-2}
|{\rm
Jac}(F^{p_i}_\mathcal{M})|(y)dy\\
&\leq&C\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{2\a(F^{p_i}_\mathcal{M}(y))-2}\,
|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\,dy\\
&\le& C\,\sum_{i=1}^{N_0}\,\int_{B^{N-k}_+(0,\b)\times
D_i}(y^1)^2\,|\tilde y|^{k-N-2+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\, \,|\tilde y|^{-\sqrt{|\tilde{y}|}}\,dy.
\end{eqnarray*}
Here we used the fact that $|{\rm Jac}(F^{p_i}_\mathcal{M})|(y)\le C$. Note that
$$
|\tilde y|^{-\sqrt{|\tilde{y}|}}\le C
\quad \hbox{as }\, |\tilde y|\to 0.
$$
Using polar coordinates, it follows that
\begin{eqnarray*}
\int_{{\mathcal U}_\b}|\nabla W_{0,M_1,q}|^2
&\le& C\,\sum_{i=1}^{N_0}\int_{D_i}\int_{S^{N-k-1}_+}\left(\frac{y^1}{|\tilde
y|}\right)^2\,d
\theta\int_0^{\b}r^{-1+(N-k)\sqrt{1-q(f^{p_i}(\bar
y))}}\,dr\,d\bar y\\
&\le&
C\, \sum_{i=1}^{N_0}\,\int_{D_i}\frac{1}{\sqrt{1-q(f^{p_i}(\bar
y))}}\,d\bar y.
\end{eqnarray*}
Recalling that $|{\rm Jac}(f^{p_i})|(\bar y)=1+O(|\bar y|)$, we deduce that
\begin{eqnarray*}
\sum_{i=1}^{N_0}\,\int_{D_i}\frac{1}{\sqrt{1-q(f^{p_i}(\bar
y))}}\,d\bar y&\le&
C\sum_{i=1}^{N_0}\,\int_{D_i}\frac{1}{\sqrt{1-q(f^{p_i}(\bar
y))}}\,|{\rm Jac}(f^{p_i})|(\bar y)\,d\bar
y\\
&=&C\int_{\Sigma_k}\frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\int_{{\mathcal U}_\b}|\nabla W_{0,M_1,q}|^2\,dx
&\le&C\int_{\Sigma_k}\frac{1}{\sqrt{1-q(\sigma)}}\,d\sigma
\end{eqnarray*}
and the lemma follows at once.
\hfill {$\square$}\goodbreak \medskip
\section{Existence of $\l^*$}\label{s:localhardy}
We start with the following local improved Hardy inequality.
\begin{Lemma}\label{lem:loc-hardy}
Let $\O$ be a smooth domain and assume that
$\partial\O$ contains a smooth closed submanifold $\Sigma_k$ of
dimension $1\le k\le N-2$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}.
Then there exist constants $\beta_0>0$ and $c>0$
depending only on $\O, \Sig_k,q,\eta$ and $p$ such that for all $\beta\in(0,\beta_0)$
the inequality
$$
\int_{\O_\beta}p|\n
u|^2\,dx-\frac{(N-k)^2}{4}\int_{\O_\beta}q\frac{|u|^2}{\d^{2}}\,dx\geq
c\int_{\O_\beta}\frac{|u|^2}{ \d^{2}|\log(\d)|^{2} }\,dx
$$
holds for all $ u\in H^1_0({\O_\beta})$.
\end{Lemma}
\noindent{{\bf Proof. }}
We use the notations in Section \ref{s:pn} with ${\mathcal U}=
\O$ and $\mathcal{M}=\partial \O$.\\
Fix $\b_1>0$ small and
\begin{equation}\label{eq:M2fi}
M_2=-\frac12\,\max_{x\in\overline\O_{\beta_1}}(|h(x)|+ |\n p\cdot \n d |)-1.
\end{equation}
Since $\frac{p}{q}\in C^1(\overline{\O})$, there exists $C>0$ such that
\begin{equation}\label{eq:Lippovq}
\left|\frac{p(x)}{q(x)} - \frac{p(\s(\bar{x}))}{q(\s(\bar{x}))}\right|\leq C\d(x)\quad\forall x\in \O_{\b},
\end{equation}
for small $\b>0$.
Hence by \eqref{eq:min-pq} there exists a constant $C'>0$ such that
\begin{equation}\label{eq:p-q}
p(x)\geq q(x)- C'\d(x)\quad \forall x\in \O_{\b}.
\end{equation}
Consider $ W_{\frac{1}{2},M_2,1}$ (in Lemma \ref{LapFinalExp} with $q\equiv1$).
For all $\b>0 $ small, we set
\begin{equation}\label{eq:tiw}
\tilde{w}(x)=W_{\frac{1}{2},M_2,1}(x),\quad \forall x\in\O_\b.
\end{equation}
Notice that $\textrm{div}(p\n \tilde{w})=p\D \tilde{w}+\n p\cdot\n\tilde{w}$.
By Lemma \ref{LapFinalExp}, we have
$$
- \frac{{\rm div} (p\n \tilde{w})}{\tilde{w}}\geq
\frac{(N-k)^2}{4}\,p\d^{-2}+\frac{p}{4}\d^{-2}X_{-2}(\d)
+O({|\log(\d)|\d^{-\frac32}})\,\textrm{ in }\O_\b.
$$
This together with \eqref{eq:p-q} yields
$$
- \frac{{\rm div} (p\n \tilde{w})}{\tilde{w}}\geq
\frac{(N-k)^2}{4}\,q\d^{-2}+\frac{c_0}{4}\d^{-2}X_{-2}(\d)
+O({|\log(\d)|\d^{-\frac32}})\,\textrm{ in }\O_\b,
$$
with $c_0=\min_{\overline{\O_{\b_1}}}p>0$.
Therefore
%
\begin{equation}\label{eq:dwow} - \frac{{\rm div} (p\n \tilde{w})}{\tilde{w}}\geq
\frac{(N-k)^2}{4}\,q\d^{-2}+c\,\d^{-2}X_{-2}(\d)\,\textrm{ in
}\O_{\b},
\end{equation}
for some positive constant $c$ depending only on $\O, \Sig_k,q,\eta$ and $p$.
Let $u\in C^\infty_c(\O_\b)$ and put
$\psi=\frac{u}{\tilde{w}}$. Then one has $|\n
u|^2=|\tilde{w}\n\psi|^2+|\psi\n \tilde{w}|^2+\n(\psi^2)\cdot \tilde{w} \n
\tilde{w}$. Therefore $|\n u|^2p=|\tilde{w}\n\psi|^2p+p\n
\tilde{w}\cdot\n(\tilde{w}\psi^2)$. Integrating by parts, we get
$$
\int_{\O_\b}|\n
u|^2p\,dx=\int_{\O_\b}|\tilde{w}\n\psi|^2p\,dx+\int_{\O_\b}\left(-
\frac{\textrm{div}(p\n \tilde{w})}{\tilde{w}}\right)u^2\,dx.
$$
Putting \eqref{eq:dwow} in the above equality, we get the result.
\hfill {$\square$}\goodbreak \medskip
We next prove the following result
\begin{Lemma}\label{lem:Jl1} Let $\O$ be a smooth bounded domain and assume that
$\partial\O$ contains a smooth closed submanifold $\Sigma_k$ of
dimension $1\le k\le N-2$. Assume that \eqref{eq:weight} and \eqref{eq:min-pq} hold.
Then there exists $\l^*=\l^*(\O,\Sig_k,p,q,\eta)\in{\mathbb R}$ such
that
$$
\begin{array}{cc}
\displaystyle \mu_{\l}(\O,\Sigma_k)=\frac{(N-k)^2}{4}, &
\quad\forall
\l\leq\l^*, \vspace{2mm}\\
\displaystyle \mu_{\l}(\O,\Sigma_k)<\frac{(N-k)^2}{4}, &
\quad\forall \l>\l^*.
\end{array}
$$
\end{Lemma}
\noindent{{\bf Proof. }} We divide the proof into two steps.
\noindent \textbf{Step 1:} We claim that:
\begin{equation}\label{eq:supmulambda}\sup\limits_{\l\in{\mathbb R}}\mu_\l(\O,\Sigma_k)\leq
\frac{(N-k)^2}{4}. \end{equation} Indeed, we know that
$\nu_0({\mathbb R}^N_+,{\mathbb R}^k)=\frac{(N-k)^2}{4}$, see \cite{FTT} for instance. Given
$\tau>0$, we let $u_\tau\in C^\infty_c({\mathbb R}^N_+)$ be such that
\begin{equation}\label{eq:estutau}
\int_{{\mathbb R}^N_+}|\n
u_\tau|^2\,d y\leq\left(\frac{(N-k)^2}{4}+\tau\right)\int_{{\mathbb R}^N_+}|\tilde
y|^{-2}u_\tau^2\,d y.
\end{equation}
By \eqref{eq:min-pq}, we can let $\sigma_0\in\Sigma_k$ be such that
$$
q(\sigma_0)=p(\s_0).
$$
Now, given $r>0$, we let $\rho_r>0$ be such that for all $ x\in
B(\sigma_0,\rho_r)\cap \Omega $
\begin{equation}\label{eq:estq}
p(x)\le (1+r)q(\sigma_0),\quad q(x)\ge (1-r)q(\sigma_0)\quad\textrm{ and }\quad \eta(x)\le r.
\end{equation}
We choose Fermi coordinates near $\sigma_0\in\Sigma_k$ given by the map $ F^{\sigma_0}_{\partial\O}$ (as in Section \ref{s:pn}) and we choose
$\e_0>0$ small such that, for all $\e\in(0,\e_0) $,
$$
\Lambda_{\e,\rho,r,\tau}:=F^{\sigma_0}_{\partial\O}(\e\,{\rm
Supp}(u_\tau))\subset\,B(\sigma_0,\rho_r)\cap \Omega
$$
and we define the following test function
$$
v(x)=\e^{\frac{2-N}{2}}u_\tau\left(\e^{-1}(F^{\sigma_0}_{\partial\O})^{-1}(x)\right),
\quad x\in \Lambda_{\e,\rho,r,\tau}.
$$
Clearly, for every $\e\in(0,\e_0)$, we have that $v\in
C^\infty_c(\O)$ and thus by a change of variable, \eqref{eq:estq}
and Lemma \ref{lemddelta}, we have~
\begin{eqnarray*}
\mu_\l(\O,\Sigma_k)&\leq&\frac{\displaystyle \int_{\O}p|\n v|^2\,dx
+\l\int_{\O}\d^{-2}\eta v^2\,dx}{\displaystyle
\int_{\O}q(x)\,\d^{-2}\,v^2\,dx}\\
&\leq&\frac{\displaystyle (1+r)\int_{\Lambda_{\e,\rho,r,\tau}}|\n
v|^2\,dx
}{(1-r)\,\displaystyle
\int_{\Lambda_{\e,\rho,r,\tau}}\d^{-2}\,v^2\,dx}+\frac{r|\l|}{(1-r)q(\sigma_0) } \\
&\leq&\frac{\displaystyle (1+r)\int_{\Lambda_{\e,\rho,r,\tau}}|\n
v|^2\,dx
}{(1-c r)\,\displaystyle
\int_{\Lambda_{\e,\rho,r,\tau}}\tilde{\d}^{-2}\,v^2\,dx}+\frac{r|\l|}{(1-r)q(\sigma_0) } \\
&\leq&\frac{(1+r)\e^{2-N}\displaystyle
\int_{{\mathbb R}^N_+}\e^{-2}(g^\e)^{ij}\partial_i
u_\tau\partial_ju_\tau\,\sqrt{|g^\e|}(y)\,dy
}{(1-cr)\,\displaystyle
\int_{{\mathbb R}^N_+}\e^{2-N}\,|\e\tilde y|^{-2}\,u_\tau^2\,\sqrt{|g^\e|}(\tilde y)\,d y}+\frac{cr}{1-r } ,\\
\end{eqnarray*}
where $g^\e$ is the scaled metric with components $g^\e_{\a\b}(y)=\e^{-2}{\langle}\partial_\a F^{\s_0}_{\partial\O}(\e y), \partial_\b F^{\s_0}_{\partial\O}(\e y){\rangle}$
for $\a,\b=1,\dots,N$
and where we have used the fact that $\tilde{\d}(F^{\s_0}_{\partial\O}(\e y))=|\e\tilde y|$ for
every $\tilde y$ in the support of $u_\t$.
Since the scaled metric $g^\e$ expands as $g^\e=I+O(\e)$ on the support of $u_\t$, we deduce that
\begin{eqnarray*}
\mu_\l(\O,\Sigma_k) &\le& \frac{1+r}{1-c r}\,\frac{1+c\e}{1-c\e}\,\, \frac{\displaystyle
\int_{{\mathbb R}^N_+}|\nabla u_\tau|^2\,d y }{\displaystyle
\int_{{\mathbb R}^N_+}|\tilde y|^{-2}\,u_\tau^2\,d y}+\frac{cr}{1-r} ,
\end{eqnarray*}
where $c$ is a positive constant depending only on $\O,p,q,\eta$ and $\Sig_k$. Hence by \eqref{eq:estutau} we conclude
\begin{eqnarray*}
\mu_\l(\O,\Sigma_k)
&\le& \frac{1+r}{1-c r}\,\frac{1+c\e}{1-c\e}\, \left( \frac{(N-k)^2}{4}+\tau
\right)+ \frac{cr}{1-r} .
\end{eqnarray*}
Taking the limit in $\e$, then in $r$ and then in $\tau$, the claim follows.\\
\textbf{Step 2:} We claim that there exists $\tilde{\l}\in{\mathbb R}$ such that
$\mu_{\tilde{\l}}(\O,\Sig_k)\geq\frac{(N-k)^2}{4}$.\\
Thanks to Lemma \ref{lem:loc-hardy}, the proof uses a standard argument of cut-off function
and integration by parts (see \cite{BM}) and we can obtain
$$
\frac{(N-k)^2}{4}\int_{\O}\d^{-2}u^2 q\,dx\leq \int_{\O}|\n u|^2 p\,dx+C\int_{\O}\d^{-2}u^2 \eta \,dx\quad\forall u\in C^\infty_c(\O),
$$
for some constant $C>0$. We skip the details. The claim now follows by choosing $\tilde{\l}=-C$.\\
Finally, noticing that $\mu_\l(\O,\Sig_k)$ is
non-increasing in $\l$ (for fixed $u$ the quotient in \eqref{eq:mpqek} is non-increasing in $\l$ since $\eta\geq0$), we can set
\begin{equation}\label{eq:lsdef} \l^*:=\sup\left\{{\l\in{\mathbb R}}\,:\, \mu_\l(\O,\Sig_k)=
{\frac{(N-k)^2}{4}}\right\}
\end{equation}
so that $\mu_\l(\O,\Sig_k)<\frac{(N-k)^2}{4}$ for all $\l>\l^*$.
\hfill {$\square$}\goodbreak \medskip
\section{Non-existence result}\label{s:ne}
\begin{Lemma}\label{lem:Opm}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$, $N\geq 3$, and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$. Then, there exist bounded smooth domains $\O^\pm$ such that $\O^+\subset \O\subset\O^-$
and
$$
\partial{\O^+}\cap \partial\O=\partial{\O^-}\cap \partial\O = \Sigma_k.
$$
\end{Lemma}
\noindent{{\bf Proof. }}
Consider the maps
$$
x\mapsto g^\pm(x):=d_{\partial\O}(x)\pm\frac12\,\d^2(x),
$$
where $d_{\partial\O}$ is the distance function to $\partial\O$.
For some $\b_1>0$ small, $g^\pm$ are smooth in $\O_{\b_1}$ and since $|\n g^\pm|\geq C>0$ on $\Sig_k$, by the implicit function theorem, the sets
$$
\{x\in \O_{\b}\,:\,g^\pm=0 \}
$$
are smooth $(N-1)$-dimensional submanifolds of ${\mathbb R}^N$, for some $\b>0$ small. In addition, by construction, they can be taken to be part of the boundaries of
smooth bounded domains $\O^\pm $ with $\O^+\subset \O\subset\O^-$ and such that
$$
\partial{\O^+}\cap \partial \O=\partial{\O^-}\cap \partial\O = \Sigma_k.
$$
The proof then follows at once.
\hfill {$\square$}\goodbreak \medskip
Now, we prove the following non-existence result.
\begin{Theorem}\label{th:ne}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$ and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$ and let $\l\geq0$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}. Suppose that $u\in H^1_0(\O)\cap C(\O)$ is
a non-negative function
satisfying
\begin{equation}\label{eq:ustf}
-{\rm div}(p \n u)-\frac{(N-k)^2}{4}q\d^{-2}u\geq-\l \eta \d^{-2} u \quad\textrm{in }\O.
\end{equation}
If $\int_{\Sigma_k}\frac1{\sqrt{1-q(\s)/p(\s)}}d\s=+\infty$ then
$u\equiv0$.
\end{Theorem}
\noindent{{\bf Proof. }}
We first assume that $p\equiv1$.
Let $\O^+$ be the set given by Lemma \ref{lem:Opm}. We
will use the notations in Section \ref{s:pn} with ${\mathcal U}=
\O^+$ and $\mathcal{M}=\partial \O^+$. For $\b>0$ small we define
$$\O^+_{\b} := \{ x \in \O^+: \quad
{\d}(x)<\b \}.$$
We suppose by contradiction that $u $ does not vanish identically near $\Sigma_k$ and satisfies
\eqref{eq:ustf} so that $u>0$ in $\O_{\b}$ by the maximum principle, for some $\b>0$ small.\\
Consider the subsolution $V_\e$ defined in Lemma \ref{le:lowerbound} which satisfies
\begin{equation}\label{eq:lwaneg}
\mathcal{ L}_\l\,V_\e\leq 0\quad\textrm{ in
}\O^+_{\b},\quad\forall \e\in(0,1).
\end{equation}
Notice that $\overline{\partial\O^+_{\b}\cap
\O^+}\subset \O$ thus, for $\b>0$ small, we can choose $R>0$ (independent on $\e$) so
that
$$
R\,V_\e\leq R\,V_0\leq u\quad\textrm{ on }
\overline{\partial\O^+_{\b}\cap \O^+ }
\quad\forall \e\in(0,1).
$$
Again by Lemma \ref{le:lowerbound}, setting $v_\e=R\, {V_\e}-u$, it
turns out that $v^+_\e=\max(v_\e,0)\in
H^1_0(\O^+_{\b})$ because $V_\e=0$ on $\partial
\O^+_{\b}\setminus
\overline{\partial\O^+_{\b}\cap
\O^+}$. Moreover by \eqref{eq:ustf} and
\eqref{eq:lwaneg},
$$
\mathcal{ L}_\l\,v_\e\leq 0\quad\textrm{ in
}\O^+_{\b},\quad\forall \e\in(0,1).
$$
Multiplying the above inequality by $v^+_\e$ and integrating by parts
yields
$$
\int_{\O^+_{\b}}|\n
v^+_\e|^2\,dx-\frac{(N-k)^2}{4}\int_{\O^+_{\b}}\d^{-2}q|v^+_\e|^2\,dx+
\l\int_{\O^+_{\b}}\eta \d^{-2}|v^+_\e|^2\,dx
\leq0.
$$
But then Lemma \ref{lem:loc-hardy} implies that $v^+_\e=0$ in
$\O^+_{\b}$ provided $\b$ small enough because $|\eta|\leq C\d$ near $\Sig_k$. Therefore $u\geq R\, {V_\e}$ for every $\e\in(0,1)$. In
particular $u\geq R\,V_0$. Hence we obtain from Lemma \ref{le:lowerbound} that
$$
\infty>\int_{\O^+_{\b}}\frac{u^2}{\d^{2}}\geq R^2 \int_{\O^+_{\b}}\frac{V_0^2}{\d^{2}}\geq C\,R^2\int_{\Sigma_k}\frac1{\sqrt{1-q(\s)}}d\s
$$
which leads to a contradiction. We deduce that $u\equiv0$ in $\O^+_{\b} $. Thus by
the maximum principle $u\equiv0$ in $\O$.\\
For the general case $p\neq 1$, we argue as in \cite{BMS} by setting
\begin{equation}\label{eq:transf}
\tilde{u}=\sqrt{p} u.
\end{equation}
This function satisfies
$$
-\D \tilde{u}-\frac{(N-k)^2}{4}\frac{q}{p}\d^{-2}\tilde{u}\geq-\l \frac{\eta}{p} \d^{-2}\tilde{ u} +\left(-\frac{\D p}{2 p } +\frac{|\n p|^2}{4 p^2 } \right) \tilde{u}\quad\textrm{in }\O.
$$
Hence since $p\in C^2(\overline{\O})$ and $p>0$ in $ \overline{\O}$, we get the same conclusions as in the case $p\equiv 1$ and $q$ replaced by $q/p$.
\hfill {$\square$}\goodbreak \medskip
\section{Existence of minimizers for $\m_{\l}(\Omega,\Sigma_k) $}
\begin{Theorem}\label{th:exitslesls}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$ and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}. Then $\m_{\l}(\Omega,\Sigma_k)$ is achieved for every $\l<\l^*$.
\end{Theorem}
\noindent{{\bf Proof. }}
The proof follows the same argument of \cite{BM} by taking into account the fact that $\eta=0$ on $\Sig_k$ so we skip it.
\hfill {$\square$}\goodbreak \medskip
Next, we prove the existence of minimizers in the critical case $\l=\l_*$.
\begin{Theorem}\label{th:exits-crit}
Let $\O$ be a smooth bounded domain of ${\mathbb R}^N$ and let $\Sigma_k$ be a
smooth closed submanifold of $\partial\O$ of dimension $k$ with $1\le k\le
N-2$. Assume that $p,q$ and $\eta$ satisfy \eqref{eq:weight} and \eqref{eq:min-pq}. If $\displaystyle \int_{\Sigma_k}\frac1{\sqrt{1-q(\s)/p(\s)}}d\s<\infty$ then
$\m_{\l^*}=\m_{\l^*}(\O,\Sig_k)$ is achieved.
\end{Theorem}
\noindent{{\bf Proof. }}
We first consider the case $p\equiv 1$.\\
Let $\l_n$ be a sequence of real numbers decreasing to $\l^*$. By Theorem \ref{th:exitslesls}, there exist minimizers $u_n$
for $\mu_{\l_n}=\m_{\l_n}(\Omega,\Sigma_k)$ so that
\begin{equation}\label{eq:u_n}
-\D u_n-\mu_{\l_n}\d^{-2}q u_n= -\l_n \d^{-2 }\eta u_n \quad\textrm{ in }\O.
\end{equation}
We may assume that $u_n\geq 0$ in $\O$ and that $\|\n u_n\|_{L^2(\O)}=1$. Hence, up to a subsequence, $u_n \rightharpoonup u$ in $H^1_0(\O)$
and $u_n\to u$ in $L^2(\O)$ and pointwise.
Let $\O^-\supset\O$ be the set given by Lemma \ref{lem:Opm}. We
will use the notations in Section \ref{s:pn} with ${\mathcal U}=
\O^-$ and $\mathcal{M}=\partial \O^-$. It will be understood that $q$ is extended to a function in $C^2(\overline{\O^- })$.
For $\b>0$ small we define
$$\O^-_{\b} := \{ x \in \O^-: \quad
{\d}(x)<\b \}.$$
We have that
$$
\D u_n+b_n(x)\, u_n=0\quad\textrm{ in }\O,
$$
with $|b_n|\leq C$ in $\overline{\O\setminus \overline{\O^-_{\frac\b2}}}$ for every integer $n$. Thus, by standard elliptic regularity theory,
\begin{equation}\label{eq:unleC}
u_n\leq C \quad \quad\textrm{ in }\overline{\O\setminus \overline{\O^-_{\frac{\b}{2}}}}.
\end{equation}
We consider the supersolution $U$ in Lemma \ref{le:upperbound}. We shall show that there exists a constant $C>0$ such that for all $n\in\mathbb{N}$
\begin{equation}\label{eq:unleCV12}
u_n\leq C U \quad \textrm{ in }\overline{\O^-_\b}.
\end{equation}
Notice that $\overline{\O\cap\partial\O^-_\b}\subset \O^-$ thus by \eqref{eq:unleC}, we can choose $C>0$ so
that for any $n$
$$
u_n\leq C\, U\quad\textrm{ on }
\overline{\O\cap\partial\O^-_\b}.
$$
Again by Lemma \ref{le:upperbound}, setting $v_n=u_n-C\, U$, it
turns out that $v^+_n=\max(v_n,0)\in
H^1_0(\O^-_{\b})$ because $u_n=0$ on $\partial\O\cap \O^-_\b$.
Hence we have
$$
\mathcal{ L}_{\l_n}\,v_n\leq (\mu_{\l_n}-\mu_{\l^*})\,q\,\d^{-2}\,u_n-C(\l_n-\l^*)\,\eta\,\d^{-2}\,{U}\leq 0 \quad\textrm{ in
}\O^-_{\b}\cap\O .
$$
Multiplying the above inequality by $v^+_n$ and integrating by parts
yields
$$
\int_{\O^-_{\b}}|\n
v^+_n|^2\,dx-\mu_{\l_n}\int_{\O^-_{\b}}\d^{-2}q|v^+_n|^2\,dx+
\l_n\int_{\O^-_{\b}}\eta\d^{-2} |v^+_n|^2\,dx
\leq0.
$$
Hence Lemma \ref{lem:loc-hardy} implies that
$$
C \int_{\O^-_{\b}}\d^{-2}X_{-2} |v^+_n|^2\,dx+\l_n\int_{\O^-_{\b}}\eta \d^{-2} |v^+_n|^2\,dx\leq0.
$$
Since $\l_n$ is bounded, we can choose $\b>0$ small (independent of
$n$) such that $v^+_n\equiv0$ on $\O^-_\b$ (recall that $|\eta|\leq
C\d$).
Thus we obtain \eqref{eq:unleCV12}. \\
Now since $u_n\to u$ in $L^2(\O)$, we get by the dominated convergence theorem and \eqref{eq:unleCV12}, that
$$
\d^{-1} u_n\to \d^{-1} u\quad \textrm{ in }L^2(\O).
$$
Since $u_n$ satisfies
$$
1=\int_{\O}|\n u_n|^2=\mu_{\l_n}\int_{\O}\d^{-2} q u_n^2+ {\l_n}\int_{\O}\d^{-2} \eta u_n^2,
$$
taking the limit, we have $1= \mu_{\l^*}\int_{\O}\d^{-2} q u^2+ {\l^*}\int_{\O}\d^{-2} \eta u^2$. Hence $u\neq0$ and it is a minimizer for $\mu_{\l^*}=\frac{(N-k)^2}{4}$.\\
For the general case $p\neq 1$, we can use the same transformation as in \eqref{eq:transf}. So \eqref{eq:unleCV12} holds and the same argument as above carries over.
\hfill {$\square$}\goodbreak \medskip
\section{Proof of Theorem \ref{th:mulpqe} and Theorem \ref{th:crit}}
\textit{Proof of Theorem \ref{th:mulpqe}:} Combining Lemma \ref{lem:Jl1} and Theorem \ref{th:exitslesls},
it remains only to check the case $\l<\l^*$. But this is an easy consequence of the definition of $\l^*$ and of $\mu_{\l}(\O,\Sig_k)$, see \cite[Section 3]{BM}.\hfill {$\square$}\goodbreak \medskip
\bigskip
\noindent
\textit{Proof of Theorem \ref{th:crit}:}
Existence is proved in Theorem \ref{th:exits-crit} for $I_k<\infty$. Since the absolute value of
any minimizer for $\mu_{\l}(\O,\Sig_k) $ is also a minimizer, we can apply Theorem \ref{th:ne}
to infer that $\mu_{\l^*}(\O,\Sig_k) $ is never achieved as soon as $I_k=\infty$.\hfill {$\square$}\goodbreak \medskip
\begin{center}\textbf{ Acknowledgments} \end{center}
This work started when the first author was visiting CMM,
Universidad de Chile. He is grateful for their kind hospitality. M.
M. Fall is supported by the Alexander von Humboldt Foundation. F.
Mahmoudi is supported by Fondecyt project no.~1100164 and by Fondo
Basal CMM.
\section{Introduction}
Computing thermodynamic quantities of the ferromagnetic
Ising model has been a fundamental problem in statistical physics
since the early 20th century \cite{Ising}, where the demonstration
of the model's phase transition served as the first rigorous proof
that small changes at an atomic scale can lead to
large, observable changes~\cite{Peierls}. Singularities in the
thermodynamic quantities indicate the critical temperature at which
the phase transition occurs. The partition function $Z$ of
the Ising model and its partial derivatives determine these quantities.
While $Z$ has been found exactly in special cases~\cite{Onsager, Yang},
there is unlikely to exist an efficient
method of finding $Z$ in general \cite{JS}. Therefore, the task of
estimating $Z$ has drawn significant
effort from the physics and computer science communities~\cite{cipra}.
However, an algorithm that is truly practical has yet to be found.
In this paper, we present a new heuristic sampling approach with the
goal of solving real-world instances quickly.
The classical approach to this problem is to sample from the Gibbs
distribution using a Markov chain~\cite{Metropolis, SW, JS}. Ideally, the
algorithm will require only a polynomial number of samples to estimate
$Z$ at a particular temperature, but even then this process must be
repeated for each temperature of interest. In contrast, each
run of our heuristic sampling algorithm, $\allk$, estimates certain
coefficients that are independent of temperature. Once obtained,
these coefficients can be used to compute $Z$, mean energy, mean
magnetization, specific heat, and magnetic susceptibility at all
temperatures by simply evaluating polynomials with these coefficients.
For a fixed bond strength, computing $Z$
is equivalent to counting subgraphs of a graph $G$. Let $x_{k,e}$
denote the number of subgraphs of $G$ with $2k$ odd vertices and $e$
edges. Using the high-temperature expansion, we can write $Z$ and
its derivatives as polynomials whose coefficients come from the set
of $x_{k,e}$. For each $k$, $\allk$ generates a search tree whose
leaves are the set of subgraphs with $2k$ odd vertices, and then
implements the stratified sampling method of Chen~\cite{Chen} to
estimate the $x_{k,e}$. In the absence of an applied field, the
problem of estimating $Z$ reduces to estimating $x_{0,e}$ for all $e$.
As will become clear, it is simple to restrict $\allk$ to subgraphs
with no odd-degree vertices, which significantly reduces the complexity
of the algorithm in this special case.
\section{Definitions and Terminology}\label{s:term}
In this section, we introduce important notions from
statistical physics and graph theory.
\subsection{Ising Model}
Given a graph $G=(V,E)$ with $|V|=n$ and $|E|=m$, a \emph{spin
configuration} $\sigma =\sigma(G)$ is an assignment of spins in
$\{+ 1, -1\}$ to the elements of $V$. The energy of $\sigma$ is given
by the Hamiltonian
$$H(\sigma) = -J\sum_{(x,y)\in E}\sigma_x\sigma_y - B \sum_{x\in V}
\sigma_x,$$
where $J$ is the interaction energy (bond strength) and $B$ is the
external magnetic field. In this paper we restrict to the ferromagnetic case,
fixing $J=1$. To model the physical reality of a ferromagnet,
the probability assigned to state $\sigma$ is given by the Gibbs
distribution, defined as $e^{-\beta H(\sigma)}/Z$,
where $\beta = (k_{\Boltz}T)^{-1}$ is proportional to inverse
temperature and $k_{\Boltz}$ is Boltzmann's constant.
The normalizing constant $Z = \sum_{\sigma} \exp(-\beta H(\sigma))$
is also called the partition function.
Following the notation of \cite{JS}, let $\lambda
= \tanh(\beta J)$ and $\mu=\tanh(\beta B)$. The high-temperature
expansion is defined by $Z=AZ'$, where $A=(2\cosh(\beta B))^n
\cosh(\beta J)^m$
is an easily computed constant, and
\begin{equation*}\label{htexp}
Z'= \sum_{X\subseteq E} \lambda^{|E(X)|}\mu^{|\ODD(X)|}~,
\end{equation*}
where the sum is taken over all subsets $X$ of the edges of $G$. In a
slight abuse of notation, we let $X$ also refer to the graph with
vertex-set $V$ and edge-set $X$. In this manner, $E(X)$ is the edge-set
of $X$, $\ODD(X)$ is the set of odd-degree vertices in $X$, and
all subgraphs in this paper are spanning and labeled.
Since all graphs have an even number of vertices of odd degree,
Jerrum and Sinclair \cite{JS} write $Z'$ as a polynomial in $\mu^2$:
$ Z' = \sum_{k=0}^{\lfloor n/2 \rfloor} c_k \mu^{2k},$ where $ c_k =
\sum_{X~:~|\ODD(X)|=2k}
\lambda^{|E(X)|}~.$
Notice that we can compute $Z'$ for any choice of $\mu$ given the values
of the $c_k$,
as the $c_k$ are independent of the magnetic field. However, we wish
to have full temperature-independence, so we write
\begin{equation}\label{eq:c_k}
c_k = \sum_{e=0}^m x_{k,e}\lambda^e \quad \text{and} \quad Z' =
\sum_{k=0}^{\lfloor n/2 \rfloor} \sum_{e=0}^m x_{k,e}\lambda^e
\mu^{2k},
\end{equation}
where $x_{k,e}$ is as defined in the introduction.
As we shall see, $\allk$ is designed to estimate the $x_{k,e}$.
Thus, $\allk$ yields an estimate of $Z'$, and hence $Z$ as well, at all
temperatures simultaneously.
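For illustration, the following Python sketch (the dictionary layout and function name are ours, not part of our implementation) evaluates Equation~\ref{eq:c_k} once estimates of the $x_{k,e}$ are in hand; a full temperature sweep then reduces to repeated polynomial evaluation.
\begin{verbatim}
# Minimal sketch: evaluate Z' at any (lambda, mu) from the estimated
# x_{k,e}.  `x` is assumed to map (k, e) -> estimate of x_{k,e}.
def z_prime(x, lam, mu):
    return sum(v * lam**e * mu**(2*k) for (k, e), v in x.items())

# A temperature sweep is just repeated evaluation, with no new sampling:
# Z = [A(beta) * z_prime(x, tanh(beta*J), tanh(beta*B)) for beta in betas]
\end{verbatim}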
While $\allk$ is defined for all graphs $G$, the graphs with the most
physical significance are the square lattices (grids) with periodic
boundary conditions in two and three dimensions.
Therefore, all of the computations provided in this paper utilize such
graphs, and we shall refer to the $s \times s$ square lattice with
periodic boundary conditions simply as the $s \times s$ grid.
\subsection{Cycle Bases}\label{s:cycle_bases_def}
We now introduce some elementary algebraic graph theory
which $\allk$ uses (for more on this topic, see \cite{Diestel}).
The \emph{symmetric difference} of two
subgraphs $X_1$ and $X_2$ of $G$, written $X_1 \oplus X_2$, is the
subgraph of $G$ that contains precisely those edges in exactly one of
$X_1$ and $X_2$. One may consider this operation as addition of
subgraphs over the field $\mathbb{F}_2 = \{0,1\}$.
Notice that an edge $e$ is in $\bigoplus_{i=1}^t X_i$ if and only
if $e$ appears in an odd number of these subgraphs.
Let $\mathcal{E}_0$ be the set of \emph{even subgraphs}, those
subgraphs with no vertices of odd degree. Since the symmetric
difference of two even subgraphs is again an even subgraph,
we may view $\mathcal{E}_0$ as a vector space over $\mathbb{F}_2$,
called the \emph{cycle space} of $G$. The dimension of the cycle
space is $m-n+1$. Hence, every set of $m-n+1$ linearly independent
even subgraphs forms a \emph{cycle basis} $\mathcal{C}$ of $G$.
Further, every even subgraph has a \emph{unique}
representation using the elements of $\mathcal{C}$, and
$|\mathcal{E}_0| = 2^{m-n+1}$.
When $X \in \mathcal{E}_0$, the parity of each vertex in $X \oplus Y$
is the same as in $Y$. Now consider a subgraph $P$ of $G$ with
$\ODD(P)=\{v_1,v_2,\ldots, v_{2k}\}$. The set
$$\mathcal{E}_0 \oplus P := \{X\oplus P: X\in \mathcal{E}_0\}$$
is exactly the $2^{m-n+1}$ subgraphs whose odd vertices are
$\ODD(P)$. Therefore, the set of subgraphs with $2k$ odd vertices,
$\mathcal{E}_k$, is $\bigcup_{S} \mathcal{E}_0\oplus P_S$, where
the union is over all $S \subseteq V$ of size $2k$ and $P_S$ is
\emph{any} subgraph with $\ODD(P_S)=S$.
Cycle bases have a long history in combinatorics \cite{maclane},
and are used both in theory and applications
\cite{cycle_basis_survey}.
Given a spanning tree $T$ of $G$, a \emph{fundamental cycle basis} consists of the cycles
in $T+e$ for each $e \in E(G)-E(T)$.
Since spanning trees can be found quickly (see e.g. \cite{CLRS}),
so can fundamental bases.
\emph{Minimum cycle bases}, which are bases with the
fewest total edges, have proven helpful in practice
and can also be found in
polynomial time \cite{min_cycle_basis}.
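For concreteness, a fundamental cycle basis can be computed as in the following Python sketch, where subgraphs are encoded as integer bitmasks with bit $i$ set if and only if edge $i$ is present (an encoding we return to when discussing our implementation); the representation and helper names are ours.
\begin{verbatim}
from collections import deque

def fundamental_cycle_basis(n, edges):
    # edges: list of (u, v) pairs for a connected graph on n vertices.
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i)); adj[v].append((u, i))
    parent = [None] * n                  # (parent vertex, tree-edge index)
    seen = [False] * n; seen[0] = True
    q = deque([0])
    while q:                             # BFS spanning tree rooted at 0
        u = q.popleft()
        for v, i in adj[u]:
            if not seen[v]:
                seen[v] = True; parent[v] = (u, i); q.append(v)

    def to_root(v):                      # bitmask of tree path v -> root
        mask = 0
        while parent[v] is not None:
            v, i = parent[v]
            mask ^= 1 << i
        return mask

    tree = {parent[v][1] for v in range(1, n)}
    # Each non-tree edge closes exactly one fundamental cycle: the edge
    # itself plus the tree path between its endpoints (XOR of root paths).
    return [(1 << i) ^ to_root(u) ^ to_root(v)
            for i, (u, v) in enumerate(edges) if i not in tree]
\end{verbatim}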
\section{Algorithms}\label{s:algs}
Our main data structure is a search tree: a rooted tree in which each
node represents a subgraph of $G$. For each $k$, we shall define a
search-tree $\tau_k$ whose leaves are precisely $\mathcal{E}_k$.
Our goal is to estimate $x_{k,e}$, the number of leaves of $\tau_k$
that have $e$ edges.
Tree search algorithms have a lengthy history in computer science \cite{Pearl}.
A classical example of such is an algorithm of Knuth
\cite{Knuth} for estimating properties of a backtrack tree. To estimate
the number of leaves, for example, Knuth's algorithm explores a random
path down the tree from the root, choosing a child uniformly at
random at each step. It then returns the product of the number of
children of each node seen along the path. It is easy to see that this
estimator is unbiased; i.e. the expected value is the number of leaves.
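For reference, Knuth's estimator is only a few lines; the sketch below assumes a \texttt{children} callable describing the tree (our notation, used only for illustration).
\begin{verbatim}
import random

def knuth_leaf_estimate(root, children):
    # One random root-to-leaf walk; the product of the branching
    # factors seen is an unbiased estimate of the number of leaves.
    node, est = root, 1
    kids = children(node)
    while kids:
        est *= len(kids)
        node = random.choice(kids)
        kids = children(node)
    return est   # average many independent runs to reduce variance
\end{verbatim}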
For our application, we want the number
of leaves of $\tau_k$ of a certain type (with $e$ edges). We achieve this via
Chen's generalization of Knuth's algorithm, which was originally
introduced to reduce the variance of the estimator.
Since Chen's work lies at the heart of our approach, we take the next section to
explain it in further detail. In Section~\ref{s:allk}, we describe $\allk$.
In \cite{SM}, we present an alternative to $\allk$,
which is related to~\cite{JS}. This approach, which we call $\bs$,
may be more appropriate in the presence of an external field,
but is outperformed by $\allk$ when $B=0$.
\subsection{Stratified Sampling}
We describe in Algorithm \ref{alg:Chen} a simplified version of the
stratified sampling algorithm introduced by Chen \cite{Chen}.
Let $\tau$ be a search tree and choose a \emph{stratifier} for $\tau$
--- a way of partitioning the nodes into sets called \emph{strata}.\footnote{In
general, the stratifier must satisfy a few technical conditions.
However, as long as we require each stratum to contain nodes from a
single level of $\tau$, we are guaranteed that these conditions are met.}
For each stratum $\alpha$, Algorithm \ref{alg:Chen}
produces a representative $s_{\alpha} \in \alpha$ and a weight
$w_{\alpha}$, which is an unbiased estimate of the number of nodes in $\alpha$.
For Algorithm \ref{alg:Chen}, let $Q_{\current}$ and $Q_{\children}$ be queues.
Each node $s$ of $\tau$ has a weight $w$, and we write $(s,w)$
to represent this pair. The input is the root $r$ of $\tau$, a method
for determining the children of a node in $\tau$, and the stratifier.
The output is the set of $(s_\alpha,w_\alpha)$. If the algorithm never
encounters an element of $\alpha$, it returns $(\emptyset, 0)$ for $\alpha$.
\begin{algorithm}
\caption{Chen's Algorithm}
\begin{algorithmic}
\STATE \emph{initialize}: $Q_{\current}=\{(r,1)\}$, $Q_{\children}=\{\}$, $i =0$.
\WHILE{$i < \text{number of levels in $\tau$}$}
\WHILE{$Q_{\current} \not= \emptyset$}
\STATE output the first element $(s,w)$ of $Q_{\current}$
\FOR{each child $t$ of $s$ in $\tau$}
\IF{$Q_{\children}$ contains an element $(u,w_u)$ in the same stratum as $t$}
\STATE update $w_u = w + w_u$
\STATE w. prob. $w/w_u$ replace $(u,w_u)$ with $(t,w_u)$ in $Q_{\children}$
\ELSE
\STATE add $(t,w)$ to $Q_{\children}$
\ENDIF
\ENDFOR
\STATE pop $(s,w)$ off of $Q_{\current}$
\ENDWHILE
\STATE set $Q_{\current} = Q_{\children}$ and reset $Q_{\children}=\emptyset$
\STATE $i$++
\ENDWHILE
\end{algorithmic}
\noindent\hrulefill\par\nobreak\vskip-5pt
\label{alg:Chen}
\end{algorithm}
\subsection{Cycle-Addition Algorithm}\label{s:allk}
Let $S \subseteq V$ be a set of size $2k$, and recall that $P_S$ is any subgraph
of $G$ with $\ODD(P_S) = S$. Let $\mathcal{C} = \{C_1, C_2,\ldots, C_{m-n+1}\}$ be
a cycle basis of $G$. Define $\tau(\mathcal{C},P_S)$ as the search-tree
determined by the following rules:
\begin{itemize}
\item[1.] $P_S$ is the root of $\tau(\mathcal{C},P_S)$, and
\item[2.] each node $X$ at level $0 \leq i < m-n+1$ has two children:
$X \oplus C_{i+1}$ and $X$.
\end{itemize}
Now $\tau_k$ is the tree with artificial root node $R$ whose
$\binom{n}{2k}$ children correspond to the roots of $\tau(\mathcal{C},P_S)$,
one for each distinct subset of size $2k$.
\begin{figure}[!htb]
\centering
\begin{overpic}[scale=.5]{k4_4odds.eps}
\put(8, 75){$G$} \put(40.5, 75){$C_1$}
\put(64.5, 75){$C_2$} \put(88, 75){$C_3$}
\put(38.5, 66){$P_S$}
\put(28, 55){$\oplus C_1$}
\put(9, 35){$\oplus C_2$} \put(59, 35){$\oplus C_2$}
\put(-.5, 14.5){$\oplus C_3$} \put(24.75, 14.5){$\oplus C_3$}
\put(49.75, 14.5){$\oplus C_3$} \put(75.25, 14.5){$\oplus C_3$}
\put(53.5, 69){${\bf 1}$}
\put(28.25, 49.5){${\bf 1}$} \put(79, 49.5){${\bf 1}$}
\put(17, 29){${\bf 2}$} \put(67.5, 29){${\bf 2}$}
\put(22, 11){${\bf 2}$} \put(60, 11){${\bf 2}$} \put(72, 11){${\bf 4}$}
\end{overpic}
\caption{An example run of $\allk$ with $k=2$ and $N=1$; see the text for details.}
\label{fig:allk}
\end{figure}
In order to implement Algorithm~\ref{alg:Chen}, we define the
stratifier for each $\tau(\mathcal{C},P_S)$ by: the nodes $X$ and $Y$ in
$\tau(\mathcal{C},P_S)$ belong to the same stratum if and only if
$X$ and $Y$ are in the same level of $\tau(\mathcal{C},P_S)$, and
$|E(X)| = |E(Y)|$.
The inputs to $\allk$ are a graph
$G$, an integer $k$ in $[0, n/2]$,
and an integer $N$. The output of
each of the $N$ runs of Algorithm \ref{alg:Chen}, as a subroutine of
$\allk$, is a set of $(s_{\alpha},w_{\alpha})$ pairs.
Consider a representative $s_{\alpha}$ that is a leaf node in the tree
$\tau(\mathcal{C},P_S)$, and suppose $s_{\alpha}$ has $e$ edges. Then
$\binom{n}{2k}w_{\alpha}$ is our estimate of $x_{k,e}$,
since each sample represents all $\binom{n}{2k}$ choices of $S$.
\begin{algorithm}
\caption{$\allk$}
\begin{algorithmic}
\STATE Choose a cycle basis $\mathcal{C}$ of $G$
\FOR{$j \in [1,N]$}
\STATE Choose $S \subseteq V$ with $|S| = 2k$
\STATE Find $P_S$
\STATE Run Algorithm \ref{alg:Chen} on $\tau(\mathcal{C},P_S)$
\ENDFOR
\FOR{$e \in [0,m]$}
\STATE Let $\alpha$ be the stratum corresponding to the bottom level of
$\tau(\mathcal{C},P_S)$ and $e$ edges, and output $\binom{n}{2k}$ times the
average of the $N$ estimates of $w_\alpha$ as $x_{k,e}$
\ENDFOR
\end{algorithmic}
\noindent\hrulefill\par\nobreak\vskip-5pt
\label{alg:allk}
\end{algorithm}
Figure \ref{fig:allk} shows an example of $\allk$ with $k=2$, $S=V(G)$,
$N=1$, and $G$, $P_S$, $\mathcal{C}=\{C_1,C_2,C_3\}$, and
$\tau(\mathcal{C}, P_S)$ as depicted. The graphs bounded by solid circles
are the strata representatives, and their weights are in bold
just above. The solid edges of $\tau(\mathcal{C}, P_S)$ connect the
nodes seen by $\allk$.
The output is $x_{2,2} = 2$, $x_{2,3} = 4$, $x_{2,6}=2$, and
$x_{2,e}=0$ for $e \in \{0,1,4,5\}$.
In Figure~\ref{fig:xkes_4x4x1}, we show the output of many runs of $\allk$
on a $4\times 4$ grid for $k \in [0,4]$, and use this output (and that for
$k \in [5,8]$) with Equation~\ref{eq:c_k} to get Figure~\ref{fig:cks_4x4x1},
using four values of $\lambda$. While the $c_k$ are log-concave~\cite{JS},
the $x_{k,e}$ may not be.
\begin{figure}[!htb]
\centering
\subfloat[]{
\begin{overpic}[scale=.19]{xkes_4x4x1.eps}
\put(55,32){\tiny{$k=0$}} \put(55,43){\tiny{$k=1$}}
\put(55,52){\tiny{$k=2$}} \put(55,57){\tiny{$k=3$}}
\put(55,62){\tiny{$k=4$}}
\put(2,20){\begin{sideways}\parbox{15mm}{\footnotesize{$\log_2(x_{k,e})$}}
\end{sideways}}
\put(47,-1){\small{$e$}}
\end{overpic}\label{fig:xkes_4x4x1}}
\subfloat[]{
\begin{overpic}[scale=.19]{ck_all_4x4x1_4lam.eps}
\put(47,59){\tiny{$\lambda=1$}} \put(47,51){\tiny{$\lambda=0.7$}}
\put(47,43){\tiny{$\lambda=0.414$}} \put(47,29){\tiny{$\lambda=0.1$}}
\put(2,20){\begin{sideways}\parbox{15mm}{\footnotesize{$\log_2(c_k)$}}
\end{sideways}}
\put(47,-1){\small{$k$}}
\end{overpic}\label{fig:cks_4x4x1}}
\caption{(Color online) (a) Estimates of the $x_{k,e}$ for the $4\times 4$ grid. (b) The resulting $c_k$ for four values of $\lambda$.}
\label{fig:xkes}
\end{figure}
\subsection{No external field}
In the absence of an external field, we only need to run $\allk$ for $k=0$.
This represents a huge time savings in comparison to the case $B \not= 0$,
as then we need to run $\allk$ for all $k \in [0, n/2]$.
Furthermore, we must choose $S = \emptyset$, which eliminates this step
from the algorithm.
\subsection{Details} \label{s:details}
The algorithm $\allk$ is really a class of algorithms, each
corresponding to the choice of cycle basis, the order of the
subgraphs in the basis, the subsets $S$, and the roots $P_S$.
We briefly discuss these choices here and
elaborate further in \cite{SM}.
The choice of cycle basis is central to the performance of $\allk$.
Experimentally, minimum cycle bases have outperformed fundamental
and random cycle bases in terms of overall speed and variance. However,
it remains an interesting open problem to determine the optimal basis
for $\allk$.
As for the choice of $S$, we know that $k=0$ implies $S = \emptyset$.
But for $k>0$, we must choose $S$.\footnote{Except if $G$ itself is even,
in which case there is no choice for $k=n/2$ either.}
We would like every subset of $V(G)$ of size $2k$ to appear as $S$ at
least once. However, when $k$ is near $n/4$, $\binom{n}{2k}$ is exponentially
large in $n$.\footnote{Ideally, we would partition the subsets of $V(G)$ into
isomorphism classes $\{V_i\}_{i=1}^t$ and choose a representative for class
$V_i$ to act as $S$ in $|V_i|N/\binom{n}{2k}$ instances.
However, the number of such classes can also be exponentially large.}
So instead we are forced to select a reasonable number of such subsets that
work well in $\allk$.
Once $S = \{v_1, v_2, \dots, v_{2k}\}$ is chosen, we must find $P_S$.
One such method is to use a spanning tree $T$ of $G$ to
create $P_S = \bigoplus_{i=1}^k \path{2i-1}{2i}$,
where $\path{2i-1}{2i}$ is the path from $v_{2i-1}$ to $v_{2i}$ in $T$.
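With the bitmask encoding and the \texttt{to\_root} helper from the sketch in Section~\ref{s:cycle_bases_def}, this construction of $P_S$ takes a few lines (names are again ours):
\begin{verbatim}
def build_P_S(S, to_root):
    # P_S as the XOR of tree paths pairing v_1 with v_2, v_3 with
    # v_4, ...; to_root(v) is the bitmask of the tree path v -> root,
    # so to_root(a) ^ to_root(b) is the tree path from a to b.
    mask = 0
    for a, b in zip(S[0::2], S[1::2]):
        mask ^= to_root(a) ^ to_root(b)
    return mask
\end{verbatim}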
\section{Performance} \label{s:performance}
\subsection{Convergence}\label{s:conv}
In Section \ref{s:phys_quant}, we show how to get unbiased
estimates of $Z'$ from our unbiased estimates of the $x_{k,e}$.
To evaluate the efficiency of the algorithm, we need to know how many
samples ($N$) we need to be reasonably confident about our estimate
of $Z'$. The answer depends on the relative variance of our estimate of
$Z'$.\footnote{By the Central Limit Theorem, we need
$N = \frac{z_{\delta/2}^2}{\epsilon^2}
\left(\frac{E[(Z')^2]}{E^2[Z']} -1\right)$ to be within $\epsilon$
with probability $1-\delta$, where $z_{\delta/2}$ comes from the normal
distribution, and $\frac{E[(Z')^2]}{E^2[Z']} -1$ is precisely relative
variance.} As heuristic sampling methods are relatively new, there are
not many tools for computing the variance of these algorithms.
Experimentally, such methods have been shown to work well in practice,
but a robust theoretical foundation is lacking~\cite{BSSV, Pearl}.
Therefore, analyzing the variance for this problem remains an important
open question which deserves further study.
\begin{figure}[!htb]
\centering
\subfloat[]{
\begin{overpic}[scale=.19]{rel_var_c0_4x4_10000000.eps}
\put(2,-22){\begin{sideways}\parbox{52mm}{\small{Rel. Var. of $Z'$}}
\end{sideways}}
\put(47,0){$\beta^{-1}$}
\end{overpic}
\label{fig:relvarc0}}
\subfloat[]{
\begin{overpic}[scale=.19]{even_only_times_c_code_hours_cubic_growth.eps}
\put(3,8){\begin{sideways}\parbox{25mm}{Running Time}\end{sideways}}
\put(49,0){$n$}
\end{overpic} \label{fig:times}}
\caption{(Color online) (a) Relative sample variance of the estimate of $Z'$ as a function of temperature for a $4\times 4$ grid with $B=0$. (b) Experimental running times for $\sqrt{n}\times\sqrt{n}$ grids.}
\label{}
\end{figure}
In our simulations, we find that although $\allk$ is temperature
independent, the variance is not. Figure~\ref{fig:relvarc0} shows
the relative sample variance of our estimate of $Z'$ as a function
of temperature, for a $4\times 4$ grid with no external field.
The highest sample variance occurs at the critical temperature,
$\beta^{-1} \approx 2.269$. However, even at the critical temperature,
our estimate of $Z'$ converges quickly. Figure~\ref{fig:convergence}
presents six separate runs of $\allk$ with $B=0$, and shows the
convergence to $Z'$ for each run as a function of the number of samples.
The exact value of $Z'$ is displayed as the straight black line.
\begin{figure}[!htb]
\centering
\begin{overpic}[scale=.45]{convergence_c0_250000_2-26918_nologs.eps}
\put(1,30){$Z'$}
\put(35,1){Number of Samples}
\end{overpic}
\caption{(Color online) Six independent runs of $\allk$ with $B=0$ at the critical temperature; each converges to the exact value of $Z'$ (straight black line).}
\label{fig:convergence}
\end{figure}
\subsection{Running time}\label{s:runningtime}
The number of operations of a single run of Algorithm \ref{alg:Chen}
as a subroutine of $\allk$ is a function of the number of strata used
in $\tau(\mathcal{C},P_S)$ and the number of operations performed
to process each node in $Q_{\current}$.
Recall that our stratifier partitions nodes according to their
level in $\tau(\mathcal{C},P_S)$ and number of edges. Clearly, each level
has at most $m+1$ strata. Further, there are $m-n+2$ levels. Hence,
the number of strata used is at most $(m-n+2) (m+1)$. For each node
in $Q_{\current}$, $\allk$ examines its two children, so the total number
of nodes of $\tau(\mathcal{C},P_S)$ used by the subroutine is at most
$ 2(m-n+2)m = O(m^2).$ For each of these nodes, we take the symmetric
difference of two subgraphs and count the number of edges remaining,
each of which is an $O(m)$ operation.
Thus, each run of Algorithm \ref{alg:Chen} as a subroutine of $\allk$
terminates after
$O(m^3)$ operations. For square lattices in dimension $d$,
the number of operations is $O(d^3n^3)$, as $m=dn$.
\subsection{Implementation}
We implemented $\allk$ in C, using GMP to deal with the
large weights generated by the algorithm. In Figure~\ref{fig:times},
we plot our experimental running times for $\sqrt{n} \times \sqrt{n}$
grids against the curve $f(n) = 1.25 \cdot 10^{-14} n^3$,
which matches up well with our bound of $O(n^3)$.
Typically, one stores graphs as matrices or lists. However, we greatly
improve the running time of $\allk$ by storing each subgraph $X$ as an
integer whose bitstring $b_X$ has length $m$; $b_X(e)= 1$ if and only
if $e\in E(X)$. Here, $b_{X\oplus Y} = b_X~\text{\sc{xor}}~ b_Y$, so
taking symmetric differences is quite fast. Further, $|E(X)|$ is simply
the number of ones in the bitstring, which can also be computed quickly.
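In Python the same encoding would read as follows (a sketch only; our C implementation uses GMP integers, but arbitrary-precision \texttt{int}s behave identically):
\begin{verbatim}
bX, bY = 0b101101, 0b011011        # two subgraphs of a 6-edge graph
b_sym  = bX ^ bY                   # b_{X (+) Y} = b_X XOR b_Y
edges  = bin(b_sym).count("1")     # |E(X (+) Y)| via a popcount
\end{verbatim}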
One may achieve a further increase in speed by using machine-level
instructions for the XOR operation on large integers. Most modern
microprocessors have such capabilities, as they are used
in scientific computing \cite{intel}.
\section{Physical Quantities}\label{s:phys_quant}
In this section, we show how to use the estimates of the $x_{k,e}$
to calculate physical quantities. Let $f(X) = f(|\ODD(X)|,
|E(X)|)$ be any function on subgraphs $X$ which depends only on the
number of odd vertices and the number of edges of $X$. We
can calculate the expected value of $f$ with respect to the distribution
$\pi'(X) = \lambda^{|E(X)|}\mu^{|\ODD(X)|}/Z'$ from our estimates of
the $x_{k,e}$ by
\begin{equation}\label{eq:expectation}
\mathbb{E}[f] =\frac{1}{Z'} \sum_{k=0}^{\lfloor n/2 \rfloor} \sum_{e=0}^m f(k,e)
x_{k,e}\lambda^e \mu^{2k}.
\end{equation}
Notice that if $f$ is identically $1$, the double sum is precisely $Z'$,
and so we can approximate $Z'$, and hence $Z$, by simply evaluating
it. In Theorem~\ref{t:phys_quant}, we show that important
physical quantities can also be expressed as $\mathbb{E}[f]$ for suitable
choices of $f$. The proof of Theorem \ref{t:phys_quant} involves taking
partial derivatives of $\ln Z$ with respect to $\beta$ and $B$ following
the method of~\cite{JS}. As these calculations are tedious but easy, we
leave the details to \cite{SM}.
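Computationally, Equation~\ref{eq:expectation} is as cheap as Equation~\ref{eq:c_k}; a Python sketch reusing the dictionary convention from before (an illustration, not our actual code):
\begin{verbatim}
def expect(f, x, lam, mu):
    # E[f] under pi'; x is assumed to map (k, e) -> estimate of x_{k,e}.
    num = sum(f(k, e) * v * lam**e * mu**(2*k) for (k, e), v in x.items())
    z   = sum(v * lam**e * mu**(2*k) for (k, e), v in x.items())
    return num / z
\end{verbatim}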
\begin{thm}\label{t:phys_quant}
The mean magnetic moment, mean energy, magnetic susceptibility, and
specific heat can each be written as sums of expectations of random
variables over the distribution $\pi'$.
\end{thm}
In Figure \ref{fig:mean_energy_and_specific_heat}, we show estimates of
mean energy and specific heat from $\allk$ with $N = 50,000,000$ on a
$16 \times 16$ grid as a function of $\beta^{-1}$. These figures match
those of \cite[p. 252]{gould} nicely.
\begin{figure}[!htb]
\centering
\subfloat[]{
\begin{overpic}[scale=.19]{mean_energy_16x16_50000000_c0_cfiles_no_binoms.eps}
\put(0,35){\large{$\frac{{\bf \varepsilon}}{n}$}}
\put(45,0){\scriptsize{$\beta^{-1}$}}
\end{overpic}\label{fig:mean_energy_16}}
\subfloat[]{
\begin{overpic}[scale=.19]{specific_heat_16x16_50000000_c0_cfiles.eps}
\put(-5.5,35){$\frac{\mathcal{C}}{nk_{\Boltz}}$}
\put(45,0){\scriptsize{$\beta^{-1}$}}
\end{overpic}\label{fig:specific_heat_16}}
\caption{(Color online) (a) Mean energy and (b) specific heat for a $16\times 16$ grid, estimated by $\allk$ with $N=50{,}000{,}000$.}
\label{fig:mean_energy_and_specific_heat}
\end{figure}
\section{Conclusions}\label{s:conclusion}
The algorithm $\allk$ is a completely new approach to the problem of
estimating $Z$. To our knowledge it is the first heuristic sampling
method for this problem. For this reason, it is difficult to compare
the running time of $\allk$ with the current best-known algorithms,
which are all Markov chain Monte Carlo methods. What is clear is that
$\allk$ gives us an estimate of $Z$ at \emph{all temperatures
simultaneously} in only $O(m^3)$ operations, where the constant hidden
by the big-O notation is small. Bounding the variance of
$\allk$ is an important open problem which is necessary to give a real
understanding of its efficiency. However, if the goal is to get
\emph{some} estimate as fast as possible, $\allk$
is an excellent choice.
Besides analyzing the variance of $\allk$, there are several other
directions for future work. For example, there are many choices made in
$\allk$ which could be optimized, such as the choice of cycle basis. These
choices could affect the variance significantly. One might consider
other tree-search algorithms and compare their performance with that of
$\allk$ and $\bs$. We also plan to investigate more extensively the
connections between our heuristic method and MCMC methods.
\begin{acknowledgments}\label{s:acknowledgements}
We wish to thank Professor Ted Einstein at the University of Maryland
for discussing our ideas and for reminding us of the difference between
physics and mathematics.
\end{acknowledgments}
\section{Conclusion}
$\SD$ has achieved widespread popularity in applications that wish to optimize for the $\EMD$ criterion.
However, with a thorough analysis, we point out two limitations in $\SD$ which make it hard to use in deep learning:
(i) numerical instability due to floating point precision; and
(ii) the $\ell_1$ behavior that makes it hard to optimize.
We counter this by deriving closed-form solutions for $\EMD$ and its \emph{dirt} conserving gradient on chain (\eg~histograms) and tree (\eg~hierarchies) output spaces.
We also propose a relaxed version ($\EMD^2$) of the original distance and compute its analytical form.
Our $\EMD^2$ exhibits better properties regarding numerical stability and convergence.
On a task about predicting the PSD of respiratory signals (chain-connectivity), we demonstrate faster convergence and reduction in error using $\EMD^2$.
We also evaluate object categorization on 1000 classes from the ImageNet challenge and work in the regime of limited training data (50K image samples).
Here, using the WordNet hierarchy (tree-connectivity), we observe that modeling the output space through the use of $\EMD^2$ helps boost the performance.
Our contributions will help promote a wider adoption of $\EMD$ as a loss criterion within deep learning frameworks.
\section{Experimental analysis}
We implement our models and the $\EMD$ and $\SD$ criteria using Torch~\cite{collobert2011torch7}%
\footnote{Code will be made publicly available}.
Evaluation is performed on an i5-6600K CPU at 3.5GHz with 64GB of DDR4-2133 RAM and a GTX1080 GPU running Ubuntu 16.04, CUDA 8.0 and cuDNN 5.1.3.
Unless otherwise stated, the $\SD$ hyper-parameters are $\lambda=3$, the iteration limit $100$, and using CUDA (\ie~\texttt{float32} type).
\subsection{Timing analysis for $\SD$ vs. $\EMD^2$}
\begin{table}[t]
\centering
\begin{tabular}{ r | c | c | c |}
& max iter. & CPU & GPU \\ \hline
$\SD$ & 10 & 942ms & 15.1ms\\
$\SD$ & 100 & 7.51s & 88.9ms\\
$\SD$ & 1000 & 74.5s & 865ms\\
$\EMD^2$ & - & 126ms & 25ms\\ \hline
\end{tabular}
\caption{Computation time for the gradients of $\SD$ and $\EMD^2$ on the WordNet Tree Structure experiment for one minibatch of size 512.
The closed-form solution allows the $\EMD^2$ to be 60x faster to calculate than 100 iterations of $\SD$ on a CPU.
Our unoptimized CUDA code is 3.5x faster than $\SD$.
Note that it is not practical to run $\SD$ on \texttt{float64} precision and 1000 iterations, as it takes 74 seconds to evaluate the loss for a single minibatch.}
\label{table:speed}
\end{table}
We evaluate the computational efficiency of $\EMD^2$ and $\SD$.
The Sinkhorn-Knopp algorithm is very efficient and demonstrates fast GPU performance.
However, being an iterative procedure, $\SD$ is significantly slower than $\EMD^2$ (see \tableref{speed}).
Furthermore, it is not practical for large output spaces, especially if we require the use of \texttt{float64} precision which is only available on CPUs.
\subsection{$\EMD^2$ on Chain Spaces}
We evaluate the use of $\EMD$ to learn Power Spectral Density (PSD) - a chain distribution with signal power binned into different frequencies.
As we need to optimize over the whole output space, this task is not only well-suited to use $\EMD$ as a loss criterion, but $\EMD$ also serves as the evaluation metric.
Our task is to predict the PSD of a breathing signal obtained from a patient using chest excursion signals (a nose thermistor acts as reference).
Our data is recorded from 75 real patients from a sleep laboratory.
We extract 200 one-minute clips for each patient, providing us with a dataset of 15,000 samples.
We use data from 60 patients to train our models, and the remaining 15 as test subjects.
Noise levels depend on the activity of the patient, and are negligible when he/she is relaxed.
On the other hand, when the patient moves, sits, or talks during the 1 minute segment, the correlation between chest movement and respiration disappears.
We adopt a two-layer network for this experiment.
The first layer consists of $16$ temporal convolution filters with a receptive field of $11$.
We apply the \emph{tanh} nonlinearity, and stack a fully connected layer on top.
To ensure positive outputs (as we predict signal power) we apply the \emph{square} function $(\cdot)^2$ to the output layer.
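In modern PyTorch notation the described regressor might look as follows (a sketch only; the signal length and the number of PSD bins below are placeholder values, not those of our setup):
\begin{verbatim}
import torch
import torch.nn as nn

sig_len, n_bins = 3600, 64                # placeholders, not our values
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=11),     # 16 temporal filters, field 11
    nn.Tanh(),
    nn.Flatten(),
    nn.Linear(16 * (sig_len - 10), n_bins),
)
psd = net(torch.randn(8, 1, sig_len)) ** 2  # squared for positive power
\end{verbatim}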
We use the \emph{Adam} optimizer~\cite{ba2015adam}.
On this simple task \emph{Adam} performs very well, and the model converges with all criteria (see \figref{resultsPSD}).
Nevertheless, both $\EMD^2$ and $\SD$ outperform $\MSE$, converging in a fraction of the first epoch.
This highlights the benefits of using the $\EMD$ criterion in cases where it is hard to obtain many training samples and the output space has a suitable structure.
\begin{figure}[t]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=3cm, width=8cm, title=Learning to predict the PSD of a breathing signal,
x filter/.code={\pgfmathparse{#1/60}\pgfmathresult},
scale only axis, ymin=0,ymax=10,xmin=0,xmax=12 ,enlargelimits=false, y label style={at={(axis description cs:-0.05,.5)},anchor=south}, ylabel=Test EMD, x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{results/experimentPSD_test}
\legend{$\MSE$,$\EMD^2$,$\SD$}
\end{axis}
\end{tikzpicture}}
\caption{We train a regressor to estimate the PSD of real breathing signals obtained from chest excursions.
We observe that both $\EMD^2$ and $\SD$ learn the transformation significantly faster than the $\MSE$.
Over a longer period $\EMD^2$ achieves better accuracy than $\SD$.
}
\label{fig:resultsPSD}
\end{figure}
\begin{figure*}[t]
\centering
\begin{tikzpicture}
\begin{axis}[%
hide axis,xmin=10,xmax=50,ymin=0,ymax=0.4,
legend columns=4,legend style={cells={align=left}}
]
\addlegendimage{line width=1.5pt, purple}
\addlegendentry{$\emph{CE} \quad$};
\addlegendimage{line width=1.5pt, blue}
\addlegendentry{$\EMD^2 \quad$};
\addlegendimage{line width=1.5pt, teal}
\addlegendentry{$50\% \enskip \EMD^2 \quad$\\$50\% \enskip \emph{CE}$};
\addlegendimage{line width=1.5pt, olive}
\addlegendentry{$25\% \enskip \SD \quad$\\$75\% \enskip \emph{CE}$};
\end{axis}
\end{tikzpicture}\\
\vspace{-4.5cm}
\subfloat[Top-1 Accuracy on ImageNet@1280K images]{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.6cm, width=4.55cm,
y filter/.code={\pgfmathparse{100*(1-#1)}\pgfmathresult},
scale only axis, ymin=0,ymax=50,xmin=2,xmax=100 ,enlargelimits=false, y label style={at={(axis description cs:-0.075,.5)},anchor=south}, ylabel=Top-1 Accuracy, x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={at={(0.5,1.1)},anchor=south,font=\tiny}]
\addplot+[purple, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/FULL_CE_ErrorRate1.log};
\addplot+[blue, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/FULL_EMD_ErrorRate1.log};
\addplot+[teal, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/FULL_EMD2CE_ErrorRate1.log};
\addplot+[olive, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/FULL_SD4_ErrorRate1.log};
\end{axis}
\end{tikzpicture}}
\subfloat[Top-1 Accuracy on ImageNet@50K images]{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.6cm, width=4.55cm,
y filter/.code={\pgfmathparse{100*(1-#1)}\pgfmathresult},
scale only axis, ymin=0,ymax=10,xmin=2,xmax=200 ,enlargelimits=false, y label style={at={(axis description cs:-0.075,.5)},anchor=south}, ylabel=Top-1 Accuracy, x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={at={(0.5,1.1)},anchor=south,font=\tiny}]
\addplot+[purple, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/CE_0_LR005_ErrorRate1.log};
\addplot+[blue, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/EMD_0_ErrorRate1.log};
\addplot+[teal, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/EMD2_0_ErrorRate1.log};
\addplot+[olive, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/SD4_0_ErrorRate1.log};
\end{axis}
\end{tikzpicture}}
\subfloat[Holistic Error on ImageNet@50K images]{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.6cm, width=4.55cm,
scale only axis, ymin=0,ymax=10,xmin=2,xmax=200 ,enlargelimits=false, y label style={at={(axis description cs:-0.075,.5)},anchor=south}, ylabel=$\EMD$, x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={at={(0.5,1.1)},anchor=south,font=\tiny}]
\addplot+[purple, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=0] {results/CE_0_LR005_Wasserstein.log};
\addplot+[blue, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=0] {results/EMD_0_Wasserstein.log};
\addplot+[teal, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=0] {results/EMD2_0_Wasserstein.log};
\addplot+[olive, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=0] {results/SD4_0_Wasserstein.log};
\end{axis}
\end{tikzpicture}}
\caption{We analyze the use of Earth Mover's Distance as a loss criterion on the 1000-class ImageNet Large-Scale Visual Recognition Challenge.
(a) We evaluate on the full training set, where a combined loss of $\EMD^2$ and Cross Entropy provides the best top-1 accuracy.
$\EMD^2$ alone does not achieve high top-1 accuracy as it tries to optimize for the whole output space and not only the best result.
(b) We evaluate on 50K images, which is about $4\%$ of the original training set.
In this case the improvement provided by $\EMD^2$ criterion is more apparent.
(c) We plot the holistic error of the entire output space for the 50K images subset.
The $\EMD^2$ outperforms others and its combinations by a large gap.
}
\label{fig:imagenet}
\end{figure*}
\subsection{$\EMD^2$ on Tree Spaces}
To evaluate the $\EMD^2$ loss on a hierarchical space, we develop an experiment based on the well known 1000-class ImageNet object recognition challenge~\cite{russakovsky2015ilsvrc}.
We train a model similar to Alexnet~\cite{krizhvesky2012alexnet} with batch normalization~\cite{ioffe2015batch} after ReLU, using a minibatch size of $512$, a learning rate of $0.05$ with a decay of $10^{-5}$, and an $\ell_2$ weight penalty of $10^{-4}$.
We use Stochastic Gradient Descent (SGD) as optimizer with momentum of $0.9$.
The input image is downsized to $112 \times 112$ pixels, and horizontal flipping and cropping is used for data augmentation at train time only.
The output space hierarchy tree is obtained from WordNet~\cite{miller1995wordnet} and has a total of $1374$ nodes and a maximum distance between nodes of $26$. We set all edge costs to $1$.
By our definition, the output labels correspond to the leaves of the tree.
Thus, the minimum hierarchical distance between a pair of output labels is $2$.
We evaluate the following loss criteria:
\begin{description}[noitemsep,nolistsep]
\item [Cross Entropy ($\CE$):] the standard loss used in classification problems and state-of-the-art ImageNet models.
\item [$\EMD^2$:] pure $\EMD^2$ after a softmax non-linearity.
\item [$\EMD^2 + \CE$:] a $1:1$ combination of $\EMD^2$ and $\CE$.
\item [$\SD + \CE$:] a $1:3$ combination of $\SD$ and $\CE$. We give more emphasis to $\CE$ since using a $1:1$ ratio did not converge.
\end{description}
$\SD$ and $\EMD$ alone do not converge using SGD for any of the wide range of parameter combinations we tried.
This is somewhat expected behavior for $\ell_1$ losses.
We explore two data setups.
In the first case, the full training set is available (1280K images), while in the second, only a small amount of training data is available ($\sim4\%$, 50K images).
Our results show little advantage for $\EMD$ when using the full training set (see \figref{imagenet}~(a)).
This happens because the model can already learn the output space hierarchy from the input images used for training.
However, obtaining such large datasets is a daunting task.
We discuss the results at more depth for the second setting with reduced data (50K images).
As metrics, we present the Top-1 accuracy and the $\EMD$ loss.
Top-1 accuracy is a common metric used in the ImageNet challenge which depends only on the largest value of the output vector.
On the other hand, the $\EMD$ loss has an opposite notion, as it depends on the output of the entire vector.
\noindent\textbf{$\CE$ loss:}
The $\CE$ loss strongly favors Top-1 accuracy, and thus converges the fastest with regards to the Top-1 metric (see \figref{imagenet}~(a) and (b)).
When operating in the reduced data setting, improvements in Top-1 accuracy have the side effect of reducing the holistic loss (\figref{imagenet}~(c)) as the model also begins to learn the leaves of the hierarchical space from the input data.
However, the model soon starts over-fitting, the Top-1 accuracy plateaus, and the holistic loss grows back to its original value.
\noindent\textbf{$\EMD^2$ loss:}
We see an effect opposite to that of $\CE$ when using $\EMD^2$.
The $\EMD^2$ loss optimizes primarily for the holistic loss (see \figref{imagenet}~(c)).
Here, the improvements in Top-1 accuracy are a side effect and also slow.
Nevertheless, we see in \figref{imagenet}~(b) that the Top-1 accuracy of $\EMD^2$ ends up higher than $\CE$ in the 50K image setting, as it learns the entire output space.
\noindent\textbf{$\EMD^2$ + $\CE$ losses:}
The combination of losses provides fast network convergence and the highest Top-1 accuracy (see \figref{imagenet}~(b)).
The $\CE$ loss optimizes Top-1 accuracy while the $\EMD^2$ incorporates the information of the output space through the holistic optimization.
\noindent\textbf{$\SD$ + $\CE$ losses:}
We see, to a lesser extent, a behavior similar to that of $\EMD^2$ + $\CE$.
However, the performance is limited by the fact that $\SD$ is mainly an $\ell_1$ distance, and the Sinkhorn-Knopp algorithm is not numerically stable for large output spaces with the \texttt{float32} representation.
\section{The Earth Mover's Distance}
\label{sinkhorn}
As discussed earlier, the $\EMD$ is defined for discrete distributions.
Here, the probability mass (or \emph{dirt}) is distributed in discrete piles or bins.
The effort of moving a mound of \emph{dirt} between two bins is a non-negative cost which is linearly proportional to the amount of \emph{dirt} and distance between the bins.
Within this discrete domain, the general form of $\EMD$ between two distributions $\mathbf{p}, \mathbf{q} \in \mathbb R^N_{+}$ with $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1$ is
\begin{equation}
\label{eq:emd_general}
\EMD(\mathbf{p}, \mathbf{q}) = \inf_{T \in U(\mathbf{p}, \mathbf{q})} \langle M, T \rangle \, ,
\end{equation}
where $\langle \cdot, \cdot \rangle$ is the Frobenius inner product and $M \in \mathbb R^{N \times N}_{+}$ defines the generalized distance between bins (see \figref{general}).
$U(\mathbf{p}, \mathbf{q})$ is the set of valid transport plans between $\mathbf{p}$ and $\mathbf{q}$,
\begin{equation}
\label{eq:transport_domain}
U(\mathbf{p}, \mathbf{q}) = \{ T \in \mathbb R^{N \times N}_{+} : T \cdot \mathbf{1}_N = \mathbf{p},\, T^\top \cdot \mathbf{1}_N = \mathbf{q} \} \, .
\end{equation}
$\mathbf{1}_N$ is an $N$ dimensional vector of all ones, and $T$ is constrained such that its row sum corresponds to the distribution $\mathbf{p}$ and column sum to $\mathbf{q}$.
Without loss of generality (a simple scalar normalization), in the rest of the paper we assume $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1 = 1$.
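For reference, Eq.~\ref{eq:emd_general} can be solved directly with any off-the-shelf LP solver; a minimal Python sketch using SciPy (our choice of solver, used here only as a baseline) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def emd_lp(p, q, M):
    # Minimize <M, T> over the transport polytope U(p, q).
    N = len(p)
    A = np.zeros((2 * N, N * N))
    for i in range(N):
        A[i, i*N:(i+1)*N] = 1.0   # row sums:    T 1 = p
        A[N+i, i::N]      = 1.0   # column sums: T' 1 = q
    res = linprog(M.ravel(), A_eq=A, b_eq=np.concatenate([p, q]),
                  bounds=(0, None))
    return res.fun
\end{verbatim}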
\subsection{Unnormalized distributions}
The original $\EMD$ is not defined for $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 \ne \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1$.
Although there are several ways to modify the $\EMD$ for unnormalized distributions~\cite{chizat2015unbalanced,frogner2015learning,pele2009fast}, we consider that this goes against the spirit of the metric.
Therefore, prior to computing the $\EMD$, we $\ell_1$-normalize the input distributions either using a $\ell_1$ normalization layer, or a \emph{softmax} layer.
\subsection{Sinkhorn distance}
The general formulation of the $\EMD$ (Eq.~\ref{eq:emd_general}) is solved using linear programming, which is computationally expensive.
However, this problem was greatly alleviated by Cuturi~\cite{cuturi2013sinkhorn}, who suggested a smoothing term for the $\EMD$ in the form of
\begin{equation}
\label{eq:sd}
\SD_\lambda(\mathbf{p}, \mathbf{q}) = \inf_{T \in U(\mathbf{p}, \mathbf{q})} \langle M, T \rangle - \frac{1}{\lambda}\langle T, \log T \rangle \, ,
\end{equation}
which allows the use of the Sinkhorn-Knopp algorithm~\cite{sinkhorn1967diagonal} to obtain an efficient iterative solution.
The Sinkhorn-Knopp algorithm is notable as it converges fast and produces a subgradient for the $\SD$ without extra cost.
This subgradient can in turn be used to update parameters of machine learning models~\cite{frogner2015learning}.
The algorithm is defined as\footnote{Of the several variants of the algorithm, this follows the one in Caffe~\cite{jia2014caffe}. $\odot$ and $\oslash$ stand for per-element multiplication and division respectively.}:
\begin{algorithm}
\caption{Sinkhorn Distance and Gradient}
\label{alg1}
\begin{algorithmic}
\REQUIRE $f(x), y, \lambda, \mathbf{M}$
\STATE $\mathbf{K} \leftarrow e^{-\lambda \mathbf{M}-1}$, $u \leftarrow \mathbf{1}$, $it \leftarrow 1$
\WHILE{not converged \AND it $\le$ MAX-ITER}
\STATE $u \leftarrow f(x) \oslash \left( \mathbf{K} ( y \oslash \mathbf{K}^{\top} u) \right)$
\ENDWHILE
\STATE $v \leftarrow y \oslash (\mathbf{K}^{\top} u)$
\STATE $\SD \leftarrow \text{sum}( u \odot ((\mathbf{K} \odot \mathbf{M})\, v))$
\STATE $\nabla\SD \leftarrow \log{u}/\lambda$
\end{algorithmic}
\end{algorithm}
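In NumPy, Algorithm~\ref{alg1} amounts to the following sketch (we keep the same variable names; the convergence test is omitted, and the arrays are \texttt{float64} by default, which matters below):
\begin{verbatim}
import numpy as np

def sinkhorn(p, q, M, lam, max_iter=100):
    K = np.exp(-lam * M - 1)
    u = np.ones_like(p)
    for _ in range(max_iter):
        u = p / (K @ (q / (K.T @ u)))
    v  = q / (K.T @ u)
    SD = np.sum(u * ((K * M) @ v))   # <M, T> with T = diag(u) K diag(v)
    return SD, np.log(u) / lam       # distance and subgradient
\end{verbatim}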
\subsection{Numerical stability of the Sinkhorn Distance}
We claim that the $\SD$ is not numerically stable when used in common deep learning frameworks.
We substantiate this claim by comparing the output of the $\SD$ and its gradient to the real $\EMD$.
We extract a hierarchy of categories from the WordNet ontology for the 1000 classes of the ILSVRC2012 dataset~\cite{russakovsky2015ilsvrc}.
The tree has 1374 nodes in total.
This hierarchy acts as the structure of the output space in our evaluation.
There are three parameters that impact numerical stability of the $\SD$ in deep learning:
\begin{description}[noitemsep,nolistsep]
\item [Iteration limit:] any practical implementation needs an upper limit for the number of $\SD$ iterations.
This leads to a trade-off between speed and accuracy, which is more apparent for larger values of $\lambda$, as seen in \figref{sinkhorn} (a), (c).
\item [Floating point accuracy:] the Sinkhorn algorithm alternately normalizes rows and columns of the transport matrix.
This requires several multiply-accumulate operations which are prone to numerical inaccuracies.
This problem is made worse by the exponential form of $\mathbf{K} = e ^{-\lambda \mathbf{M} -1}$, which increases the dynamic range of the values.
Moreover, GPUs use a \texttt{float32} representation, instead of the \texttt{float64} representation common on CPUs.
In \figref{sinkhorn} (b) and (d), we observe how using \texttt{float32} affects the results, especially for large values of $\lambda$ where more iterations are required to converge.
\item [Regularization factor ($\lambda$):] the regularization factor affects both the accuracy and the convergence behavior of $\SD$.
In a non-deep learning framework, the number of iterations does not pose a limit and \texttt{float64} representation can be used.
Lower values of $\lambda$ imply better convergence behavior~\cite{cuturi2013fast}, while larger values approximate the Earth Mover's Distance better (see gray reference line in \figref{sinkhorn}).
However, in deep learning applications, where \texttt{float32} representations are common and the iteration limit is typically chosen to be $10$~\cite{frogner2015learning} or $100$~\cite{jia2014caffe}, larger values of $\lambda$ are unusable.
\end{description}
In general, $\SD$ works best when the ratio between the largest and the smallest non-zero value in the $\mathbf{M}$ is small (thus has a smaller dynamic range), and the size of the output space is small (due to the reduced number of multiply-add operations).
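The \texttt{float32} issue is easy to reproduce. With $\lambda=4$ and a maximum ground distance of $26$ (the WordNet hierarchy used in our experiments has a maximum node distance of $26$), the smallest entry of $\mathbf{K}$ already underflows to zero in \texttt{float32} while remaining representable in \texttt{float64}; the illustrative snippet below demonstrates this:
\begin{verbatim}
import numpy as np
print(np.exp(np.float64(-4 * 26 - 1)))  # ~2.5e-46: representable
print(np.exp(np.float32(-4 * 26 - 1)))  # 0.0: below float32 subnormals
\end{verbatim}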
\section{Introduction}
The Wasserstein metric~\cite{villani2008optimal} is a distance function based on the optimal transport problem that compares two data distributions.
While computing such metrics on digital devices, it is common practice to work with data distributions in a discretized space (\eg~arranged in \emph{bins}).
Here, the Wasserstein distance is popularly known as the Earth Mover's Distance ($\EMD$)~\cite{rubner1998metric}.
The name is derived from a visual analogy of the data distributions as two piles of \emph{dirt} (earth).
$\EMD$ is defined as the minimum amount of effort required to make both distributions look alike.
Note that the individual bins of both distributions should be non-negative and their total mass equal (as is the case with probability distributions).
The $\EMD$ is widely used to compare histograms and probability distributions~\cite{marinai2011using,peleg1989unified,rolet2016fast,rubner1998metric,rubner2000earth}.
However, calculating the $\EMD$ is known to be computationally expensive.
This has led to several relaxed versions of the $\EMD$ for cases where speed is critical, \eg~when comparing feature vectors~\cite{ling2007efficient,pele2009fast,rabin2008circular}.
In addition to the large computational cost, $\EMD$ has the drawback of an $\ell_1$ behavior.
Solving $\EMD$ optimization problems often requires \emph{lasso} optimization techniques (\eg, mirror descent, Bregman projections, etc.).
This represents a significant drawback for current deep learning approaches that strongly favor gradient-based methods such as Stochastic Gradient Descent, Momentum~\cite{sutskever2013momentum}, and Adam~\cite{ba2015adam}, that provide several small updates to the model parameters.
\vspace{0.1cm}
\noindent\textbf{Sinkhorn Distance.}
Using the $\EMD$ within an iterative optimization scheme was made feasible by Cuturi~\cite{cuturi2013sinkhorn}, who realized that an entropically regularized $\EMD$ can be efficiently calculated using the Sinkhorn-Knopp~\cite{sinkhorn1967diagonal} algorithm.
The resulting distance is referred to as the Sinkhorn Distance ($\SD$), and has achieved wide popularity within a number of learning frameworks~\cite{benamou2015iterative,bonneel2015sliced,cuturi2016smoothed,frogner2015learning,montavon2015wasserstein,rabin2015convex,rolet2016fast,solomon2015convolutional,solomon2014earth,solomon2014wasserstein}.
The $\SD$ approximates $\EMD$ effectively, and provides a subgradient for the $\EMD$ as a side result of the estimation.
This $\SD$ subgradient has been used to train deep learning models~\cite{frogner2015learning} and is implemented as a loss criterion in popular deep frameworks such as Caffe~\cite{jia2014caffe} and Mocha~\cite{Mocha}.
However, as $\SD$ is an $\ell_1$ norm, Frogner~\etal~\cite{frogner2015learning} need to combine $\SD$ with the Kullback-Leibler divergence and use an exceedingly small learning rate for it to converge.
Furthermore, the $\SD$ algorithm is prone to numerical instabilities when used in deep learning frameworks.
In some conditions, these instabilities imply that $\SD$ is not a close approximation of $\EMD$.
We believe that an analysis of the causes of instabilities is critical to extend the use of $\SD$ to deep learning frameworks and discuss them in detail in Sec.~\ref{sinkhorn}.
\vspace{0.1cm}
\noindent\textbf{Earth Mover's Distance.}
Concurrently, we suggest an alternative approach to $\SD$.
Instead of tackling the general case, we focus on output spaces whose connectivity graph takes the form of a chain (histograms or probability distributions) or a tree (hierarchies).
We provide closed-form solutions for the real $\EMD$ and its gradient.
We start with chain-connected distributions (see \figref{chain}) that have a well-known closed-form solution~\cite{vallender1974calculation}, and derive its gradient.
We also propose a relaxed version of the $\EMD$, named $\EMD^2$ that exhibits similar structure but converges faster due to its $\ell_2$ behavior.
Furthermore, we derive a closed-form solution for the $\EMD$ and its gradient that is valid for all metric spaces that have a tree connectivity graph.
This allows us to represent complex output spaces that are hierarchical in nature (\eg, WordNet~\cite{miller1995wordnet} and ImageNet~\cite{russakovsky2015ilsvrc}, Sentence Parse Trees~\cite{marcus1993penntreebank}).
We see an example of a hierarchical output space of object categories in \figref{main}.
We depict the expected flow of \emph{dirt} on the tree branches, and present the gradients for both original and relaxed versions of the $\EMD$ (details of the gradients in Sec.~\ref{subsec:relax_emd} and Sec.~\ref{treesection}).
\vspace{0.1cm}
\noindent\textbf{$\EMD$ as a loss criterion for deep learning.}
Using $\EMD$ as a loss criterion has several unique advantages over unstructured losses.
It allows us to shape the output relationships we expect from a model.
For example, it can tell the model that confusing a cat for a tiger may be more acceptable than confusing a cat for a starship, and thus adds knowledge to the model.
Additionally, $\EMD$ gradients (in contrast to $\MSE$) are holistic and affect the whole output space as it is connected.
Therefore, models that predict the entire output space (\eg, histograms) converge faster.
Overall, we see that $\EMD$ has the effect of magnifying the information that an input data sample provides.
Each input sample does not only provide information about its own class, but also contains the relationship it has with the rest of the output bins (\eg, classes, histogram bins, etc.)
This second source of information helps generalize better, and results in improved performance with less data.
We demonstrate these characteristics in two real-world experiments.
We train a model to predict the Power Spectral Density of respiratory signals of sleep laboratory patients.
In this setting the $\EMD$ converges faster than the $\SD$ and $\MSE$ losses, and achieves better accuracy.
Our second experiment is performed on a reduced version of the ILSVRC 2012 challenge~\cite{russakovsky2015ilsvrc}.
We use all 1000 categories, but limit the training data to 50K images.
This, along with the $\EMD$ criterion, forces the network to learn the output space hierarchy.
While $\EMD$ alone is not enough to achieve the best top-1 accuracy, an equal combination of $\EMD$ and cross entropy loss achieves better top-1 accuracy than using cross entropy alone.
\section{Experimental analysis}
\noindent\textbf{Setup.}
We implement our regression model with the $\EMD$ criterion using Torch~\cite{collobert2011torch7}\footnote{Code will be made publicly available}.
Note that $\varphi_i$ can be computed very fast through the use of a cumulative sum operator (\texttt{cumsum}).
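Concretely, for the chain case with unit bin spacing the whole criterion is two lines (a sketch, assuming, as in the closed-form chain solution, that $\varphi$ denotes the vector of cumulative differences of the two distributions):
\begin{verbatim}
import numpy as np

def emd_chain(p, q):
    # Closed-form EMD on a chain with unit ground distance:
    # EMD(p, q) = sum_i |phi_i|, with phi = cumsum(p - q).
    return np.abs(np.cumsum(p - q)).sum()
\end{verbatim}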
\subsection{Convergence by Gradient Descent}
We analyze the convergence behavior of the four different versions of the gradients.
We propose a toy experiment where a random source distribution $\mathbf{p}$ is converted into a random target distribution $\mathbf{q}$ using gradient descent as follows:
\begin{eqnarray}
\hat{\mathbf{p}}_0 & = & \mathbf{p} \nonumber \\
\hat{\mathbf{p}}_{t+1} & = & \hat{\mathbf{p}}_t - \lambda_t \nabla_{\hat{\mathbf{p}}_t} \oEMD^\rho(\hat{\mathbf{p}}_t,\mathbf{q}) \, .
\end{eqnarray}
The experiment details are: $\mathbf{p}, \mathbf{q} \in \mathbb R^N_{+}$, $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1 = 1$, $N=64$, $\hat{M}_i = 1$ and $\lambda_0 = 2^{20}$.
Finding an optimal fixed learning rate for different versions of the gradients is a significant problem in itself.
We alleviate this by performing a backtracking line search at each iteration, adjusting $\lambda$ by a scaling factor of $\sqrt{2}$ so as to make $\hat{\mathbf{p}}_{t+1}$ more similar to $\mathbf{q}$.
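In outline, one run of the experiment looks as follows (a sketch; \texttt{grad} stands for whichever of the four gradient variants is tested and is not spelled out here, and the exact schedule shown is one plausible reading of the $\sqrt{2}$ rule):
\begin{verbatim}
import numpy as np

def descend(p, q, grad, emd, steps=2000, lam=2.0**20):
    p_hat = p.copy()
    for _ in range(steps):
        g = grad(p_hat, q)
        while emd(p_hat - lam * g, q) > emd(p_hat, q) and lam > 1e-30:
            lam /= np.sqrt(2)        # backtrack until the step helps
        p_hat = p_hat - lam * g
        lam *= np.sqrt(2)            # let the rate grow back afterwards
    return p_hat
\end{verbatim}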
We evaluate two experimental settings for the $\EMD$ metric.
(i) \textbf{Easy}: $\mathbf{p}$ and $\mathbf{q}$ are vectors with random samples drawn from a uniform distribution in all the $N$ bins.
We expect that the amount of \emph{dirt} transported to make $\mathbf{p}$ equal to $\mathbf{q}$ is small.
(ii) \textbf{Hard}: we zero-out the right half of $\mathbf{p}$ and the left half of $\mathbf{q}$ while maintaining $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1 = 1$.
Here, all the \emph{dirt} from $\mathbf{p}$ needs to be displaced, which is a harder transport problem.
We report averaged results over 64 runs in~\figref{resultsA} (left Easy, right Hard).
\textbf{Optimal} $\EMD$ \textbf{does not converge.}
We see in the top row of \figref{resultsA} that the optimal $\EMD$ does not converge.
A reason for this is the zero Hessian, which implies that the gradient carries no information about the step size.
In contrast, the relaxed $\EMD$ (Eq.~\ref{eq:emd_rho}) presents a steady decline for both settings.
\textbf{$\ell_1$ preserving gradients.}
A second observation concerns the bottom row of \figref{resultsA}.
Here, we see that the $\ell_1$ preserving gradients ($g\oEMD_{L1}$, $g\oEMD^2_{L1}$) do not change the total mass of the distributions, while the gradients $g\oEMD$ and $g\oEMD^2$ do.
In fact, for the Hard experiment, $g\EMD^2$ starts diverging rapidly around iteration 200 and cannot recover.
\textbf{Best convergence by} $g\EMD^2_{L1}$.
The $\ell_1$ preserving, relaxed $\EMD$ gradient shows good performance consistently through both experiments.
The learning rate is steady, total mass is maintained, and error reduces significantly.
As a side note, given the current setup, Mean Squared Error ($\MSE$) converges to zero in a single step as it can pick the best $\lambda$.
\begin{figure}[t]
\centering
\subfloat{
\begin{tikzpicture}
\begin{axis}[%
hide axis,
xmin=10,
xmax=50,
ymin=0,
ymax=0.4,
legend columns=4,
legend style={}
]
\addlegendimage{line width=1.5pt, purple}
\addlegendentry{$g\emph{EMD} \quad$};
\addlegendimage{line width=1.5pt, blue}
\addlegendentry{$g\emph{EMD}_{L1} \quad$};
\addlegendimage{line width=1.5pt, teal}
\addlegendentry{$g\emph{EMD}^2 \quad$};
\addlegendimage{line width=1.5pt, olive}
\addlegendentry{$g\emph{EMD}^2_{L1} \quad$};
\end{axis}
\end{tikzpicture}}
\\
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.8cm, width=5.3cm, ymode = log,
scale only axis, ymin=1e-4,ymax=1e+4,xmin=0,xmax=2000 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=Error (EMD), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{artificialNearScore}
\end{axis}
\end{tikzpicture}}
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.8cm, width=5.3cm, ymode = log,
scale only axis, ymin=1e-4,ymax=1e+4,xmin=0,xmax=2000 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=Error (EMD), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{artificialFarScore}
\end{axis}
\end{tikzpicture}}
\\
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.8cm, width=5.3cm, ymode = log,
scale only axis, ymin=1e-20,ymax=1e+20,xmin=0,xmax=2000 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=Learning rate ($\lambda$), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{artificialNearLearn}
\end{axis}
\end{tikzpicture}}
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.8cm, width=5.3cm, ymode = log,
scale only axis, ymin=1e-20,ymax=1e+20,xmin=0,xmax=2000 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=Learning rate ($\lambda$), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{artificialFarLearn}
\end{axis}
\end{tikzpicture}}
\\
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.8cm, width=5.3cm,
y filter/.code={\pgfmathparse{#1/64}\pgfmathresult},
scale only axis, ymin=0.9,ymax=1.1,xmin=0,xmax=2000 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=$\Sigma \hat{\mathbf{p}}$, x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{artificialNearSum}
\end{axis}
\end{tikzpicture}}
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=2.8cm, width=5.3cm, ymode = log,
scale only axis, ymin=1e1,ymax=1e30,xmin=0,xmax=2000 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=$\Sigma \hat{\mathbf{p}}$, x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=north east,font=\scriptsize}]
\input{artificialFarSum}
\end{axis}
\end{tikzpicture}}
\caption{
We compare the convergence behavior of the $\EMD$ in two different settings.
\emph{Left}: in the \textbf{Easy} setting, two uniformly sampled distributions are compared.
\emph{Right}: in the \textbf{Hard} setting, the distributions are sampled in a non-overlapping manner which requires all the \emph{dirt} to be displaced.
We see that the optimal $\EMD$ fails to converge as its Hessian is zero, and the gradient carries no information about the step size required to converge (top row).
The $g\EMD^2_{\text{L1}}$ shows the best performance among the 4 alternatives, converging in all cases while using a stable learning rate and maintaining the total mass of the distribution (keeping the $\ell_1$ norm constant).
}
\vspace{-2mm}
\label{fig:resultsA}
\end{figure}
\subsection{Analysis of the shape of the gradient}
Plotting the gradients provides a good impression of the actual behavior of the $\EMD$ and of the reason it serves as a holistic optimization over the full distribution.
In \figref{resultsB} (left), we compare a uniform distribution with one that has a single peak.
The $\EMD$ gradients affect every bin as the mass propagates to flatten the distribution.
In particular, at the bins surrounding the peak, the gradient for $\EMD$ has the opposite sign as that for $\MSE$.
In \figref{resultsB} (right), we compare two mirrored distributions with one peak each.
Here, we notice that the $\ell_1$ preserving gradients show zero gradient at the center bin, reflecting the symmetric nature of the configuration.
\begin{figure}[ht!]
\centering
\subfloat{\includegraphics[width=6.8cm]{figures/gradA.png}}\quad
\subfloat{\includegraphics[width=6.8cm]{figures/gradB.png}}
\caption{Gradient flow to convert a source distribution $\mathbf{p}$ to a target $\mathbf{q}$.
Different versions of the gradients are analyzed and plotted on top of each other.}
\label{fig:resultsB}
\end{figure}
\subsection{Power Spectral Density prediction of synthetic signals with noise}
In this experiment we estimate the Power Spectral Density (PSD) of a synthetic signal using a neural network and evaluate it using the Earth Mover's Distance.
Our dataset consists of vectors of size $N=64$ each containing a single sine wave of unit amplitude and frequency $F = \{ f \in \mathbb{N} : 5~\le~f~\le~15\}$.
The signals are corrupted with additive white Gaussian noise (AWGN) of $\sigma^2=0.2$.
We generate 2,000 vectors for each frequency to obtain a total of 22,000 data samples.
We use 1,000 vectors for each frequency to train the model, and the remainder for testing.
Our network consists of two layers, which are sufficient for the complexity of this problem.
The first layer consists of $16$ \emph{temporal convolution} filters with a receptive field of $11$.
We apply the \emph{tanh} nonlinearity, and stack a fully connected layer on top of this.
The output layer non-linearity is a \emph{square} function that ensures positive outputs.
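A PyTorch sketch of the data generation and of the model is shown below; the random phase of each sine and the choice of a unit spike at bin $f$ as the target PSD are our assumptions, as the text does not pin these details down.
\begin{verbatim}
import torch, torch.nn as nn

N = 64
def make_data(n_per_freq, sigma2=0.2):
    t = torch.arange(N, dtype=torch.float32)
    xs, ys = [], []
    for f in range(5, 16):
        phase = 2 * torch.pi * torch.rand(n_per_freq, 1)  # assumed random
        x = torch.sin(2 * torch.pi * f * t / N + phase)
        x = x + sigma2 ** 0.5 * torch.randn(n_per_freq, N)  # AWGN
        y = torch.zeros(n_per_freq, N)
        y[:, f] = 1.0              # assumed target: unit spike at bin f
        xs.append(x); ys.append(y)
    return torch.cat(xs), torch.cat(ys)

class PSDNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=11, padding=5)
        self.fc = nn.Linear(16 * N, N)
    def forward(self, x):
        h = torch.tanh(self.conv(x.unsqueeze(1)))
        return self.fc(h.flatten(1)) ** 2   # square keeps outputs positive
\end{verbatim}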
As discussed before, using SGD with the optimal $\EMD$ is tricky as the learning rate needs to be constantly adjusted to ensure convergence.
We thus use the \emph{Adam} optimizer~\cite{ba2015adam}.
The test error for predicting PSD is plotted in \figref{resultsC}.
Firstly, note how using \emph{Adam} helps to achieve convergence for all variants of the $\EMD$ gradient.
Nevertheless, while all gradients are able to train the model, $g\oEMD^2_{L1}$ converges faster than the others.
\begin{figure}[t]
\centering
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=3cm, width=7.4cm,
x filter/.code={\pgfmathparse{#1/10}\pgfmathresult},
scale only axis, ymin=0,ymax=3,xmin=0,xmax=15 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=Test Error (EMD), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=outer north east,font=\scriptsize}]
\input{experimentSinus}
\legend{$g\emph{MSE}$,$g\emph{EMD}$,$g\emph{EMD}_{L1}$,$g\emph{EMD}^2$,$g\emph{EMD}^2_{L1}$}
\end{axis}
\end{tikzpicture}}
\vspace{-0.2cm}
\caption{We train a 2 layer neural network regressor to estimate the PSD of an artificial sinusoid contaminated by noise.
As the PSD is a holistic operation, $\EMD$ performs well on this task.
With enough data, and smarter optimization schemes like Adam, all versions of the gradient are capable of achieving similar errors.
However, $g\emph{EMD}^2_{L1}$ demonstrates faster convergence.}
\label{fig:resultsC}
\vspace{-0.3cm}
\end{figure}
\subsection{Power Spectral Density prediction of Real Signals}
In our final experiment we evaluate the use of $\EMD$ on real-world data.
Our task is to predict the PSD of the breathing rate of a patient from chest excursion signals (a nose thermistor acts as the reference).
Our data is recorded from 75 patients at a sleep laboratory, where 200 clips of 1 minute each are extracted for each patient, giving us a corpus of 15000 samples.
We use 60 patients to train our models, and the remaining 15 for testing.
We use a similar network to the previous experiment, but scale the input and output layers to match input (600) and output (30) dimensions.
In this experiment, noise is negligible if the patient is relaxed.
On the other hand, if the patient moves, sits, or talks during the clip, the correlation between chest movement and respiration drops.
In \figref{resultsD}, we see that \emph{Adam} brings the model to convergence significantly faster and to a better solution when using the $\EMD$ gradients, compared to the $\MSE$.
In fact, the holistic optimization is capable of extracting more information from the source data: when using $\EMD$ gradients, the model converges within a fraction of the first epoch.
This highlights the benefits of using $\EMD$ distances in cases where capturing data is expensive.
\begin{figure}[t]
\centering
\subfloat{
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=3cm, width=7.3cm,
x filter/.code={\pgfmathparse{#1/120}\pgfmathresult},
scale only axis, ymin=0,ymax=10,xmin=0,xmax=10 ,enlargelimits=false, y label style={at={(axis description cs:-0.1,.5)},anchor=south}, ylabel=Test Error (EMD), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend style={legend pos=outer north east,font=\scriptsize}]
\input{experimentPSD}
\legend{$g\emph{MSE}$,$g\emph{EMD}$,$g\emph{EMD}_{L1}$,$g\emph{EMD}^2$,$g\emph{EMD}^2_{L1}$}
\end{axis}
\end{tikzpicture}}
\caption{We train a regressor to estimate the PSD of real breathing signals, using the chest excursion as our source signal.
In this case, we observe that all models using variations of the $\EMD$ gradients are capable of learning the transformation significantly faster than the $\MSE$ (within a fraction of the first epoch).
}
\label{fig:resultsD}
\end{figure}
\section{The Earth Mover's distance}
As discussed earlier, the EMD is generally defined in the discrete domain.
Here, the probability mass, or \emph{dirt}, is distributed in discrete piles or bins, and moving a mound of \emph{dirt} between two bins incurs a non-negative cost linearly proportional to the amount of \emph{dirt} moved.
Within this discrete domain, the general form of the $\EMD$ distance between two distributions $\mathbf{p}, \mathbf{q}\in \mathbb R^N_{+}$ with $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1$ is
\begin{equation}
\label{eq:emd_general}
\EMD(\mathbf{p}, \mathbf{q}) = \inf_{T \in U(\mathbf{p}, \mathbf{q})} \langle M, T \rangle \, ,
\end{equation}
where $\langle \cdot, \cdot \rangle$ is the Frobenius inner product and $M \in \mathbb R^{N \times N}_{+}$ defines the generalized distance between bins.
$U(\mathbf{p}, \mathbf{q})$ is the set of valid transport plans between $\mathbf{p}$ and $\mathbf{q}$, and is defined as
\begin{equation}
\label{eq:transport_domain}
U(\mathbf{p}, \mathbf{q}) = \{ T \in \mathbb R^{N \times N}_{+} : T \mathbf{1}_N = \mathbf{p},\, T^\top \mathbf{1}_N = \mathbf{q} \} \, .
\end{equation}
$\mathbf{1}_N$ is an $N$-dimensional vector of all ones, and $T$ is constrained such that its row sums correspond to the distribution $\mathbf{p}$ and its column sums to $\mathbf{q}$.
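For reference, this formulation can be solved directly with an off-the-shelf linear-programming solver.
A SciPy sketch follows (practical only for modest $N$, since the plan has $N^2$ variables):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def emd_lp(p, q, M):
    N = len(p)
    # T is flattened row-major; constrain row sums to p, column sums to q
    A_eq = np.zeros((2 * N, N * N))
    for i in range(N):
        A_eq[i, i * N:(i + 1) * N] = 1.0   # T 1 = p
        A_eq[N + i, i::N] = 1.0            # T^T 1 = q
    res = linprog(M.reshape(-1), A_eq=A_eq,
                  b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun
\end{verbatim}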
\subsection{Sinkhorn distance}
The general formulation of the $\EMD$ (Eq.~\ref{eq:emd_general}) is solved using linear programming, which is computationally expensive.
However, this problem was greatly alleviated by Cuturi~\cite{cuturi2013sinkhorn}, who suggested a smoothing term for the $\EMD$ of the form
\begin{equation}
\label{eq:sd}
\SD_\gamma(\mathbf{p}, \mathbf{q}) = \inf_{T \in U(\mathbf{p}, \mathbf{q})} \langle M, T \rangle + \gamma\langle T, \log T \rangle \, ,
\end{equation}
which allows the use of the Sinkhorn-Knopp algorithm~\cite{sinkhorn1967diagonal} to obtain an iterative and efficient solution.
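The resulting scaling iteration takes only a few lines; the values of $\gamma$ and the iteration count below are illustrative:
\begin{verbatim}
import numpy as np

def sinkhorn(p, q, M, gamma=0.1, iters=200):
    K = np.exp(-M / gamma)            # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(iters):            # alternate row/column scalings
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(T * M)
\end{verbatim}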
\subsection{Earth Mover's Distance for one dimensional distributions}
We analyze a version of the problem where the distribution bins are placed in a one dimensional space in such a way that
traveling between two bins requires an ordered visit to every bin between the source and the target (see \figref{onedim} (right)).
In this case the distance between bins $M$ is defined recursively as
\begin{equation}
M_{i,j} =
\begin{cases}
0 &\text{if } i = j \, ,\\
M_{i-1,j} + d(i-1,i) &\text{if } i > j \, ,\\
M_{j,i} &\text{if } i < j \, .
\end{cases}
\end{equation}
Here, $d(i-1, i)$ is the distance between two consecutive bins (typically $1$), and the recursive definition ensures that only consecutive bin distances are considered.
The above bin distances facilitate a simple solution to the $\EMD$ computation using a recursion.
Each bin either receives all the excess \emph{dirt} that results from leveling previous bins, or in case of a deficit, tries to provide for it.
Note that the cost of going left-to-right $M_{ij}$ or right-to-left $M_{ji}$ is symmetric, and the magnitude of the excess or deficit helps compute the distance:
\begin{equation}
\label{eq:emd}
\oEMD(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{N-1} M_{i,i+1} \cdot | \varphi_i |,
\text{ where }
\varphi_i =
\sum_{j=1}^i \left(
\frac{p_j}{\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1} - \frac{q_j}{\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1}
\right) \, .
\end{equation}
For notational brevity, we will refer to $M_{i,i+1}$ as $\hat{M}_i$.
The above formulation is rewritten with the sign function as
\begin{equation}
\oEMD(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn(\varphi_i) \cdot \varphi_i \, .
\end{equation}
Note that as both distributions have the same amount of total mass, and we progressively level out the \emph{dirt} over all bins, $\varphi_N = 0$, allowing us to compute the outer sum only up to $N-1$.
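In code, the entire computation reduces to a cumulative sum over the normalized difference of the two distributions; a NumPy sketch:
\begin{verbatim}
import numpy as np

def emd_1d(p, q, M_hat=None):
    phi = np.cumsum(p / p.sum() - q / q.sum())[:-1]  # varphi_1..varphi_{N-1}
    if M_hat is None:
        M_hat = np.ones_like(phi)    # unit distance between adjacent bins
    return np.sum(M_hat * np.abs(phi))
\end{verbatim}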
\subsection{Gradient of the Earth Mover's Distance}
In a learning framework, model parameter updates are often performed using the gradient or Hessian.
To integrate $\EMD$ as a loss, we now compute the analytical form of the gradient.
Let $\mathbf{e}_k$ be a unit vector of length $N$ whose value at dimension $k$ is 1, and 0 elsewhere.
For a small perturbation $h$, we compute the distance between the modified distribution $\mathbf{p} + h\mathbf{e}_k$ and $\mathbf{q}$ as:
\begin{equation}
\oEMD(\mathbf{p} + h\mathbf{e}_k, \mathbf{q}) \simeq
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn(\varphi_i)
\sum_{j=1}^i \left( \frac{p_j + h \delta_{jk}}{\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 + h} - \frac{q_j}{\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1} \right) \, ,
\end{equation}
where $\delta_{jk} = 1$ when $j = k$, and $0$ otherwise.
Note that by choosing $h$ small enough, $\sgn(\varphi_i)$ can be assumed to remain unchanged.
We now compute the partial derivative for $\EMD$.
As we define $\EMD$ on distributions with equal total mass (Eq.~\ref{eq:emd}), without loss of generality,
we assume $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1 = 1$, and operate on unit $\ell_1$ norm vectors $\hat{\mathbf{p}}$ and $\hat{\mathbf{q}}$.
\begin{eqnarray}
\label{eq:grad_emd}
g\oEMD
& = & \frac{\partial \oEMD( \hat{\mathbf{p}}, \hat{\mathbf{q}} )} {\partial p_k}
=
\lim_{h \rightarrow 0} \,\, \frac{1}{h}
\left( \EMD(\hat{\mathbf{p}} + h \mathbf{e}_k, \hat{\mathbf{q}}) - \EMD(\hat{\mathbf{p}},\hat{\mathbf{q}}) \right) \\
& \simeq &
\lim_{h \rightarrow 0} \,\,\frac{1}{h}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i)
\left(
\sum_{j=1}^i \left( \frac{\hat{p}_j + h \delta_{jk}}{1 + h} - {\hat{q}_j} \right) -
\sum_{j=1}^i \left( \hat{p}_j - \hat{q}_j \right)
\right) \nonumber \\
& = &
\lim_{h \rightarrow 0} \,\,\frac{1}{h}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i)
\sum_{j=1}^i \frac{h \delta_{jk} - h \hat{p}_j}{1 + h} \nonumber \\
& = &
\label{eq:gemd}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i) \sum_{j=1}^i \left( {\delta_{jk} - \hat{p}_j} \right) \, .
\end{eqnarray}
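Eq.~\ref{eq:gemd} vectorizes directly: the inner sum $\sum_{j\le i}(\delta_{jk}-\hat{p}_j)$ splits into an indicator term (nonzero only when $k \le i$) and a cumulative sum of $\hat{\mathbf{p}}$.
A NumPy sketch:
\begin{verbatim}
import numpy as np

def g_emd(p_hat, q_hat, M_hat):
    phi = np.cumsum(p_hat - q_hat)[:-1]    # varphi_1..varphi_{N-1}
    s = M_hat * np.sign(phi)
    P = np.cumsum(p_hat)[:-1]              # sum_{j<=i} p_hat_j
    # tail[k] = sum over i >= k of s_i (the delta_{jk} contribution)
    tail = np.append(np.cumsum(s[::-1])[::-1], 0.0)
    return tail - np.sum(s * P)            # length-N gradient
\end{verbatim}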
\subsection{$\ell_1$ preserving gradient}
The above gradient disobeys the law of \emph{dirt} conservation: it creates new or destroys existing \emph{dirt}, changing the total mass of the distributions.
As the gradient does not sum to 0, it is unsuitable for applications%
\footnote{Symbolic differentiation performed by Theano or TensorFlow produces non-$\ell_1$ preserving gradients that are unsuitable for optimization.}.
To solve this problem, and in contrast to $\mathbf{e}_k$, we redefine our unit vector such that its sum is 0.
We propose a set of vectors $\tilde{\mathbf{e}} \in \{-1, N-1\}^N$, where $\tilde{\mathbf{e}}_k$ takes the value $N-1$ at dimension $k$, and $-1$ elsewhere.
The partial derivatives for such a setting are
\begin{eqnarray}
\label{eq:gradZeroSum_emd}
g\oEMD_{L1}
& = & \frac{\partial \oEMD_{L1}( \hat{\mathbf{p}}, \hat{\mathbf{q}} )} {\partial p_k}
=
\lim_{h \rightarrow 0} \,\, \frac{1}{h}
\left( \EMD(\hat{\mathbf{p}} + h \tilde{\mathbf{e}}_k, \hat{\mathbf{q}}) - \EMD(\hat{\mathbf{p}}, \hat{\mathbf{q}}) \right) \\
& \simeq &
\lim_{h \rightarrow 0} \,\, \frac{1}{h}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i)
\left(
\sum_{j=1}^i \left( \hat{p}_j + h (N \delta_{jk} - 1) - \hat{q}_j \right) -
\sum_{j=1}^i \left( \hat{p}_j - \hat{q}_j \right)
\right) \nonumber \\
& = &
\lim_{h \rightarrow 0} \,\, \frac{1}{h}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i)
\sum_{j=1}^i h \left(N \delta_{jk} - 1 \right) \nonumber \\
& = &
\label{eq:gemd_l1}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i) \cdot (N \cdot H(i-k) - i) \, ,
\end{eqnarray}
where $H: \mathbb{R} \rightarrow \{0, 1\}$ is the Heaviside step function defined as $H(n) = \{1 \text{ if } n \geq 0, 0 \text{ elsewhere}\}.$
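The same vectorization applies to Eq.~\ref{eq:gemd_l1}; by construction the returned vector sums to zero, so the update conserves \emph{dirt}:
\begin{verbatim}
import numpy as np

def g_emd_l1(p_hat, q_hat, M_hat):
    N = len(p_hat)
    phi = np.cumsum(p_hat - q_hat)[:-1]
    s = M_hat * np.sign(phi)                         # i = 1..N-1
    tail = np.append(np.cumsum(s[::-1])[::-1], 0.0)  # sum_{i>=k} s_i
    return N * tail - np.sum(np.arange(1, N) * s)
\end{verbatim}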
\subsection{Relaxed Earth Mover's Distance}
As compared to the original $\EMD$ gradient (Eq.~\ref{eq:gemd}), the $\ell_1$ preserving gradient (Eq.~\ref{eq:gemd_l1}) is numerically stable and avoids erosion or addition of new dirt.
This is an important step forward to use $\EMD$ in learning frameworks.
However, as the distance uses absolute values ($|\varphi_i|$), we observe properties similar to those of $\ell_1$ optimization.
In particular, when $\hat{M}_i \in \mathbb{N}$, it is easy to see that the terms of $g\oEMD_{L1}$ are integer valued.
Furthermore, as small changes to the distribution do not change the gradient, the Hessian is zero except at a discrete set of points, making optimization a hard process.
To solve these issues, we suggest a relaxed form of the $\EMD$ where the cost is calculated proportional to a certain power of the excess/deficit of \emph{dirt}:
\begin{equation}
\label{eq:emd_rho}
\oEMD^\rho(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{N-1} \hat{M}_i \cdot | \varphi_i |^\rho \, .
\end{equation}
In particular, we explore the case $\rho=2$, which bears similarity with the popular Mean Squared Error loss.
The standard and $\ell_1$ preserving gradients for $\oEMD^2$ are:
\begin{eqnarray}
\label{eq:g_emd2}
g\oEMD^2
= \frac{\partial \oEMD^2( \hat{\mathbf{p}}, \hat{\mathbf{q}} )} {\partial p_k}
& = & 2 \sum_{i=1}^{N-1} \hat{M}_i \cdot \varphi_i
\sum_{j=1}^i \left( {\delta_{jk} - \hat{p}_j} \right) \, , \\
\label{eq:g_emd2_l1}
g\oEMD^2_{L1}
= \frac{\partial \oEMD^2_{L1}( \hat{\mathbf{p}}, \hat{\mathbf{q}} )} {\partial p_k}
& = & 2 \sum_{i=1}^{N-1} \hat{M}_i \cdot \varphi_i
\cdot (N \cdot H(i-k) - i) \, .
\end{eqnarray}
We see that $g\oEMD^2_{L1}$ preserves the nice property of conserving dirt, while having real-valued gradients.
In addition, $\EMD^2$ also exhibits a non-zero Hessian:
\begin{eqnarray}
\label{eq:hessian_emd2_l1}
\frac{\partial^2 \oEMD^2_{L1}( \hat{\mathbf{p}}, \hat{\mathbf{q}} )} {\partial p_k \partial p_l}
& = & 2 \sum_{i=1}^{N-1} \hat{M}_i \cdot (N \cdot H(i-l) - i) \cdot (N \cdot H(i-k) - i) \, .
\end{eqnarray}
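Implementing Eq.~\ref{eq:g_emd2_l1} requires only replacing $\sgn(\varphi_i)$ by $2\varphi_i$ in the previous sketch:
\begin{verbatim}
import numpy as np

def g_emd2_l1(p_hat, q_hat, M_hat):
    N = len(p_hat)
    phi = np.cumsum(p_hat - q_hat)[:-1]
    w = 2.0 * M_hat * phi                            # 2 M_i varphi_i
    tail = np.append(np.cumsum(w[::-1])[::-1], 0.0)  # sum_{i>=k} w_i
    return N * tail - np.sum(np.arange(1, N) * w)    # sums to zero
\end{verbatim}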
\section{Introduction}
The Wasserstein metric is a natural measure for probability distributions based on the optimal transport problem.
Spectral analysis is performed in several domains of physiological monitoring (\eg~respiratory analysis~\cite{martinezbreath}, EEG~\cite{Sano2014}, ECG~\cite{clifford2006advanced}).
Regression models in the spectral domain enable several applications, often through the use of Power Spectral Density (PSD).
Within machine learning frameworks, the PSD is commonly treated as a probability distribution and learned using the Kullback-Leibler (KL) divergence.
However, KL compares each bin independently.
The Earth Mover's Distance (EMD) is a natural metric to compare distributions, but has seen limited use due to its computational cost.
Nevertheless, for one dimensional distributions (\eg~PSD) the EMD can be computed efficiently, and we derive a closed-form solution for its gradient.
We enforce the gradient to preserve the $\ell_1$ norm of the original distribution.
We evaluate on a dataset of 81 sleep laboratory patients, predicting breathing rate, and compare EMD as a loss against the KL divergence and Mean Squared Error.
\section{Related Work}
Recently there have been efforts to integrate EMD as a loss criterion for deep learning~\cite{solomon2014wasserstein,frogner2015learning}.
However, as compared to other criteria such as Mean Squared Error (MSE) or KL divergence, the perceived inefficiency of EMD computation has hindered progress.
In the general case, calculating the EMD requires solving the optimal transport problem that turns the source distribution into the target one.
As this calculation is expensive, much effort has been invested in relaxed definitions of the EMD that allow more efficient computation~\cite{cuturi2015smoothed,shirdhonkar2008approximate,pele2009fast,cuturi2013sinkhorn,cuturi2013fast}.
Recently, Frogner~\etal~\cite{frogner2015learning} suggested a method to incorporate EMD in a deep learning framework using the entropic regularization proposed by Cuturi~\cite{cuturi2013sinkhorn}.
We consider this approach to computing EMD unnecessarily complex for the common case of one-dimensional distributions, for which there exists a closed-form solution.
\section{Convergence behavior of the Earth Mover's Distance}
Within gradient descent methods, the Earth Mover's Distance has a distinct behavior that needs to be well understood in order to use it correctly.
\noindent\textbf{Strong initial gradient magnitude.}
Compared to common non-space-aware losses, the EMD shows huge variations in gradient magnitude, depending on the dimensionality of the space (\ie moving dirt from the first dimension of the descriptor to the last one).
This usually implies that EMD provides a fast convergence rate at the beginning of the optimization process, and more care than usual must be taken in choosing the right learning rate in order to avoid divergence.
\noindent\textbf{Weak late gradient magnitude.}
Conversely, once the \emph{dirt} is close to its final destination, convergence turns into a crawl: moving \emph{dirt} becomes cheap, and thus the gradient flattens.
We suggest always combining EMD with other losses, especially Mean Squared Error, in order to obtain a steadier learning rate.
\section{Synergy with Mean Squared Error loss}
As we discussed in the previous section, we suggest using EMD together with some other loss in order to obtain a smoother behavior during training.
Frogner~\etal~\cite{frogner2015learning} suggested combining the relaxed EMD~\equref{relaxedEMD} with the Kullback-Leibler (KL) divergence as a way to extend their domain to non-normalized distributions.
While we also suggest \textbf{always} combining EMD with a second metric, we do it from the opposite point of view:
choose a main metric that has good convergence properties, and combine it with the EMD, which acts as a smoothness guard (\ie not interfering with the training unless non-smoothness is detected).
Our goal is to achieve fast training at the beginning when the output is not smooth, thanks to EMD, and good finesse thanks to the main metric.
We have several options when choosing the best metric to use with EMD.
The KL divergence is clearly the best match for the relaxed EMD, as its gradient has an entropic regularization term in the logarithmic space.
Likewise, the Mean Squared Error (MSE) is the best metric for our $\oEMD^2_{L1}$, as our transport relaxation has the same structure.
Furthermore, $g\emph{MSE}(\mathbf{p}, \mathbf{q}) = 2(\mathbf{p}-\mathbf{q})$; as the $\ell_1$ norms of $\mathbf{p}$ and $\mathbf{q}$ are the same, $\Sigma^N g\emph{MSE}(\mathbf{p}, \mathbf{q})=0$, and the gradient is thus $\ell_1$ preserving.
This means that, unlike~\cite{frogner2015learning}, we can use $\oEMD^2_{L1}$ + MSE both in normalized and unnormalized scenarios without altering the criteria.
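Combining the two criteria then amounts to a weighted sum of their gradients; the weights shown are the ones given in the implementation details below, and \texttt{g\_emd2\_l1} refers to the sketch given earlier:
\begin{verbatim}
def g_combined(p_hat, q_hat, M_hat, w_emd=0.9, w_mse=0.1):
    g_mse = 2.0 * (p_hat - q_hat)   # ell_1 preserving when norms match
    return w_emd * g_emd2_l1(p_hat, q_hat, M_hat) + w_mse * g_mse
\end{verbatim}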
\section{Using EMD for distributions whose $\ell_1$ norm is not 1}
The original EMD is not defined for $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 \ne \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1$.
Although there are several ways to modify the EMD for unnormalized distributions~\cite{pele2009fast,chizat2015unbalanced,frogner2015learning} we consider that this goes against the spirit of the metric.
Therefore, prior to computing the EMD, we $\ell_1$-normalize the input distributions.
\subsection{Implementation details}
\vspace{1mm}
\noindent\textbf{Absolute magnitude.}
The original EMD is not defined for $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 \ne \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1$.
Although there are several ways to modify the EMD for unnormalized distributions~\cite{pele2009fast,chizat2015unbalanced}, we consider that this goes against the spirit of the metric.
Therefore, prior to computing the EMD, we $\ell_1$-normalize the input distributions (see Eq.~\eqref{eq:emd}).
Such a normalization can lead to a mismatch between the absolute magnitudes, which may result in numerical instability during optimization.
We address this by defining our loss as the weighted combination of EMD and MSE criterion ($w_{\text{EMD}} = 0.9, w_{\text{MSE}} = 0.1$).
\vspace{1mm}
\noindent\textbf{Non-negative distributions.}
We square the output of the last layer of our model to ensure non-negative values for the distribution.
We train a two layer Multilayer Perceptron regressor to predict the breathing signal PSD from the chest excursion of 81 sleep laboratory patients with different degrees of apnoea, and use the PSD from a nose thermistor as reference.
As we see in~\figref{results}, the PSD obtained using $g\EMD_{\text{L1}}$ loss converges faster and provides a better estimation of the breathing rate (the largest peak of the PSD).
We also see that the gradient sums to 0 for $g\EMD_{\text{L1}}$.
\section{$\EMD$ in chain-connected spaces}
\label{oneDimLoss}
We analyze the scenario where the bins of a distribution (\eg~histograms, probabilities) are situated in a one-dimensional space.
Here, moving \emph{dirt} from a source to target bin requires an ordered visit to every bin in between (see \figref{chain}).
The bin distance $M$ can be defined recursively to ensure that only consecutive bin distances are considered.
\begin{equation}
M_{i,j} =
\begin{cases}
0 &\text{if } i = j \, ,\\
M_{i-1,j} + d_{i-1,i} &\text{if } i > j \, ,\\
M_{j,i} &\text{if } i < j \, .
\end{cases}
\end{equation}
Here, $d_{i-1, i}$ is the distance between two consecutive bins (typically all equal to $1$).
The above choice of bin distances facilitates a simple solution to calculate the $\EMD$ using a recursion.
Essentially, each bin either receives all the excess \emph{dirt} that results from leveling previous bins, or in case of a deficit, tries to provide for it.
Note that the cost of going left-to-right ($M_{i,j}$) or right-to-left ($M_{j,i}$) is symmetric.
The closed form recursive formulation for the $\EMD$ between two one-dimensional distributions is:
\begin{equation}
\label{eq:emd}
\oEMD(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{N-1} M_{i,i+1} \cdot | \varphi_i | \, ,
\end{equation}
where $\varphi_i$ represents the excess \emph{dirt} that needs to be moved ahead or deficit in \emph{dirt} that needs to be filled up to bin $i$.
\begin{equation}
\varphi_i =
\sum_{j=1}^i \left(
p_j - q_j
\right) \, .
\end{equation}
For notational brevity, we will refer to $M_{i,i+1}$ as $\hat{M}_i$.
The above expression can be rewritten with the sign function as
\begin{equation}
\oEMD(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn(\varphi_i) \cdot \varphi_i \, .
\end{equation}
Note that as both distributions have the same amount of total mass, and we progressively level out the \emph{dirt} over all bins, when we arrive to the last bin all \emph{dirt} will have been leveled (\ie~$\varphi_N = 0$).
Therefore, we compute the outer sum only up to $N-1$.
\subsection{Gradient of the Earth Mover's Distance}
To integrate $\EMD$ as a loss function in an iterative gradient-based optimization approach, we need to compute the analytical form of the gradient.
However, we must ensure that the gradient obeys the law of \emph{dirt} conservation.
The gradient should neither create new nor destroy existing \emph{dirt}, which would change the total mass of the distributions (leaving $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 \neq 1$ after updates).
We use the trick of projected gradients, and define $\mathbf{e}_k$ as a vector of length $N$ whose value at entry $k$ is $1-1/N$, and $-1/N$ elsewhere.
Note that $\mathbf{e}_k$ sums to $0$.
For a small value $h$, we compute the distance between the perturbed distribution $\mathbf{p} + h\mathbf{e}_k$ and $\mathbf{q}$ as:
\begin{equation}
\begin{split}
& \oEMD(\mathbf{p} + h\mathbf{e}_k, \mathbf{q}) \simeq \\
& \sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn(\varphi_i)
\sum_{j=1}^i \left( p_j + h(\delta_{jk} - 1/N) - q_j \right) \, ,
\end{split}
\end{equation}
where $\delta_{jk} = 1$ iff $j = k$.
Note that by choosing $h$ small enough, $\sgn(\varphi_i)$ can be assumed to remain unchanged.
The corresponding partial derivative for the $\oEMD$ is:
\begin{eqnarray}
\label{eq:gradZeroSum_emd}
\frac{\partial \oEMD( {\mathbf{p}}, {\mathbf{q}} )} {\partial p_k} \simeq
\label{eq:gemd_l1}
\sum_{i=1}^{N-1} \hat{M}_i \cdot \sgn (\varphi_i) \sum_{j=1}^i \left(\delta_{jk} - 1/N \right) \, .
\end{eqnarray}
\subsection{Relaxed Earth Mover's Distance}
\label{subsec:relax_emd}
The proposed gradient (Eq.~\ref{eq:gemd_l1}) is numerically stable and avoids erosion or addition of new dirt.
This is an important step forward to use $\EMD$ in learning frameworks.
However, as the distance contains an absolute value function ($|\varphi_i|$), we observe difficulties in converging to a solution, similar to $\ell_1$ optimization.
In particular, when $\hat{M}_i \in \mathbb{N}$, it is easy to see that the terms of $\gEMD$ are integer multiples of $1/N$ (see \figref{main}, multiples of 0.25).
Furthermore, as small changes to the distribution do not change the gradient, the Hessian is zero except at a discrete set of points (where the sign of $\varphi_i$ changes), making optimization hard.
To solve these issues, we suggest a relaxed form of the $\EMD$ where the cost is calculated proportional to a power of the excess/deficit of \emph{dirt}:
\begin{equation}
\label{eq:emd_rho}
\oEMD^\rho(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{N-1} \hat{M}_i \cdot | \varphi_i |^\rho \, ,
\end{equation}
whose gradient is:
\begin{eqnarray}
\label{eq:g_emd2_l1}
\nabla\oEMD^\rho \simeq \rho \sum_{i=1}^{N-1} \hat{M}_i \cdot \varphi_i \cdot | \varphi_i |^{\rho-2}
\sum_{j=1}^i \left( {\delta_{jk} - 1/N} \right) \, .
\end{eqnarray}
For $\rho=1$ we recover the normal $\EMD$ distance, which behaves like $\ell_1$.
During gradient descent we suggest using the case $\rho=2$, which bears similarity with the popular Mean Squared Error loss.
$\EMD^2$ preserves the nice property of conserving dirt, while having real-valued gradients (see an example in \figref{main}).
In addition, $\oEMD^2$ also exhibits a non-zero Hessian:
\begin{equation}
\label{eq:hessian_emd2_l1}
\begin{split}
\frac{\partial^2 \oEMD^2( {\mathbf{p}}, {\mathbf{q}} )} {\partial p_k \partial p_l} = 2 \sum_{i=1}^{N-1} \hat{M}_i & \cdot (N \cdot H(i-l) - i) \\
& \cdot (N \cdot H(i-k) - i) \, ,
\end{split}
\end{equation}
where $H: \mathbb{R} \rightarrow \{0, 1\}$ is the Heaviside step function defined as $H(n) = \{1 \text{ if } n \geq 0, 0 \text{ otherwise}\}.$
\subsection{Discussion: comparing $\EMD$, $\SD$ and $\MSE$}
Plotting the gradients provides a good impression of the actual behavior of the $\EMD$.
It also shows how $\EMD$ serves as a criterion that provides holistic optimization over the output space (full distribution).
In \figref{resultsB} we show the gradients corresponding to several different loss criteria for the transformation between two unit-norm distributions: a smooth one $\mathbf{p}$, and a spiky one $\mathbf{q}$.
We present the gradient of $\MSE$ in \figref{resultsB} (a).
Note how $\MSE$ optimizes each bin independently, and results in a non-smooth gradient.
In \figref{resultsB} (b) we show the gradients for $\EMD$ and $\EMD^2$.
In both cases the gradient is holistic and affects the whole output space.
Furthermore, the regularization effect induced by $\EMD^2$ results in a smoother gradient.
In \figref{resultsB} (c) we show the $\SD$ gradient for $\lambda$ values of 0.5, 1 and 10.
In all cases the gradients are also holistic.
We see that larger values of $\lambda$ produce a gradient resembling that of the true $\EMD$; however, this comes with its own set of problems, discussed earlier.
\begin{figure*}[ht!]
\centering
\subfloat[Mean Squared Error]{\includegraphics[width=5.5cm]{figures/gMSE.eps}}\quad
\subfloat[Earth Mover's Distance]{\includegraphics[width=5.5cm]{figures/gEMD.eps}}\quad
\subfloat[Sinkhorn Distance]{\includegraphics[width=5.5cm]{figures/gSD.eps}}
\caption{Gradient flow to convert a source distribution $\mathbf{p}$ to a target $\mathbf{q}$.
(a) $\MSE$ gradient is not smooth, and does not affect the whole output space.
(b) $\EMD$ and $\EMD^2$ gradients affect the whole space. $\EMD^2$ has $\ell_2$ behavior, while $\EMD$ behaves like $\ell_1$.
(c) $\SD$ gradients also affect the whole space but the regularized versions are asymmetric. $\SD_{\lambda = 10}$ approximates $\EMD$ well.}
\label{fig:resultsB}
\end{figure*}
\section{$\tEMD^\rho$ does not depend on selecting a particular node as a root}
\newtheorem{theorem}{Proposition}
A tree-structured graph does not, in general, define a parent-child relationship between its nodes.
As the definition of $\tEMD^\rho$ is based on the parent-child relationship, there are multiple ways in which $\tEMD^\rho$ can be calculated for a given tree.
However, we prove that $\tEMD^\rho$, and by extension its gradient, is uniquely defined for a given tree.
\begin{theorem}
Let $A$ and $B$ be two identical undirected trees with different nodes selected as root.
Then, the $\tEMD^\rho$ between two distributions $\mathbf{p}$ and $\mathbf{q}$, is identical:
\[\tEMD^\rho_{A}(\mathbf{p},\mathbf{q})=\tEMD^\rho_{B}(\mathbf{p},\mathbf{q}).\]
\end{theorem}
\begin{proof}
Let $\mathbf{p}$ and $\mathbf{q}$ be two distributions with $\@ifstar{\oldnorm}{\oldnorm*}{\mathbf{p}}_1 = \@ifstar{\oldnorm}{\oldnorm*}{\mathbf{q}}_1 = 1$.
Let $A$ and $B$ be two undirected trees with nodes~$G$ and transportation cost function~$M:G \rightarrow \mathbb{R}^+$.
Let $n_0$ be the root node of $A$ and $n_1$ be the root node of $B$.
We prove that our proposition holds for the case where $n_0$ is adjacent to $n_1$; by induction, it then follows that the proposition holds for any two root nodes.
\begin{figure}[h]
\centering
\subfloat[$A$]{\includegraphics[width=3cm]{figures/t0-n0-crop.pdf}}\hspace{1cm}
\subfloat[$B$]{\includegraphics[width=3cm]{figures/t1-n1-crop.pdf}}
\caption{Identical trees $A$ and $B$ with different root nodes.
$S_0$ and $S_1$ represent the collections of nodes and corresponding subtrees connected to $n_0$ and $n_1$ respectively.}
\label{fig:proof_trees}
\end{figure}
We first simplify the tree structure~(see \figref{proof_trees}).
Without any loss of generality, we group all adjacent nodes of $n_0$ except $n_1$ as a virtual subtree $S_0$ with $M_{S_0,n_0}$ as the virtual transportation cost function from $S_0$ to $n_0$.
Similarly, we group the adjacent nodes of $n_1$ except $n_0$ as $S_1$.
In this setting we have:
\begin{eqnarray}
\tEMD_{A}^\rho(\mathbf{p}, \mathbf{q}) &=& M_{S_0,n_0} | \tilde{\varphi}^{A}_{S_0} |^\rho + M_{S_1,n_1} | \tilde{\varphi}^{A}_{S_1} |^\rho + M_{n_1,n_0} | \tilde{\varphi}^{A}_{n_1} |^\rho \, , \text{and} \\
\tEMD_{B}^\rho(\mathbf{p}, \mathbf{q}) &=& M_{S_0,n_0} | \tilde{\varphi}^{B}_{S_0} |^\rho + M_{S_1,n_1} | \tilde{\varphi}^{B}_{S_1} |^\rho + M_{n_0,n_1} | \tilde{\varphi}^{B}_{n_0} |^\rho \, .
\end{eqnarray}
As $S_0$ and $S_1$ are identical in both trees, $\tilde{\varphi}^{A}_{S_0}=\tilde{\varphi}^{B}_{S_0}$ and $\tilde{\varphi}^{A}_{S_1}=\tilde{\varphi}^{B}_{S_1}$.
Therefore, the difference of $\tEMD^\rho$ between $A$ and $B$ is:
\begin{equation}
\tEMD_{A}^\rho(\mathbf{p}, \mathbf{q}) - \tEMD_{B}^\rho(\mathbf{p}, \mathbf{q}) =
M_{n_1,n_0} | \tilde{\varphi}^{A}_{n_1} |^\rho -
M_{n_0,n_1} | \tilde{\varphi}^{B}_{n_0} |^\rho
\, .
\end{equation}
Furthermore, $M_{n_1,n_0}$ and $M_{n_0,n_1}$ refer to the same edge, and as the graph is undirected, they are equal.
Thus, to prove $\tEMD_{A}^\rho(\mathbf{p}, \mathbf{q}) = \tEMD_{B}^\rho(\mathbf{p}, \mathbf{q})$ we only need to show that $| \tilde{\varphi}^{A}_{n_1} |^\rho = | \tilde{\varphi}^{B}_{n_0} |^\rho$.
If $n_0$ is a leaf node of the tree, then $\tilde{\varphi}^{B}_{n_0} = p_0 - q_0$ and $S_0$ is empty.
On the other hand, if $n_0$ is an intermediate node, then $\tilde{\varphi}^{B}_{n_0} = \tilde{\varphi}^{B}_{S_0}$ as $\mathbf{p}$ and $\mathbf{q}$ are not defined on non-leaf nodes.
To cope with both cases, we define $\tilde{\varphi}^{B}_{n_0}$ as follows:
\begin{eqnarray}
\tilde{\varphi}^{B}_{n_0} &=& p_0 - q_0 + \tilde{\varphi}^{B}_{S_0} \, ,
\end{eqnarray}
and equivalently
\begin{eqnarray}
\tilde{\varphi}^{A}_{n_0} &=& p_0 - q_0 + \tilde{\varphi}^{A}_{S_0} + \tilde{\varphi}^{A}_{n_1} \, .
\end{eqnarray}
As before, since the trees are identical, $\tilde{\varphi}^{A}_{S_0} = \tilde{\varphi}^{B}_{S_0}$.
This gives
\begin{eqnarray}
\tilde{\varphi}^{A}_{n_0} &=& \tilde{\varphi}^{B}_{n_0} + \tilde{\varphi}^{A}_{n_1} \, .
\label{eq:good}
\end{eqnarray}
Finally, as $\mathbf{p}, \mathbf{q} \in \mathbb{R}^N_{+}$ have the same $\ell_1$ norm, we know that $\tilde{\varphi}$ is zero at any root node (as $\tilde{\varphi}$ there is the sum of the differences over all leaves, and the total amount of \emph{dirt} is the same).
In this case, since $n_0$ is the root of $A$, $\tilde{\varphi}^{A}_{n_0} = 0$.
Applied to \equref{good}, we show that $\tilde{\varphi}^{A}_{n_1} = -\tilde{\varphi}^{B}_{n_0}$, and $| \tilde{\varphi}^{A}_{n_1} |^\rho = | \tilde{\varphi}^{B}_{n_0} |^\rho$ follows.
Thus, $\tEMD^\rho_{A}(\mathbf{p},\mathbf{q})=\tEMD^\rho_{B}(\mathbf{p},\mathbf{q})$.
\end{proof}
\section{$\EMD$ as pre-training on ImageNet}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\begin{axis}[ height=3cm, width=6cm,
y filter/.code={\pgfmathparse{100*(1-#1)}\pgfmathresult},
scale only axis, ymin=0,ymax=20,xmin=2,xmax=200 ,enlargelimits=false, y label style={at={(axis description cs:-0.075,.5)},anchor=south}, ylabel=Top-1 Accuracy (\%), x label style={at={(axis description cs:0.5,-0.1)},anchor=north}, xlabel=Epoch, legend cell align=left, legend pos=outer north east, legend style={font=\tiny}]
\addplot+[olive, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/CE_0_LR005_ErrorRate1.log};
\addplot+[teal, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/EMD2_0_ErrorRate1.log};
\addplot+[blue, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/SMALL_EMD1.log};
\addplot+[red, line width=1pt, mark=none, line join=round] table[x expr=\coordindex+2, y index=1] {results/SMALL_EMD_TO_CE1.log};
\legend{$\CE$,$0.5 CE $+$ 0.5 \EMD^2$,$\EMD^2$,$\EMD^2$ until epoch 135 and then $\CE$}
\end{axis}
\end{tikzpicture}
\caption{
Top-1 accuracy on ImageNet using 50K images for training using different loss functions.
The $\CE$ loss strongly favors Top-1 accuracy, therefore it trains fast with regards to this metric.
Conversely, $\EMD^2$ tries to optimize the entire output space resulting in a slower convergence rate.
The combined $0.5 \CE + 0.5 \EMD^2$ blends both behaviors, achieving fast convergence and better accuracy.
However, if we give enough time to let $\EMD^2$ embed the output hierarchy in the network, and then further train the network using the $\CE$ loss we achieve the best results on this metric.
}
\label{fig:pretraining}
\end{figure}
In the main paper (Sec. 5.3), we report the results of training on the reduced ImageNet (50K images) dataset using the CrossEntropy~($\CE$) loss alone, the regularized Earth Mover's Distance~($\EMD^2$) loss alone, and an equally weighted combination of the two.
The combined loss $0.5 \CE + 0.5 \EMD^2$ outperforms both individual losses, and also shows fast convergence.
However, as the output space is very large, we think that $\EMD^2$ is not given the opportunity to embed the output space hierarchy in the model when used in combination with $\CE$.
To test this hypothesis, we perform the following experiment.
We train using the $\EMD^2$ loss until Top-1 accuracy stops improving.
This acts as a pre-training step for the network.
In our experiment this happens at epoch 135.
We then switch the loss function to $\CE$ and employ a very small learning rate (0.001) that only refines the learned parameters of the network.
As $\CE$ strongly favors Top-1 accuracy, it learns on top of the previous network to achieve a Top-1 accuracy of $14.97\%$ (compared to $6.34\%$ for $\CE$, $8.20\%$ for $0.5 \CE + 0.5 \EMD^2$, and $7.66\%$ for $\EMD^2$).
\figref{pretraining} shows the performance for the proposed experiment.
\section{$\EMD$ in tree-connected spaces}
\label{treesection}
We demonstrate here how $\EMD$ can be used to model output spaces with a tree structure.
Our formulation expects that all observed bins correspond to the leaves of the tree, while the remaining latent nodes have no \emph{dirt}.
As we can link a tree to any non-leaf node with a zero-cost connection, this formulation allows us to express any tree structure.
We refer to this analysis as the Hierarchical Earth Mover's Distance ($\tEMD$) (see \figref{main}).
Note that, this is still compatible with all our previous developments (as chains are a sub-class of trees).
As such, we do not distinguish between $\EMD$ and $\tEMD$ while presenting the evaluation.
We define $\tEMD^\rho$ as:
\begin{equation}
\label{eq:smd}
\tEMD^\rho(\mathbf{p}, \mathbf{q}) = \sum_{i \in G} M_{i,\mathbf{p}(i)} \cdot | \tilde{\varphi}_i |^\rho,
\text{ where }
\end{equation}
\begin{equation}
\tilde{\varphi}_i =
\sum_{j \in \mathbf{l}(i)} \left(
p_j - q_j
\right) \, ,
\end{equation}
where $M_{i,\mathbf{p}(i)}$ is the cost of transporting \emph{dirt} from node $i$ to its parent (abbreviated as $\tilde{M}_i$),
$G$ is the set of all nodes in the tree,
and $\mathbf{l}(i)$ is the set of all leaves in the subtree that has $i$ as a root.
$N$ is the total number of leaves (and bins) in the tree.
The intuition behind this formula is that we can reduce the tree one leaf at a time, as if it were the tail of a chain.
Then the gradient of $\tEMD$ is defined as:
\begin{eqnarray}
\label{eq:g_emd2}
\nabla\tEMD^\rho \simeq \rho \sum_{i \in G} \tilde{M}_i \cdot \tilde{\varphi}_i \cdot | \tilde{\varphi}_i |^{\rho-2}
\sum_{j \in \mathbf{l}(i)} \left( {\delta_{jk} - 1/N} \right) \, .
\end{eqnarray}
All equations can be solved efficiently by a post-order traversal of nodes in the tree.
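A sketch of such a traversal follows; the node labels, the \texttt{children} map, and the indexing of $p$ and $q$ by leaf identifiers are illustrative, and $M[i]$ denotes $\tilde{M}_i$, the cost of the edge from node $i$ to its parent:
\begin{verbatim}
def t_emd_rho(p, q, children, root, M, rho=2.0):
    """Post-order computation of tEMD^rho; p and q live on the leaves."""
    total = 0.0
    def visit(i):
        nonlocal total
        kids = children.get(i, [])
        # tilde-varphi_i: excess dirt in the subtree rooted at i
        phi = (p[i] - q[i]) if not kids else sum(visit(c) for c in kids)
        if i != root:
            total += M[i] * abs(phi) ** rho
        return phi
    visit(root)
    return total
\end{verbatim}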
\label{introduction}
\noindent
A (reduced) compact complex-analytic space $X$ can be viewed as a first-order structure, in the sense of mathematical logic, by equipping $X$ with a predicate
symbol $P_A$ for each complex-analytic subset $A\subseteq X^n$, for all $n\geq 0$.
We denote this structure by $\mathcal{A}(X)$.
Model-theory is interested in the {\em definable sets} of $\mathcal{A}(X)$: the subsets of $X^n$, for various $n$, obtained from the complex-analytic sets by taking intersections, complements, fibres of co-ordinate projections, and images of co-ordinate projections.
Zilber~\cite{zilber93} showed that this structure is ``tame'' in that it admits {\em quantifier elimination} (every definable set is a finite boolean combination of complex-analytic sets) and a certain model-theoretic rank ({\em Morley rank}) is finite valued on definable sets.
Motivated by model-theoretic considerations, the first author introduced in~\cite{sat} the notion of an {\em essentially saturated} compact complex-analytic space, namely, one for which there exists a countable subcollection of the predicates $P_A$ from which one can define all the definable sets of $\mathcal{A}(X)$.
The main result in~\cite{sat}, slightly reformulated, is the following geometric characterisation:
{\em A compact complex-analytic space $X$ is essentially saturated if and only if, for all $n\geq 0$, every irreducible complex-analytic subset of $X^n$ lives in an irreducible component of the Douady space of $X^n$ that is compact.}
Recall that the Douady space is the analytic analogue of the Hilbert scheme; it parameterises all compact complex-analytic subspaces of $X$ (see Section~\ref{preliminaries} below for some details).
Every holomorphic image of a compact K\"ahler manifold is essentially saturated (these are the {\em K\"ahler-type} spaces introduced by Fujiki in~\cite{fujiki78}).
The first author asked in~\cite{sat} for an example of an essentially saturated space that is not of K\"ahler-type.
Akira Fujiki, in private communication, suggested that we consider the surfaces of type $S_M$ constructed by Inoue in~\cite{inoue74}.
The purpose of this note is to show that these surfaces are indeed examples of essentially saturated spaces that are not of K\"ahler-type.
In fact, among the known compact complex surfaces of non K\"ahler-type without curves, these are the only examples.
A key element in our proof is a model-theoretic classification, due to Pillay and Scanlon~\cite{pillayscanlon2000}, of compact complex manifolds having no proper infinite complex-analytic subsets (see Fact~\ref{trichotomy} below).
\smallskip
For the rest of this paper, by a {\em complex variety} we will mean a reduced and irreducible compact complex-analytic space.
A {\em subvariety} will mean an irreducible complex-analytic subset.
\bigskip
\section{Preliminaries}
\label{preliminaries}
\noindent
Let $X$ be any complex-analytic space.
There exist a complex analytic space $D(X)$, called the {\em Douady space} of $X$, and a complex-analytic subspace $Z(X)\subseteq D(X)\times X$, called the {\em universal family} of $X$, such that
\begin{itemize}
\item[(a)]
the projection $Z(X)\to D(X)$ is a flat and proper surjection, and
\item[(b)]
(Universal Property)
if $S$ is any complex-analytic space and $G$ is any complex-analytic subspace of $S\times X$ with $G\to S$ a flat and proper surjection, then there exists a unique holomorphic map (the {\em Douady map}) $\phi:S\to D(X)$ inducing a canonical isomorphism $G\simeq S\times _{D(X)} Z(X)$.
\end{itemize}
In particular, given a complex-analytic subset $A\subseteq X$ there is a unique point $d\in D(X)$ such that $A$ is the fibre, $Z(X)_d$, of $Z(X) \rightarrow D(X)$ at $d$.
This point, often denoted by $[A]$, is called the {\em Douady point} of $A$ in $X$.
The Douady space was constructed by Douady in~\cite{douady66} and shown to have countably many irreducible components by Fujiki in~\cite{fujiki79}.
A more detailed discussion of Douady spaces can be found in~\cite{campanapeternell94}.
It is not necessarily the case that if $X$ is a compact complex variety then the irreducible components of $D(X)$ are again compact.
Indeed, as explained in the introduction, the compactness of the components of the Douady space turns out to be model-theoretically very significant:
Essential saturation is equivalent to asking that every subvariety of $X^n$ live in an irreducible component of the Douady space of $X^n$ that is compact.
Here, given an irreducible component $D$ of $D(X^n)$, and a subvariety $A\subseteq X^n$, by ``$A$ lives in $D$'' we mean that $[A]\in D$.
Fujiki showed in~\cite{fujiki78} that the irreducible components of the Douady spaces of K\"ahler-type spaces (these are the holomorphic images of compact K\"ahler manifolds) are compact.
Since being of K\"ahler-type is preserved under taking cartesian products, this implies that K\"ahler-type spaces are essentially saturated.
A Hopf surface $H$ is an example of a compact complex manifold that is not essentially saturated.
While the components of $D(H)$ itself are compact,
$D(H\times H)$ has non-compact components coming from families of graphs of automorphisms of $H$ (see~\cite{campanapeternell94} for details).
We will show that the Inoue surfaces of type $S_M$ introduced by Inoue in~\cite{inoue74} are essentially saturated but not of K\"ahler-type.
Instead of recalling the construction of these surfaces, we collect together in the following fact those properties of these surfaces that will be relevant to our argument.
\begin{fact}
\label{inouefacts}
Suppose $X$ is an Inoue surface of type $S_M$. Then
\begin{itemize}
\item[(a)]
$X$ is a smooth compact complex surface containing no curves.
\item[(b)]
$H^i(X,T_X)=0$ for $i=0,1$.
\item[(c)]
$X$ is not of K\"ahler-type.
\item[(d)]
Any unramified covering of $X$ satisfies properties (a)-(c).
\end{itemize}
\end{fact}
\begin{proof}
Parts~(a) and~(b) are Proposition~2 of Inoue's original paper~\cite{inoue74}.
Part~(c) follows from the fact that the odd Betti numbers of every compact complex variety of K\"ahler-type are even~(\cite{fujiki83a}), while Inoue surfaces have first Betti number equal to one~(\cite{inoue74}).
Part~(d) follows from the fact that an unramified covering of an Inoue surface of type $S_M$ is again an Inoue surface of type $S_M$.
To see this, recall first of all that the Inoue surfaces are known to be exactly those smooth compact complex surfaces that have no curves, have first Betti number equal to one, and have second Betti number equal to zero
(cf. \cite{inoue74, LYZ94, Tel94}).
All of these properties are preserved under taking unramified coverings (this is Lemma~1 of~\cite{inoue74}).
Moreover, it follows from Inoue's constructions (see also p.~586 of~\cite{brunella97}) that among the Inoue surfaces, those of type $S_M$ are distinguished as the only ones admitting two {\em holomorphic foliations}, i.e., whose tangent bundles admit two holomorphic subbundles.
As this property is also preserved under taking unramified coverings, we see that an unramified covering of an Inoue surface of type $S_M$ is again an Inoue surface of type $S_M$.
\end{proof}
We will eventually establish the essential saturation of Inoue surfaces of type $S_M$ by proving that the subvarieties of $X^n$ must be of a very special form: they are either ``degenerate'' (defined below) or live in a zero-dimensional component of the Douady space of $X^n$.
Lemma~\ref{isolated-degenerate-es} below shows that this will suffice.
\begin{definition}
Suppose $X$ is a compact complex variety.
A subvariety $A\subseteq X^n$ is {\em degenerate} if for some co-ordinate projection $p:X^n\to X$, $p(A)$ is a point.
\end{definition}
\begin{lemma}
\label{degeneratecomponents}
Degeneracy is preserved in components of the Douady space.
More precisely, suppose $A\subseteq X^n$ is a degenerate subvariety that lives in the irreducible component $D$ of $D(X^n)$ and let $Z$ be the restriction of $Z(X^n)$ to $D$.
Then $Z_d$ is degenerate for all $d\in D$.
\end{lemma}
\begin{proof}
Denote by $p:X^n\to X$ the first projection and by $\pi: D\times X^n\to D\times X$ the induced projection on $D\times X^n$.
Up to a permutation of the co-ordinates we may suppose
that $p(A)=a\in X$ is a point.
We will show in fact that $p(Z_d)$ is a point for all $d\in D$.
Note that the only obstruction is that $\pi(Z)\to D$ need not be flat.
First of all note that as $Z$ has a reduced and irreducible fibre, by flatness its general fibres are reduced and irreducible, as is $Z$ itself.
So $\pi(Z)$ and its general fibres over $D$ are irreducible.
But $\pi(Z)_{[A]}=\{a\}$.
Hence the general fibres of $\pi(Z)$ over $D$ must be points.
That is, $p(Z_d)$ is a point for general $d\in D$.
Let $D'$ be the set of points $d\in D$ such that $\operatorname{dim} p(Z_d)>0$.
We have shown that $D'$ is contained in a proper complex-analytic subset.
Now we claim that $D'$ is in fact empty.
Toward a contradiction, let $d'\in D'$.
Let $U$ be a sufficiently small open neighbourhood of $d'$ in $D$ and let $C$ be a complex-analytic curve in $U$ which passes through $d'$ and such that for general $c\in C$, $Z_c$ is irreducible and $p(Z_c)$ is a point
(this is possible as for general $d\in D$, $Z_d$ is irreducible and $p(Z_d)$ is a point).
Let $Z_C$ be the restriction of $Z$ to $C$.
By flatness, $Z_C$, and hence $\pi(Z_C)$, is irreducible.
Moreover the general fibres of $\pi(Z_C)\to C$ are points and so $\operatorname{dim} \pi(Z_C)=1$.
This contradicts the fact that the fibre of $\pi(Z_C)$ over $d'$ has positive dimension.
\end{proof}
\begin{lemma}
\label{isolated-degenerate-es}
Suppose $X$ is a compact complex variety with the property that every subvariety of $X^n$, for all $n\geq 0$, is either degenerate or lives in a zero-dimensional component of the Douady space of $X^n$.
Then $X$ is essentially saturated.
\end{lemma}
\begin{proof}
First of all, as explained in the introduction, it suffices to show that every subvariety of $X^n$ lives in an irreducible component of $D(X^n)$ that is compact.
We do this by induction on $n\geq 0$.
Clearly, the zero-dimensional components of $D(X^n)$ are compact, and so we focus on the degenerate subvarieties.
For $n=0$ there is nothing to prove.
For $n=1$ note that the degenerate subvarieties are just the points of $X$, and that $X$ itself is the (compact) component of $D(X)$ parametrising these points.
Now suppose $A$ is a degenerate subvariety of $X^n$, for some $n>1$, $D$ is the irreducible component of $D(X^n)$ in which $A$ lives, and $Z\subset D\times X^n$ is the universal family restricted to $D$.
As discussed in Lemma~\ref{degeneratecomponents}, after possibly permuting co-ordinates, for every $d\in D$, $Z_d$ is of the form $\{a\}\times B$ for some subvariety $B$ of $X^{n-1}$.
For each irreducible component $E$ of $D(X^{n-1})$ containing a subvariety of $X^{n-1}$, let $E'=X\times E$ and let $Z'\subset E'\times X^n$ be the complex-analytic subspace with $Z'_{(a,e)}=\{a\}\times Z(X^{n-1})_e$ for all $a\in X$ and $e\in E$.
Note that $E'$ is compact by induction, and $Z'\to E'$ is flat.
We have corresponding (injective) Douady maps $\phi_E:E'\to D(X^n)$ such that $Z(X^n)_{\phi_E(a,e)}=\{a\}\times Z(X^{n-1})_e$.
The $\phi_E(E')$'s must cover $D$ since every fibre above $D$ is of this form.
As there are only countably many $\phi_E(E')$'s and each one is an irreducible complex-analytic subset of $D(X^n)$, it follows that $D$ must be equal to some $\phi_E(E')$, and thus be compact.
\end{proof}
\bigskip
\section{Trivial strongly minimal compact complex varieties}
\noindent
In this section we point out that if $X$ is a ``trivial strongly minimal" compact complex variety (explained below) with the property that every subvariety of $X\times X$ is either degenerate or lives in a zero-dimensional component of $D(X\times X)$, then the same is true of $X^n$ for all $n>2$.
In particular, such varieties will be essentially saturated.
This is not a very surprising result, since, as explained below, triviality says that all relations are essentially binary.
While we will use model-theoretic language freely in this section, we will try to give geometric formulations of the ideas involved and the results obtained.
We suggest~\cite{markerbook} as a general reference for model theory, and~\cite{moosa-ccs} for the model theory of compact complex varieties.
Let us begin by describing what the abstract notions of ``strong minimality'' and ``triviality'' amount to for compact complex varieties.
{\em Strong minimality} just means that $X$ has no proper infinite complex-analytic subsets.
Hence, for example, any irreducible compact complex surface without curves is strongly minimal.
{\em Triviality} is the following condition.
Suppose $n> 0$ and $A\subseteq X^{n+1}$ is a complex-analytic subset such that projection onto the first $n$ co-ordinates, $X^{n+1}\to X^n$, restricts to a generically finite-to-one map on $A$.
Then there must exist some $i\leq n$ such that if $A_i\subseteq X^2$ denotes the image of $A$ under the co-ordinate projection $(x_1,\dots,x_{n+1})\mapsto(x_i,x_{n+1})$, then the projection onto the first co-ordinate, $X^2\to X$, restricts to a generically finite-to-one map on $A_i$.
The following characterisation of non-trivial strongly minimal compact complex manifolds is a manifestation of the ``Zilber Trichotomy'' in this context.
It is due originally to Scanlon~\cite{scanlon2000} and appears as Proposition~5.1 of~\cite{pillayscanlon2000}.
\begin{fact}
\label{trichotomy}
If $X$ is a non-trivial strongly minimal compact complex manifold, then $X$ is either a complex torus or a projective curve.
In particular, every strongly minimal compact complex manifold that is not of K\"ahler-type is trivial.
\label{pillayscanlonfact}
\end{fact}
For example, since the Inoue surfaces of type $S_M$ have no divisors and are not of K\"ahler-type (see Fact~\ref{inouefacts} above), they are trivial strongly minimal.
The following fact about trivial strongly minimal sets in general is an easy and well-known model-theoretic consequence of the definitions.
If the reader unfamiliar with model theory is willing to accept Corollary~\ref{geom-trivial=binary} below, then he or she can skip this lemma and go directly to Proposition~\ref{reductionto2}.
\begin{lemma}
\label{trivial=binary}
Suppose $X$ is a trivial strongly minimal set in some saturated model of a complete stable theory.
Given $a=(a_1,\dots,a_n)\in X^n$, the partial $n$-type $\Sigma(x_1,\dots,x_n):=\displaystyle \bigcup_{i,j\leq n}\operatorname{tp}(a_i,a_j)$ has only finitely many completions.
That is, there exist finitely many complete $n$-types $q_1(x),\dots,q_N(x)$ such that given $b\in X^n$, if $\displaystyle b\models\Sigma$ then $b\models q_\ell$ for some $\ell\leq N$.
\end{lemma}
\begin{proof}
Without loss of generality we may assume that for some $r\leq n$, $\{a_1,\dots,a_r\}$ is an $\operatorname{acl}$-basis for $\{a_1,\dots,a_n\}$.
Suppose $b=(b_1,\dots,b_n)\models\Sigma$.
By triviality, any $\operatorname{acl}$-dependence among $\{b_1,\dots,b_r\}$ would be witnessed by a pair of elements in that set.
Since $\Sigma$ forces all pairs from the first $r$ co-ordinates to be $\operatorname{acl}$-independent, it follows that $\{b_1,\dots,b_r\}$ must be an $\operatorname{acl}$-independent set.
If $r=n$ then in fact $b$ and $a$ are generic tuples and so $b\models\operatorname{tp}(a)$.
That is, $\Sigma$ is complete and we are done.
Hence we may assume that $r<n$.
Now for each $i=1,\dots,n-r$, $a_{r+i}\in\operatorname{acl}(a_1,\dots,a_r)$.
By triviality there is a $j_i\leq r$ such that $a_{r+i}\in\operatorname{acl}(a_{j_i})$.
Let $\phi_i(x_{j_i},x_{r+i})$ be a formula witnessing this.
So the set defined by $\phi_i(a_{j_i},x_{r+i})$ is finite and contains $a_{r+i}$.
Let $q_1,\dots,q_N$ be the set of all complete $n$-types of the form $\operatorname{tp}(a_1,\dots,a_r,c_1,\dots,c_{n-r})$ where each $c_i$ is in the set defined by $\phi_i(a_{j_i},x_{r+i})$.
Since $b$ realises $\Sigma$, $\models\phi_i(b_{j_i},b_{r+i})$ for all $i=1,\dots,n-r$.
Hence, if we let $f$ be an automorphism taking $(b_1,\dots,b_r)$ to $(a_1,\dots,a_r)$, then for each $i=1,\dots,n-r$ we have that $f(b_{r+i})$ is in the set defined by $\phi_i(a_{j_i},x_{r+i})$.
So $\operatorname{tp}(b)=\operatorname{tp}\big(a_1,\dots,a_r,f(b_{r+1}),\dots,f(b_n)\big)=q_\ell$ for some $\ell\leq N$.
\end{proof}
The following corollary gives the geometric content of Lemma~\ref{trivial=binary} specialised to compact complex varieties.
\begin{corollary}
\label{geom-trivial=binary}
Suppose $X$ is a trivial strongly minimal compact complex variety.
Given a subvariety $A\subseteq X^n$ there exist only finitely many other subvarieties having the same projections to $X\times X$.
More precisely, there exist
subvarieties $B_1,\dots,B_N\subseteq X^n$ such that if $B\subseteq X^n$ is a subvariety for which $\pi(A)=\pi(B)$ for all co-ordinate projections $\pi:X^n\to X^2$, then $B=B_\ell$ for some $\ell\leq N$.
\end{corollary}
\begin{proof}
We work in a saturated elementary extension $\mathcal{A}(X)'$ of $\mathcal{A}(X)$, which acts as ``universal domain'' (in the sense of Weil) for the geometry of the complex-analytic subsets of $X$ and its cartesian powers.
A key point is that the complete $n$-types in $\mathcal{A}(X)'$ (over the empty set) are in one-to-one correspondence with subvarieties of $X^n$; every complete $n$-type is the generic type of a subvariety of $X^n$ called its {\em locus}.
Let $a$ realise the generic type of $A$ in $\mathcal{A}(X)'$.
Apply Lemma~\ref{trivial=binary} to $a$ to obtain complete types $q_1,\dots,q_N$.
For each $\ell\leq N$, let $B_\ell=\operatorname{loc}(q_\ell)$ be the locus of $q_\ell$.
Now suppose $B$ is as in the statement of the corollary and let $b$ realise the generic type of $B$.
Note that for each co-ordinate projection $\pi:X^n\to X^2$, $\pi(a)$ realises the generic type of $\pi(A)$ and $\pi(b)$ realises the generic type of $\pi(B)$.
So the assumption on $B$ says that $\displaystyle b\models\bigcup_{i,j\leq n}\operatorname{tp}(a_i,a_j)$.
Hence, by the conclusion of Lemma~\ref{trivial=binary}, $b\models q_\ell$ for some $\ell\leq N$.
It follows that $B=B_\ell$, as desired.
\end{proof}
Here is the main conclusion of this section.
\begin{proposition}
\label{reductionto2}
Suppose $X$ is a trivial strongly minimal compact complex variety with the property that every subvariety of $X\times X$ is either degenerate or lives in a zero-dimensional component of $D(X\times X)$.
Then the same holds of $X^n$, for each $n>2$.
In particular, $X$ is essentially saturated.
\end{proposition}
\begin{proof}
Note that every subvariety of $X$ is either degenerate or lives in a zero-dimensional component of $D(X)$: this follows from strong minimality as the only subvarieties of $X$ are points or $X$ itself.
Let us fix $n>2$ and a subvariety $Y\subseteq X^n$.
Assume $Y$ is not degenerate and does not live in a zero-dimensional component of $D(X^n)$, and seek a contradiction.
Let $D$ be the component of the Douady space of $X^n$ in which $Y$ lives and let $Z\subseteq D\times X^n$ be the restriction of the universal family to $D$.
Let $E$ be a proper complex-analytic subset of $D$ such that for all $d\in D\setminus E$, $Z_d$ is reduced and irreducible.
Since $Y$ is not degenerate, none of these $Z_d$'s are degenerate (cf. Lemma~\ref{degeneratecomponents}).
Hence $\pi(Z_d)$ is non-degenerate for each co-ordinate projection $\pi:X^n\to X^2$ and each $d\in D\setminus E$.
It follows that each $\pi(Z_d)$ lives in a zero-dimensional component of $D(X^2)$.
Note that if $\pi(Z_d)$ and $\pi(Z_{d'})$ live in the same zero-dimensional component of $D(X^2)$, then $\pi(Z_d)=\pi(Z_{d'})$.
Since there are only countably many irreducible components of $D(X^2)$, and only finitely many projections $\pi$, but continuum-many $d\in D\setminus E$ (as $\operatorname{dim} D>0$), there must exist infinitely many distinct $d_1,d_2,\dots \in D\setminus E$ with $\pi(Z_{d_i})=\pi(Z_{d_1})$ for all $i>1$ and all co-ordinate projections $\pi:X^n\to X^2$.
Applying Corollary~\ref{geom-trivial=binary} to $Z_{d_1}\subseteq X^n$,
there exists a fixed finite set of subvarieties $B_1,\dots,B_N\subseteq X^n$, such that each $Z_{d_i}$ is equal to one of the $B_j$'s.
But this contradicts the fact that the $d_i$'s are distinct and hence the $Z_{d_i}$'s are distinct (this is the uniqueness of the Douady map in the universal property for Douady spaces).
Hence, it must be the case that either $Y$ is degenerate or it lives in a zero-dimensional component of the Douady space.
By Lemma~\ref{isolated-degenerate-es}, $X$ must be essentially saturated.
\end{proof}
\bigskip
\section{Essential saturation of Inoue surfaces of type $S_M$}
\label{inouesection}
\noindent
Let us briefly recall some deformation theory of compact complex manifolds.
We suggest~\cite{kodaira} for further details and as a general reference.
A compact complex manifold $M$ is {\em rigid} if $H^1(M,T_M)=0$, where $T_M$ denotes the tangent sheaf of $M$.
Every deformation of a rigid compact complex manifold is locally trivial. More precisely: {\em Suppose $M$ is a rigid compact complex manifold.
If $\mathcal{M}\to B$ is a proper and flat surjective holomorphic map of complex varieties, and $c\in B$ is such that $\mathcal{M}_c=M$, then there exists an open neighbourhood $U$ of $c$ in $B$ such that $\mathcal{M}_U\to U$ is biholomorphic to $U\times M$ over $U$.}
Indeed, this is the classical Kodaira-Spencer deformation theory (cf. Theorem~4.6 of~\cite{kodaira})
once we observe that by flatness and the fact that $M$ is a complex manifold, the restriction of $\mathcal{M}\to B$ to some neighbourhood of $c\in B$ is a proper submersion of complex manifolds.
We will also be interested in embedded deformations, which in fact are already implicit in our discussion of Douady spaces.
Suppose $M$ is a complex submanifold of a compact complex manifold $N$.
We say that {\em $M$ is rigid in $N$} if $H^0(M,\mathcal{N}_{M/N})=0$, where $\mathcal{N}_{M/N}$ is the normal sheaf of $M$ in $N$.
In that case every deformation of $M$ in $N$ is trivial.
More precisely:
{\em Suppose $M$ is rigid in $N$.
If $B$ is a complex variety and $G$ is a complex-analytic subset of $B\times N$ such that $G\to B$ is flat and surjective, and $G_c=M$ for some $c\in B$, then $G=B\times M$.}
This follows, for example, from the fact that $H^0(M,\mathcal{N}_{M/N})$ is the tangent space to the Douady space of $N$ at the point $[M]$ corresponding to $M\subseteq N$ (cf. Proposition~1.7 of~\cite{campanapeternell94}).
Hence $[M]$ is isolated in $D(N)$ and so the Douady map $g:B\to D(N)$ must be onto the single point $[M]$.
Embedded deformations give rise to the notion of deformations of holomorphic maps that leave the domain and target fixed.
Suppose $f:M\to N$ is a holomorphic map between compact complex manifolds.
We say that {\em $f$ is rigid with respect to $M$ and $N$} if $H^0(M,f^*T_N)=0$.
It is not hard to see that this is equivalent to asking that the graph $\Gamma(f)$ is rigid in $M\times N$.
In terms of deformations of $f$ this can be formulated as follows:
{\em Suppose $f:M\to N$ is rigid with respect to $M$ and $N$.
If $B$ is a complex variety and $\Phi:B\times M\to B\times N$ is a holomorphic map over $B$ such that $\Phi_c=f$ for some $c\in B$, then $\Phi=\operatorname{id}_B\times f$.}
We put these facts together in the following Lemma for future use:
\begin{lemma}
\label{rigidrigid}
Let $f:M\to N$ be a holomorphic map between compact complex manifolds.
Suppose $M$ is rigid and $f$ is rigid with respect to $M$ and $N$.
Then for any flat and proper surjection of complex varieties $\mathcal{M}\to B$ with $\mathcal{M}_c=M$ for some $c\in B$, and any holomorphic map $\Phi:\mathcal{M}\to B\times N$ over $B$ with $\Phi_c=f$, there must exist a neighbourhood $U$ of $c$ such that $\Phi_U(\mathcal{M}_U)=U\times f(M)$.
\end{lemma}
\begin{proof}
By rigidity of $M$ there exists a neighbourhood $U$ of $c$ and a biholomorphism $\sigma:U\times M\to \mathcal{M}_U$ over $U$.
We may assume that $\sigma_c=\operatorname{id}_M$.
So $\Phi_U\circ\sigma$ is a holomorphic map from $U\times M$ to $U\times N$ over $U$ with $(\Phi_U\circ\sigma)_c=f$.
By rigidity of $f$ with respect to $M$ and $N$, $\Phi_U\circ\sigma=\operatorname{id}_U\times f$.
Hence
$$\Phi_U(\mathcal{M}_U)=\Phi_U\circ\sigma(U\times M)=(\operatorname{id}_U\times f)(U\times M)=U\times f(M)$$
as desired.
\end{proof}
We now specialise to the case of Inoue surfaces of type $S_M$.
\begin{lemma}
\label{rigidity}
Let $X$ be an Inoue surface of type $S_M$ and set $p_i:X\times X\to X$ to be the $i$th co-ordinate projection, for $i=1,2$.
Suppose $Y$ is an irreducible normal compact complex surface, and $f:Y\to X\times X$ is a holomorphic map with the property that $f_i:=p_i\circ f:Y\to X$ is surjective for $i=1,2$.
Then $Y$ is itself Inoue of type $S_M$.
Moreover, $f$ is rigid with respect to $Y$ and $X\times X$.
\end{lemma}
\begin{proof}
Consider the finite surjection $f_1:Y\to X$.
Let $B$ be the set of points in $Y$ at which $f_1$ is not locally biholomorphic.
Then $B$ is a complex-analytic subset of $Y$ (see~2.19 of~\cite{fischer76}).
Since $Y$ is normal and $X$ is smooth, if $B$ is nonempty then it must have dimension $1$ (see~4.2 of~\cite{fischer76}).
But as $f_1$ is finite-to-one, and $X$ contains no curves, the latter is impossible.
Hence $B$ is empty, $f_1$ is an unramified covering, and $Y$ is again Inoue of type $S_M$.
In particular, $H^0(Y,T_Y)=0$.
By the above argument $f_2:Y\to X$ is also an unramified covering.
Hence $f_i^*T_X=T_Y$ for $i=1,2$.
Since $T_{X\times X}=p_1^*T_X\oplus p_2^*T_X$, we get
$$f^*T_{X\times X}=f^*p_1^*T_X\oplus f^*p_2^*T_X=f_1^*T_X\oplus f_2^*T_X=T_Y\oplus T_Y$$
and so $H^0(Y,f^*T_{X\times X})=H^0(Y,T_Y)\oplus H^0(Y,T_Y)=0$.
So $f$ is rigid with respect to $Y$ and $X\times X$.
\end{proof}
\begin{proposition}
\label{subvarietiesofx2}
Suppose $Y\subseteq X\times X$ is an irreducible complex-analytic subset.
Then one of the following holds:
\begin{itemize}
\item[(a)]
$\operatorname{dim} Y=0$, or
\item[(b)]
$Y=\{a\}\times X$ or $Y=X\times\{a\}$ for some $a\in X$, or
\item[(c)]
$Y$ lives in a zero-dimensional component of the Douady space of $X\times X$.
\end{itemize}
\end{proposition}
\begin{proof}
Suppose $Y\subseteq X\times X$ is an irreducible complex-analytic subset, and let $p_i:X\times X\to X$ be the $i$th co-ordinate projection for $i=1,2$.
Since each $p_i(Y)$ is irreducible, strong minimality of $X$ implies that $p_i(Y)$ is either a point or all of $X$.
If $p_i(Y)$ is a point, then by strong minimality of $X$ one of ~(a) or~(b) must hold.
Hence we may assume $p_i(Y)=X$ for $i=1,2$.
We will show that~(c) holds.
It is clear that (c) holds if $Y = X \times X$. Let us now suppose that $Y$ is proper.
Since $p_i(Y)=X$ for $i=1,2$, $Y$ is two-dimensional by strong minimality of $X$. Let $D$ be the irreducible component of the Douady space of $X\times X$ in which $Y$ lives, and let $Z\subseteq D\times(X\times X)$ be the restriction of the universal family to $D$.
Let $\widetilde{Z}\to Z$ be the normalisation of $Z$.
We have the following situation
$$\xymatrix{
\widetilde{Z}\ar[r]^{\pi}\ar[dr] & Z\ar[r]^{\subseteq \ \ \ \ \ \ \ \ }\ar[d] &D\times (X\times X)\ar[dl]\\
& D
}$$
There exists a proper complex-analytic subset $E\subset D$ such that for all $d\in D\setminus E$,
\begin{itemize}
\item[(i)]
$Z_d\subset X\times X$ is a reduced and irreducible surface,
\item[(ii)]
$p_i(Z_d)=X$ for $i=1,2$,
\item[(iii)]
$\pi_d:\widetilde{Z}_d\to Z_d$ is the normalisation of $Z_d$, and
\item[(iv)]
$\widetilde{Z}\to D$ is flat outside of $E$.
\end{itemize}
Indeed, we can choose $E$ to satisfy~(i) through~(iv) because $Z\to D$ is flat and proper.
For~(i), use the fact that $Z\to D$ has one reduced and irreducible two-dimensional fibre (namely $Y$) and hence, by flatness, so are its general fibres.
For~(ii), use the fact that none of the $Z_d$'s are degenerate since $Y$ was not (cf. Lemma~\ref{degeneratecomponents}), and we have already seen that this implies its projections to $X$ are surjections.
To find $E$ satisfying~(iii) note that the general fibre of $\widetilde{Z}\to D$ is normal (Th\'eor\`eme~2 of~\cite{banica79}) and $\pi$ restricted to the general fibre is again a finite map that is biholomorphic outside a proper complex-analytic set.
Finally, we can find $E$ satisfying~(iv) since by~\cite{frisch67} every proper holomorphic map is flat outside a proper complex-analytic set.
Now fix $d_0\in D\setminus E$.
Then $\widetilde{Z}_{d_0}$ is an irreducible normal compact complex surface such that $\pi_{d_0}:\widetilde{Z}_{d_0}\to X\times X$ composed with the projections to $X$ are surjective.
By Lemma~\ref{rigidity}, $\widetilde{Z}_{d_0}$ is itself an Inoue surface of type $S_M$ -- and hence rigid -- and the map $\pi_{d_0}$ is rigid with respect to $\widetilde{Z}_{d_0}$ and $X\times X$.
By Lemma~\ref{rigidrigid}, there exists an open neighbourhood $U$ of $d_0$ in $D\setminus E$, such that $\pi_U(\widetilde{Z}_U)=U\times Z_{d_0}$.
Hence $Z_U=U\times Z_{d_0}$.
The universal property of the Douady space implies that $U=\{d_0\}$.
But as $U$ was open in the irreducible $D$, this means that $D=\{d_0\}$.
We have shown that $Y$ lives in a zero-dimensional component of the Douady space of $X\times X$, as desired.
\end{proof}
\begin{corollary}
Inoue surfaces of type $S_M$ are essentially saturated but not of K\"ahler-type.
\end{corollary}
\begin{proof}
By Fact~\ref{inouefacts}, Inoue surfaces of type $S_M$ are strongly minimal compact complex varieties not of K\"ahler-type.
By Fact~\ref{trichotomy} they are trivial.
Proposition~\ref{subvarietiesofx2} tells us that the irreducible subvarieties of $2$-space are either degenerate or live in a zero-dimensional component of the Douady space.
By Proposition~\ref{reductionto2} this is then true of $n$-space for all $n\geq 0$, and these surfaces are essentially saturated.
\end{proof}
\begin{remark}
One might be tempted to try and generalise the above corollary by observing that our argument goes through for any compact complex surface satisfying the four properties of Fact~\ref{inouefacts}.
However, these properties actually characterise Inoue surfaces of type $S_M$.
Indeed, from Kodaira's classification of compact complex surfaces we see that a non-K\"ahler surface $X$ without curves must have $b_1(X)=1$.
It will also have $b_2(X)=0$ by Riemann-Roch if one imposes
the condition $H^1(X,T_X)=0$.
But, as mentioned before, such surfaces are completely classified (cf.~\cite{inoue74, LYZ94, Tel94}) and they belong to one of Inoue's classes.
Since the surfaces of type $S^{(+)}$ have $\dim H^1(X,T_X)=1$, we are left with only the surfaces of type $S_M$ and $S^{(-)}$.
Among them only those of type $S_M$ satisfy the fourth property, since any surface of type $S^{(-)}$ admits a double cover of type $S^{(+)}$, see \cite{inoue74}.
\end{remark}
We conclude by pointing out that among the known strongly minimal compact complex surfaces, the surfaces of type $S_M$ are the only non-K\"ahler essentially saturated examples.
Indeed, since the only known non-K\"ahler compact complex surfaces without curves are the Inoue surfaces, this follows from:
\begin{proposition}
The other Inoue surfaces, those of type $S^{(+)}$ and $S^{(-)}$, are not essentially saturated.
\end{proposition}
\begin{proof}
Note that by Fact~\ref{trichotomy}, these strongly minimal surfaces are also trivial.
Let $X$ be an Inoue surface of type $S^{(+)}$.
The universal cover of $S^{(+)}$ (and indeed of all the Inoue surfaces) is $\mathbb{H}\times\mathbb{C}$, the product of the upper half plane with the complex plane.
From Inoue's construction of $S^{(+)}$ it is evident that translation on the second co-ordinate induces a non-trivial action of $(\mathbb{C},+)$ on $X$ (see equation~(18) of~\cite{inoue74}).
We thus obtain an infinite analytic family of automorphisms of $X$ parametrised by $\mathbb{C}$, which, by considering graphs, can be viewed as living in the irreducible component $D$ of $D(X\times X)$ that contains the diagonal $A\subset X\times X$.
This already implies that $D$ is not compact, since no trivial strongly minimal compact complex manifold can have an infinite definable family of automorphisms.
However, the non-compactness of $D$ can be seen more directly, without using triviality of $X$, as follows:
Assume $D$ is compact and let $Z\subset D\times X\times X$ be the universal family over $D$.
Since $\operatorname{dim} H^0(X,T_X)=1$, and $\mathcal{N}_{A/X\times X}\cong T_X$, we know that $\operatorname{dim} D=1$.
Fixing $a\in X$, the subspace $Z\cap (D\times\{a\}\times X)$
is thus one-dimensional and contains $\{(g,a,ga): g\in \mathbb{C}\}$.
Thus its projection on $X$ is a complex-analytic subset containing the orbit of $a$ under the action of $(\mathbb C,+)$, and therefore cannot be zero dimensional.
This contradicts the fact that $X$ has no curves.
Let now $Y$ be an Inoue surface of type $S^{(-)}$.
As mentioned before, there exists an Inoue surface $X$ of type $S^{(+)}$ which is a double cover of $Y$.
Now one can finish by showing that a finite cover of an essentially saturated compact complex variety is again essentially saturated.
But in this case the argument is easier:
The action of $(\mathbb{C},+)$ on $X$ described above will induce an infinite analytic family of subvarieties of $Y\times Y$ which project in a finite-to-one manner onto each component.
Arguing as above one shows that this family of subspaces of $Y\times Y$ cannot be compactified in $D( Y\times Y)$.
(Alternatively, essential saturation of $Y$ would imply the existence of an infinite definable family of finite-to-finite correspondences, which is also ruled out by the triviality of $Y$.)
\end{proof}
\section{Introduction}
\label{sec:introduction}
The R Coronae Borealis (RCB) stars are a rare class of hydrogen deficient red giants. These stars are characterized by dramatic, unpredictable photometric declines with slow returns to full luminosity caused by clouds of carbon dust forming and flowing away from the star (e.g., \citealt{1997MNRAS.285..317F}, \citealt{2012JAVSO..40..539C}, \citealt{2007A&A...466L...1L}). These declines can be as great as 9 magnitudes in V band, and can last from a month to hundreds of days (\citealt{Tisserand2012}).
There are two possible evolutionary paths for RCBs. RCBs could be the products of a final helium-flash in heavily evolved single stars before they cool to become white dwarfs. Alternatively, they could be merger products of lower mass He white dwarfs with higher mass CO white dwarfs. The abundance of \textsuperscript{18}O in cool RCBs heavily favors the latter theory (\citealt{2007ApJ...662.1220C}, \citealt{2010ApJ...714..144G}).
Additionally, the He-rich pre-white dwarf KPD 0005+5106 has the abundances expected for a double degenerate merger. This both confirms that it is the descendant of an RCB star and reinforces the merger model for their origin (\citealt{2015A&A...583A.131W}). Observations of R Coronae Borealis, the prototype of its class, during a photometric minimum revealed a possible planetary nebula around the star (\citealt{2011ApJ...743...44C}). This is not predicted by the merger model and may support the He flash model. However, \cite{2015AJ....150...14M} found that the circumstellar shell is most likely not a fossil planetary nebula, but is instead a result of RCB phase mass loss. Thus, the double degenerate merger scenario is presently the favored explanation of RCB stars.
If the double degenerate scenario is correct, merger and lifetime arguments predict between 100 and 500 RCB stars in our galaxy (\citealt{2018arXiv180901743T}, \citealt{2018arXiv180711514L}, \citealt{2015ApJ...809..184K}). At present, there are 117 RCB stars known in the Galaxy and 30 in the Magellanic Clouds (\citealt{2018arXiv180901743T}). The number of known RCB stars has more than doubled in the past decade (\citealt{Tisserand2008}), and many of these new RCBs were found by searching for a combination of a mid-infrared excess and variability (\citealt{Tisserand2013}, \citealt{2016IBVS.6190....1N}, \citealt{2014JAVSO..42...13O}) using techniques developed in \cite{Tisserand2012}. Here we expand this approach to the full sky using ALLWISE (\citealt{2010AJ....140.1868W}) and 2MASS (\citealt{2006AJ....131.1163S}) to photometrically select candidates, and ASAS-SN (\citealt{2014ApJ...788...48S}, \citealt{2017PASP..129j4502K}) to examine their optical variability. Simultaneously with this work, \cite{2018arXiv180901474T} also recently used the complete WISE dataset to update their RCB candidates, and they report 45 spectral confirmations in \cite{2018arXiv180901743T}. Although light curves can be an excellent indicator, only a spectroscopic follow-up can confirm the identification of RCB stars.
The All-Sky Automated Survey for SuperNovae (ASAS-SN) is a ground based survey hosted by Las Cumbres Observatory (\citealt{2013PASP..125.1031B}) that has been monitoring the entire sky on a 2-3 day cadence to a depth of V $\leq$ 17 mag since 2013 using two units consisting of 4 telescopes in a common mount located in Hawaii and Chile. ASAS-SN has recently expanded, adding 3 more units located in Chile, Texas, and South Africa, respectively. ASAS-SN was created to monitor the sky for bright supernovae, but it also continuously monitors for variable stars (\citealt{2018MNRAS.tmp..817J}).
In this work we search for new RCB stars. In Section 2 we outline the photometric selection of the candidates. In Section 3 we present our list of RCB candidates and their ASAS-SN light curves.
\section{Target Selection}
\label{Target Selection}
We started with the 2MASS and WISE selected list of 1602 candidates from \cite{Tisserand2012}. The selection method required that each source had data in all 7 bands (J, H, K, and W1-W4), and selected for stars with infrared properties similar to known RCBs, taking into account interstellar reddening by Galactic latitude. Cuts were made to reject other stars with similar infrared colors such as Asymptotic Giant Branch stars and Miras.
\cite{Tisserand2012} selected these candidates before WISE data were available for the full sky. To select across the full sky, we used an alternate approach, simply looking for stars with
spectral energy distributions (SEDs) similar to those of known RCBs. We started from the nominal list of ``known'' RCBs from
SIMBAD (\citealt{2000A&AS..143....9W}), albeit with the knowledge that some of these classifications
were likely problematic, and objects from the ALLWISE (\citealt{2010AJ....140.1868W}) catalog with
defined WISE and 2MASS magnitudes satisfying a somewhat broader version
of \citealt{Tisserand2012}'s first color cut, namely,
\begin{equation}
\begin{split}
W2-W3>0.75, \\
W2-W3 < 3.00, \\
W3-W4 < 1.30,
\end{split}
\end{equation}
and none of their other criteria. This provided a list of 93 ``known'' RCBs and roughly 1.3 million WISE sources.
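For illustration, the broadened color cut above amounts to a few array comparisons. The following sketch is hypothetical code (the NumPy implementation and variable names are our own, not part of any survey pipeline):
\begin{verbatim}
import numpy as np

def color_cut(w2, w3, w4):
    # Broadened first color cut, cf. equation (1):
    # 0.75 < W2 - W3 < 3.00 and W3 - W4 < 1.30.
    return (w2 - w3 > 0.75) & (w2 - w3 < 3.00) & (w3 - w4 < 1.30)

# Toy usage on made-up magnitudes:
w2, w3, w4 = np.array([8.0]), np.array([6.5]), np.array([5.8])
print(color_cut(w2, w3, w4))  # -> [ True]
\end{verbatim}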
The SED of a new RCB can
differ from that of a known RCB due to changes in luminosity, distance
and extinction, where the change in extinction can be due to changes
in either Galactic or circumstellar extinction. We do not differentiate
between the two sources of extinction since the differences are primarily due to
changes in the physics of scattered photons, which are less important in
the infrared (see the discussion in Kochanek et al. 2012).
As a simplifying assumption, we take changes in extinction to modify
only the 2MASS magnitudes. So for each ALLWISE source and each ``known''
RCB, we first use the WISE magnitudes to estimate a change
in distance and luminosity as
\begin{equation}
\Delta \mu = { \frac{1}{4} } \sum_{i=1,4} \biggl( W_i(WISE) - W_i(RCB_j)\biggr)
\end{equation}
which assumes uniform weighting of the four WISE bands ($W_i$) since this
exercise is almost certainly dominated by systematic errors. Then, with
$\Delta\mu$ fixed, we determine the change in extinction which would best
match the near-IR magnitudes ($M_i=J$, $H$, and $K_s$),
\begin{equation}
\begin{split}
\Delta E & = \biggl[ \sum_{i=1,3} R_i ( M_i(WISE) - M_i(RCB_j) - \Delta\mu )\biggr] \\
& \biggl[ \sum_{i=1,3} R_i^2 \biggr]^{-1}
\end{split}
\end{equation}
where the $R_i$ are the extinction coefficients. We then computed the
root-mean-square magnitude residual $\sigma_j$ for each of the trial $j=1,\dots,93$
RCBs, corrected for the number of degrees
of freedom after fitting two parameters ($\Delta\mu$ and $\Delta E$).
We accepted an object as an RCB candidate if any $\sigma_j<0.2$~mag,
as this recovered 82 of the 93 ``known'' RCBs if we used this
method to search for each of them after excluding the star being tested
from the SED match.
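To make the procedure concrete, here is a minimal sketch of equations (2) and (3) together with the acceptance criterion, for a single source matched against a set of template RCBs. It is illustrative only: the near-IR extinction coefficients $R_i$ below are placeholder values (an assumption on our part), and the code actually used for this work may differ in detail.
\begin{verbatim}
import numpy as np

# Placeholder near-IR extinction coefficients R_i for (J, H, Ks);
# the values below are assumptions, not those used in this work.
R = np.array([0.72, 0.46, 0.31])

def sed_rms(wise, nir, rcb_wise, rcb_nir):
    # wise, rcb_wise: W1-W4 magnitudes; nir, rcb_nir: J, H, Ks.
    dmu = np.mean(wise - rcb_wise)           # equation (2)
    r = nir - rcb_nir - dmu
    dE = np.sum(R * r) / np.sum(R ** 2)      # equation (3)
    resid = np.concatenate([wise - rcb_wise - dmu, r - R * dE])
    # 7 bands minus 2 fitted parameters -> 5 degrees of freedom.
    return np.sqrt(np.sum(resid ** 2) / (resid.size - 2))

def is_candidate(wise, nir, templates, cut=0.2):
    # Accept if any template RCB j gives sigma_j < cut (0.2 mag).
    return any(sed_rms(wise, nir, tw, tn) < cut
               for tw, tn in templates)
\end{verbatim}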
For each of the ``known'' RCBs we then counted how many candidates were associated with it, and iteratively eliminated stars producing
too many candidates to be useful for finding new RCBs. As expected, we found that the SIMBAD listing is contaminated
by sources other than RCBs. For example, the worst comparison
star was MACHO118.18666.100, with 235 thousand (!) matches,
which \cite{Tisserand2008} found to be an M giant. In fact, all
of the ``known'' RCBs producing such large numbers of matches
are reported to be other sorts of variables (SY~Hyi as a semi-regular
variable, \citealt{Lawson1989}, V618~Sgr as a symbiotic star, \citealt{Kilkenny1997},
V1317~Sco as a Mira, \citealt{Tisserand2013}, V589~Sgr as a symbiotic
star, \citealt{Mennickent2001}, AE~Cir as a symbiotic star, \citealt{Mennickent2008},
GM~Ser as a Mira, \citealt{Tisserand2013}, and TYC6283-1417-1 as a Mira,
\citealt{Tisserand2013}). With the last of these, the maximum number of
matches had dropped to 11 thousand. We also dropped LT~Dra, where the
origin of its classification is unclear and whose variability is reported
to be spurious by the AAVSO (\citealt{2002AAS...201.1711H}).
Next there were ``known'' RCBs where we could find no arguments
that they were misclassified but which still produced too
many matches for a feasible search. Many of these
(in order of numbers of matches, Y~Mus, SV~Sge, XX~Cam, MACHO308.38099.66
and EROS2-CG-RCB-12) were also dropped by \cite{Tisserand2012} as falling outside
their color selection criteria. We also dropped V409~Nor, HV5637, which was spectroscopically
confirmed as an RCB star by \cite{1972MNRAS.158P..11F}, OGLE BUL-SC 37 133492\footnote{First listed as the RCB candidate OGLE-GC-RCB-Cand-1 by \cite{2011A&A...529A.118T}. It is still not a spectroscopically confirmed RCB star.}, EROS2-LMC-RCB-8, EROS2-CG-RCB-2,
V1405~Cyg, ASAS-RCB-18, and MACHO135.27132.51. In total we rejected 24
of the initial list of 93 ``known'' RCBs.
This left us with a list of 2615 candidates, 65 of which are the remaining, ``known''
RCBs (which survive this process by its very definition). The
list also includes 609 of the color-selected candidates from \cite{Tisserand2012}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figure_1.pdf}
\caption{
ASAS-SN V-band light curves of the strong RCB candidates from Table \ref{table:rcbs}. All panels have the same dynamic range in magnitude. The different colors represent different ASAS-SN cameras.
}
\label{fig:rcb1}
\end{figure*}
\clearpage
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figure_2.pdf}
\caption{
ASAS-SN V-band light curves of the strong RCB candidates from Table \ref{table:rcbs} with large amplitude variation. These panels have a larger vertical scale than the other figures.
}
\label{fig:rcb_large}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figure_3.pdf}
\caption{
ASAS-SN V-band light curves of the DY Per candidates from Tables \ref{table:rcbs} and \ref{table:old}.
}
\label{fig:dy_per}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figure_4.pdf}
\caption{
ASAS-SN light curves of the RCB candidates discovered outside of the initial search from Table \ref{table:old}.
}
\label{fig:rcb_old}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figure_5.pdf}
\caption{
ASAS-SN light curves of variable stars that are weak RCB candidates from Table \ref{table:iffy}.
}
\label{fig:rcb_uncertain}
\end{figure*}
\section{Optical Variability}
ASAS-SN has been operating since 2013 (\citealt{2014ApJ...788...48S}) and provides up to 5 years of data to examine for RCB-like variability. It saturates at $V \sim 10 - 11$ mag and can detect objects down to $V \sim 17$ mag (\citealt{2017PASP..129j4502K}). We extracted light curves for all sources in the two RCB candidate lists from Section \ref{Target Selection}, as well as for all the RCBs reported by SIMBAD (\citealt{2000A&AS..143....9W}).
We began by examining the light curves of known RCBs to understand how they would appear in our data. A typical RCB light curve shows a plateau in brightness that can last for years, before undergoing an abrupt fading event and then slowly recovering to the plateau brightness. DY Per variables usually decline more slowly and have a more symmetrical recovery (e.g., \citealt{2001ApJ...554..298A}).
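The search itself was done by eye (see below), but as a toy illustration of this morphology one could flag light curves with deep, extended faint states relative to the quiescent (median) magnitude. The heuristic below is our own sketch, not a procedure used in this work:
\begin{verbatim}
import numpy as np

def has_rcb_like_decline(jd, mag, depth=1.0, min_days=30.0):
    # Flag epochs fainter (numerically larger) than the median
    # magnitude by at least `depth` mag, and require the faint
    # state to span at least `min_days`. Crude by construction.
    quiescent = np.median(mag)
    faint = mag > quiescent + depth
    if not np.any(faint):
        return False
    return jd[faint].max() - jd[faint].min() >= min_days
\end{verbatim}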
We then visually scanned each of the 1602 candidates in Tisserand's original list and the 2615 candidates that we generated using the SED matching approach. We discovered the 15 objects presented in Table \ref{table:rcbs} as strong candidates for new RCBs or DY Pers. Some of these objects have preexisting classifications in the International Variable Star Index (VSX, \citealt{2006SASS...25...47W}), and these are noted in Table \ref{table:rcbs}. We present light curves for each of these objects in Figures \ref{fig:rcb1}, \ref{fig:rcb_large}, and \ref{fig:dy_per}.
In Table \ref{table:old} we present four objects that we discovered while looking through other variables in ASAS-SN data. ASASSN-V J161156.22-575527.2 was included in \citet{Tisserand2012} and was serendipitously discovered in ASAS-SN data by \citet{2017ATel11017....1J} before we began the search described in this work. Two of the remaining objects display strong RCB-like variability, but were not included in either of our candidate lists because of their colors. Two of these objects have preexisting classifications in VSX. We present the RCB candidate light curves in Figure \ref{fig:rcb_old}. ASASSN-V J175700.51-213934.5 shows DY Per-like variability and has been grouped with the other DY Per candidates in Figure \ref{fig:dy_per}. We additionally present 16 more objects with peculiar light curves that are weak RCB candidates. These objects are listed in Table \ref{table:iffy} with speculative variability types based on their light curve morphologies, and their light curves are presented in Figure \ref{fig:rcb_uncertain}.
\section{Discussion}
Figure \ref{fig:jk} shows the distribution of RCBs in the Gaia DR2 $G_{BP}-G_{RP}$ vs. $J-K_s$ color-color space \citep{2018arXiv180409365G,2006AJ....131.1163S}. We compare the RCBs with a sample of rotational, Mira, and semi-regular/irregular variables from \cite{2018arXiv180907329J} and the Catalog of Galactic Carbon Stars \citep{2001BaltA..10....1A}. The carbon stars, Mira variables and semi-regular variables all form distinct loci in this color-color space, where the carbon rich sources have redder $J-K_s$ colors for any given $G_{BP}-G_{RP}$ beyond $G_{BP}-G_{RP}\sim2$. RCBs have carbon rich atmospheres and most known RCBs lie on or above the locus of carbon stars in $G_{BP}-G_{RP}$ vs $J-K_s$. A few known RCBs fall along the semi-regular/Mira loci, making these classifications uncertain, although this distinction becomes hazy towards bluer colors. We note that our RCB candidates are consistent with the general distribution of known RCBs.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figure_6.pdf}
\caption{
Gaia DR2 $G_{BP}$-$G_{RP}$ vs. 2MASS $J-K_s$ color-color diagram. Sources from the Catalog of Galactic Carbon Stars \citep{2001BaltA..10....1A} are colored in black, and sources from the ASAS-SN Catalog of Variable Stars: II \citep{2018arXiv180907329J} are colored by their variability type. RCB candidates from this work are denoted as purple diamonds and peculiar variables in this work are denoted as purple stars. The reddening vector corresponding to an extinction of $A_V=1$ mag is shown in red.
}
\label{fig:jk}
\end{figure*}
The stars presented in Table \ref{table:old} were discovered outside of our original search. Other than the RCB candidate ASASSN-V J161156.22-575527.2, the remaining RCB candidates have colors that fail the initial mid-IR color cuts used by \cite{Tisserand2012} and our own procedure (see Figure \ref{fig:color}). Their light curves show distinctive RCB-like variability, making them likely RCB candidates. Their existence suggests that more RCBs should be visually identifiable in ASAS-SN data given a simple method to search for RCB-like variability independent of color information.
These candidates were selected because they have near/mid-IR spectral energy distributions and optical light curves that are fairly typical of RCBs. There are other classes of variables which undergo dust formation episodes (\citealt{2014JAVSO..42...13O}) that might be included in the sample, so spectroscopic observations will be necessary for final confirmation of the classifications.
As we were completing this paper, \cite{2018arXiv180901743T} reported the discovery and spectroscopic confirmation of 45 new RCBs. Five of these systems, as well as two of their strong RCB candidates, are on our high confidence list, and one is on the weaker candidate list, as indicated in Tables \ref{table:rcbs} and \ref{table:iffy}. Additionally, \cite{2018arXiv180901474T} updated the infrared selection of \cite{Tisserand2012} that we used in this paper. Our next steps include searching through the light curves of the new candidates.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figure_7.pdf}
\caption{
[12]-[22] vs. [4.6]-[12] ALLWISE color-color diagram. The blue points are the 1602 candidates from \protect\cite{Tisserand2012}, with the blue lines showing the initial color cuts used to generate the list. The red points are our new RCB candidates, and the black points are the three RCB candidates we discovered outside our initial search. The pink points are previously known RCB candidates from \protect\cite{Tisserand2012}.
}
\label{fig:color}
\end{figure*}
\section*{Acknowledgments}
We thank the referee, Geoff Clayton, for his comments that helped improve this paper. We thank the Las Cumbres Observatory and its staff for its
continuing support of the ASAS-SN project. We also thank
the Ohio State University College of Arts and Sciences Technology Services for helping us set up the ASAS-SN variable
stars database.
ASAS-SN is supported by the Gordon and Betty Moore
Foundation through grant GBMF5490 to the Ohio State
University and NSF grant AST-1515927. Development of
ASAS-SN has been supported by NSF grant AST-0908816,
the Mt. Cuba Astronomical Foundation, the Center for
Cosmology and AstroParticle Physics at the Ohio State
University, the Chinese Academy of Sciences South America
Center for Astronomy (CAS-SACA), the Villum Foundation,
and George Skestos.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This work has made use of data from the European Space Agency (ESA)
mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by
the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding
for the DPAC has been provided by national institutions, in particular
the institutions participating in the {\it Gaia} Multilateral Agreement.
\section{Introduction and Statement of the Results}
\label{sec:intro}
In this paper, we study the distribution of resonances for strictly convex obstacles $\mathcal{O}$ under general boundary conditions, including Neumann and general smooth Robin boundary conditions $\partial_\nu u+\eta u=0$, $\eta\in C^\infty(\partial\mathcal{O})$. The goal of this paper is to prove that if the boundary of the obstacle satisfies a pinched curvature condition, then the resonances that are close to the real axis are separated into several bands. We also give an asymptotic formula for the counting function of resonances in each band.
Let $\mathcal{O}\subset\mathbb{R}^n$ be a strictly convex obstacle with smooth boundary. More precisely, if $Q$ is the second fundamental form of $\partial\mathcal{O}$ and $S\partial\mathcal{O}$ is the sphere bundle of $\partial\mathcal{O}$, then $\min_{S\partial\mathcal{O}}Q>0$. We shall write
\begin{equation}
\kappa=2^{-1/3}\cos(\pi/6)\min_{S\partial\mathcal{O}}Q^{2/3},\;\;\;
K=2^{-1/3}\cos(\pi/6)\max_{S\partial\mathcal{O}}Q^{2/3}.
\end{equation}
Let $P=-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}$ be the Laplacian operator on the exterior domain $\mathbb{R}^n\setminus\mathcal{O}$ associated with the Neumann/Robin boundary condition which will be defined precisely later in section \ref{sec:prelim}, then the resolvent $R(\lambda)=(-\Delta-\lambda^2)^{-1}$ which is analytic for $\Im \lambda>0$ has a meromorphic continuation to the whole complex plane $\mathbb{C}$ (when $n$ is odd) or the logarithmic covering of $\mathbb{C}\setminus\{0\}$ (when $n$ is even). The poles of $R(\lambda)$ are called resonances or scattering poles.
In \cite{J}, we proved that there are no resonances in the region
\begin{equation}
C\leqslant\Re\lambda,\;\;\; 0\leqslant-\Im\lambda\leqslant\kappa\zeta_1'(\Re\lambda)^{1/3}-C
\end{equation}
where $\zeta_1'$ is the negative of the first zero of the derivative $\Ai'$ of the Airy function and $C$ is some constant. The main result in this paper is to obtain alternating cubic bands with and without resonances. More precisely, let $0<\zeta_1'<\zeta_2'<\cdots$ be the negatives of the zeros of $\Ai'$; then we have the following theorem.
\begin{thm}
\label{thm:main1}
Suppose we have the following pinched curvature condition
\begin{equation}
\frac{\max_{S\partial\mathcal{O}}Q}{\min_{S\partial\mathcal{O}}Q}
<\left(\frac{\zeta'_{j_0+1}}{\zeta'_{j_0}}\right)^{3/2}
\end{equation}
for some $j_0\geqslant1$. Then there exists a constant $C>0$ such that for all $0\leqslant j\leqslant j_0$, there are no resonances in the regions
\begin{equation}
C\leqslant\Re\lambda,\;\;\; K\zeta_j'(\Re\lambda)^{1/3}+C\leqslant-\Im\lambda
\leqslant\kappa\zeta_{j+1}'(\Re\lambda)^{1/3}-C.
\end{equation}
\end{thm}
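For orientation, we record the standard numerical values of the first zeros of $\Ai'$ (this remark is purely illustrative and is not used in the proofs):
\begin{equation*}
\zeta_1'\approx1.0188,\;\;\;\zeta_2'\approx3.2482,\;\;\;\zeta_3'\approx4.8201,
\end{equation*}
so that the pinched curvature condition requires
\begin{equation*}
\frac{\max_{S\partial\mathcal{O}}Q}{\min_{S\partial\mathcal{O}}Q}
<\left(\frac{\zeta_2'}{\zeta_1'}\right)^{3/2}\approx5.69\;\text{ when }j_0=1,\;\;\;
\frac{\max_{S\partial\mathcal{O}}Q}{\min_{S\partial\mathcal{O}}Q}
<\left(\frac{\zeta_3'}{\zeta_2'}\right)^{3/2}\approx1.81\;\text{ when }j_0=2,
\end{equation*}
and becomes more restrictive as $j_0$ increases.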
The Dirichlet case was already established in Sj\"{o}strand-Zworski \cite{SZ6} where a Weyl law for resonances in each band with a rough error term was also given. Their argument can also be directly adapted to our situation to give the following theorem.
\begin{thm}
\label{thm:weyl}
Under the assumption of Theorem \ref{thm:main1}, for some $C>0$ and all $1\leqslant j\leqslant j_0$,
\begin{equation}
\label{weyl}
\begin{split}
\sum\{M_\mathcal{O}(\lambda):|\lambda|\leqslant r,&\;
\kappa\zeta_j'(\Re\lambda)^{1/3}-C<-\Im\lambda
<K\zeta_j'(\Re\lambda)^{1/3}+C\}\\
&=(1+o(1))(2\pi)^{1-n}\vol(B^{n-1}(0,1))\vol(\partial\mathcal{O})r^{n-1},
\end{split}
\end{equation}
where $B^{n-1}(0,1)$ is the unit ball in $\mathbb{R}^{n-1}$.
\end{thm}
We should point out that for spherical obstacles (for which $\kappa=K$ and the pinched curvature assumption in Theorem \ref{thm:main1} holds trivially for all $j_0$), the resonances can be described using Hankel functions in the case of Dirichlet, Neumann and constant Robin boundary conditions. Each band,
\begin{equation*}
\kappa\zeta_j'(\Re\lambda)^{1/3}-C<-\Im\lambda
<K\zeta_j'(\Re\lambda)^{1/3}+C,
\end{equation*}
between the resonance-free bands actually reduces to a curve which is asymptotically cubic. Moreover, there is a better error in Weyl's law, $O(r^{n-2})$ instead of $o(r^{n-1})$, in this situation. See Stefanov \cite{St} for a detailed discussion of scattering resonances for the sphere. The results of Sj\"{o}strand-Zworski \cite{SZ6} and of this paper show that more bands are separated from each other when the obstacle is closer to a ball (in the sense that the curvatures of the boundary are closer to a constant).
The problem of the distribution of resonances for convex obstacles has been extensively studied in the literature. For the spherical case, it dates back to Watson's work on the scattering of electromagnetic wave by the earth \cite{W}. Other notable works include Lax-Phillips \cite{LP1}-\cite{LP3}, Babich-Grigoreva \cite{BG}, Filippov-Zayaev \cite{FZ}, Morawetz-Ralston-Strauss \cite{MRS}, Melrose \cite{Me1}, Lebeau \cite{Le}, Bardos-Lebeau-Rauch \cite{BLR}, Popov \cite{Po}, Harg\'{e}-Lebeau \cite{HL}, Sj\"{o}strand \cite{S2}, Sj\"{o}strand-Zworski \cite{SZ1}-\cite{SZ6} and Stefanov \cite{St}.
See \cite{Me2}, \cite{Z1} for surveys on this topic and other related settings.
\subsection*{Outline of the proof}
Our strategy is based on a modification of the approach in \cite{SZ6} where the phenomenon that resonances appear in bands was first proved. Our paper is organized as follows.
In Section \ref{sec:prelim}, we reduce the problem to the study of an operator constructed from combining a semiclassical differential operator $P-z$ with a boundary operator $\gamma$. The reason we introduce this combined operator is to avoid the domain issues for different Neumann/Robin boundary conditions and treat them in the same setting. Moreover, in the semiclassical setting, the Robin boundary operator is a perturbation of the Neumann boundary operator. We also follow the long tradition of the complex scaling method in mathematical physics, first introduced in \cite{AC}, \cite{BC}, to deform the self-adjoint operator with continuous spectrum to a non-self-adjoint operator whose discrete spectrum near the real axis coincides with the resonances. In our setting, the complex scaling method has been introduced in \cite{SZ1}, and then in \cite{HL}, \cite{SZ4} and \cite{SZ5}.
In Section \ref{sec:model}-\ref{sec:global}, we set up the Grushin problem for the combined operator and therefore identify the resonances with poles of a meromorphic family of operators on the boundary. The survey \cite{SZ7} gives a good reference for the application of Grushin problem in the study of spectral theory; also see the appendix of \cite{HS}.
In Section \ref{sec:model}, we study the model case near the boundary, where we have an ordinary differential operator with a Neumann boundary operator in the normal direction. This part is the main novelty of this paper. The complication is due to the presence of the boundary operator, which makes the total operator non-normal. To deal with this, we need a more careful study of the asymptotics of Airy functions in different directions in the complex plane.
In Section \ref{sec:second}, we continue working near the boundary and study the microlocal structure of the Grushin problem. As in \cite{SZ6}, the suitable symbol class for the operators is given by a second microlocalization with respect to the glancing hypersurface. We shall first review the results in \cite[Section 4]{SZ6} for such symbol classes, then see how the operators we construct fit into these classes.
In Section \ref{sec:global}, we combine the work in Section \ref{sec:model} and Section \ref{sec:second} with the results in \cite[Section 7]{SZ6} for the study of the Laplacian operator away from the boundary to set up the global Grushin problem. The construction of the inverse for this Grushin problem is essentially the same as in \cite[Section 8]{SZ6}, with modifications needed for our operator. This produces an effective Hamiltonian $E_{-+}$ which is a matrix-valued operator on the boundary.
Finally, in Section \ref{sec:resfree}, we prove the main theorems using the properties of the operator $E_{-+}$.
\subsection*{Acknowledgement}
I would like to thank Maciej Zworski for the encouragement and advice during the preparation of this paper.
Partial support by the National Science Foundation grant DMS-1201417 is also gratefully acknowledged.
\section{Preliminaries and reduction of the problem}
\label{sec:prelim}
We begin by reviewing the definition of the resonances and its multiplicities. Next we apply the standard complex scaling method to identify the resonances with eigenvalues of a non-self-adjoint operator. Then we further reduce the problem to the study of an operator combining this operator with the corresponding boundary operator.
\subsection{Resonances and their multiplicities}
Let us consider different boundary conditions for the Laplacian operator $-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}$ in the exterior of an obstacle $\mathcal{O}$ in $\mathbb{R}^n$:
\begin{equation*}
u|_{\partial\mathcal{O}}=0 \;\;\; \text{ (Dirichlet) }
\end{equation*}
or
\begin{equation}
\label{boundary:NR}
\partial_{\nu}u+\eta u|_{\partial\mathcal{O}}=0 \;\;\;( \text{Neumann when } \eta=0 \text{ or Robin})
\end{equation}
where $\eta\in C^\infty(\partial\mathcal{O};\mathbb{R})$. For the Dirichlet problem, $-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}$ has the natural domain $H_0^1(\mathbb{R}^n\setminus\mathcal{O})\cap H^2(\mathbb{R}^n\setminus\mathcal{O})$. For the Neumann or Robin problem, $-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}$ has the following domain
\begin{equation}
\label{domain:NR}
\mathcal{D}_\eta(\mathbb{R}^n\setminus\mathcal{O}):=
\{u\in H^2(\mathbb{R}^n\setminus\mathcal{O}):
\partial_{\nu}u+\eta u=0\}.
\end{equation}
In either case, the resonances are defined as the poles of the meromorphic extension of the resolvent
\begin{equation*}
R(\zeta)=(-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}-\zeta^2)^{-1}: L^2_{\comp}(\mathbb{R}^n\setminus\mathcal{O})\to L^2_{\loc}(\mathbb{R}^n\setminus\mathcal{O})
\end{equation*}
from the upper half plane $\Im\zeta>0$ to the whole complex plane if $n$ is odd, or to the logarithmic covering of $\mathbb{C}\setminus\{0\}$ if $n$ is even. The multiplicity of a resonance $\zeta$ is given by
\begin{equation*}
m_{\mathcal{O}}(\zeta)=\rank\oint_{|z-\zeta|=\epsilon}R(z)2zdz
=\tr\frac{1}{2\pi i}\oint_{|z-\zeta|=\epsilon}R(z)2zdz,
\end{equation*}
where $0<\epsilon\ll1$ so that there are no other resonances on the disk $|z-\zeta|\leqslant\epsilon$.
\subsection{Complex Scaling}
The complex scaling method has a long tradition in mathematical physics. It was first introduced by Aguilar-Combes \cite{AC} and Balslev-Combes \cite{BC} in studying the continuous spectrum of Schr\"{o}dinger operators and later proved to be a strong tool in the study of resonances. Sj\"{o}strand and Zworski built up the theory for the case of scattering by a convex obstacle in a series of papers \cite{SZ1}, \cite{SZ4} and \cite{SZ5}. We shall adopt the same approach and notations as in \cite{SZ6} and our previous paper \cite{J}.
Let $\mathcal{O}$ be a convex obstacle in $\mathbb{R}^n$ with smooth boundary. We introduce the following normal geodesic coordinates on the exterior domain $\mathbb{R}^n\setminus\mathcal{O}$:
$$x=(x',x_n)\mapsto x'+x_n\nu(x'),\;\;\; x'\in\partial\mathcal{O},\;\;\; x_n=d(x,\partial\mathcal{O}),$$
where $\nu(x')$ is the exterior unit normal vector to $\mathcal{O}$ at $x'$:
$$\nu(x')\in N_{x'}\partial\mathcal{O},\;\;\; \|\nu(x')\|=1.$$
Then
\begin{equation*}
-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}
=D_{x_n}^2+R(x',D_{x'})-2x_nQ(x_n,x',D_{x'})+G(x_n,x')D_{x_n},
\end{equation*}
where $R(x',D_{x'})$, $Q(x_n,x',D_{x'})$ are second order operators on $\partial\mathcal{O}$:
\begin{equation*}
R(x',D_{x'})=-\Delta_{\partial\mathcal{O}}=(\det(g^{ij}))^{1/2}
\sum_{i,j=1}^{n-1}D_{y_i}(\det(g_{ij}))^{1/2}g^{ij}D_{y_j}
\end{equation*}
is the Laplacian with respect to the induced metric $g=(g_{ij})$ on $\partial\mathcal{O}$ and $Q(x',D_{x'})=Q(0,x',D_{x'})$ is of the form
\begin{equation*}
\det(g^{ij})^{1/2}\sum_{i,j=1}^{n-1}D_{y_j'}
(\det(g_{ij}))^{1/2}a_{ij}D_{y_i'}
\end{equation*}
in any local coordinates such that the principal symbol of $Q$ is the second fundamental form of $\partial\mathcal{O}$ lifted by the duality to $T^\ast\partial\mathcal{O}$:
\begin{equation*}
Q(x',\xi')=\sum_{i,j=1}^{n-1}a_{ij}(x')\xi_i\xi_j.
\end{equation*}
Thus the principal curvatures of $\partial\mathcal{O}$ are the eigenvalues of the quadratic form $Q(x',\xi')$ with respect to the quadratic form $R(x',\xi')$.
Now we consider the complex contour given by
\begin{equation*}
\mathbb{R}^n\setminus\mathcal{O}\ni x\mapsto
z=x+i\theta(x)f'(x)\in
\Gamma\subset\mathbb{R}^n\setminus\mathcal{O}+i\mathbb{R}^n,
\end{equation*}
where $f(x)=\frac{1}{2}d(x,\partial\mathcal{O})^2$. Near the boundary, we scale by the angle $\pi/3$, which was first introduced in \cite{HL}:
\begin{equation*}
\frac{1+i\theta(x)}{|1+i\theta(x)|}=e^{i\pi/3},\;\;\; d(x,\partial\mathcal{O})<C^{-1}
\end{equation*}
and then connect to the scaling with a smaller angle $\theta(x)=\theta_0$ near infinity. Whenever there is no confusion, we shall identify $\Gamma$ with $\mathbb{R}^n\setminus\mathcal{O}$ as above and use the normal geodesic coordinates $(x',x_n)$ as coordinates on $\Gamma$. We define $-\Delta_\Gamma$ as the restriction of the holomorphic Laplacian on $\mathbb{C}^n$
\begin{equation*}
-\Delta_z=\sum_{j=1}^nD_{z_j}^2
\end{equation*}
to $\Gamma$. Therefore we have the following expression near the boundary
\begin{equation*}
-\Delta_\Gamma=e^{-2\pi i/3}((D_{x_n})^2+2x_nQ(x_n,x',D_{x'}))+R(x',D_{x'})+F(x_n,x')D_{x_n}.
\end{equation*}
This shows that $\pi/3$ is the correct scaling angle and we get an Airy-type differential operator in the normal direction.
We can also associate the scaled operator with different boundary conditions on $\partial\Gamma=\partial\mathcal{O}$:
\begin{equation*}
u|_{\partial\mathcal{O}}=0 \;\;\; \text{ (Dirichlet) }
\end{equation*}
or
\begin{equation*}
\partial_{\nu}u+e^{\pi i/3}\eta u|_{\partial\mathcal{O}}=0 \;\;\;( \text{Neumann when } \eta=0 \text{ or Robin}).
\end{equation*}
Now for the Dirichlet problem, the scaled operator
$-\Delta_\Gamma$ has the natural domain
$H_0^1(\Gamma)\cap H^2(\Gamma)$, and for the Neumann or Robin boundary condition $-\Delta_\Gamma$ has the domain
\begin{equation}
\label{scaled:NR}
\mathcal{D}_\eta(\Gamma):=\{u\in H^2(\Gamma):\partial_\nu u+e^{\pi i/3}\eta u|_{\partial\mathcal{O}}=0\}.
\end{equation}
It was shown in \cite{SZ4} that
\begin{prop}
The spectrum of $-\Delta_\Gamma$ is discrete in $-2\theta_0<\arg z<0$ and the resonances of $-\Delta_{\mathbb{R}^n\setminus\mathcal{O}}$ in the sector $-\theta_0<\arg\zeta<0$ are the same as the square roots of the eigenvalues of $-\Delta_\Gamma$ with the corresponding boundary condition in $-2\theta_0<\arg z<0$. Moreover, they have the same multiplicities:
\begin{equation*}
m_{\mathcal{O}}(\zeta)=m(z):=\tr\frac{1}{2\pi i}\oint_{|\tilde{z}-z|=\epsilon}
(-\Delta_\Gamma-\tilde{z})^{-1}d\tilde{z}
\end{equation*}
where $z=\zeta^2$, $0<\epsilon\ll1$ so that there are no other eigenvalues of $-\Delta_\Gamma$ in $|\tilde{z}-z|\leqslant\epsilon$.
\end{prop}
\subsection{Further reductions}
We work in the semiclassical setting and introduce $P(h):=-h^2\Delta_\Gamma$. Near the boundary, we have the expression
\begin{equation}
P(h)=e^{-2\pi i/3}((hD_{x_n})^2+2x_nQ(x_n,x',hD_{x'};h))
+R(x',hD_{x'};h)+hF(x_n,x')hD_{x_n}.
\end{equation}
Also for $w\in W\Subset(0,\infty)$ and $|\Im z|\leqslant C$, $|\Re z|\ll\delta^{-1}$, we let $P-z=h^{-2/3}(P(h)-w)-z$, so near the boundary,
\begin{equation}
\label{op:scale}
\begin{split}
P-z=&\;e^{-2\pi i/3}(D_t^2+2tQ(h^{2/3}t,x',hD_{x'};h))\\
&+h^{-2/3}(R(x',hD_{x'};h)-w)+F(h^{2/3}t,x')h^{2/3}D_t-z,
\end{split}
\end{equation}
where $t=h^{-2/3}x_n$.
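To orient the reader, we preview informally the model computation of Section \ref{sec:model} (the formulas here are heuristic and are not used as stated). Freezing $(x',\xi')$ on the glancing set $R(x',\xi')=w$ and dropping the lower order terms in \eqref{op:scale} leaves the Airy-type operator $e^{-2\pi i/3}(D_t^2+2tQ(x',\xi'))$ on the half-line $t\geqslant0$. With the Neumann condition $\partial_tu(0)=0$, the decaying solutions are multiples of $\Ai((2Q)^{1/3}t-e^{2\pi i/3}z(2Q)^{-2/3})$, which leads to the eigenvalues
\begin{equation*}
z_j=e^{-2\pi i/3}(2Q(x',\xi'))^{2/3}\zeta_j',\;\;\;
-\Im z_j=\cos(\pi/6)(2Q(x',\xi'))^{2/3}\zeta_j',\;\;\; j=1,2,\dots
\end{equation*}
After undoing the rescaling $\lambda^2=h^{-2}(w+h^{2/3}z)$, this is what produces the constants $\kappa$ and $K$ appearing in Theorem \ref{thm:main1}.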
There is a certain difficulty in working with Robin boundary conditions with the domain \eqref{domain:NR}, or more precisely with the scaled boundary condition \eqref{scaled:NR}. In the normal geodesic coordinates introduced above, the domain changes as the function $\eta$ changes, and this causes difficulties in the formulation of the model problem later.
To avoid this issue, notice that in the $t$-coordinates, the condition \eqref{scaled:NR} can be rewritten as
\begin{equation*}
\partial_tu+h^{2/3}ku|_{t=0}=0,
\end{equation*}
where $k=e^{\pi i/3}\eta$; indeed, since $t=h^{-2/3}x_n$, we have $\partial_{x_n}=h^{-2/3}\partial_t$, and the overall factor $h^{-2/3}$ can be dropped. Roughly speaking, the principal term corresponds to the Neumann boundary condition. This motivates us to consider the Robin boundary problem with general $\eta\in C^\infty(\partial\mathcal{O})$ as a perturbation of the Neumann boundary problem. To achieve this, we shall combine our differential operator $P-z$ with the boundary operator and consider
\begin{equation}
\label{op:comb}
\left(
\begin{array}{c}
P-z \\
\gamma \\
\end{array}
\right): H^2(\mathbb{R}^n\setminus\mathcal{O})\to L^2(\mathbb{R}^n\setminus\mathcal{O})\times H^l(\partial\mathcal{O})
\end{equation}
where for the Dirichlet problem, $l=\frac{3}{2}$,
\begin{equation*}
\gamma=\gamma_0:H^2(\mathbb{R}^n\setminus\mathcal{O})\to H^{3/2}(\partial\mathcal{O}),\;\;\; u\mapsto u|_{\partial\mathcal{O}};
\end{equation*}
and for the Neumann or Robin problem ($k=e^{\pi i/3}\eta$) that we shall focus on, $l=\frac{1}{2}$,
\begin{equation}
\label{op:NR}
\gamma=h^{2/3}(\gamma_1+k\gamma_0):
H^2(\mathbb{R}^n\setminus\mathcal{O})\to H^{1/2}(\partial\mathcal{O}),\;\;\; u\mapsto h^{2/3}(\partial_\nu u+ku)|_{\partial\mathcal{O}}.
\end{equation}
In the coordinates $(t,x')$, we have $\gamma(u)=u(0,\cdot)$ (Dirichlet) or
\begin{equation*}
\gamma(u)=(\partial_tu+h^{2/3}ku)(0,\cdot)\;\; \text{ (Neumann or Robin).}
\end{equation*}
Therefore from now on we shall think of $P-z$ as the first component of the combined operator \eqref{op:comb}, i.e. the differential operator from $H^2(\mathbb{R}^n\setminus\mathcal{O})$ to $L^2(\mathbb{R}^n\setminus\mathcal{O})$ instead of an operator with a smaller domain \eqref{scaled:NR}. Moreover, to avoid confusion, we shall write $R_P(z)$ to be the resolvent of $P$ with domain \eqref{scaled:NR}, or in other words, $R_P(z)$ is a right inverse of $P-z:H^2\to L^2$ satisfying $\gamma R_P(z)=0$. We wish to use our new operator \eqref{op:comb} to give an equivalent description of resonances instead of
\begin{equation}
\label{res:scale}
m(h^{-2}(w+h^{2/3}z))=\tr\frac{1}{2\pi i}\oint_{|\tilde{z}-z|=\epsilon}
R_P(\tilde{z})d\tilde{z},\;\;\; 0<\epsilon\ll1.
\end{equation}
\begin{prop}
The eigenvalues of $P$ are exactly the poles of
\begin{equation}
\label{op:invcomb}
\left(
\begin{array}{c}
P-z \\
\gamma \\
\end{array}
\right)^{-1}: L^2(\mathbb{R}^n\setminus\mathcal{O})\times H^l(\partial\mathcal{O})\to H^2(\mathbb{R}^n\setminus\mathcal{O})
\end{equation}
as a meromorphic operator-valued function in $z$. Moreover, they have the same multiplicity:
\begin{equation}
\label{res:comb}
m(h^{-2}(w+h^{2/3}z))=-\tr\frac{1}{2\pi i}\oint_{|\tilde{z}-z|=\epsilon}
\left(
\begin{array}{c}
P-\tilde{z} \\
\gamma \\
\end{array}
\right)^{-1}
\frac{d}{d\tilde{z}}\left(
\begin{array}{c}
P-\tilde{z} \\
\gamma \\
\end{array}
\right)d\tilde{z},
\end{equation}
where $0<\epsilon\ll1$ is chosen in a way that there are no other poles for the operator \eqref{op:invcomb} in $|\tilde{z}-z|<\epsilon$.
\end{prop}
\begin{proof}
Let $K$ be a right inverse of $\gamma$:
\begin{equation}
\label{op:invtrace}
K:H^l(\partial\mathcal{O})\to H^2(\mathbb{R}^n\setminus\mathcal{O}),\;\;\; \gamma Kg=g,\;\;\; \forall g\in H^l(\partial\mathcal{O}).
\end{equation}
One possible choice is the so-called Poisson operator, but any choice will be good for us. Then we have
\begin{equation}
\label{rel:comb}
\left(
\begin{array}{c}
P-z \\
\gamma \\
\end{array}
\right)^{-1}=(R_P(z),K-R_P(z)(P-z)K).
\end{equation}
In fact, for any $(v,g)\in L^2(\mathbb{R}^n\setminus\mathcal{O})\times H^l(\partial\mathcal{O})$, let
$$u=R_P(z)v+(K-R_P(z)(P-z)K)g,$$
by the construction of $K$, \eqref{op:invtrace}, and the fact that $\gamma R_P(z)=0$,
\begin{equation*}
(P-z)u=v+(P-z)Kg-(P-z)Kg=v,\;\;\; \gamma u=\gamma Kg=g.
\end{equation*}
Therefore \eqref{rel:comb} gives
\begin{equation*}
\left(
\begin{array}{c}
P-z \\
\gamma \\
\end{array}
\right)^{-1}
\frac{d}{dz}\left(
\begin{array}{c}
P-z \\
\gamma \\
\end{array}
\right)=(R_P(z),K-R_P(z)(P-z)K)
\left(
\begin{array}{c}
-1 \\
0 \\
\end{array}
\right)
=-R_P(z).
\end{equation*}
Now \eqref{res:comb} and the proposition follow directly from \eqref{res:scale}.
\end{proof}
In this paper, we shall work with the Neumann/Robin boundary condition \eqref{boundary:NR}. The techniques here can certainly be applied to the Dirichlet boundary condition. However, in the Dirichlet case, since the domain is already simple enough, we do not need this reduction and a direct approach without the boundary operator is given in \cite{SZ6}.
\subsection{A simple model}
We conclude this section by presenting a simple model motivating our approach to boundary value problems using a Grushin reduction for an operator combining a differential operator and a boundary operator.
We consider the differential operator $P=-\frac{d^2}{dx^2}$ with Neumann boundary condition on the interval $[0,\pi]$. The spectrum of the operator is discrete: $\sigma(P)=\{\lambda_k=k^2: k=0,1,2,\ldots\}$
and each eigenspace is one-dimensional:
\begin{equation*}
E_k=\{f\in H^2[0,\pi]\,|\,f'(0)=f'(\pi)=0, -f''=\lambda_kf\}=\mathbb{C}\cos kx.
\end{equation*}
We set up a Grushin problem to capture the first $m$ eigenvalues using a finite matrix. For simplicity, let us consider the case $m=1$
so that the first eigenvalue is $\lambda_0=0$ with eigenfunction $e_0=\frac{1}{\pi}$. Put
\begin{equation*}
\left(
\begin{array}{cc}
P-z & R_- \\
R_+ & 0 \\
\end{array}
\right):\mathcal{D}\times\mathbb{C}\to L^2[0,\pi]\times\mathbb{C},
\end{equation*}
where
\begin{equation*}
\mathcal{D}=\{u\in H^2[0,\pi]:u'(0)=u'(\pi)=0\}
\end{equation*}
and
\begin{equation*}
Pu=-u'',\;\;\;R_+u=\langle u,e_0\rangle=\frac{1}{\pi}\int_0^\pi udx,
\;\;\;R_-u_-=u_-e_0=\frac{u_-}{\pi}.
\end{equation*}
Then
\begin{equation*}
\left(
\begin{array}{cc}
P-z & R_- \\
R_+ & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)=
\left(
\begin{array}{c}
v \\
v_+ \\
\end{array}
\right)
\end{equation*}
is equivalent to
\begin{equation*}
-u''-zu+\frac{u_-}{\pi}=v,\;\;\;\frac{1}{\pi}\int_0^\pi udx=v_+.
\end{equation*}
We can integrate the first equation on $[0,\pi]$ to get
\begin{equation*}
-(u'(\pi)-u'(0))-z\int_0^\pi udx+u_-=\int_0^\pi vdx
\end{equation*}
and thus
\begin{equation*}
u_-=(u'(\pi)-u'(0))+z\int_0^\pi udx+\int_0^\pi vdx=\pi zv_++\int_0^\pi vdx.
\end{equation*}
It is then not difficult to see that for $z<1$, we can use this $u_-$ to solve for $u$ uniquely. Therefore the Grushin problem is well-posed with inverse
\begin{equation*}
\left(
\begin{array}{cc}
E & E_+ \\
E_- & E_{-+} \\
\end{array}
\right):L^2[0,\pi]\times\mathbb{C}\to\mathcal{D}\times\mathbb{C},
\end{equation*}
which has an explicit expression; we have seen that $E_{-+}=\pi z$, which is invertible if and only if $z\neq\lambda_0=0$.
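For the reader's convenience, here is a sketch of the explicit solution: writing $v=b_0+\sum_{k\geqslant1}b_k\cos kx$ for the cosine expansion of $v$, the system decouples and for $z<1$,
\begin{equation*}
u(x)=v_++\sum_{k\geqslant1}\frac{b_k}{k^2-z}\cos kx,\;\;\;
u_-=\pi(b_0+zv_+),
\end{equation*}
which recovers in particular $E_{-+}v_+=\pi zv_+$ upon setting $v=0$.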
The situation is somewhat similar to our case of obstacle scattering if we regard the left end point $x=0$ as the boundary, and the right end point $x=\pi$ as infinity. Recall that in the case of obstacle scattering, since the outgoing condition becomes an $L^2$-condition after complex scaling, we get a ``boundary condition'' at infinity. Now, we consider another Grushin problem for $-\frac{d^2}{dx^2}$, or rather the following operator
\begin{equation*}
\left(
\begin{array}{c}
-\frac{d^2}{dx^2}-z \\
\gamma_1 \\
\end{array}
\right):\mathcal{D}'=\{u\in H^2[0,\pi]\,|\,u'(\pi)=0\}\to L^2[0,\pi]\times\mathbb{C}
\end{equation*}
where $\gamma_1u=u'(0)$. We use the same $R_+$ and $R_-$ as above to construct the Grushin problem
\begin{equation*}
\left(
\begin{array}{cc}
-\frac{d^2}{dx^2}-z & R_- \\
\gamma_1 & 0\\
R_+ & 0 \\
\end{array}
\right):\mathcal{D}'\times\mathbb{C}\to L^2[0,\pi]\times\mathbb{C}\times\mathbb{C}.
\end{equation*}
Now
\begin{equation*}
\left(
\begin{array}{cc}
-\frac{d^2}{dx^2}-z & R_- \\
\gamma_1 & 0\\
R_+ & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)=
\left(
\begin{array}{c}
v \\
v_0 \\
v_+ \\
\end{array}
\right)
\end{equation*}
is equivalent to
\begin{equation*}
-u''-zu+\frac{u_-}{\pi}=v,\;\;\;u'(0)=v_0,\;\;\;\frac{1}{\pi}\int_0^\pi udx=v_+.
\end{equation*}
Again, integrating the first equation gives
\begin{equation*}
-(u'(\pi)-u'(0))-z\int_0^\pi udx+u_-=\int_0^\pi vdx
\end{equation*}
and thus
\begin{equation*}
u_-=(u'(\pi)-u'(0))+z\int_0^\pi udx+\int_0^\pi vdx=-v_0+\pi zv_++\int_0^\pi vdx.
\end{equation*}
Again, using this $u_-$, it is not difficult to solve for $u$ uniquely for $z<1$. Hence this Grushin problem is also well-posed with inverse
\begin{equation*}
\left(
\begin{array}{ccc}
E & K & E_+ \\
E_- & K_- & E_{-+} \\
\end{array}
\right):L^2[0,\pi]\times\mathbb{C}\times\mathbb{C}
\to\mathcal{D}'\times\mathbb{C},
\end{equation*}
which again has an explicit expression. We find that $E_{-+}=\pi z$ coincides with the $E_{-+}$ found in the previous Grushin problem.
Of course in this trivial example we can compute everything explicitly without the Grushin reduction. The importance of the Grushin problem is that we can perturb the operator and the invertibility of the perturbed operator is captured by the finite matrix $E_{-+}$ (in our case a $1\times1$ matrix, i.e.\ a scalar). This reduces the infinite-dimensional problem to a finite-dimensional one. The second Grushin problem also allows us to perturb the boundary condition at $0$, which turns out to be crucial in our setting.
\section{Model Grushin problems}
\label{sec:model}
In this section, we shall study the model problem for ordinary differential operators by setting up a suitable Grushin problem.
Recall that we have the combined operator \eqref{op:comb}
\begin{equation*}
\left(
\begin{array}{c}
P-z \\
\gamma \\
\end{array}
\right): H^2(\mathbb{R}^n\setminus\mathcal{O})\to L^2(\mathbb{R}^n\setminus\mathcal{O})\times H^{1/2}(\partial\mathcal{O}),
\end{equation*}
where $P-z$ is given by
\begin{equation*}
P-z=h^{-2/3}(-h^2\Delta_\Gamma-w)-z:
H^2(\mathbb{R}^n\setminus\mathcal{O})\to L^2(\mathbb{R}^n\setminus\mathcal{O})
\end{equation*}
and $\gamma$ is given by
\begin{equation*}
\gamma=h^{2/3}(\gamma_1+k\gamma_0):
H^2(\mathbb{R}^n\setminus\mathcal{O})\to H^{1/2}(\partial\mathcal{O}),\;\;\; u\mapsto h^{2/3}(\partial_\nu u+ku)|_{\partial\mathcal{O}}.
\end{equation*}
In local coordinates $(t=h^{-2/3}x_n,x')$ near the boundary introduced in Section \ref{sec:prelim}, we have
\begin{equation*}
\begin{split}
P-z=&\;e^{-2\pi i/3}(D_t^2+2tQ(h^{2/3}t,x',hD_{x'};h))\\
&+h^{-2/3}(R(x',hD_{x'};h)-w)+F(h^{2/3}t,x')h^{2/3}D_t-z,
\end{split}
\end{equation*}
and
\begin{equation*}
\gamma(u)=\gamma_1(u)+h^{2/3}k\gamma_0(u)
=(\partial_tu+h^{2/3}ku)(0,\cdot).
\end{equation*}
Therefore we start by ignoring the lower order terms and considering a model operator
\begin{equation}
\label{model:airy}
P_\lambda-z=e^{-2\pi i/3}(D_t^2+\mu t)+\lambda-z
\end{equation}
with $\gamma_1:u\mapsto u'(0)$, where $\lambda\in\mathbb{R}$, $C^{-1}\leqslant\mu\leqslant C$ and $|\Im z|<C_1$ with
$C_1$ large but fixed. Here we regard $\lambda$ as $h^{-2/3}(R(x',hD_{x'})-w)$, and $\mu$ as $Q(0,x',hD_{x'})$. The other terms will be small perturbations.
The model above is only necessary for handling the region near the glancing hypersurface $\Sigma_w=\{R(x',\xi')=w\}$. In the situation that $|\lambda|\gg1+|\Re z|$, i.e. away from the glancing region, since $Q$ is bounded by $R$, we can also treat the term $e^{-2\pi i/3}\mu t$ as a perturbation and instead consider the model operator
\begin{equation}
\label{model:easy}
P_\lambda^\#-z=e^{-2\pi i/3}D_t^2+\lambda-z
\end{equation}
with the same $\gamma_1$ and $\lambda\in\mathbb{R}, |\Im z|<C_1$. Here we note that \eqref{model:easy} is elliptic when $|\lambda-\Re z|\gg1$ and thus this model is easier to work with.
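A minimal symbol-level check of this ellipticity: the symbol of \eqref{model:easy} is $e^{-2\pi i/3}\tau^2+\lambda-z$, with real part $\lambda-\Re z-\frac{1}{2}\tau^2$ and imaginary part $-\frac{\sqrt{3}}{2}\tau^2-\Im z$. If $\tau^2\leqslant\frac{1}{2}|\lambda-\Re z|$, the real part alone has absolute value $\geqslant\frac{1}{2}|\lambda-\Re z|$, while if $\tau^2\geqslant\frac{1}{2}|\lambda-\Re z|\gg1+C_1$, the imaginary part has absolute value $\geqslant\frac{\sqrt{3}}{2}\tau^2-C_1\geqslant c\tau^2$. In both cases
\begin{equation*}
|e^{-2\pi i/3}\tau^2+\lambda-z|\geqslant c\,(\tau^2+\langle\lambda-\Re z\rangle).
\end{equation*}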
In this section, we shall first review some properties of the Airy function and estimates of Airy operators and boundary operators. Next we solve the Grushin problem for the model Airy operator in the case $\mu=1$. Then we treat the easier model operator \eqref{model:easy} in the same way. Finally we shall show how the additional parameter $\mu$ affects our construction and that all the estimates are uniform for $\mu$ in a compact subset of $(0,\infty)$.
\subsection{Asymptotics and zeroes of Airy functions}
Recall that the Airy function $\Ai$ can be defined by the formula
\begin{equation}
\label{fn:Airy}
\Ai(t)=\frac{1}{2\pi}\int_{\Im\sigma=\delta>0}e^{i(\sigma^3/3)+i\sigma t}d\sigma
\end{equation}
in the real domain and it is in fact an entire function for $t\in\mathbb{C}$ with different asymptotic behaviors in different directions. For example, in the positive real direction,
\begin{equation}
\label{asy:Airy-pos}
\begin{split}
\Ai(t)=&\;(2\sqrt{\pi})^{-1}t^{-1/4}
e^{-\frac{2}{3}t^{3/2}}(1+O(t^{-3/2})),\\
\Ai'(t)=&\;-(2\sqrt{\pi})^{-1}t^{1/4}
e^{-\frac{2}{3}t^{3/2}}(1+O(t^{-3/2})),\\
\end{split}
\end{equation}
as $t\to\infty$; while in the negative real direction,
\begin{equation}
\label{asy:Airy-neg}
\begin{split}
\Ai(-t)=&\;\pi^{-1/2}t^{-1/4}
\left(\sin(\frac{2}{3}t^{3/2}+\frac{\pi}{4})+O(t^{-3/2})\right),\\
\Ai'(-t)=&\;-\pi^{-1/2}t^{1/4}
\left(\cos(\frac{2}{3}t^{3/2}+\frac{\pi}{4})+O(t^{-3/2})\right),\\
\end{split}
\end{equation}
as $t\to\infty$. Moreover, \eqref{asy:Airy-pos} holds away from the negative real axis:
\begin{equation}
\label{asy:Airy-com}
\begin{split}
\Ai(z)=&\;(2\sqrt{\pi})^{-1}
e^{-\zeta}z^{-1/4}(1+O(|\zeta|^{-1})),\\
\Ai'(z)=&\;-(2\sqrt{\pi})^{-1}
e^{-\zeta}z^{1/4}(1+O(|\zeta|^{-1})),\\
\end{split}
\end{equation}
uniformly for $0\leqslant|\arg z|\leqslant\pi-\delta$, where $\delta>0$ is fixed. Here $\zeta=\frac{2}{3}z^{3/2}$ and we choose the branch such that if $z$ is real and positive, then so is $\zeta$.
Let $0<\zeta_1<\zeta_2<\cdots$ and $0<\zeta_1'<\zeta_2'<\cdots$ be the negatives of the zeroes of $\Ai$ and $\Ai'$, respectively. All of these zeroes are simple and we have $\zeta_j'<\zeta_j<\zeta_{j+1}'$. The gaps between consecutive zeroes shrink: $\zeta_{j+1}-\zeta_j\searrow0$ and $\zeta_{j+1}'-\zeta_j'\searrow0$ as $j\to\infty$. This can be proved by Sturm's comparison theorem.
The Airy function $\Ai$ solves the simple differential equation
$(D_t^2+t)\Ai(t)=0, t\in\mathbb{R}$.
Therefore all the eigenfunctions and eigenvalues for the Dirichlet and Neumann realization of the Airy operator $D_t^2+t$ on $[0,\infty)$ are given by translations of the Airy function:
\begin{equation*}
(D_t^2+t)\Ai(t-\zeta_j)=\zeta_j\Ai(t-\zeta_j),\;\;\;
(D_t^2+t)\Ai(t-\zeta_j')=\zeta_j'\Ai(t-\zeta_j').
\end{equation*}
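Indeed, since $\Ai$ solves the Airy equation, for any $\zeta\in\mathbb{C}$,
\begin{equation*}
(D_t^2+t)\Ai(t-\zeta)=\big((D_s^2+s)\Ai\big)\big|_{s=t-\zeta}+\zeta\Ai(t-\zeta)=\zeta\Ai(t-\zeta),
\end{equation*}
and the Dirichlet (resp.\ Neumann) condition at $t=0$ holds exactly when $\Ai(-\zeta)=0$ (resp.\ $\Ai'(-\zeta)=0$), i.e.\ $\zeta=\zeta_j$ (resp.\ $\zeta=\zeta_j'$).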
Since we are only working with the Neumann boundary condition, let $e_j(t)=c_j\Ai(t-\zeta_j')$ denote the normalized eigenfunctions of the Neumann realization of $D_t^2+t$ on $(0,\infty)$. Then $\{e_j\}_{j=1}^\infty$ forms an orthonormal basis for $L^2(0,\infty)$.
\subsection{Some basic estimates}
In this part, we give some elementary estimates on Airy operators and the boundary operators; some of these estimates can be found in \cite{J}.
Consider the Airy operator
$D_t^2+t:B\subset L^2\to L^2$
and the boundary operators
\begin{equation*}
\gamma_0:B\to\mathbb{C},\;\; u\mapsto u(0),\;\;\;
\gamma_1:B\to\mathbb{C},\;\; u\mapsto u'(0).
\end{equation*}
Here $L^2=L^2(0,\infty)$ and $B=\{u\in L^2:D_t^2u, tu\in L^2\}$ is a Banach space equipped with the norm
\begin{equation}
\label{norm:b}
\|u\|_B=\|D_t^2u\|+\|tu\|+\|u\|,
\end{equation}
where we use $\|\cdot\|$ to represent the standard $L^2$-norm on $(0,\infty)$.
It is clear that $\|(D_t^2+t)u\|\leqslant C\|u\|_B$. More precisely, we have the following identity,
\begin{equation}
\label{id:airy}
\|(D_t^2+t)u\|^2=\|D_t^2u\|^2+\|tu\|^2
+2\|\sqrt{t}D_tu\|^2-|\gamma_0u|^2,
\end{equation}
for any $u\in C_0^\infty([0,\infty))$. The proof is based on a simple integration by parts. To see this, let $\langle,\rangle$ be the standard $L^2$ inner product on $(0,\infty)$. Then
\begin{equation*}
\begin{split}
\|(D_t^2+t)u\|^2=&\;\|D_t^2u\|^2+\|tu\|^2+2\Re\langle D_t^2u, tu\rangle\\
=&\;\|D_t^2u\|^2+\|tu\|^2+2\Re\langle D_tu,D_t(tu)\rangle\\
=&\;\|D_t^2u\|^2+\|tu\|^2+2\Re\langle D_tu,tD_tu\rangle
+2\Re\big(i\langle D_tu,u\rangle\big)\\
=&\;\|D_t^2u\|^2+\|tu\|^2
+2\|\sqrt{t}D_tu\|^2-|\gamma_0u|^2.
\end{split}
\end{equation*}
Here in the last step, we again use integration by parts:
\begin{equation}
\label{id:gamma0}
\langle D_tu,u\rangle=\langle u,D_tu\rangle+i|u(0)|^2
\end{equation}
to get
\begin{equation*}
\Re\big(i\langle D_tu,u\rangle\big)=-\Im\langle D_tu,u\rangle=-\frac{1}{2}|u(0)|^2.
\end{equation*}
Next we give some estimates of $\gamma_0$ and $\gamma_1$. For any $u\in C^\infty_0([0,\infty))$, by the Cauchy-Schwarz inequality and \eqref{id:gamma0}, we get
\begin{equation*}
|\gamma_0u|^2\leqslant2\|D_tu\|\|u\|,
\end{equation*}
and similarly
\begin{equation*}
|\gamma_1u|^2\leqslant2\|D_t^2u\|\|D_tu\|.
\end{equation*}
Another application of integration by parts and the Cauchy-Schwarz inequality also gives
\begin{equation*}
\begin{split}
\|D_tu\|^2=&\;\langle D_t^2u,u\rangle-u(0)u'(0)\\
\leqslant&\;|\gamma_1u||\gamma_0u|+\|D_t^2u\|\|u\|\\
\leqslant&\;2\|D_t^2u\|^{1/2}\|u\|^{1/2}\|D_tu\|+\|D_t^2u\|\|u\|
\end{split}
\end{equation*}
which leads to the standard interpolation estimates
\begin{equation}
\label{es:interpolation}
\|D_tu\|\leqslant(\sqrt{2}+1)\|D_t^2u\|^{1/2}\|u\|^{1/2}.
\end{equation}
As a consequence, for any $\epsilon>0$,
\begin{equation}
\label{es:boundary}
\begin{split}
|\gamma_0u|\leqslant&\;C\|D_t^2u\|^{1/4}\|u\|^{3/4}\leqslant \epsilon\|D_t^2u\|+C_\epsilon\|u\|\\
|\gamma_1u|\leqslant&\;C\|D_t^2u\|^{3/4}\|u\|^{1/4}\leqslant \epsilon\|D_t^2u\|+C_\epsilon\|u\|.
\end{split}
\end{equation}
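For instance, for the first line one combines $|\gamma_0u|^2\leqslant2\|D_tu\|\|u\|$ with \eqref{es:interpolation} to get
\begin{equation*}
|\gamma_0u|^2\leqslant2(\sqrt{2}+1)\|D_t^2u\|^{1/2}\|u\|^{3/2},
\end{equation*}
and then applies Young's inequality $ab\leqslant\epsilon a^4+C_\epsilon b^{4/3}$ with $a=\|D_t^2u\|^{1/4}$ and $b=C\|u\|^{3/4}$; the second line is analogous.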
Now from \eqref{norm:b} and \eqref{id:airy} we get
\begin{equation}
\label{eq:normairy}
\|u\|_B\leqslant C(\|u\|_{L^2}+\|(D_t^2+t)u\|_{L^2})
\end{equation}
and
\begin{equation}
\label{esB:boundary}
|\gamma_0u|\leqslant C\|u\|_B,\;\;\; |\gamma_1u|\leqslant C\|u\|_B.
\end{equation}
We finish this part by using these two estimates to show that every element of $B$ can be written in a unique way as a linear combination of the Neumann Airy eigenfunctions $(e_j)_{j=1}^\infty$ introduced in the previous section and a fixed element $f\in B$ with $\gamma_1f\neq0$. We remark that $(e_j)$ is not an orthonormal basis in $B$, so this expression might differ from the orthogonal expansion in $L^2$.
On one hand, if the sum $\sum_ju_je_j$ converges in $B$ to some $u$, then by \eqref{esB:boundary} we have $\gamma_1u=\sum_ju_j\gamma_1e_j=0$. On the other hand, if $u\in B$ satisfies $\gamma_1u=u'(0)=0$, then we can consider the $L^2$-orthogonal expansion
\begin{equation}
\label{ex:ortho}
u=\sum_j\langle u,e_j\rangle e_j.
\end{equation}
By \eqref{eq:normairy}, we have for any finite subset $J$ of $\mathbb{Z}_+$,
\begin{equation*}
\begin{split}
\|\sum_{j\in J}\langle u,e_j\rangle e_j\|_B\leqslant&\; C(\|\sum_{j\in J}\langle u, e_j\rangle e_j\|+\|(D_t^2+t)\sum_{j\in J}
\langle u, e_j\rangle e_j\|)\\
\leqslant&\; C(\|\sum_{j\in J}\langle u, e_j\rangle e_j\|+\|\sum_{j\in J}\zeta_j'\langle u, e_j\rangle e_j\|)\\
\leqslant&\; C(\|\sum_{j\in J}\langle u, e_j\rangle e_j\|+\|\sum_{j\in J}\langle u, (D_t^2+t)e_j\rangle e_j\|)\\
\leqslant&\; C(\|\sum_{j\in J}\langle u, e_j\rangle e_j\|+\|\sum_{j\in J}\langle (D_t^2+t)u, e_j\rangle e_j\|).\\
\end{split}
\end{equation*}
which shows that the sum \eqref{ex:ortho} converges to $u$ in $B$ since $(D_t^2+t)u\in L^2$.
Therefore if we fix some $f\in B$ such that $\gamma_1f=f'(0)\neq0$, then every $u\in B$ can be uniquely expressed in the form
\begin{equation}
\label{ex:B}
u=u_0f+\sum_{j=1}^\infty u_je_j
\end{equation}
where the sum converges in $B$. We simply choose $u_0$ first such that $\gamma_1(u-u_0f)=0$, then write the orthogonal expansion of $u-u_0f$ by $(e_j)$ in $L^2$, i.e. $u_j=\langle u-u_0f,e_j\rangle$.
\subsection{Model Airy problem}
The operator in \eqref{model:airy} (taking $\mu=1$) combining with the Neumann boundary operator
\begin{equation}
\label{model:airycomb}
\left(
\begin{array}{c}
P_\lambda-z \\
\gamma_1 \\
\end{array}
\right):B\to L^2\times\mathbb{C}
\end{equation}
may not be invertible for all $z$ with $|\Im z|<C_1$. In fact,
let us take $N=N(C_1)$ to be the largest integer such that
\begin{equation*}
|\Im e^{-2\pi i/3}\zeta'_N|\leqslant C_1,
\end{equation*}
so that $e^{-2\pi i/3}\zeta'_j+\lambda-z\neq0$ for $j\geqslant N+1$.
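Concretely, since $\Im(e^{-2\pi i/3}\zeta_j')=-\frac{\sqrt{3}}{2}\zeta_j'$, this reads
\begin{equation*}
N=\max\Big\{j\geqslant1:\zeta_j'\leqslant\tfrac{2C_1}{\sqrt{3}}\Big\},
\end{equation*}
and for $j\geqslant N+1$ we have $|\Im(e^{-2\pi i/3}\zeta_j'+\lambda-z)|\geqslant\frac{\sqrt{3}}{2}\zeta_j'-|\Im z|>0$, since $|\Im z|<C_1<\frac{\sqrt{3}}{2}\zeta_j'$.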
Then \eqref{model:airycomb} is not invertible precisely when $e^{-2\pi i/3}\zeta'_j+\lambda-z=0$ for some $j=1,\ldots,N$, since $e_j$ is then in its kernel. Therefore we need to correct this operator in a suitable way to make it invertible. We shall also modify our spaces by introducing an exponential weight, and we need to include the correct powers of $\langle\lambda-\Re z\rangle$ in the norms.
More precisely, let us consider the following Grushin problem for \eqref{model:airycomb}:
\begin{equation}
\label{model:airygrushin}
\mathcal{P}_\lambda(z)=\left(
\begin{array}{cc}
P_\lambda-z & R_-^0 \\
\gamma_1 & r_- \\
R_+^0 & 0 \\
\end{array}
\right):
\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}
\end{equation}
(Later on we shall always choose $r_-=0$.) Here the spaces and the norms on the spaces are given by
\begin{equation}
\label{model:airyspace}
\begin{split}
\mathcal{B}_{z,\lambda,r}&\;=B_{z,\lambda,r}\times\mathbb{C}^N,\\
\left\|\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
\right\|_{\mathcal{B}_{z,\lambda,r}}&\;
=\|u\|_{B_{z,\lambda,r}}+|u_-|,\\
\mathcal{H}_{z,\lambda,r}&\;=L^2_r\times
\mathbb{C}_{\langle\lambda-\Re z\rangle^{1/4}}\times
\mathbb{C}^N_{\langle\lambda-\Re z\rangle},\\
\left\|\left(
\begin{array}{c}
v \\
v_0 \\
v_+
\end{array}
\right)
\right\|_{\mathcal{H}_{z,\lambda,r}}&\;=\|v\|_{L^2_r}
+\langle\lambda-\Re z\rangle^{1/4}|v_0|
+\langle\lambda-\Re z\rangle|v_+|.
\end{split}
\end{equation}
Here $|\cdot|$ denotes fixed norms on $\mathbb{C}$ and $\mathbb{C}^N$, $L^2_r=L^2([0,\infty),e^{rt}dt)$ and
$B_{z,\lambda,r}=\{u\in L^2_r: D_t^2u, tu\in L^2_r\}$. The norms are given by the standard weighted $L^2$-norm $\|\cdot\|_{L^2_r}$ and
\begin{equation}
\label{norm:weighted}
\|u\|_{B_{z,\lambda,r}}=\langle\lambda-\Re z\rangle\|u\|_{L^2_r}
+\|D_t^2u\|_{L^2_r}+\|tu\|_{L^2_r},
\end{equation}
respectively. Moreover, the operators are given by
\begin{equation*}
\begin{split}
P_\lambda-z&: B_r\to L^2_r,\;\;\; u\mapsto(e^{-2\pi i/3}(D_t^2+t)+\lambda-z)u; \\
\gamma_1&: B_r\to\mathbb{C},\;\;\; u\mapsto u'(0);\\
R_+^0&: B_r\to\mathbb{C}^N,\;\;\; u\mapsto (\langle u,e_j\rangle)_{1\leqslant j\leqslant N};\\
R_-^0&: \mathbb{C}^N\to L^2_r,\;\;\; u_-\mapsto\sum_{j=1}^Nu_-(j)e_j;\\
r_-&: \mathbb{C}^N\to\mathbb{C},\;\;\; u_-\mapsto\sum_{j=1}^Nr_ju_-(j).
\end{split}
\end{equation*}
We remark that the heuristic reason for the weight $\langle\lambda-\Re z\rangle^{1/4}$ in the second component $\mathbb{C}$ on $\mathcal{H}_{z,\lambda,r}$ is that $\langle\lambda-\Re z\rangle$ roughly represents the Laplacian on the boundary $\langle\Delta_{\partial\mathcal{O}}\rangle$ (up to some parameters). Therefore if $u\in H^2(\mathbb{R}^n\setminus\mathcal{O})$, then by the well-known property of boundary operators $\partial_\nu u|_{\partial\mathcal{O}}\in H^{1/2}(\partial\mathcal{O})$ the norm of which corresponds to $\langle\lambda-\Re z\rangle^{1/4}$. We can also see that this is the correct weight by rescaling the estimate \eqref{es:boundary}. For the same reason, if we wish to work with Dirichlet boundary operator, then we need to replace this weight $\langle\lambda-\Re z\rangle^{1/4}$ by $\langle\lambda-\Re z\rangle^{3/4}$.
Moreover, to handle powers of $t$ which will appear in lower order terms, it is necessary to introduce the exponential weight $e^{rt}, r>0$ in the definition of the spaces $\mathcal{B}_{z,\lambda,r}$ and $\mathcal{H}_{z,\lambda,r}$. This will be explained in full detail in the next section.
For $r=0$, it is clear that the space $B_{z,\lambda,0}$ is just $B$ in the previous section with an equivalent norm (of course not uniformly in $z,\lambda$) and $\mathcal{P}_\lambda(z):
\mathcal{B}_{z,\lambda,0}\to\mathcal{H}_{z,\lambda,0}$ is a uniformly bounded operator. Now we look for the inverse of $\mathcal{P}_\lambda(z)$. Let
\begin{equation}
\label{eq:grushin}
\mathcal{P}_\lambda(z)\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
=\left(
\begin{array}{c}
v \\
v_0 \\
v_+\\
\end{array}
\right).
\end{equation}
Then explicitly we have
\begin{equation*}
\begin{split}
(P_\lambda-z)u+R_-^0u_-=&\;v\\
u'(0)+r_-u_-=&\;v_0\\
R_+^0u=&\;v_+.
\end{split}
\end{equation*}
We express $v$ in terms of the orthonormal basis $(e_j)_{j=1}^\infty$ in $L^2$:
\begin{equation*}
v=\sum_{j=1}^\infty v_je_j,
\end{equation*}
and we write $v_+=(v_+(j))_{1\leqslant j\leqslant N}$. Then we look for solutions with $u\in B$ as in \eqref{ex:B}
\begin{equation*}
u=u_0f+\sum_{j=1}^\infty u_je_j
\end{equation*}
and
\begin{equation*}
u_-=(u_-(j))_{1\leqslant j\leqslant N}.
\end{equation*}
Let us write
\begin{equation*}
f_0:=f'(0),\;\;\;f_j:=\langle f,e_j\rangle, \;\;\; \eta_j:=e^{-2\pi i/3}\zeta_j'+\lambda-z.
\end{equation*}
Then we have
\begin{equation*}
(P_\lambda-z)e_j=\eta_je_j,\;\;\; (P_\lambda-z)^\ast e_j=\bar{\eta}_je_j.
\end{equation*}
where $(P_\lambda-z)^\ast=e^{2\pi i/3}(D_t^2+t)+\lambda-\bar{z}$ is the formal adjoint of $P_\lambda-z$. Moreover,
\begin{equation*}
\langle(P_\lambda-z)f,e_j\rangle=e^{-2\pi i/3}e_j(0)f_0+\langle f,(P_\lambda-z)^\ast e_j\rangle=e^{-2\pi i/3}e_j(0)f_0+\eta_jf_j.
\end{equation*}
Then we can rewrite the system \eqref{eq:grushin} as an infinite system of linear equations:
\begin{equation}
\label{eq:grushin2}
\begin{split}
[e^{-2\pi i/3}e_j(0)f_0+\eta_jf_j]u_0+\eta_ju_j+u_-(j)=&\;v_j,\;\; (1\leqslant j\leqslant N)\\
[e^{-2\pi i/3}e_j(0)f_0+\eta_jf_j]u_0+\eta_ju_j=&\;v_j,\;\; (j\geqslant N+1)\\
f_0u_0+\sum_{j=1}^Nr_ju_-(j)=&\;v_0\\
f_ju_0+u_j=&\;v_+(j),\;\; (1\leqslant j\leqslant N).\\
\end{split}
\end{equation}
It is not difficult to see that as long as
\begin{equation*}
1-e^{-2\pi i/3}\sum_{j=1}^Nr_je_j(0)\neq0,
\end{equation*}
we have a unique solution for \eqref{eq:grushin2},
\begin{equation*}
\begin{split}
u_0=&\;\left[1-e^{-2\pi i/3}\sum_{j=1}^Nr_je_j(0)\right]^{-1}
f_0^{-1}\left[v_0+\sum_{j=1}^Nr_j(\eta_jv_+(j)-v_j)\right]\\
u_j=&\;v_+(j)-f_ju_0, \;\;\;(1\leqslant j\leqslant N)\\
u_j=&\;\eta_j^{-1}(v_j-(e^{-2\pi i/3}e_j(0)f_0+\eta_jf_j)u_0), \;\;\;(j\geqslant N+1)\\
u_-(j)=&\;v_j-\eta_jv_+(j)-e^{-2\pi i/3}e_j(0)f_0u_0, \;\;\;(1\leqslant j\leqslant N).
\end{split}
\end{equation*}
For simplicity, henceforth we shall choose $f_0=1,r_-=0$ (though other choices are also possible). Then the solution becomes
\begin{equation}\label{eq:solution}
\begin{split}
u_0=&\;v_0\\
u_j=&\;v_+(j)-f_jv_0, \;\;\;(1\leqslant j\leqslant N)\\
u_j=&\;\eta_j^{-1}(v_j-e^{-2\pi i/3}e_j(0)v_0)-f_jv_0, \;\;\;(j\geqslant N+1)\\
u_-(j)=&\;v_j-e^{-2\pi i/3}e_j(0)v_0-\eta_jv_+(j),
\;\;\;(1\leqslant j\leqslant N).
\end{split}
\end{equation}
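As a quick consistency check, substituting these formulas into the equations for $j\geqslant N+1$ in \eqref{eq:grushin2} (recall $f_0=1$, $r_-=0$) gives
\begin{equation*}
[e^{-2\pi i/3}e_j(0)+\eta_jf_j]v_0
+\eta_j\big(\eta_j^{-1}(v_j-e^{-2\pi i/3}e_j(0)v_0)-f_jv_0\big)=v_j,
\end{equation*}
as required; the remaining equations are verified in the same way.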
Now we need to estimate the norm.
\begin{lem}
\label{lem:airy1}
The Grushin problem \eqref{model:airygrushin} is well-posed for $r=0$. In other words, if \eqref{eq:grushin} holds, then we have
\begin{equation}
\label{es:inv0}
\|u\|_{B_{z,\lambda,0}}+|u_-|\leqslant C(\|v\|_{L^2}+\langle\lambda-\Re z\rangle^{1/4}|v_0|+\langle\lambda-\Re z\rangle|v_+|).
\end{equation}
where $C$ is independent of $\lambda,z$.
\end{lem}
\begin{proof}
We first observe that for $1\leqslant j\leqslant N$,
\begin{equation*}
|\eta_j|\leqslant C\langle\lambda-\Re z\rangle
\end{equation*}
while for $j\geqslant N+1$
\begin{equation*}
|\eta_j|\geqslant C^{-1}(\langle\lambda-\Re z\rangle+\zeta_j').
\end{equation*}
The first inequality follows from the definition $\eta_j=e^{-2\pi i/3}\zeta_j'+\lambda-z$ and the assumption $|\Im z|<C_1$. When $\langle\lambda-\Re z\rangle\geqslant C\zeta_j'$, we can get the second inequality simply by estimating the real part using $|\Re\eta_j|\geqslant|\lambda-\Re z|-C\zeta_j'$. Otherwise we use the imaginary part $\Im\eta_j=-(\sin2\pi/3)\zeta_j'-\Im z$ which does not vanish by the assumption on $N$. Therefore $|\Im\eta_j|\geqslant C^{-1}\zeta_j'$ and we also get the second inequality.
From the last equation in \eqref{eq:solution}, we easily get
\begin{equation}
|u_-|\leqslant C(\|v\|_{L^2}+|v_0|+\langle\lambda-\Re z\rangle|v_+|).
\end{equation}
To estimate $u$, we first write its orthogonal expansion in $L^2$ following the first three equations in \eqref{eq:solution}
\begin{equation*}
\begin{split}
u=&\;u_0f+\sum_{j=1}^\infty u_je_j\\
=&\;v_0\left(f-\sum_{j=1}^\infty f_je_j\right)
+\sum_{j=1}^Nv_+(j)e_j
+\sum_{j=N+1}^\infty\eta_j^{-1}(v_j-e^{-2\pi i/3}e_j(0)v_0)e_j\\
=&\;\sum_{j=1}^Nv_+(j)e_j
+\sum_{j=N+1}^\infty\eta_j^{-1}(v_j-e^{-2\pi i/3}e_j(0)v_0)e_j
\end{split}
\end{equation*}
which shows that
\begin{equation*}
\begin{split}
\|u\|_{L^2}^2=&\;\sum_{j=1}^N|v_+(j)|^2
+\sum_{j=N+1}^\infty|\eta_j|^{-2}|v_j-e^{-2\pi i/3}e_j(0)v_0|^2\\
\leqslant&\;C|v_+|^2+C\langle\lambda-\Re z\rangle^{-2}\|v\|_{L^2}^2
+C|v_0|^2\sum_{j=N+1}^\infty|\eta_j|^{-2}|e_j(0)|^2.
\end{split}
\end{equation*}
To treat the last term, we need a careful study of Airy functions. Recall that
\begin{equation*}
e_j(0)=\Ai(-\zeta_j')/\|\Ai\|_{L^2(-\zeta_j',\infty)}.
\end{equation*}
From the asymptotics \eqref{asy:Airy-neg}, it is not difficult to see that
\begin{equation*}
\zeta_j'=(\frac{3}{2}j\pi)^{2/3}(1+o(1)),\;\;\;j\to\infty
\end{equation*}
and
\begin{equation*}
\Ai(-\zeta_j')=(-1)^{j-1}\pi^{-1/2}(\frac{3}{2}j\pi)^{-1/6}(1+o(1)),\;\;\;
j\to\infty.
\end{equation*}
To compute the normalizing factor, we use
\begin{equation*}
\begin{split}
\|\Ai\|_{L^2(-\zeta_{k+1}',-\zeta_k')}^2\;&= (1+o(1))\pi^{-1}\int_{\zeta_k'}^{\zeta_{k+1}'}t^{-1/2}
|\sin(\frac{2}{3}t^{3/2}+\frac{\pi}{4})|^2dt\\
\;&=(1+o(1))\pi^{-1}(\tfrac{3}{2})^{-2/3}
\int_{\frac{2}{3}\zeta_k'^{3/2}}^{\frac{2}{3}\zeta_{k+1}'^{3/2}}
|\sin(s+\frac{\pi}{4})|^2s^{-2/3}ds=\frac{1}{2}(\tfrac{3}{2}k\pi)^{-2/3}(1+o(1)),
\end{split}
\end{equation*}
as $k\to\infty$. Here in the second step, we use the natural change of variables $s=\frac{2}{3}t^{3/2}$ while in the third step, we use that $s=\frac{2}{3}\zeta_k'^{3/2}(1+o(1))=k\pi(1+o(1))$ on $(\frac{2}{3}\zeta_k'^{3/2},\frac{2}{3}\zeta_{k+1}'^{3/2})$ and the integral of $|\sin(s+\frac{\pi}{4})|^2$ over this interval is equal to
\begin{equation*}
(1+o(1))\int_{k\pi}^{(k+1)\pi}|\sin(s+\frac{\pi}{4})|^2ds
=\frac{\pi}{2}(1+o(1)).
\end{equation*}
Therefore
\begin{equation*}
\begin{split}
\|\Ai\|_{L^2(-\zeta_j',\infty)}^2=&\;
\|\Ai\|_{L^2(-\zeta_1',\infty)}^2+\sum_{k=1}^{j-1}
\|\Ai\|_{L^2(-\zeta_{k+1}',-\zeta_k')}^2\\
=&\;c_0j^{1/3}(1+o(1))\quad\text{for some }c_0>0.
\end{split}
\end{equation*}
As a consequence, we have
\begin{equation*}
|e_j(0)|^2=c_1j^{-2/3}(1+o(1)),\;\;\;j\to\infty
\end{equation*}
for some constant $c_1>0$. Now we can compute
\begin{equation*}
\begin{split}
\sum_{j=N+1}^\infty|\eta_j|^{-2}|e_j(0)|^2
\leqslant&\; C\sum_{j=N+1}^\infty j^{-2/3}(\langle\lambda-\Re z\rangle+\zeta_j')^{-2}\\
\leqslant&\; C\sum_{j=N+1}^\infty j^{-2/3}(\langle\lambda-\Re z\rangle+j^{2/3})^{-2}\\
\leqslant&\; C\int_1^\infty s^{-2/3}(\langle\lambda-\Re z\rangle+s^{2/3})^{-2}ds\\
\leqslant&\; C\langle\lambda-\Re z\rangle^{-3/2}\int_0^\infty
t^{-2/3}(1+t^{2/3})^{-2}dt\leqslant C\langle\lambda-\Re z\rangle^{-3/2},
\end{split}
\end{equation*}
where in the last step we use the change of variables $s=\langle\lambda-\Re z\rangle^{3/2}t$. This gives the following estimate on the $L^2$-norm of $u$:
\begin{equation}
\label{es:L2norm}
\langle\lambda-\Re z\rangle\|u\|_{L^2}\leqslant C(\|v\|_{L^2}+\langle\lambda-\Re z\rangle^{1/4}|v_0|+\langle\lambda-\Re z\rangle|v_+|).
\end{equation}
Now since
\begin{equation*}
(D_t^2+t)u=e^{2\pi i/3}(v-R_-^0u_--(\lambda-z)u),
\end{equation*}
we have
\begin{equation*}
\|(D_t^2+t)u\|_{L^2}\leqslant
C(\|v\|_{L^2}+|u_-|+\langle\lambda-\Re z\rangle\|u\|_{L^2}).
\end{equation*}
Now we can use a variation of \eqref{eq:normairy}
\begin{equation*}
\|u\|_{B_{z,\lambda,0}}\leqslant C(\|(D_t^2+t)u\|_{L^2}+\langle\lambda-\Re z\rangle\|u\|_{L^2})
\end{equation*}
and \eqref{es:L2norm} to get \eqref{es:inv0}.
\end{proof}
The next step is to consider adding a small exponential weight, i.e. $r\in(0,r_0)$ for $r_0$ small.
\begin{lem}
\label{lem:airy2}
There exists $r_0>0$ such that the Grushin problem \eqref{model:airygrushin} is uniformly well-posed for $r\in(0,r_0)$. More precisely, if \eqref{eq:grushin} holds, then we have
\begin{equation}
\label{es:invr}
\|u\|_{B_{z,\lambda,r}}+|u_-|\leqslant C(\|v\|_{L^2_r}+\langle\lambda-\Re z\rangle^{1/4}|v_0|+\langle\lambda-\Re z\rangle|v_+|).
\end{equation}
where $C$ is independent of $\lambda,z$ and $r$.
\end{lem}
\begin{proof}
We introduce
\begin{equation*}
\begin{split}
\mathcal{P}_\lambda^r(z)=&\;
\left(
\begin{array}{ccc}
e^{rt/2} & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)\mathcal{P}_\lambda(z)
\left(
\begin{array}{cc}
e^{-rt/2} & 0 \\
0 & 1 \\
\end{array}
\right)\\
=&\;\mathcal{P}_\lambda(z)+
\left(
\begin{array}{cc}
e^{rt/2}(P_\lambda-z)e^{-rt/2}-(P_\lambda-z) & (e^{rt/2}-1)R_-^0 \\
\gamma_1(e^{-rt/2}-1) & 0 \\
R_+^0(e^{-rt/2}-1) & 0 \\
\end{array}
\right)
\end{split}
\end{equation*}
By the interpolation estimate \eqref{es:interpolation}, we have \begin{equation*}
D_t=O(\langle\lambda-\Re z\rangle^{-1/2}):
B_{z,\lambda,0}\to L^2,
\end{equation*}
thus
\begin{equation*}
e^{rt/2}(P_\lambda-z)e^{-rt/2}-(P_\lambda-z)=e^{-2\pi i/3}(irD_t-\frac{1}{4}r^2)=O(r\langle\lambda-\Re z\rangle^{-1/2}):
B_{z,\lambda,0}\to L^2.
\end{equation*}
Next, by \eqref{es:boundary},
\begin{equation*}
\gamma_0=O(\langle\lambda-\Re z\rangle^{-3/4})
:B_{z,\lambda,0}\to\mathbb{C},
\end{equation*}
so
\begin{equation*}
\gamma_1(e^{-rt/2}-1)=-\frac{r}{2}\gamma_0=O(r\langle\lambda-\Re z\rangle^{-1/2})
:B_{z,\lambda,0}\to\mathbb{C}_{\langle\lambda-\Re z\rangle^{1/4}}.
\end{equation*}
Also, by the superexponential decay of $e_j$, $j=1,\ldots,N$, we have $\|(e^{-rt/2}-1)e_j(t)\|_{L^2}=o(1)$ as $r\to0$, so
\begin{equation*}
R_+^0(e^{-rt/2}-1)=o(1):B_{z,\lambda,0}\to
\mathbb{C}^N_{\langle\lambda-\Re z\rangle}.
\end{equation*}
Similarly, we have $\|(e^{rt/2}-1)e_j(t)\|_{L^2}=o(1)$, and
\begin{equation*}
(e^{rt/2}-1)R_-^0=o(1):\mathbb{C}^N\to L^2.
\end{equation*}
We see that $\mathcal{P}_\lambda^r(z)$ is a small perturbation of $\mathcal{P}_\lambda(z)$ in the sense that
\begin{equation*}
\mathcal{P}_\lambda^r(z)-\mathcal{P}_\lambda(z)=o(1):
\mathcal{B}_{z,\lambda,0}\to\mathcal{H}_{z,\lambda,0}
\end{equation*}
uniformly in $z,\lambda$ as $r\to0+$.
Therefore
\begin{equation*}
\mathcal{P}_\lambda^r(z):
\mathcal{B}_{z,\lambda,0}\to\mathcal{H}_{z,\lambda,0}
\end{equation*}
is uniformly invertible when $r\in[0,r_0]$ for some small $r_0>0$.
Now we note that
\begin{equation*}
\|u\|_{B_{z,\lambda,r}}\sim\|e^{rt/2}u\|_{B_{z,\lambda,0}}
\end{equation*}
uniformly in $z,\lambda$ and $r\in[0,r_0]$ which again follows from the interpolation estimate \eqref{es:interpolation} for $D_t$. This finishes the proof of the lemma.
\end{proof}
In particular, from \eqref{eq:solution}, we see that the inverse of $\mathcal{P}_\lambda(z)$ is given by
\begin{equation*}
\mathcal{E}_\lambda(z)=\left(
\begin{array}{ccc}
E & K & E_+ \\
E_- & K_- & E_{-+} \\
\end{array}
\right)
:\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r},
\end{equation*}
where
\begin{equation}
\label{model:airye-+}
E_{-+}\in\hom(\mathbb{C}^N,\mathbb{C}^N),\;\;
(E_{-+})_{jk}=-\eta_j\delta_{jk},\;\;1\leqslant j,k\leqslant N.
\end{equation}
\subsection{Dependence on parameters}
Now we shall modify our Grushin problem so that we get nice global symbolic properties. For $0<\delta\ll1$, we put
\begin{equation*}
e_j^{\lambda,\delta}(t)=\Lambda^{1/2}e_j(\Lambda t), \Lambda=\langle\delta\lambda\rangle^{1/2}
\end{equation*}
which also forms an orthonormal basis for $L^2([0,\infty))$. We notice that
\begin{equation*}
\partial_\lambda^k\Lambda=O_k(1)\delta^k\Lambda^{1-2k},\;\;\;
\|\partial_\lambda^ke_j^{\lambda,\delta}\|_{L^2}
=O_k(1)\delta^k\Lambda^{-2k}.
\end{equation*}
In particular,
\begin{equation*}
\|e_j^{\lambda,\delta}-e_j\|_{L^2}\leqslant C\delta|\lambda|.
\end{equation*}
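For instance, for $k=1$ one computes directly
\begin{equation*}
\partial_\lambda e_j^{\lambda,\delta}(t)
=(\partial_\lambda\Lambda)\Big(\tfrac{1}{2}\Lambda^{-1/2}e_j(\Lambda t)+\Lambda^{1/2}t\,e_j'(\Lambda t)\Big),
\end{equation*}
and since $\partial_\lambda\Lambda=O(\delta\Lambda^{-1})$ while the bracket has $L^2$-norm $O(\Lambda^{-1})$ (using the rapid decay of $e_j$ and $e_j'$), we recover $\|\partial_\lambda e_j^{\lambda,\delta}\|_{L^2}=O(\delta\Lambda^{-2})$.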
Defining $R_+^{\lambda,\delta}$ and $R_-^{\lambda,\delta}$ by replacing $e_j$ with $e_j^{\lambda,\delta}$ in the definition of $R_+^0$ and $R_-^0$, we obtain
\begin{equation}
\label{model:modairy}
\mathcal{P}_\lambda^\delta(z)=\left(
\begin{array}{cc}
P_\lambda-z & R_-^{\lambda,\delta} \\
\gamma_1 & 0 \\
R_+^{\lambda,\delta} & 0 \\
\end{array}
\right):
\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}
\end{equation}
and
\begin{equation*}
\mathcal{P}_\lambda^\delta(z)-\mathcal{P}_\lambda(z)=\left(
\begin{array}{cc}
0 & O(|\lambda|\delta) \\
0 & 0 \\
O(|\lambda|\delta) & 0 \\
\end{array}
\right):
\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}
\end{equation*}
Thus for $|\lambda|\delta\ll1$ we get the uniform invertibility of $\mathcal{P}_\lambda^\delta(z)$. To get the same estimate for all $\lambda$, we need to assume
\begin{equation}
|\Re z|\ll\frac{1}{\delta},
\end{equation}
so that $|\lambda|\gg1+|\Re z|$ and we have the invertibility of
$\left(
\begin{array}{c}
P_\lambda-z \\
\gamma_1 \\
\end{array}
\right)$ without the correcting terms $R_\pm^{\lambda,\delta}$.
Notice that in this situation $\langle\lambda\rangle\sim\langle\lambda-\Re z\rangle$ with a $\delta$-dependent constant. All our estimates will depend on $\delta$.
\begin{lem}
\label{lem:airyinv}
For $|\lambda|\gg1+|\Re z|$ and $|\Im z|<C_1$, there exists a constant $C>0$ independent of $z$ and $\lambda$ such that for any $u\in B_{z,\lambda,0}$,
\begin{equation}\label{eq:largelambdainv1}
|\langle(P_\lambda-z)u,u\rangle|+\langle\lambda-\Re z\rangle^{-1/2}|\gamma_1u|^2\geqslant C^{-1}\langle\lambda-\Re z\rangle\|u\|_{L^2}^2.
\end{equation}
Furthermore, for small $r$,
\begin{equation}\label{eq:largelambdainv2}
\left(
\begin{array}{c}
P_\lambda-z \\
\gamma_1 \\
\end{array}
\right)u=
\left(
\begin{array}{c}
v \\
v_0 \\
\end{array}
\right)\;\;\Rightarrow\;\;
\|u\|_{B_{z,\lambda,r}}\leqslant C(\|v\|_{L^2_r}+\langle\lambda-\Re z\rangle^{1/4}|v_0|).
\end{equation}
\end{lem}
\begin{proof}
It is possible to repeat the argument as in Lemma \ref{lem:airy1} using orthogonal expansion with respect to $(e_j)$. We present here another proof by using the Poisson operator $K_\lambda:\mathbb{C}\to B_{z,\lambda,0}$, satisfying
\begin{equation*}
P_\lambda K_\lambda=0,\;\;\; \gamma_1K_\lambda=\Id.
\end{equation*}
This Poisson operator is given by multiplication by $f=f_\lambda$, the solution of the equation
\begin{equation*}
e^{-2\pi i/3}(D_t^2+t)f+\lambda f=0,\;\;\; f'(0)=1.
\end{equation*}
We can give an explicit expression of $f$ in terms of the Airy function:
\begin{equation*}
f_\lambda(t)=\Ai'(e^{2\pi i/3}\lambda)^{-1}\Ai(t+e^{2\pi i/3}\lambda).
\end{equation*}
Since all the zeroes of $\Ai$ and $\Ai'$ lie on the negative real axis, this expression is well-defined for real $\lambda$.
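Indeed, by the Airy equation, with $c=e^{2\pi i/3}\lambda$,
\begin{equation*}
(D_t^2+t)\Ai(t+c)=-c\Ai(t+c),
\end{equation*}
so $e^{-2\pi i/3}(D_t^2+t)f_\lambda=-e^{-2\pi i/3}c\,f_\lambda=-\lambda f_\lambda$, while the normalization by $\Ai'(e^{2\pi i/3}\lambda)^{-1}$ guarantees $f_\lambda'(0)=1$.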
We shall apply the asymptotic formulas \eqref{asy:Airy-com} for the Airy function and its derivative to study the $L^2$-norm of $f_\lambda$. First we consider the case $\lambda>0$; then
\begin{equation*}
\Ai'(e^{2\pi i/3}\lambda)=-(2\sqrt{\pi})^{-1}e^{\pi i/6}e^{\frac{2}{3}\lambda^{3/2}}\lambda^{1/4}(1+O(\lambda^{-3/2})).
\end{equation*}
and
\begin{equation*}
\Ai(t+e^{2\pi i/3}\lambda)
=(2\sqrt{\pi})^{-1}e^{-\zeta}z^{-1/4}(1+O(|\zeta|^{-1}))
\end{equation*}
where
\begin{equation*}
z=t+e^{2\pi i/3}\lambda,\;\;\; |z|=(t^2-t\lambda+\lambda^2)^{1/2},\;\;\; \zeta=\frac{2}{3}z^{3/2}.
\end{equation*}
We change variables by letting $\arg z=\frac{\pi}{2}-\theta$; then $\theta\in[-\frac{\pi}{6},\frac{\pi}{2})$ and
\begin{equation*}
t=\frac{\lambda}{2}+\frac{\sqrt{3}}{2}\lambda\tan\theta,\;\;\; |z|=\frac{\sqrt{3}}{2}\lambda\sec\theta,\;\;\;
\zeta=\frac{2}{3}\Big(\frac{\sqrt{3}}{2}\Big)^{3/2}\lambda^{3/2}e^{i(3\pi/4-3\theta/2)}
\sec^{3/2}\theta.
\end{equation*}
We have the following asymptotic formula for $f_\lambda(t)$, uniform in $\lambda$ and $\theta$:
\begin{equation*}
f_\lambda(t)=g(\lambda)e^{\lambda^{3/2}\psi(\theta)}
e^{-i(7\pi/24-\theta/4)}(\sec^{-1/4}\theta)
(1+O(\lambda^{-3/2}\sec^{-3/2}\theta)).
\end{equation*}
where
\begin{equation*}
g(\lambda)=(\sqrt{3}/2)^{-1/4}\lambda^{-1/2}(1+O(\lambda^{-3/2})),\;\;\;
\psi(\theta)=-\frac{2}{3}-\frac{2}{3}\Big(\frac{\sqrt{3}}{2}\Big)^{3/2}e^{i(3\pi/4-3\theta/2)}
\sec^{3/2}\theta.
\end{equation*}
Therefore
\begin{equation*}
\|f_\lambda\|_{L^2(0,\infty)}^2
=\frac{\sqrt{3}}{2}\lambda|g(\lambda)|^2
\int_{-\pi/6}^{\pi/2}
e^{\lambda^{3/2}\varphi(\theta)}
(\sec^{3/2}\theta)(1+O(\lambda^{-3/2}\sec^{-3/2}\theta))d\theta,
\end{equation*}
where
\begin{equation*}
\varphi(\theta)=2\Re\psi(\theta)=2\left[-\frac{2}{3}
-\frac{2}{3}\Big(\frac{\sqrt{3}}{2}\Big)^{3/2}\sec^{3/2}\theta
\cos(\frac{3\pi}{4}-\frac{3\theta}{2})\right]
\end{equation*}
satisfies
\begin{equation*}
\varphi(-\pi/6)=0,\;\;\; \lim_{\theta\to\pi/2-0}\varphi(\theta)=-\infty,
\end{equation*}
and
\begin{equation*}
\varphi'(\theta)=-2\Big(\frac{\sqrt{3}}{2}\Big)^{3/2}\sec^{5/2}\theta
\sin(\frac{3\pi}{4}-\frac{\theta}{2})<-\Big(\frac{\sqrt{3}}{2}\Big)^{3/2}<0, \;\;\; \theta\in[-\frac{\pi}{6},\frac{\pi}{2}).
\end{equation*}
Therefore integration by parts gives us
\begin{equation}
\|f_\lambda\|=O(\lambda^{-3/4}).
\end{equation}
Now for every $u\in B_{z,\lambda,0}$, let $v=u-K_\lambda(\gamma_1u)=u-u'(0)f_\lambda$, so that $v'(0)=0$. Now we can write
\begin{equation*}
\begin{split}
\langle(P_\lambda-z)u,u\rangle=\langle(P_\lambda-z)v,v\rangle+
\overline{\gamma_1u}\langle(P_\lambda-z)v,f_\lambda\rangle\\
-z(\gamma_1u)\langle
f_\lambda,v\rangle-z|u'(0)|^2\|f_\lambda\|^2_{L^2}.
\end{split}
\end{equation*}
For the second term on the right-hand side, we integrate by parts:
\begin{equation*}
\begin{split}
\langle(P_\lambda-z)v,f_\lambda\rangle=&\;-e^{-2\pi i/3}v(0)
+\langle v,(P_\lambda-z)^\ast f_\lambda\rangle\\
=&\;-e^{-2\pi i/3}v(0)+(\lambda(1-e^{2\pi i/3})-z)\langle v,f_\lambda\rangle.
\end{split}
\end{equation*}
Therefore
\begin{equation*}
\begin{split}
\langle(P_\lambda-z)u,u\rangle=&\;e^{-2\pi i/3}\langle(D_t^2+t)v,v\rangle+(\lambda-z)\|v\|^2
-e^{-2\pi i/3}(\overline{\gamma_1u})v(0)\\
&\;+\overline{\gamma_1u}(\lambda(1-e^{2\pi i/3})-z)\langle v,f_\lambda\rangle-z(\gamma_1u)\langle f_\lambda,v\rangle-z|\gamma_1u|^2\|f_\lambda\|^2_{L^2},
\end{split}
\end{equation*}
where we notice that $\langle(D_t^2+t)v,v\rangle$ is always nonnegative. This gives
\begin{equation*}
\begin{split}
|\langle(P_\lambda-z)u,u\rangle|\geqslant&\;\Re\big(e^{\pi i/3}\langle(P_\lambda-z)u,u\rangle\big)\\
\geqslant&\;\frac{1}{2}\langle(D_t^2+t)v,v\rangle
+C^{-1}\langle\lambda-\Re z\rangle\|v\|^2-\epsilon\langle\lambda-z\rangle^{1/2}|v(0)|^2\\
&\;-\epsilon\langle\lambda-\Re z\rangle\|v\|^2
-O_\epsilon(\langle\lambda-z\rangle^{-1/2})|\gamma_1u|^2
\end{split}
\end{equation*}
Now by choosing $\epsilon$ small enough but fixed and using
\begin{equation*}
\langle\lambda-z\rangle^{1/2}|v(0)|^2\leqslant
2\langle\lambda-z\rangle^{1/2}\|D_tv\|\|v\|
\leqslant\|D_tv\|^2+\langle\lambda-\Re z\rangle\|v\|^2
\end{equation*}
and $\langle(D_t^2+t)v,v\rangle\geqslant\|D_tv\|^2$, we deduce that
\begin{equation*}
|\langle(P_\lambda-z)u,u\rangle|\geqslant
C^{-1}\langle\lambda-\Re z\rangle\|v\|^2-C\langle\lambda-\Re z\rangle^{-1/2}|\gamma_1u|^2
\end{equation*}
Combined with $\|u\|^2\leqslant C(\|v\|^2+\langle\lambda-\Re z\rangle^{-3/2}|\gamma_1u|^2)$, we can conclude the proof of \eqref{eq:largelambdainv1} for $\lambda>0$. For $\lambda<0$, we can similarly get $\|f_\lambda\|=O(|\lambda|^{-3/4})$ and then use
\begin{equation*}
|\langle(P_\lambda-z)u,u\rangle|\geqslant
\Re(-\langle(P_\lambda-z)u,u\rangle)
\end{equation*}
to reproduce the argument above and prove \eqref{eq:largelambdainv1}.
Now we prove \eqref{eq:largelambdainv2}. For $r=0$, we can see from
\eqref{eq:largelambdainv1},
\begin{equation*}
\|u\|^2_{L^2}\leqslant C\langle\lambda-\Re z\rangle^{-1}\|(P_\lambda-z)u\|_{L^2}\|u\|_{L^2}+C\langle\lambda-\Re z\rangle^{-3/2}|\gamma_1u|^2.
\end{equation*}
Therefore
\begin{equation*}
\|u\|_{L^2}\leqslant C\langle\lambda-\Re z\rangle^{-1}\|(P_\lambda-z)u\|+C\langle\lambda-\Re z\rangle^{-3/4}|\gamma_1u|
\end{equation*}
which proves \eqref{eq:largelambdainv2} for $r=0$. For small $r$, we can simply repeat the conjugation and perturbation argument as in Lemma \ref{lem:airy1} to conclude the uniform invertibility.
\end{proof}
Now we give the desired invertibility for the full operator in the Grushin problem.
\begin{prop}
\label{prop:airy}
For $|\lambda|\geqslant1/(C\delta)$ and $|\Re z|\ll1/\delta$, $r\in[0,r_0]$ with $r_0>0$ small enough,
\begin{equation}
\label{es:grushin}
\mathcal{P}_\lambda^\delta\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)=
\left(
\begin{array}{c}
v \\
v_0 \\
v_+ \\
\end{array}
\right)\;\;\Rightarrow\;\;
\left\|\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
\right\|_{\mathcal{B}_{z,\lambda,r}}\leqslant C
\left\|\left(
\begin{array}{c}
v \\
v_0 \\
v_+
\end{array}
\right)
\right\|_{\mathcal{H}_{z,\lambda,r}}.
\end{equation}
Moreover, we have the following mapping properties of $\mathcal{P}_\lambda^\delta(z)$ and its inverse $\mathcal{E}_\lambda^\delta(z)$:
\begin{equation}
\label{pro:mapping}
\begin{split}
\|\partial_\lambda^k\mathcal{P}_\lambda^\delta(z)
\|_{\mathcal{L}(\mathcal{B}_{z,\lambda,r},\mathcal{H}_{z,\lambda,r})}
\leqslant&\; C_k\langle\lambda-\Re z\rangle^{-k},\\\|\partial_\lambda^k\mathcal{E}_\lambda^\delta(z)
\|_{\mathcal{L}(\mathcal{H}_{z,\lambda,r},\mathcal{B}_{z,\lambda,r})}
\leqslant&\; C_k\langle\lambda-\Re z\rangle^{-k}.
\end{split}
\end{equation}
\end{prop}
\begin{proof}
Again, we start with $r=0$. Let
\begin{equation*}
\Pi=R_-R_+:L^2\to(\ker R_+)^\perp=\Image R_-=\bigoplus\limits_{j=1}^N\mathbb{C}e_j^{\lambda,\delta}
\end{equation*}
be the orthogonal projection. Then since
\begin{equation*}
\|D_t^2e_j^{\lambda,\delta}\|_{L^2}=O(\langle\delta\lambda\rangle),\;\;\;
\|te_j^{\lambda,\delta}\|_{L^2}=O(\langle\delta\lambda\rangle^{-1/2}),
\end{equation*}
we have $\|(P_\lambda-z)|_{\Image R_-}\|=O(\langle\lambda-\Re z\rangle)$. Also it is easy to see $\|R_+\|=\|R_-\|=1$. Since $\Pi u=R_-R_+u=R_-v_+$, we have
\begin{equation*}
\|\Pi u\|_{L^2}\leqslant|v_+|
\end{equation*}
and
\begin{equation}
\label{es:piu}
\|(P_\lambda-z)\Pi u\|_{L^2}\leqslant O(\langle\lambda-\Re z\rangle)|v_+|.
\end{equation}
On the other hand, by the previous lemma,
\begin{equation*}
\begin{split}
\|(I-\Pi)u\|^2_{L^2}\leqslant&\;C\langle\lambda-\Re z\rangle^{-1}|\langle(P_\lambda-z)(I-\Pi)u,(I-\Pi)u\rangle|\\
&\;\;\;\;+C\langle\lambda-\Re z\rangle^{-3/2}|\gamma_1(I-\Pi)u|^2\\
\end{split}
\end{equation*}
For the first term, we have
\begin{equation*}
\begin{split}
\langle(P_\lambda-z)(I-\Pi)u,(I-\Pi)u\rangle=&\;
\langle(I-\Pi)(P_\lambda-z)(I-\Pi)u,u\rangle\\
=&\;\langle(I-\Pi)(P_\lambda-z)u,u\rangle-
\langle(I-\Pi)(P_\lambda-z)\Pi u,u\rangle\\
=&\;\langle(I-\Pi)(v-R_-u_-),u\rangle-
\langle(P_\lambda-z)\Pi u,(I-\Pi)u\rangle\\
=&\;\langle(I-\Pi)v,u\rangle-
\langle(P_\lambda-z)\Pi u,(I-\Pi)u\rangle\\
=&\;\langle v,(I-\Pi)u\rangle-
\langle(P_\lambda-z)\Pi u,(I-\Pi)u\rangle.\\
\end{split}
\end{equation*}
For the second term, we use $\gamma_1\Pi=0$ to get
\begin{equation*}
\gamma_1(I-\Pi)u=\gamma_1u=v_0.
\end{equation*}
Therefore
\begin{equation*}
\begin{split}
\|(I-\Pi)u\|_{L^2}^2
\leqslant&\;C\langle\lambda-\Re z\rangle^{-1}(\|v\|_{L^2}+\|(P_\lambda-z)\Pi u\|_{L^2})\|(I-\Pi)u\|\\
&\;\;\;\;+C\langle\lambda-\Re z\rangle^{-3/2}|v_0|^2\\
\end{split}
\end{equation*}
and thus
\begin{equation}
\label{es:ipiu}
\begin{split}
\|(I-\Pi)u\|_{L^2}
\leqslant&\;C\langle\lambda-\Re z\rangle^{-1}(\|v\|_{L^2}+\|(P_\lambda-z)\Pi u\|_{L^2})
+C\langle\lambda-\Re z\rangle^{-3/4}|v_0|\\
\leqslant&\;C\langle\lambda-\Re z\rangle^{-1}\|v\|_{L^2}+|v_+|
+C\langle\lambda-\Re z\rangle^{-3/4}|v_0|.
\end{split}
\end{equation}
Combining \eqref{es:piu} and \eqref{es:ipiu}, we have
\begin{equation*}
\langle\lambda-\Re z\rangle\|u\|_{L^2}\leqslant C(\|v\|_{L^2}
+\langle\lambda-\Re z\rangle^{1/4}|v_0|
+\langle\lambda-\Re z\rangle|v_+|).
\end{equation*}
Since
\begin{equation*}
u_-=R_+R_-u_-=R_+(v-(P_\lambda-z)u)=R_+v-R_+(P_\lambda-z)u,
\end{equation*}
we have
\begin{equation*}
|u_-|\leqslant\|v\|_{L^2}+|R_+(P_\lambda-z)u|
\leqslant\|v\|_{L^2}+C\sum_{j=1}^N
|\langle(P_\lambda-z)u,e_j^{\lambda,\delta}\rangle|.
\end{equation*}
To estimate the sum, we integrate by parts and get
\begin{equation*}
\langle(P_\lambda-z)u,e_j^{\lambda,\delta}\rangle
=\langle u,(P_\lambda-z)^\ast e_j^{\lambda,\delta}\rangle
+e^{-2\pi i/3}u'(0)e_j^{\lambda,\delta}(0).
\end{equation*}
where $(P_\lambda-z)^\ast=e^{2\pi i/3}(D_t^2+t)+\lambda-\bar{z}$ is the formal adjoint of $P_\lambda-z$ so
\begin{equation*}
\|(P_\lambda-z)^\ast e_j^{\lambda,\delta}\|_{L^2}=O(\langle\lambda-\Re z\rangle).
\end{equation*}
In addition, we have $u'(0)=v_0$ and by definition of $e_j^{\lambda,\delta}$,
\begin{equation*}
e_j^{\lambda,\delta}(0)=O(\langle\delta\lambda\rangle^{1/4}),
\end{equation*}
which shows that
\begin{equation*}
|\langle(P_\lambda-z)u,e_j^{\lambda,\delta}\rangle|
\leqslant C\langle\lambda-\Re z\rangle\|u\|+C\langle\lambda-\Re z\rangle^{1/4}|v_0|.
\end{equation*}
As a consequence,
\begin{equation*}
|u_-|\leqslant C(\|v\|_{L^2}
+\langle\lambda-\Re z\rangle^{1/4}|v_0|
+\langle\lambda-\Re z\rangle|v_+|).
\end{equation*}
Now as in Lemma \ref{lem:airy1}, we can use the equation $(P_\lambda-z)u=v-R_-u_-$ to give the estimates on the $L^2$ norm of $D_t^2u$ and $tu$. This finishes the proof of \eqref{es:grushin} for $r=0$.
To extend this to $r\in[0,r_0]$ for some small $r_0>0$, we notice that \begin{equation*}
\|(e^{\pm rt/2}-1)e_j^{\lambda,\delta}\|=\|(e^{\pm r\langle\delta\lambda\rangle^{-1/2}t/2}-1)e_j\|=o(1)
\end{equation*}
uniformly as $r\to0$, which allows us to repeat the argument in Lemma \ref{lem:airy2}.
Finally, since for $k\geqslant1$,
\begin{equation*}
\partial_\lambda^k\mathcal{P}_\lambda^\delta(z)=
\left(
\begin{array}{cc}
\delta_{1k} & \partial_\lambda^kR^{\lambda,\delta}_- \\
0 & 0 \\
\partial_\lambda^kR^{\lambda,\delta}_+ & 0 \\
\end{array}
\right)
\end{equation*}
and
\begin{equation*}
\|\partial_\lambda^ke_j^{\lambda,\delta}\|_{L^2_r}
=O_k(1)\delta^k\langle\delta\lambda\rangle^{-k}
=O_k(1)\langle\lambda-\Re z\rangle^{-k},
\end{equation*}
we get the mapping properties of $\mathcal{P}_\lambda^\delta(z)$ in \eqref{pro:mapping}. For its inverse $\mathcal{E}_\lambda^\delta(z)$, \eqref{es:grushin} gives the mapping property when $k=0$. The case $k>0$ follows directly from the case $k=0$ and the Leibniz rule.
\end{proof}
To end this part, we study the $(-+)$-component of $\mathcal{E}_\lambda^\delta$:
\begin{prop}
\label{prop:airy:e-+}
For any $\epsilon>0$, if $|\lambda|\leqslant1/(C\sqrt{\delta})$, $|\Re z|\ll1/\sqrt{\delta}$, and $\delta$ is sufficiently small depending on $\epsilon$, then
\begin{equation}
\label{pro:e-+:perturb}
\|E_{-+}^\delta(z,\lambda)-\diag(z-\lambda-e^{-2\pi i/3}\zeta_j')\|
\leqslant\epsilon
\end{equation}
and if $\det E_{-+}^\delta(z,\lambda)=0$, then
\begin{equation}
\label{pro:e-+:zero}
z=\lambda+e^{-2\pi i/3}\zeta_j'.
\end{equation}
Moreover, for $|\lambda|\gg1+|\Re z|$,
\begin{equation}
\label{pro:e-+:inverse}
\|E_{-+}^\delta(z,\lambda)^{-1}
\|_{\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N)}
=O(\langle\lambda-\Re z\rangle^{-1})
\end{equation}
\end{prop}
\begin{proof}
Estimate \eqref{pro:e-+:perturb} follows from the perturbation bound
\begin{equation*}
\|E_{-+}^\delta(z,\lambda)-\diag(z-\lambda-e^{-2\pi i/3}\zeta_j')\|
\leqslant O(|\lambda|\delta)\langle\lambda-\Re z\rangle.
\end{equation*}
Let us recall the general fact (which is essentially the Schur complement formula; see e.g.\ \cite{HS} or \cite{SZ7} in the setting of Grushin problems):
\begin{equation*}
(E^{\delta}_{-+})^{-1}=-R_+^{\lambda,\delta}
\left(
\begin{array}{c}
P_\lambda-z \\
\gamma_1 \\
\end{array}
\right)^{-1}
\left(
\begin{array}{c}
R_-^{\lambda,\delta} \\
0 \\
\end{array}
\right).
\end{equation*}
Since $\left(
\begin{array}{c}
P_\lambda-z \\
\gamma_1 \\
\end{array}
\right)$ is not invertible precisely when $\eta_j=e^{-2\pi i/3}\zeta_j'+\lambda-z=0$ (in which case $e_j$ is in the kernel), the same is true for $E_{-+}^{\delta}$. This gives \eqref{pro:e-+:zero}. Finally, in the case $|\lambda|\gg1+|\Re z|$, by Lemma \ref{lem:airyinv},
$\left(
\begin{array}{c}
P_\lambda-z \\
\gamma_1 \\
\end{array}
\right)$ is invertible. Therefore $E_{-+}^\delta:\mathbb{C}^N_{\langle\lambda-\Re z\rangle}\to\mathbb{C}^N$ is also invertible, which gives \eqref{pro:e-+:inverse}.
\end{proof}
\subsection{The ``easy'' model}
When $|\lambda|\gg1+|\Re z|$ and $|\Im z|<C_1$, we can consider an even simpler model problem with the operator \eqref{model:easy} which is already invertible. To obtain the uniform symbolic properties, we shall construct the Grushin problem using the same correction terms $R_\pm^{\lambda,\delta}$ as in \eqref{model:modairy}. We define
\begin{equation}
\mathcal{P}_\lambda^\#(z)=\left(
\begin{array}{cc}
P_\lambda^\#-z & R_-^{\lambda,\delta}\\
\gamma_1 & 0 \\
R_+^{\lambda,\delta} & 0 \\
\end{array}
\right):\mathcal{B}_{\lambda,r}^\#
\to\mathcal{H}_{\lambda,r}^\#,
\end{equation}
where the spaces $\mathcal{B}_{\lambda,r}^\#$ and $\mathcal{H}_{\lambda,r}^\#$ are defined by
\begin{equation}
\begin{split}
\mathcal{B}_{\lambda,r}^\#&\;=B_{\lambda,r}^\#\times\mathbb{C}^N,
B_{\lambda,r}^\#=\{u\in L^2_r:D_t^2u\in L^2_r\},\\
\left\|\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
\right\|_{\mathcal{B}_{\lambda,r}^\#}&\;
=\langle\lambda\rangle\|u\|_{L^2_r}+\|D_t^2u\|_{L^2_r}+|u_-|,\\
\mathcal{H}_{\lambda,r}^\#&\;
=L^2_r\times\mathbb{C}_{\langle\lambda\rangle^{1/4}}\times
\mathbb{C}^N_{\langle\lambda\rangle},\\
\left\|\left(
\begin{array}{c}
v \\
v_0 \\
v_+
\end{array}
\right)
\right\|_{\mathcal{H}_{\lambda,r}^\#}&\;=\|v\|_{L^2_r}
+\langle\lambda\rangle^{1/4}|v_0|
+\langle\lambda\rangle|v_+|.
\end{split}
\end{equation}
\begin{prop}
\label{prop:easy}
For $|\lambda|\gg1+|\Re z|$, and $r\in[0,r_0]$ with $r_0>0$ small enough, $\mathcal{P}_\lambda^\#(z):\mathcal{B}_{\lambda,r}^\#
\to\mathcal{H}_{\lambda,r}^\#$ is uniformly invertible. We have the mapping properties for $\mathcal{P}_\lambda^\#(z)$ and its inverse $\mathcal{E}_\lambda^\#(z)$:
\begin{equation}
\begin{split}
\|\partial_\lambda^k\mathcal{P}_\lambda^\#(z)
\|_{\mathcal{L}(\mathcal{B}_{\lambda,r}^\#,
\mathcal{H}_{\lambda,r}^\#)}
\leqslant&\; C_k\langle\lambda\rangle^{-k}\\
\|\partial_\lambda^k\mathcal{E}_\lambda^\#(z)
\|_{\mathcal{L}(\mathcal{H}_{\lambda,r}^\#,
\mathcal{B}_{\lambda,r}^\#)}
\leqslant&\; C_k\langle\lambda\rangle^{-k}.
\end{split}
\end{equation}
Moreover, the $(-+)$-component of $\mathcal{E}_\lambda^\#$ satisfies:
\begin{equation}
E_{-+}^\#(z,\lambda)^{-1}=O(\langle\lambda\rangle^{-1}).
\end{equation}
\end{prop}
\begin{proof}
The proof is almost identical to the Airy model problem we discussed above. To make the argument work, we only need to replace the Poisson operator $K_\lambda$ by $K_\lambda^\#$ satisfying
\begin{equation*}
P_\lambda^\#K_\lambda^\#=0,\;\;\; \gamma_1K_\lambda^\#=\Id,
\end{equation*}
which is given by multiplying the function
\begin{equation*}
f_\lambda^\#=-e^{-\pi i/3}\lambda^{-1/2}\exp(-e^{\pi i/3}\lambda^{1/2}t).
\end{equation*}
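To check this (with the branch conventions below), write $f_\lambda^\#(t)=c\,e^{-\alpha t}$ with $\alpha=e^{\pi i/3}\lambda^{1/2}$; then
\begin{equation*}
e^{-2\pi i/3}D_t^2f_\lambda^\#+\lambda f_\lambda^\#
=(-e^{-2\pi i/3}\alpha^2+\lambda)f_\lambda^\#
=(-e^{-2\pi i/3}e^{2\pi i/3}\lambda+\lambda)f_\lambda^\#=0,
\end{equation*}
so $P_\lambda^\#f_\lambda^\#=0$, while $(f_\lambda^\#)'(0)=-\alpha c=1$ forces $c=-e^{-\pi i/3}\lambda^{-1/2}$.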
When $\lambda$ is negative, we choose the branch $\lambda^{1/2}=-i(-\lambda)^{1/2}$ so that $f_\lambda^\#$ has exponential decay. An easy calculation shows that
\begin{equation*}
\|f_\lambda^\#\|_{L^2}=O(|\lambda|^{-3/4}),
\end{equation*}
and therefore all our arguments in Lemma \ref{lem:airyinv}, and thus in Propositions \ref{prop:airy} and \ref{prop:airy:e-+}, can be carried out in the same way. We shall omit the details here.
\end{proof}
\subsection{The $\mu$-dependent construction}
Now we shall put the parameter $\mu$ back into the operator and describe the necessary modifications to the model problem. The idea is to change coordinates $t=\mu^{-1/3}\tilde{t}$ in \eqref{model:airy}, which reduces the problem to the case $\mu=1$. From our discussion, it will be clear that when $\mu$ varies in a compact subset of $(0,\infty)$ all the estimates are uniform in $\mu$, provided that we construct all the operators accordingly and replace the eigenvalues $\zeta_j'$ of the Neumann Airy operator $D_t^2+t$ by $\mu^{2/3}\zeta_j'$. More precisely, we have the following Grushin problem
\begin{equation}
\label{model:modairymu}
\mathcal{P}_\lambda^\delta(z)=\left(
\begin{array}{cc}
P_\lambda-z & R_-^{\lambda,\delta,\mu} \\
\gamma_1 & 0 \\
R_+^{\lambda,\delta,\mu} & 0 \\
\end{array}
\right):
\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}
\end{equation}
where the spaces $\mathcal{B}_{z,\lambda,r}, \mathcal{H}_{z,\lambda,r}$ are as before and we reintroduce the additional parameter $\mu$ in the operators
\begin{equation*}
\begin{split}
P_\lambda-z=&\; e^{-2\pi i/3}(D_t^2+\mu t)+\lambda-z\\
R_+^{\lambda,\delta,\mu}u=&\;(\langle u,e_{j,\mu}^{\lambda,\delta}\rangle)_{1\leqslant j\leqslant N}\\
R_-^{\lambda,\delta,\mu}u_-=&\;\sum_{j=1}^Nu_-(j)e_{j,\mu}^{\lambda,\delta}
\end{split}
\end{equation*}
with
\begin{equation}
\label{def:mueigens}
e_{j,\mu}^{\lambda,\delta}(t)=\mu^{1/6}e_j^{\lambda,\delta}(\mu^{1/3}t)
=\mu^{1/6}\langle\delta\lambda\rangle^{1/4}
e_j(\mu^{1/3}\langle\delta\lambda\rangle^{1/2}t).
\end{equation}
At the same time, we also replace the $R_\pm^{\lambda,\delta}$ in the easy model by $R_\pm^{\lambda,\delta,\mu}$. Then all the previous results hold uniformly in $\mu\in[C^{-1},C]\subset(0,\infty)$, with possibly a smaller $r_0>0$ due to the change of variable $t=\mu^{-1/3}\tilde{t}$.
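The underlying identity is simply that, with $\tilde{t}=\mu^{1/3}t$,
\begin{equation*}
D_t^2+\mu t=\mu^{2/3}\big(D_{\tilde{t}}^2+\tilde{t}\big),
\end{equation*}
so the Neumann realization of $D_t^2+\mu t$ on $(0,\infty)$ has eigenvalues $\mu^{2/3}\zeta_j'$ with normalized eigenfunctions $\mu^{1/6}e_j(\mu^{1/3}t)$, which is the origin of \eqref{def:mueigens}.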
\section{Second microlocal symbol class for Grushin problems}
\label{sec:second}
In this part, we consider the symbol class for the operator \eqref{op:comb} near the boundary where we have the expression in coordinates $(t=h^{-2/3}x_n,x')$,
\begin{equation}
\begin{split}
P-z=&\;e^{-2\pi i/3}(D_t^2+2tQ(h^{2/3}t,x',hD_{x'};h))\\
&+h^{-2/3}(R(x',hD_{x'};h)-w)+F(h^{2/3}t,x')h^{2/3}D_t-z,
\end{split}
\end{equation}
and $\gamma=\gamma_1+h^{2/3}k\gamma_0$. The difficulty is that though this operator has good symbol properties, our construction of the inverse requires a symbol class with non-classical behavior. More precisely, the symbol class will contain functions of $h^{-2/3}(R(x',\xi')-w)$; near the glancing hypersurface $\Sigma_w=\{R(x',\xi')=w\}$, we lose a power $h^{-2/3}$ each time we differentiate such symbols in the direction transversal to $\Sigma_w$. Symbol classes characterizing such non-classical behavior are introduced in \cite{SZ6} and we shall follow their approach.
\subsection{Second microlocalization with respect to a hypersurface}
In this part, we review some facts about the second microlocalization with respect to a hypersurface. For details, see \cite{SZ6}.
We always assume that $X$ is an $n$-dimensional compact smooth manifold and $\Sigma\subset T^\ast X$ is a smooth compact hypersurface. In our application, $X=\partial\mathcal{O}$ will be the boundary of the obstacle and $\Sigma=\Sigma_w=\{(x',\xi')\in T^\ast\partial\mathcal{O}:R(x',\xi')=w\}$ will be the glancing hypersurface. We shall also fix a distance function $d(\Sigma,\cdot)$ on $T^\ast X$ as the absolute value of a defining function of $\Sigma$. In particular, $d(\Sigma,\cdot)$ vanishes only on $\Sigma$ and behaves like $\langle\xi\rangle$ near infinity in $T^\ast X$.
To start with, we recall the standard class of semiclassical symbols on $T^\ast X$, see e.g. \cite{DS}, \cite{Ma} and \cite{Z2},
\begin{equation*}
S^{m,k}(T^\ast X)=\{a\in C^\infty(T^\ast X\times(0,1]):|\partial_x^\alpha\partial_\xi^\beta a(x,\xi;h)|\leqslant C_{\alpha\beta}h^{-m}\langle\xi\rangle^{k-|\beta|}\}.
\end{equation*}
One can also study the more general class $S^{m,k}_\delta$ with $0\leqslant\delta<\frac{1}{2}$ where the right-hand side is replaced by $C_{\alpha\beta}h^{-m-\delta(|\alpha|+|\beta|)}
\langle\xi\rangle^{k-(1-\delta)|\beta|+\delta|\alpha|}$.
Now for any $0\leqslant\delta<1$ we define a class of symbols associated to $\Sigma$: $a\in S_{\Sigma,\delta}^{m,k_1,k_2}(T^\ast X)$ if
\begin{equation}\label{2msymbol}
\begin{split}
&\text{near } \Sigma: V_1\cdots V_{l_1}W_1\cdots W_{l_2}a=O(h^{-m-\delta l_1}
\langle h^{-\delta} d(\Sigma,\cdot)\rangle^{k_1}),\\
&\text{where } V_1, \ldots, V_{l_1} \text{ are vector fields tangent to } \Sigma,\\
&\text{ and }W_1, \ldots, W_{l_2} \text{ are any vector fields};\\
&\text{away from } \Sigma: \partial_x^\alpha\partial_\xi^\beta a(x,\xi;h)=O(h^{-m-\delta k_1}\langle\xi\rangle^{k_2-|\beta|}).
\end{split}
\end{equation}
To define the corresponding class of operators $\Psi_{\Sigma,\delta}^{m,k_1,k_2}$, we start locally by assuming that $\Sigma$ is in the normal form $\Sigma_0=\{\xi_1=0\}$. Near $\xi_1=0$ we can write $a=a(x,\xi,\lambda;h)$ with $\lambda=h^{-\delta}\xi_1$, and \eqref{2msymbol} becomes
\begin{equation}
\partial_x^\alpha\partial_\xi^\beta\partial_\lambda^l
a(x,\xi,\lambda;h)=O(h^{-m}\langle\lambda\rangle^{k-l}),
\end{equation}
which we shall write $a=\tilde{O}(h^{-m}\langle\lambda\rangle^k)$. Then we can define
\begin{equation}
\widetilde{\Op}_h(a)u(x)=\frac{1}{(2\pi h)^n}\int e^{\frac{i}{h}\langle x-y,\xi\rangle}a(x,\xi,h^{-\delta}\xi_1,h)u(y)dyd\xi.
\end{equation}
Then as in the standard semiclassical calculus, we have the composition formula: for $a=\tilde{O}(h^{-m_1}\langle\lambda\rangle^{k_1})$ and $b=\tilde{O}(h^{-m_2}\langle\lambda\rangle^{k_2})$,
\begin{equation*}
\widetilde{\Op}_h(a)\circ\widetilde{\Op}_h(b)
=\widetilde{\Op}_h(a\#_hb) \mod{\Psi^{-\infty,-\infty}(X)},
\end{equation*}
where
\begin{equation*}
a\#_hb(x,\xi,\lambda;h)=\sum_{\alpha\in\mathbb{N}^n}
\frac{1}{\alpha!}(h\partial_{\xi'})^{\alpha'}
(h\partial_{\xi_1}+h^{1-\delta}\partial_\lambda)^{\alpha_1}a
D_x^\alpha b\in\tilde{O}(h^{-m_1-m_2}\langle\lambda\rangle^{k_1+k_2}).
\end{equation*}
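The term $h\partial_{\xi_1}+h^{1-\delta}\partial_\lambda$ in the composition formula is simply the chain rule applied to the new variable: if $a=a(x,\xi,h^{-\delta}\xi_1;h)$, then
\begin{equation*}
h\partial_{\xi_1}\big[a(x,\xi,h^{-\delta}\xi_1)\big]
=\big[(h\partial_{\xi_1}+h^{1-\delta}\partial_\lambda)a\big](x,\xi,h^{-\delta}\xi_1),
\end{equation*}
so each $\xi_1$-derivative in the standard composition formula also hits the $\lambda$-slot with an extra factor of $h^{-\delta}$.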
We also have a version of Beals's characterization of pseudodifferential operators: let $A=A_h:\mathcal{S}(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n)$ and put $x'=(x_2,\ldots,x_n)$. Then $A=\widetilde{\Op}_h(a)$ for some $a=\tilde{O}(h^{-m}\langle\lambda\rangle^k)$ if and only if for all $N,p,q\geqslant0$ and every sequence $l_j(x',\xi')$, $j=1,\ldots,N$, of linear forms on $\mathbb{R}^{2(n-1)}$ there exists $C>0$ such that
\begin{equation*}
\begin{split}
\|\ad_{l_1(x',hD_{x'})}\circ\cdots\circ\ad_{l_N(x',hD_{x'})}\circ
(\ad_{h^{1-\delta}D_{x_1}})^p\circ(\ad_{x_1})^q&Au\|_{(q-\min(k,0))}\\
\leqslant&\; Ch^{N+(1-\delta)(p+q)}\|u\|_{(\max(k,0))},
\end{split}
\end{equation*}
where $\|u\|_{(p)}=\|u\|_{L^2}+\|(h^{1-\delta}D_{x_1})^pu\|_{L^2}$.
The global definition of the class $\Psi_{\Sigma,\delta}^{m,k_1,k_2}(X)$ relies on the invariance of $\widetilde{\Op}_h(\tilde{O}(\langle\lambda\rangle^m))$ under conjugation by $h$-Fourier integral operators whose associated canonical relations fix $\{\xi_1=0\}$. See Proposition 4.2 in \cite{SZ6}. Now we define $A\in\Psi_{\Sigma,\delta}^{m,k_1,k_2}(X)$ if and only if\\
(1) for any $m_0\in\Sigma$ and any $h$-Fourier integral operator $U:C^\infty(X)\to C^\infty(\mathbb{R}^n)$ elliptic near $((0,0),m_0)$ whose corresponding canonical transformation $\kappa$ satisfies $\kappa(m_0)=(0,0)$, $\kappa(\Sigma\cap V)\subset\{\xi_1=0\}$ for some neighborhood $V$ of $m_0$, we have
$UAU^{-1}=\widetilde{\Op}_h
(\tilde{O}(h^{-m}\langle\lambda\rangle^{k_1}))$, microlocally near $(0,0)$;\\
(2) for any $m_0$ outside any fixed neighborhood of $\Sigma$, $A\in\Psi^{m+\delta k_1,k_2}(X)$ microlocally near $m_0$ in both classical and semiclassical sense.
In particular, we have the quantization map
\begin{equation*}
\Op_{\Sigma,h}:S_{\Sigma,\delta}^{m,k_1,k_2}(T^\ast X)\to\Psi_{\Sigma,\delta}^{m,k_1,k_2}(X),
\end{equation*}
and the principal symbol map
\begin{equation*}
\sigma_{\Sigma,h}:\Psi_{\Sigma,\delta}^{m,k_1,k_2}(X)\to
S_{\Sigma,\delta}^{m,k_1,k_2}(T^\ast X)/
S_{\Sigma,\delta}^{m-1+\delta,k_1-1,k_2-1}(T^\ast X).
\end{equation*}
For $a\in S^{m,k_1,-\infty}_{\Sigma,\delta}$ we introduce a notion of essential support. We say for an $h$-dependent family of sets $V_h\subset T^\ast X$,
\begin{equation*}
\esssupp a\cap V_h=\emptyset
\end{equation*}
if and only if there exists $\chi\geqslant0$, $\chi\in S^{0,0,-\infty}(T^\ast X)$, such that
\begin{equation*}
\chi|_{V_h}\geqslant1, \chi a\in S^{-\infty,-\infty}(T^\ast X).
\end{equation*}
As in the standard case, if $a,b\in S_{\Sigma,\delta}^{m,k,-\infty}(T^\ast X)$ satisfy $\Op_{\Sigma,h}(a)=\Op_{\Sigma,h}(b)$, then
$\esssupp a=\esssupp b$. Therefore for $A\in \Psi^{m,k,-\infty}_{\Sigma,\delta}(X)$ we can define the semiclassical wave front set as $\WF_h(A)=\esssupp a$ if $A=\Op_{\Sigma,h}(a)$.
Now we generalize the symbol class to an arbitrary order function $m$ and to operator-valued symbols, taking values in the space of operators from a Banach space $\mathcal{B}$ to another Banach space $\mathcal{H}$. We assume that $m=m(x,\xi,\lambda;h)$ is an order function with respect to the metric $g=dx^2+d\xi^2/\langle\xi\rangle+d\lambda^2/\langle\lambda\rangle$ in the sense that
\begin{equation*}
|g_{(x,\xi,\lambda)}(y,\eta,\mu)|\leqslant c\Rightarrow
C^{-1}m(x,\xi,\lambda)\leqslant m(x+y,\xi+\eta,\lambda+\mu)
\leqslant Cm(x,\xi,\lambda).
\end{equation*}
(See \cite{Ho} for instance.) We also assume that $\mathcal{B}$ and $\mathcal{H}$ are equipped with $(x,\xi,\lambda;h)$-dependent norms $\|\cdot\|_{m_\mathcal{B}}$, $\|\cdot\|_{m_\mathcal{H}}$, each equivalent to some fixed norm (though possibly not uniformly). In addition, we assume that
the norms are continuous with respect to the metric $g$, uniformly with respect to $h$. Then we say that $a\in S_{\Sigma,\delta}(T^\ast X,m,\mathcal{L}(\mathcal{B},\mathcal{H}))$ if
\begin{equation}
\|a(x,\xi;h)u\|_{m_\mathcal{H}(x,\xi,\lambda;h)}\leqslant Cm(x,\xi,\lambda;h)\|u\|_{m_\mathcal{B}(x,\xi,\lambda;h)}, \lambda=h^{-\delta}d(\Sigma,\cdot), \text{ for all } u\in\mathcal{B},
\end{equation}
and if this statement is stable under applications of vector fields in the sense of \eqref{2msymbol}, namely,
\begin{equation}
\begin{split}
&\text{near } \Sigma: V_1\cdots V_{l_1}W_1\cdots W_{l_2}a=O_{\mathcal{L}(\mathcal{B},\mathcal{H})}(mh^{-\delta l_1}),\\
&\text{where } V_1, \ldots, V_{l_1} \text{ are vector fields tangent to } \Sigma,\\
&\text{ and }W_1, \ldots, W_{l_2} \text{ are any vector fields};\\
&\text{away from } \Sigma: \partial_x^\alpha\partial_\xi^\beta a(x,\xi;h)=O_{\mathcal{L}(\mathcal{B},\mathcal{H})}
(m\langle\xi\rangle^{-|\beta|}).
\end{split}
\end{equation}
Then we can obtain a class of operators $\Psi_{\Sigma,\delta}(X;m,\mathcal{L}(\mathcal{B},\mathcal{H}))$ and the corresponding principal symbol map
\begin{equation}
\begin{split}
\sigma_{\Sigma,h}:
\Psi_{\Sigma,\delta}&(X;m,\mathcal{L}(\mathcal{B},\mathcal{H}))\\
&\to S_{\Sigma,\delta}(T^\ast X;m,\mathcal{L}(\mathcal{B},\mathcal{H}))/
S_{\Sigma,\delta}(T^\ast X;m\langle h^{-\delta}d(\Sigma,\cdot)\rangle^{-1},
\mathcal{L}(\mathcal{B},\mathcal{H})).
\end{split}
\end{equation}
\subsection{Analysis near the glancing hypersurface}
We can use $|R(x',\xi')-w|$ as our distance function to the glancing hypersurface $\Sigma_w$, with respect to which we shall perform the second microlocalization. First, we work near the glancing hypersurface, i.e. $|R(x',\xi')-w|\leqslant2C^{-1}$, so that
$$\lambda=h^{-2/3}(R(x',\xi')-w)=O(h^{-2/3}).$$
We shall think of this as a perturbation of the principal symbol
\begin{equation}
\left(
\begin{array}{c}
P_0-z \\
\gamma_1 \\
\end{array}
\right)=
\left(
\begin{array}{c}
e^{-2\pi i/3}(D_t^2+\mu t)+\lambda-z \\
\gamma_1 \\
\end{array}
\right),
\end{equation}
where
$$\mu=2Q(x',\xi')\in[C^{-1},C].$$
As in the previous section, we set up the Grushin problem by letting $R_\pm=R_\pm^{\lambda,\delta}$ there. Then we have the operator-valued symbol
\begin{equation}
\mathcal{P}_0(z)=
\left(
\begin{array}{cc}
P_0-z & R_- \\
\gamma_1 & 0 \\
R_+ & 0 \\
\end{array}
\right)
\end{equation}
which is uniformly invertible in $\mathcal{L}(\mathcal{B}_{z,\lambda,r},\mathcal{H}_{z,\lambda,r})$ with inverse $\mathcal{E}_0(z)$.
For simplicity, let us pretend for now that $Q$ does not depend additionally on $h$; then by Taylor expansion with respect to $x_n=h^{2/3}t$, we have
$$\mathcal{P}(z)\equiv\mathcal{P}_0(z)+h^{2/3}
\mathcal{K}_0+\sum_{j=1}^\infty h^{2j/3}T^j\mathcal{P}_j+\sum_{j=1}^\infty h^{2j/3}T^{j-1}\mathcal{D}_j.$$
Here
$$\mathcal{K}_0=\left(
\begin{array}{cc}
0 & 0 \\
k(x')\gamma_0 & 0 \\
0 & 0 \\
\end{array}
\right),$$
$$\mathcal{P}_j=
\left(
\begin{array}{cc}
\frac{1}{j!}2e^{-2\pi i/3}t\partial_t^jQ(0,x',\xi') & 0 \\
0 & 0 \\
0 & 0 \\
\end{array}
\right),$$
$$\mathcal{D}_j=
\left(
\begin{array}{cc}
\frac{1}{(j-1)!}\partial_t^{j-1}F(0,x')D_t & 0 \\
0 & 0 \\
0 & 0 \\
\end{array}
\right),$$
and
$$T=\left(
\begin{array}{ccc}
t & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0\\
\end{array}
\right).$$
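These terms come from Taylor expanding the coefficients at $x_n=h^{2/3}t=0$; we sketch the computation (here $\partial_t^jQ$ and $\partial_t^{j-1}F$ denote derivatives in the first argument):
\begin{equation*}
2tQ(h^{2/3}t,x',\xi')=\sum_{j=0}^\infty h^{2j/3}t^j\cdot\frac{2t}{j!}\partial_t^jQ(0,x',\xi'),
\qquad
F(h^{2/3}t,x')h^{2/3}D_t=\sum_{j=1}^\infty h^{2j/3}t^{j-1}\cdot\frac{\partial_t^{j-1}F(0,x')}{(j-1)!}D_t.
\end{equation*}
Multiplying the first expansion by $e^{-2\pi i/3}$ and inserting both into the upper-left entry of $\mathcal{P}(z)$ produces the terms $h^{2j/3}T^j\mathcal{P}_j$ and $h^{2j/3}T^{j-1}\mathcal{D}_j$ above, the $j=0$ term of the first sum being part of $\mathcal{P}_0(z)$.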
To find the inverse of such symbols, we shall take an approach similar to \S 1 of Sj\"{o}strand \cite{S1}, which is in turn motivated by the work of Boutet de Monvel--Kree \cite{BK} on formal analytic symbols. Instead of considering a symbol $q=q(x,\xi;h)$, we deal with the formal operator
$$Q=q(x,\xi+hD_x;h)
\equiv\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}
\partial_\xi^\alpha q(x,\xi;h)(hD_x)^\alpha.$$
The symbol $q$ itself can be recovered by the formula
$$q=Q(1).$$
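Indeed, $(hD_x)^\alpha1=0$ for every $\alpha\neq0$, so only the $\alpha=0$ term survives:
\begin{equation*}
Q(1)=\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}\partial_\xi^\alpha q(x,\xi;h)(hD_x)^\alpha1=q(x,\xi;h).
\end{equation*}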
The advantage of working in this setting is that the composition formula
$$a\#_hb=\sum_{\alpha\in\mathbb{N}^{n-1}}
\frac{1}{\alpha!}(h\partial_\xi)^\alpha aD_x^\alpha b$$
becomes the formal composition of the corresponding formal operators $A$ and $B$:
$$a\#_hb=A\circ B(1).$$
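To check this, note that $B(1)=b$ and apply $A$ to the function $x\mapsto b(x,\xi)$, with $\xi$ as a parameter:
\begin{equation*}
A\circ B(1)=A(b)=\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}\partial_\xi^\alpha a\,(hD_x)^\alpha b
=\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}(h\partial_\xi)^\alpha a\,D_x^\alpha b=a\#_hb.
\end{equation*}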
Therefore finding the inverse of such a symbol is equivalent to finding the inverse of the corresponding formal operator.
For this purpose, we shall introduce the following class of operators
\begin{equation*}
\mathfrak{A}=\sum_{k,\alpha}
(h^{2/3}T)^kA_{k,\alpha}(x',\xi,\lambda;h)D_{x'}^\alpha,
\end{equation*}
where
$$A_{k,\alpha}:\mathcal{B}_{z,\lambda,r}
\to\mathcal{H}_{z,\lambda,r}.$$
The inverse of such operators should be of the form
\begin{equation*}
\mathfrak{B}=\sum_{k,\alpha}
(h^{2/3}T)^kB_{k,\alpha}(x',\xi,\lambda;h)D_{x'}^\alpha,
\end{equation*}
where
$$B_{k,\alpha}:\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r}.$$
However, we should notice that the $T$ in the second class of operators should be interpreted as
$$T=\left(
\begin{array}{ccc}
t & 0\\
0 & 0\\
\end{array}
\right)$$
acting on $\mathcal{B}_{z,\lambda,r}$ instead of on $\mathcal{H}_{z,\lambda,r}$. When needed, we shall write this one as $T_{\mathcal{B}}$ and the previous one as $T_{\mathcal{H}}$.
There are several technical issues about these two different operators $T$ that we have to deal with. First, $T$ is not a bounded operator on $\mathcal{B}_{z,\lambda,r}$ or $\mathcal{H}_{z,\lambda,r}$. We can deal with this issue by relaxing the exponential weight of the space:
$$T^k=O(1)C^kk^k(r-r')^{-k}
:\mathcal{B}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r'}$$
if $r>r'$, and similarly for $\mathcal{H}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r'}$. Therefore we can work on the formal level and interpret the formal operators in the end as operators from $\mathcal{B}_{z,\lambda,r}$ to $\mathcal{H}_{z,\lambda,r'}$ (or similar operators with the weight function in the codomain relaxed to $r'$).
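The constants $C^kk^k(r-r')^{-k}$ come from an elementary bound for the weights; as a sketch, if the weight in the $t$-variable is of the form $e^{rt/2}$ (as it is in the region where the cut-offs are nontrivial), then
\begin{equation*}
\sup_{t\geqslant0}\,t^ke^{-(r-r')t/2}=\Big(\frac{2k}{e(r-r')}\Big)^k\leqslant C^kk^k(r-r')^{-k},
\end{equation*}
the supremum being attained at $t=2k/(r-r')$.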
The second issue comes from the non-commutativity of operators $T$ with $A_k$ or $B_k$. To compose two such operators $\mathfrak{A}$ and $\mathfrak{B}$, we are hoping to get a class of operators
\begin{equation*}
\mathfrak{C}=\sum_{k,\alpha}
(h^{2/3}T)^kC_{k,\alpha}(x',\xi',\lambda)D_{x'}^\alpha,
\end{equation*}
where
$$C_{k,\alpha}:\mathcal{H}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}
\text{ or }\mathcal{B}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r},$$
depending on the order of composition. This composition will involve the ``commutators'' $\ad_T=[T,\cdot]$, which we should interpret as
$$\ad_T(A)=T_{\mathcal{H}}A-AT_{\mathcal{B}},$$
$$\ad_T(B)=T_{\mathcal{B}}B-BT_{\mathcal{H}},$$
when it acts on different classes. We shall also need $\ad_T$ to act on the two different classes of $\mathfrak{C}$ and we shall interpret it accordingly.
This involves the study of stability of mapping properties of $A_k$ and $B_k$ under the ``commutator operation'' $\ad_T$. We first consider $\mathcal{P}_0$ to see its mapping properties and then adjust our definition of formal operators in a suitable way.
\begin{lem}
For $|\Re z|\ll1/\delta$, we have
\begin{equation}
\ad_T^k\mathcal{P}_0=O_k(\delta^{-k/2}\langle\lambda-\Re z\rangle^{-k/2})
:\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}.
\end{equation}
\end{lem}
\begin{proof}
We have seen in the last section that this is true for $k=0$. A simple calculation gives
\begin{equation*}
\ad_T^k\mathcal{P}_0=
\left(
\begin{array}{cc}
\ad_t^k(P_0-z) & t^kR_- \\
(-1)^k\gamma_1t^k & 0 \\
(-1)^kR_+t^k & 0 \\
\end{array}
\right),
\end{equation*}
where $\ad_t=[t,\cdot]$ is the commutator with multiplication by $t$. For $k=1$,
$$\ad_t(P_0-z)=2ie^{-2\pi i/3}D_t=O(\langle\lambda-\Re z\rangle^{-1/2}):\mathcal{B}_{z,\lambda,r}\to L_r^2.$$
For $k=2$,
$$\ad_t^2(P_0-z)=-2e^{-2\pi i/3}=O(\langle\lambda-\Re z\rangle^{-1}):\mathcal{B}_{z,\lambda,r}\to L_r^2.$$
For $k>2$,
$$\ad_t^k(P_0-z)=0.$$
For $k=1$,
$$(-1)^k\gamma_1t^k=\gamma_0=O(\langle\lambda-\Re z\rangle^{-1/2}):
\mathcal{B}_{z,\lambda,r}\to \mathbb{C}_{\langle\lambda-\Re z\rangle^{1/4}}.$$
For $k>1$,
$$(-1)^k\gamma_1t^k=0.$$
Also for $k\geqslant1$, we have
$$R_+t^k=O_k(\delta^{-k/2}\langle\lambda-\Re z\rangle^{-k/2}):
\mathcal{B}_{z,\lambda,r}\to\mathbb{C}^N;$$
$$(-1)^kt^kR_-=O_k(\delta^{-k/2}\langle\lambda-\Re z\rangle^{-k/2}):
\mathbb{C}^N\to L^2_r.$$
Combining all these estimates together, we get the desired mapping properties for $\ad_T^k\mathcal{P}_0$.
\end{proof}
On the other hand, we also need the stability for $\mathcal{P}_0(z)$ under differentiation in $x',\xi',\lambda$ which will give the second microlocal symbol class which is simply
\begin{equation*}
\mathcal{P}_0(z)\in S_{\Sigma_w,2/3}(\partial\mathcal{O};
1,\mathcal{L}(\mathcal{B}_{z,\lambda,r},\mathcal{H}_{z,\lambda,r})).
\end{equation*}
We shall combine the two types of mapping properties together to get
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta
\partial_\lambda^l\ad_T^k\mathcal{P}_0(z)
=O(\delta^{-k/2}\langle\lambda-\Re z\rangle^{-l-k/2})
:\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r},
\end{equation*}
where the constants depend on $k,l,\alpha,\beta$. Now each of $\partial_{x'},\partial_{\xi'},\partial_\lambda$ and $\ad_T$ is a derivation, provided we interpret $\ad_T$ suitably. We get similar estimates for the inverse:
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta
\partial_\lambda^l\ad_T^k\mathcal{E}_0(z)
=O(\delta^{-k/2}\langle\lambda-\Re z\rangle^{-l-k/2})
:\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r},
\end{equation*}
since we have seen the estimates for $k=l=0$, $\alpha=\beta=0$ in the last section. We can replace $\langle\lambda-\Re z\rangle$ by $\langle\lambda\rangle$ at the expense of $\delta$-dependent constants.
Also we have the symbol properties for $\mathcal{P}_j$, $\mathcal{D}_j$ and $\mathcal{K}_0$:
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta
\partial_\lambda^l\ad_T^k\mathcal{P}_j(z)
=O(\langle\lambda\rangle^{-l-k/2})
:\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r},
\end{equation*}
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta
\partial_\lambda^l\ad_T^k\mathcal{D}_j(z)
=O(\langle\lambda\rangle^{-1/2-l-k/2})
:\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r},
\end{equation*}
and
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta
\partial_\lambda^l\ad_T^k\mathcal{K}_0(z)
=O(\langle\lambda\rangle^{-1/2-l-k/2})
:\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}.
\end{equation*}
We remark that we neglect a number of simplifying features here; for example, for $\mathcal{K}_0$, the operator vanishes unless all of $\beta$, $k$ and $l$ are zero.
Now we can introduce the suitable class of formal operators:
\begin{equation}
\mathfrak{A}=\sum_{\alpha\in\mathbb{N}^{n-1},j,k,l,m\in\mathbb{N}}
(h^{2/3}T)^j(h^{2/3}\langle\lambda\rangle^{-1/2})^k
(h^{1/3}\langle\lambda\rangle^{-1})^lh^m
\mathcal{A}_{\alpha,j,k,l,m}(x',\xi',\lambda,z)D_{x'}^\alpha,
\end{equation}
with the mapping properties for $\mathcal{A}_{\alpha,j,k,l,m}$
\begin{equation}
\partial_{x'}^{\tilde{\alpha}}\partial_{\xi'}^{\tilde{\beta}}
\partial_\lambda^{\tilde{l}}\ad_T^{\tilde{k}}
\mathcal{A}_{\alpha,j,k,l,m}
=O(\langle\lambda\rangle^{-\tilde{l}-\tilde{k}/2})
:\mathcal{B}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}.
\end{equation}
We shall rewrite the operator $\mathcal{P}$ as
\begin{equation*}
\mathcal{P}(z)=h^{2/3}\mathcal{K}_0(x')+\sum_{j=0}^\infty h^{2j/3}T^j
(\mathcal{P}_j(x',\xi',\lambda,z;h)
+h^{2/3}\mathcal{D}_{j+1}(x';h)),
\end{equation*}
where $\mathcal{K}_0$ is as above and $\mathcal{P}_j$, $\mathcal{D}_j$ satisfy the same properties as above.
Then the associated formal operator $\mathfrak{P}$ is given by
\begin{equation*}
\begin{split}
\mathfrak{P}=&\;
\sum_{\alpha\in\mathbb{N}^{n-1}}
\frac{1}{\alpha!}\partial_{\xi'}^\alpha
(\mathcal{P}(x',\xi',\lambda,z;h))(hD_{x'})^\alpha\\
=&\;\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}
[\partial_{\xi''}^{\alpha''}
(\partial_{\xi_1}+h^{-2/3}\partial_\lambda)^{\alpha_1}\mathcal{P}]
(x',\xi',\lambda,z;h)(hD_{x'})^\alpha\\
=&\;\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}
[(h\partial_{\xi''})^{\alpha''}(h\partial_{\xi_1}
+h^{1/3}\partial_\lambda)^{\alpha_1}\mathcal{P}]
(x',\xi',\lambda,z;h)D_{x'}^\alpha\\
=&\;h^{2/3}\mathcal{K}_0+\sum_{j=1}^\infty h^{2j/3}T^{j-1}\mathcal{D}_j\\
&\;+
\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}
\sum_{j\in\mathbb{N}}h^{2j/3}T^j[(h\partial_{\xi''})^{\alpha''}
(h\partial_{\xi_1}+h^{1/3}\partial_\lambda)^{\alpha_1}\mathcal{P}_j]
(x',\xi',\lambda,z;h)D_{x'}^{\alpha'}
\end{split}
\end{equation*}
is in this class, with principal term $\mathcal{P}_0(x',\xi',\lambda,z)=\mathcal{P}_0(z)$. Here we write $\alpha'=(\alpha_1,\alpha'')$.
For the inverse, we introduce the class of operators $\mathfrak{B}$ of the same form as $\mathfrak{A}$ with $\mathcal{A}_{\alpha,j,k,l,m}$ replaced by $\mathcal{B}_{\alpha,j,k,l,m}$ satisfying
\begin{equation}
\partial_{x'}^{\tilde{\alpha}}\partial_{\xi'}^{\tilde{\beta}}
\partial_\lambda^{\tilde{l}}\ad_T^{\tilde{k}}
\mathcal{B}_{\alpha,j,k,l,m}
=O(\langle\lambda\rangle^{-\tilde{l}-\tilde{k}/2})
:\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r}.
\end{equation}
Then the composition of $\mathfrak{A}$ and $\mathfrak{B}$,
$$\mathfrak{C}=\mathfrak{A}\circ\mathfrak{B},\;\;\; (\text{or } \mathfrak{B}\circ\mathfrak{A}),$$
is of the same form as $\mathfrak{A}$ and $\mathfrak{B}$ with $\mathcal{A}_{\alpha,j,k,l,m}$ or $\mathcal{B}_{\alpha,j,k,l,m}$ replaced by $\mathcal{C}_{\alpha,j,k,l,m}$ satisfying
\begin{equation}
\partial_{x'}^{\tilde{\alpha}}\partial_{\xi'}^{\tilde{\beta}}
\partial_\lambda^{\tilde{l}}\ad_T^{\tilde{k}}
\mathcal{C}_{\alpha,j,k,l,m}
=O(\langle\lambda\rangle^{-\tilde{l}-\tilde{k}/2})
:\mathcal{H}_{z,\lambda,r}\to\mathcal{H}_{z,\lambda,r}.
\end{equation}
(or $\mathcal{B}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r}$.)
Now the construction of the formal inverses is through the standard techniques of Neumann series.
\begin{lem}
Suppose $\mathfrak{A}$ is as above with $\mathcal{A}_0$ invertible, and that $\mathcal{B}_0=\mathcal{A}_0^{-1}$ satisfies
$$\mathcal{B}_0=O(1):
\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r}.$$
Then there exists $\mathfrak{B}$ as above with the principal term $\mathcal{B}_0$ such that
$$\mathfrak{A}\circ\mathfrak{B}=\Id,\;\;\; \mathfrak{B}\circ\mathfrak{A}=\Id.$$
\end{lem}
\begin{proof}
Let $\mathfrak{C}=\mathfrak{A}\circ\mathfrak{B}_0$ where $\mathfrak{B}_0=\mathcal{B}_0$, then $\mathfrak{C}$ is as above with $\mathcal{C}_0=\mathcal{A}_0\circ\mathcal{B}_0=\Id$. Therefore we can form the formal Neumann series
$$\mathfrak{D}=\Id+(\Id-\mathfrak{C})
+(\Id-\mathfrak{C})\circ(\Id-\mathfrak{C})+\cdots$$
which again gives a formal operator as above. Then we can simply take $\mathfrak{B}=\mathcal{B}_0\circ\mathfrak{D}$ to get the right inverse. The left inverse can be constructed in the same way, and the standard argument shows that the two must have the same formal expansions. It is clear from the construction that the principal term of $\mathfrak{B}$ is $\mathcal{B}_0$.
\end{proof}
Now applying this lemma to $\mathfrak{P}$, we get an inverse $\mathfrak{E}$. Setting $\mathcal{E}=\mathfrak{E}(1)$, we get a parametrix for $\mathcal{P}(z)$ in the region $|R(x',\xi')-w|\leqslant 2C^{-1}$:
\begin{equation}
\mathcal{E}(x',\xi',\lambda,z;h)=\sum_{j,k,l,m\in\mathbb{N}}
(h^{2/3}T)^j(h^{2/3}\langle\lambda\rangle^{-1/2})^k
(h^{1/3}\langle\lambda\rangle^{-1})^lh^m
\mathcal{E}_{0,j,k,l,m}(x',\xi',\lambda,z)
\end{equation}
with
\begin{equation}
\partial_{x'}^{\tilde{\alpha}}\partial_{\xi'}^{\tilde{\beta}}
\partial_\lambda^{\tilde{l}}\ad_T^{\tilde{k}}
\mathcal{E}_{0,j,k,l,m}
=O(\langle\lambda\rangle^{-\tilde{l}-\tilde{k}/2})
:\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r}.
\end{equation}
In particular, the principal term is exactly the $\mathcal{E}_0$ constructed in the previous section.
\subsection{Analysis away from the glancing hypersurface}
Now we deal with the region $|R(x',\xi')-w|>C^{-1}$. In this case, $Q\ll|\lambda|=h^{-2/3}|R-w|$, so we are working with the second model operator of the last section, where we regard $tQ(h^{2/3}t,x',\xi')$ also as a perturbation. Let
$$P_0^\#=e^{-2\pi i/3}D_t^2+\lambda,\;\;\; \lambda=h^{-2/3}(R(x',\xi')-w),$$
and $R_\pm$ as before. The operator-valued symbol
\begin{equation}
\mathcal{P}_0^\#(z)=
\left(
\begin{array}{cc}
P_0^\#-z & R_- \\
\gamma_1 & 0 \\
R_+ & 0 \\
\end{array}
\right):\mathcal{B}_{\lambda,r}^\#\to\mathcal{H}_{\lambda,r}^\#
\end{equation}
is uniformly invertible with inverse $\mathcal{E}_0^\#(z)$ since $|\lambda|\geqslant h^{-2/3}/C\gg|\Re z|$. Moreover,
\begin{equation*}
\mathcal{P}_0^\#(z)\in S_{\Sigma_w,2/3}(\partial\mathcal{O};1,
\mathcal{L}(\mathcal{B}_{\lambda,r}^\#,\mathcal{H}_{\lambda,r}^\#)).
\end{equation*}
Recall from the definition of the symbol class that away from the glancing hypersurface, the symbol behaves classically and we do not need to keep track of the derivatives in $\lambda$. However, we need to consider the possibility that $\xi'$ may get large. More precisely, the symbol properties for $\mathcal{P}_0^\#$ and $\mathcal{E}_0^\#$ are given by
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta\ad_T^k\mathcal{P}_0^\#(z)
=O(\langle\xi'\rangle^{-|\beta|}\langle\lambda\rangle^{-k/2})
:\mathcal{B}_{\lambda,r}^\#\to\mathcal{H}_{\lambda,r}^\#;
\end{equation*}
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta\ad_T^k\mathcal{E}_0^\#(z)
=O(\langle\xi'\rangle^{-|\beta|}\langle\lambda\rangle^{-k/2})
:\mathcal{H}_{\lambda,r}^\#\to\mathcal{B}_{\lambda,r}^\#;
\end{equation*}
where we notice that $|\lambda|^{-k/2}\sim(h^{-1/3}\langle\xi'\rangle)^{-k}$ and $Q(0,x',\xi')=O(h^{2/3})|\lambda|$. For the lower order terms, we have the expansion
\begin{equation*}
\mathcal{P}(z)\equiv
h^{2/3}\mathcal{K}_0+
\sum_{j=0}^\infty(h^{2/3}T)^j\mathcal{P}_j^\#(x,\xi,z;h)
\end{equation*}
with $T$, $\mathcal{K}_0$ as before and
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta\ad_T^k\mathcal{P}_j^\#
=O(1)\langle\xi'\rangle^{-|\beta|}(h^{1/3}\langle\xi'\rangle^{-1})^k
:\mathcal{B}_{\lambda,r}^\#\to\mathcal{H}_{\lambda,r}^\#.
\end{equation*}
We proceed exactly as before to define the associated formal operator
\begin{equation*}
\mathfrak{P}^\#=\sum_{\alpha\in\mathbb{N}^{n-1}}\frac{1}{\alpha!}
((h\partial_{\xi'})^\alpha\mathcal{P})D_{x'}^\alpha.
\end{equation*}
This motivates us to consider the general class of formal operators of the form
\begin{equation}
\mathfrak{A}^\#=\sum_{\alpha\in\mathbb{N}^{n-1},j,k\in\mathbb{N}}
(h^{2/3}T)^j(h\langle\xi'\rangle)^k
\mathcal{A}^\#_{\alpha,j,k}(x',\xi',z;h)D_{x'}^\alpha
\end{equation}
with
\begin{equation}
\partial_{x'}^{\tilde{\alpha}}\partial_{\xi'}^{\tilde{\beta}}
\ad_T^{\tilde{k}}\mathcal{A}^\#_{\alpha,j,k}
=O(1)\langle\xi'\rangle^{-|\tilde{\beta}|}
(h^{1/3}\langle\xi'\rangle^{-1})^{\tilde{k}}
:\mathcal{B}_{\lambda,r}^\#\to\mathcal{H}_{\lambda,r}^\#.
\end{equation}
So we see that $\mathfrak{P}^\#$ is in this class. The same argument as in the case near the glancing hypersurface shows that $\mathfrak{P}^\#$ has a formal inverse $\mathfrak{E}^\#$ of the same form satisfying the estimates with $\mathcal{H}^\#$ and $\mathcal{B}^\#$ exchanged. Therefore we have an inverse of $\mathcal{P}(z)$ in the region $|R(x',\xi')-w|\geqslant C^{-1}$,
\begin{equation}
\mathcal{E}^\#(x',\xi',z;h)=\mathfrak{E}^\#(1)=
\sum_{j,k\in\mathbb{N}}
(h^{2/3}T)^j(h\langle\xi'\rangle)^k
\mathcal{E}^\#_{j,k}(x',\xi',z;h)
\end{equation}
with the following mapping properties
\begin{equation}
\partial_{x'}^\alpha\partial_{\xi'}^\beta\ad_T^{\tilde{k}}
\mathcal{E}^\#_{j,k}=O(1)\langle\xi'\rangle^{-|\beta|}
(h^{1/3}\langle\xi'\rangle^{-1})^{\tilde{k}}
:\mathcal{H}^\#_{\lambda,r}\to\mathcal{B}^\#_{\lambda,r}.
\end{equation}
\subsection{Analysis in the intermediate region}
\label{sec:inter}
In the intermediate region $C^{-1}\leqslant|R(x',\xi')-w|\leqslant2C^{-1}$, we observe that both constructions reduce to simpler expansions that coincide with each other.
The key point is that in this region both $\lambda$ and $\xi'$ are irrelevant: $|\xi'|$ is bounded and $\lambda\sim h^{-2/3}$. Therefore we have the expansions
\begin{equation*}
\mathcal{E}(x',\xi',z;h)
=\sum_{j,k\in\mathbb{N}}(h^{2/3}T)^jh^k
\mathcal{E}_{j,k}(x',\xi',z;h)
\end{equation*}
where
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta\ad_T^{\tilde{k}}
\mathcal{E}_{j,k}=O(h^{k/3})
:\mathcal{H}_{z,\lambda,r}\to\mathcal{B}_{z,\lambda,r};
\end{equation*}
and
\begin{equation*}
\mathcal{E}^\#(x',\xi',z;h)
=\sum_{j,k\in\mathbb{N}}(h^{2/3}T)^jh^k
\mathcal{E}^\#_{j,k}(x',\xi',z;h)
\end{equation*}
where
\begin{equation*}
\partial_{x'}^\alpha\partial_{\xi'}^\beta\ad_T^{\tilde{k}}
\mathcal{E}^\#_{j,k}=O(h^{k/3})
:\mathcal{H}_{\lambda,r}^\#\to\mathcal{B}_{\lambda,r}^\#.
\end{equation*}
Of course the same is true for $\mathcal{P}$ with $\mathcal{B}$ and $\mathcal{H}$ exchanged. Therefore, if we introduce spaces $\mathcal{B}$ and $\mathcal{H}$ which agree with $\mathcal{B}_{z,\lambda,r}$ and $\mathcal{H}_{z,\lambda,r}$ microlocally in $|R(x',\xi')-w|<2C^{-1}$, and with $\mathcal{B}_{\lambda,r}^\#$ and $\mathcal{H}_{\lambda,r}^\#$ microlocally in $|R(x',\xi')-w|>C^{-1}$, then this coincidence on the intermediate region shows that the symbols $\mathcal{P}$ and $\mathcal{E}$ give a global construction, at least near the boundary.
\section{Global Grushin problems}
\label{sec:global}
\subsection{Estimates away from the boundary}
We begin by recalling the following estimates away from the boundary. Let
\begin{equation}
D(\alpha)=\{x\in\mathbb{R}^n\setminus\mathcal{O}:
d(x,\partial\mathcal{O})>\alpha\},
\end{equation}
and $H_h^k(\Omega)$ be the semiclassical Sobolev space on an open set $\Omega\subset\mathbb{R}^n$ (or on a compact manifold, which we shall later take to be $\partial\mathcal{O}$). In \cite[Section 7]{SZ6}, the following proposition is proved.
\begin{prop}
Let $0<\epsilon<\frac{2}{3}$, $|\Re z|\leqslant L$, $|\Im z|\leqslant C$. Then there exists $h_0=h_0(L)$ such that for $0<h<h_0(L)$, there exist maps $E_\epsilon, K_\epsilon$ defined on $C^\infty_c(D(h^\epsilon))$, with the properties $(P-z)E_{\epsilon}=I+K_{\epsilon}$ and
\begin{equation}
\begin{split}
E_\epsilon=&\;O(h^{2/3-\epsilon}): L^2(D(h^\epsilon))\to H_h^2(\mathbb{R}^n\setminus\mathcal{O}),\\
K_\epsilon=&\;O(e^{-C^{-1}h^{-1+\frac{3\epsilon}{2}}}): L^2(D(h^{\epsilon}))\to H_h^k(\mathbb{R}^n\setminus\mathcal{O}),\;\; \forall k\in\mathbb{R}.\\
\end{split}
\end{equation}
Moreover, for any fixed $\gamma\in(0,1)$, we can construct $E_\epsilon$ and $K_\epsilon$ such that for $u\in C_c^\infty(D(h^\epsilon))$, $E_\epsilon u$ and $K_\epsilon u$ are supported in $D((1-\gamma)h^\epsilon)$.
\end{prop}
We remark that we cannot use a Neumann series and this proposition to produce an inverse of $P-z$, since the support of $K_\epsilon u$ is in general larger than that of $u$.
\subsection{Setting up for global Grushin problems}
To study the global Grushin problem, we introduce the spaces for $w\in W\Subset(0,\infty)$, $0<\delta\ll1$, $0\leqslant r\leqslant r_0$:
\begin{equation*}
\begin{split}
\mathcal{B}_{w,r,\delta}=&\;H^2(\mathbb{R}^n\setminus\mathcal{O})
\times L^2(\partial\mathcal{O};\mathbb{C}^N),\\
\mathcal{H}_{w,r}=&\;L^2(\mathbb{R}^n\setminus\mathcal{O})\times H^{1/2}(\partial\mathcal{O})\times H^2(\partial\mathcal{O};\mathbb{C}^N).
\end{split}
\end{equation*}
with norms which coincide with the ones introduced in the previous sections in each of the regions we considered. We need to translate the norms to the $x_n$-coordinates by the relation $x_n=h^{2/3}t$.
Let
\begin{equation}
\begin{split}
\left\|\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}=&\;
h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}
(hD_{x_n})^2u\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}\chi(x_n/\delta)
x_nu\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+\|e^{r\psi(x_n)/2h^{2/3}}\langle x_n\rangle^{-2}\langle h^{-2/3}(-h^2\Delta_{\partial\mathcal{O}}-w)\rangle u\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}(1-\chi(x_n/\delta))
u\|_{H_h^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+h^{1/3}\|u_-\|_{L^2(\partial\mathcal{O};\mathbb{C}^N)},\\
\left\|\left(
\begin{array}{c}
v \\
v_0 \\
v_+
\end{array}
\right)
\right\|_{\mathcal{H}_{w,r}}=&\;\|e^{r\psi(x_n)/2h^{2/3}}
v\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+h^{1/3}\|\langle h^{-2/3}(-h^2\Delta_{\partial\mathcal{O}}-w)\rangle^{1/4}
v_0\|_{L^2(\partial\mathcal{O})}\\
&+h^{1/3}\|\langle h^{-2/3}(-h^2\Delta_{\partial\mathcal{O}}-w)\rangle v_+\|_{L^2(\partial\mathcal{O};\mathbb{C}^N)},
\end{split}
\end{equation}
where the weight function $\psi\in C^\infty([0,\infty);[0,1])$ satisfies $\psi(t)=t$ for $t<\frac{1}{2}$ and $\psi(t)=1$ for $t\geqslant1$, and the cut-off function $\chi\in C^\infty([0,\infty);[0,1])$ satisfies $\chi(t)=1$ for $t<1$ and $\chi(t)=0$ for $t>2$. Here we still use the geodesic normal coordinates $(x',x_n)\in\partial\mathcal{O}\times(0,\infty)$ for $\mathbb{R}^n\setminus\mathcal{O}$ as introduced before.
First we claim that
\begin{equation*}
\left(
\begin{array}{cc}
P-z & 0 \\
\gamma_1 & 0 \\
0 & 0 \\
\end{array}
\right):\mathcal{B}_{w,r,\delta}\to\mathcal{H}_{w,r}.
\end{equation*}
In fact, we can decompose $u\in H^2(\mathbb{R}^n\setminus\mathcal{O})$ as $u=u_1+u_2$ where $\supp u_1\subset\{x_n\leqslant3\delta\} $ and $\supp u_2\subset\{x_n\geqslant2\delta\}$. Then we see that
\begin{equation*}
\left\|\left(
\begin{array}{c}
u \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}
\sim\left\|\left(
\begin{array}{c}
u_1 \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}+
\left\|\left(
\begin{array}{c}
u_2 \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}.
\end{equation*}
We notice that
\begin{equation*}
\begin{split}
\left\|\left(
\begin{array}{c}
u_1 \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}
\sim &\;h^{-2/3}\|e^{rx_n/2h^{2/3}}
(hD_{x_n})^2u_1\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+h^{-2/3}\|e^{rx_n/2h^{2/3}}\chi(x_n/\delta)
x_nu_1\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}\\
&+\|e^{rx_n/2h^{2/3}}\langle h^{-2/3}(-h^2\Delta_{\partial\mathcal{O}}-w)\rangle u_1\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})},
\end{split}
\end{equation*}
so the estimate
\begin{equation*}
\|e^{r\psi(x_n)/2h^{2/3}}(P-z)u_1\|_{L^2}\leqslant
\left\|\left(
\begin{array}{c}
u_1 \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}
\end{equation*}
follows from the change of variable $x_n=h^{2/3}t$ and the result in section \ref{sec:model} (only the boundedness of $P-z:\mathcal{B}_{z,\lambda,r}\to L^2_r$). Also notice that
\begin{equation*}
\left\|\left(
\begin{array}{c}
u_2 \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}
\sim h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}u_2
\|_{H^2_h(\mathbb{R}^n\setminus\mathcal{O})},
\end{equation*}
so we can easily deduce that
\begin{equation*}
\|e^{r\psi(x_n)/2h^{2/3}}(P-z)u_2\|_{L^2}\leqslant
\left\|\left(
\begin{array}{c}
u_2 \\
0 \\
\end{array}
\right)
\right\|_{\mathcal{B}_{w,r,\delta}}.
\end{equation*}
Finally we need to estimate $\gamma u$. We shall use the fact that
\begin{equation*}
\gamma_0=O(h^{-1/2}):H_h^2(\mathbb{R}^n\setminus\mathcal{O})\to
H_h^{3/2}(\partial\mathcal{O})
\end{equation*}
and
\begin{equation*}
h\gamma_1=O(h^{-1/2}):H_h^2(\mathbb{R}^n\setminus\mathcal{O})\to
H_h^{1/2}(\partial\mathcal{O})
\end{equation*}
which follow from the estimates for non-semiclassical restriction operators. Therefore we have
\begin{equation*}
\begin{split}
&\;h^{1/3}\|\langle h^{-2/3}(-h^2\Delta_{\partial\mathcal{O}}-w)\rangle^{1/4}
(\gamma u)\|_{L^2(\partial\mathcal{O})}\\
\leqslant&\; h^{1/6}\|\gamma u\|_{H_h^{1/2}(\partial\mathcal{O})}
\leqslant h^{1/6}\|h^{2/3}\gamma_1u\|_{H_h^{1/2}(\partial\mathcal{O})}
+h^{1/6}\|h^{2/3}k\gamma_0u\|_{H_h^{1/2}(\partial\mathcal{O})}\\
\leqslant &\; h^{5/6}\|\gamma_1u\|_{H_h^{1/2}(\partial\mathcal{O})}
+Ch^{5/6}\|\gamma_0u\|_{H_h^{3/2}(\partial\mathcal{O})}\\
\leqslant &\; Ch^{-2/3}\|u\|_{H_h^2(\mathbb{R}^n\setminus\mathcal{O})}.
\end{split}
\end{equation*}
Now we need to complete this operator to a Grushin problem with
\begin{equation*}
R_{+,w}:H^2(\mathbb{R}^n\setminus\mathcal{O})\to L^2(\partial\mathcal{O};\mathbb{C}^N),
\end{equation*}
and
\begin{equation*}
R_{-,w}:L^2(\partial\mathcal{O};\mathbb{C}^N)\to L^2(\mathbb{R}^n\setminus\mathcal{O}).
\end{equation*}
They are obtained by quantizing the symbols appearing in section \ref{sec:model}. Let $e^{\lambda,\delta}_{j,\mu}$ be as in \eqref{def:mueigens}; then we shall define
\begin{equation}
R_{+,w}=\Op_{\Sigma_w,h}(\tilde{e}_w^\delta)
:L^2(\mathbb{R}^n\setminus\mathcal{O})\to L^2(\partial\mathcal{O};\mathbb{C}^N),
\end{equation}
where
\begin{equation*}
\tilde{e}_w^\delta\in S_{\Sigma_w,2/3}(\partial\mathcal{O};1,
\mathcal{L}(L^2[0,\infty);\mathbb{C}^N))
\end{equation*}
is given by
\begin{equation*}
\tilde{e}_w^\delta(j)u(p)=\int_0^\infty h^{-1/3}\chi(x_n)e_{j,\mu}^{\lambda,\delta}(h^{-2/3}x_n)u(x_n)dx_n,\quad p\in T^\ast\partial\mathcal{O},
\end{equation*}
with $\lambda=h^{-2/3}(R(p)-w), \mu=Q(0,p)$. Similarly, the operator
$R_{-,w}$ can be defined as the formal adjoint of $R_{+,w}$ or more precisely,
\begin{equation*}
R_{-,w}=\Op_{\Sigma_w,h}((\tilde{e}_w^\delta)^\ast)
:L^2(\partial\mathcal{O};\mathbb{C}^N)\to L^2(\mathbb{R}^n\setminus\mathcal{O}),
\end{equation*}
where
\begin{equation*}
(\tilde{e}_w^\delta)^\ast\in S_{\Sigma_w,2/3}(\partial\mathcal{O};1,
\mathcal{L}(\mathbb{C}^N;L^2([0,\infty)))
\end{equation*}
is given by
\begin{equation*}
(\tilde{e}_w^\delta)^\ast u_-(p)=\sum_{j=1}^N
h^{-1/3}\chi(x_n)e_{j,\mu}^{\lambda,\delta}(h^{-2/3}x_n)u_-(j), p\in T^\ast\partial\mathcal{O}.
\end{equation*}
Then we have the Grushin problem for
\begin{equation}
\mathcal{P}_w(z)=
\left(
\begin{array}{cc}
P_w-z & R_{-,w} \\
\gamma_1 & 0 \\
R_{+,w} & 0 \\
\end{array}
\right):\mathcal{B}_{w,r,\delta}\to\mathcal{H}_{w,r}.
\end{equation}
Our goal is to construct an inverse of $\mathcal{P}_w(z)$ for all sufficiently small $h$ depending on $\delta$,
\begin{equation}
\mathcal{E}_w(z)=
\left(
\begin{array}{ccc}
E_w(z) & K_w(z) & E_{w,+}(z) \\
E_{w,-}(z) & K_{w,-}(z) & E_{w,-+}(z) \\
\end{array}
\right):\mathcal{H}_{w,r}\to\mathcal{B}_{w,r}
\end{equation}
where $E_{w,-+}(z)$ has nice properties that will be specified later.
\subsection{Construction of the inverse operator}
To construct the inverse operator, we first separate into three different regions: near the boundary and the glancing hypersurface, near the boundary but away from the glancing hypersurface, and away from the boundary. In this section, we again work with $w=1$ for simplicity; it will be clear that the analysis is uniform for $w$ in a fixed compact subset of $(0,\infty)$.
We consider the case near the boundary and the glancing hypersurface first. Let us translate the spaces $\mathcal{B}_{z,\lambda,r}$ and $\mathcal{H}_{z,\lambda,r}$ of section \ref{sec:model} into the $x_n$-coordinates and rescale by $h^{1/3}$ due to the change of coordinates. At this stage, we drop the dependence on $z$ and introduce the same weight function $\psi$ as before.
\begin{equation*}
\begin{split}
\left\|\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
\right\|_{\mathcal{B}_{\lambda,r}}=&\;
h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}
(hD_{x_n})^2u\|_{L^2([0,\infty))}+h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}
x_nu\|_{L^2([0,\infty))}\\
&+\langle\lambda\rangle
\|e^{r\psi(x_n)/2h^{2/3}}u\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}
+h^{1/3}|u_-|_{\mathbb{C}^N},\\
\left\|\left(
\begin{array}{c}
v \\
v_0 \\
v_+
\end{array}
\right)
\right\|_{\mathcal{H}_{\lambda,r}}=&\;\|e^{r\psi(x_n)/2h^{2/3}}
v\|_{L^2([0,\infty))}
+h^{1/3}\langle\lambda\rangle^{1/4}|v_0|_{\mathbb{C}}
+h^{1/3}\langle\lambda\rangle|v_+|_{\mathbb{C}^N}.
\end{split}
\end{equation*}
\begin{lem}
Let $0<\epsilon<2/3$, $\chi_1\in\Psi^{0,0}(\partial\mathcal{O})$ be such that $\WF_h(\chi_1-\Id)\subset\{m:d(m,\Sigma)\geqslant C\}$ and $\WF_h(\chi_1)\subset\{m:d(m,\Sigma)\leqslant2C\}$. Then there exist $\mathcal{E}^L_1(z),\mathcal{E}^R_1(z)\in\Psi_{\Sigma,2/3}
(\partial\mathcal{O};
1,\mathcal{L}(\mathcal{H}_{\lambda,r},\mathcal{B}_{\lambda,r}))$ such that
\begin{equation*}
\mathcal{E}_1^L(z)\mathcal{P}(z)=
\chi_1\left(
\begin{array}{cc}
\chi(x_n/h^\epsilon) & 0 \\
0 & \Id \\
\end{array}
\right)+\mathcal{R}^L_1(z),
\end{equation*}
\begin{equation*}
\mathcal{P}(z)\mathcal{E}_1^R(z)=
\chi_1\left(
\begin{array}{ccc}
\chi(x_n/h^\epsilon) & 0 & 0 \\
0 & \Id & 0 \\
0 & 0 & \Id \\
\end{array}
\right)+\mathcal{R}^R_1(z),
\end{equation*}
where the remainder terms satisfy
\begin{equation*}
\begin{split}
\mathcal{R}_1^L(z)\in
\Psi_{\Sigma,2/3}(\partial\mathcal{O};h^N\langle\lambda\rangle^{-N},
\mathcal{L}(\mathcal{B}_{\lambda,r},\mathcal{B}_{\lambda,r}))\\
\mathcal{R}_1^R(z)\in
\Psi_{\Sigma,2/3}(\partial\mathcal{O};h^N\langle\lambda\rangle^{-N},
\mathcal{L}(\mathcal{H}_{\lambda,r},\mathcal{H}_{\lambda,r}))
\end{split}
\end{equation*}
for any $N$.
\end{lem}
\begin{proof}
From the previous section, we can construct an operator $\tilde{\mathcal{E}}_1\in\Psi_{\Sigma,2/3}(\partial\mathcal{O};1,
\mathcal{L}(\mathcal{H}_{\lambda,r},\mathcal{B}_{\lambda,r}))$ with
$\WF_h(\tilde{\mathcal{E}}_1)\subset\{m:d(m,\Sigma)\leqslant2C\}$ such that
\begin{equation*}
\tilde{\mathcal{E}}_1(z)\mathcal{P}(z)=\Id+\tilde{R}_1^L(z),\;\;\;
\mathcal{P}(z)\tilde{\mathcal{E}}_1(z)=\Id+\tilde{R}_1^R(z).
\end{equation*}
Here the remainder term $\tilde{R}_1^L$ satisfies that for any $A\in\Psi^{0,0}(\partial\mathcal{O})$ with $\WF_h(A)\subset\{m:d(m,\Sigma)\leqslant C\}$ and any $k$,
\begin{equation*}
A\tilde{R}_1^L=
\left(
\begin{array}{cc}
x_n^k & 0 \\
0 & 0 \\
\end{array}
\right)B_k^L+h^kA_k^L,
\end{equation*}
with
\begin{equation*}
A_k^L,B_k^L\in\Psi_{\Sigma,2/3}(\partial\mathcal{O};1,
\mathcal{L}(\mathcal{B}_{\lambda,r},\mathcal{B}_{\lambda,r})).
\end{equation*}
We notice that for $0<\epsilon<2/3$, the operator
\begin{equation*}
\left(
\begin{array}{cc}
\chi(x_n/h^\epsilon) & 0 \\
0 & \Id \\
\end{array}
\right)
\end{equation*}
is bounded on $\mathcal{B}_{\lambda,r}$. In fact, in $t$ coordinates, this becomes $\chi(h^{2/3-\epsilon}t)$ whose derivatives are all bounded. Therefore we can set
\begin{equation*}
\mathcal{E}_1^L(z)=\chi_1
\left(
\begin{array}{cc}
\chi(x_n/h^\epsilon) & 0 \\
0 & \Id \\
\end{array}
\right)\tilde{\mathcal{E}}_1(z).
\end{equation*}
Since $\langle\lambda\rangle=O(h^{-2/3})$, it is clear that this operator satisfies the condition. Similarly, we can construct
\begin{equation*}
\mathcal{E}_1^R(z)=\tilde{\mathcal{E}}_1(z)\chi_1
\left(
\begin{array}{ccc}
\chi(x_n/h^\epsilon) & 0 & 0\\
0 & \Id & 0\\
0 & 0 & \Id \\
\end{array}
\right).
\end{equation*}
\end{proof}
Now for the case near the boundary but away from the glancing hypersurface, the norms of $\mathcal{B}_{\lambda,r}^\#$ and $\mathcal{H}_{\lambda,r}^\#$ become
\begin{equation*}
\begin{split}
\left\|\left(
\begin{array}{c}
u \\
u_- \\
\end{array}
\right)
\right\|_{\mathcal{B}_{\lambda,r}^\#}=&\;
h^{-2/3}\|e^{r\psi(x_n)/2h^{2/3}}
(hD_{x_n})^2u\|_{L^2([0,\infty))}+\langle\lambda\rangle
\|e^{r\psi(x_n)/2h^{2/3}}u\|_{L^2(\mathbb{R}^n\setminus\mathcal{O})}
+h^{1/3}|u_-|_{\mathbb{C}^N},\\
\left\|\left(
\begin{array}{c}
v \\
v_0 \\
v_+
\end{array}
\right)
\right\|_{\mathcal{H}_{\lambda,r}^\#}=&\;\|e^{r\psi(x_n)/2h^{2/3}}
v\|_{L^2([0,\infty))}
+h^{1/3}\langle\lambda\rangle^{1/4}|v_0|_{\mathbb{C}}
+h^{1/3}\langle\lambda\rangle|v_+|_{\mathbb{C}^N},
\end{split}
\end{equation*}
in the $x_n$-coordinates. In this situation, we have
\begin{lem}
Let $0<\epsilon<2/3$, $\chi_2\in\Psi^{0,0}(\partial\mathcal{O})$ be such that $\WF_h(\chi_2-\Id)\subset\{m:d(m,\Sigma)\leqslant C\}$ and $\WF_h(\chi_2)\subset\{m:d(m,\Sigma)\geqslant\frac{1}{2}C\}$. Then there exist $\mathcal{E}^L_2(z),\mathcal{E}^R_2(z)\in\Psi_{\Sigma,2/3}
(\partial\mathcal{O};
1,\mathcal{L}(\mathcal{H}_{\lambda,r}^\#,\mathcal{B}_{\lambda,r}^\#))$ such that
\begin{equation*}
\mathcal{E}_2^L(z)\mathcal{P}(z)=
\chi_2\left(
\begin{array}{cc}
\chi(x_n/h^\epsilon) & 0 \\
0 & \Id \\
\end{array}
\right)+\mathcal{R}^L_2(z),
\end{equation*}
\begin{equation*}
\mathcal{P}(z)\mathcal{E}_2^R(z)=
\chi_2\left(
\begin{array}{ccc}
\chi(x_n/h^\epsilon) & 0 & 0 \\
0 & \Id & 0 \\
0 & 0 & \Id \\
\end{array}
\right)+\mathcal{R}^R_2(z),
\end{equation*}
where the remainder terms satisfy
\begin{equation*}
\begin{split}
\mathcal{R}_2^L(z)\in
\Psi_{\Sigma,2/3}(\partial\mathcal{O};h^N\langle\lambda\rangle^{-N},
\mathcal{L}(\mathcal{B}_{\lambda,r}^\#,\mathcal{B}_{\lambda,r}^\#))\\
\mathcal{R}_2^R(z)\in
\Psi_{\Sigma,2/3}(\partial\mathcal{O};h^N\langle\lambda\rangle^{-N},
\mathcal{L}(\mathcal{H}_{\lambda,r}^\#,\mathcal{H}_{\lambda,r}^\#))
\end{split}
\end{equation*}
for any $N$.
\end{lem}
\begin{proof}
We can repeat the same argument with the standard semiclassical calculus and notice that $\langle\lambda\rangle=O(h^{-2/3}\langle\xi'\rangle^2)$ to get the properties of the remainder.
\end{proof}
Now combining the two lemmas above, we get the approximate inverse near the boundary. More precisely,
\begin{prop}
There exist $\mathcal{E}^L(z),\mathcal{E}^R(z)
:\mathcal{H}_r\to\mathcal{B}_{r,\epsilon}$ such that
\begin{equation*}
\mathcal{E}^L(z)\mathcal{P}(z)=
\left(
\begin{array}{cc}
\chi(x_n/h^\epsilon) & 0 \\
0 & \Id \\
\end{array}
\right)+\mathcal{R}^L(z),
\end{equation*}
\begin{equation*}
\mathcal{P}(z)\mathcal{E}^R(z)=
\left(
\begin{array}{ccc}
\chi(x_n/h^\epsilon) & 0 & 0 \\
0 & \Id & 0 \\
0 & 0 & \Id \\
\end{array}
\right)+\mathcal{R}^R(z),
\end{equation*}
where the remainder terms satisfy
\begin{equation*}
\begin{split}
\langle h^2\Delta_{\partial\mathcal{O}}\rangle^N\mathcal{R}^L(z)
\langle h^2\Delta_{\partial\mathcal{O}}\rangle^N=O(h^N)
:\mathcal{B}_{r,\epsilon}\to\mathcal{B}_{r,\epsilon}\\
\langle h^2\Delta_{\partial\mathcal{O}}\rangle^N\mathcal{R}^R(z)
\langle h^2\Delta_{\partial\mathcal{O}}\rangle^N=O(h^N)
:\mathcal{H}_r\to\mathcal{H}_r,\\
\end{split}
\end{equation*}
for any $N$. Here $\langle h^2\Delta_{\partial\mathcal{O}}\rangle^N$
applies to all the components, and the space $\mathcal{B}_{r,\epsilon}$ is defined as $\mathcal{B}_{r,\delta}$ further truncated to the $h^\epsilon$-neighborhood of the boundary by $\chi(x_n/h^{\epsilon})$. Moreover, the $-+$-components of the approximate inverses
satisfy
\begin{equation*}
E_{-+}^L(z)\equiv E_{-+}^R(z)\in
\Psi_{\Sigma,2/3}^{0,1,2}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N)).
\end{equation*}
\end{prop}
\begin{proof}
We can simply choose $\chi_1$ and $\chi_2$ such that $\chi_1+\chi_2=1$ and set $\mathcal{E}^\cdot(z)=\mathcal{E}^\cdot_1(z)+\mathcal{E}^\cdot_2(z)$,
$\cdot=L,R$. To prove the last statement, we notice that from the construction,
\begin{equation*}
E_{-+}^L=\chi_1\tilde{E}_{-+1}+\chi_2\tilde{E}_{-+2},\;\;\;
E_{-+}^R=\tilde{E}_{-+1}\chi_1+\tilde{E}_{-+2}\chi_2.
\end{equation*}
Near the glancing hypersurface, i.e. on $\{m:d(m,\Sigma)\leqslant\frac{1}{2}C\}$, $\chi_1\equiv\Id$ while $\chi_2\equiv0$. Away from the glancing hypersurface,
on $\{m:d(m,\Sigma)\geqslant2C\}$, $\chi_1\equiv0$ while $\chi_2\equiv\Id$. In the intermediate region, $\tilde{E}_{-+1}\equiv \tilde{E}_{-+2}$ by our discussion in section \ref{sec:inter}. Therefore $E_{-+}^L$ and $E_{-+}^R$ are essentially the same element of $\Psi_{\Sigma,2/3}^{0,1,2}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))$.
\end{proof}
Finally, we can combine this with the estimate away from the boundary to get the inverse.
\begin{prop}
For $0<\epsilon<2/3$ and $0<h<h_0(\delta)$, there exists $\mathcal{E}_w(z):\mathcal{H}_{w,0}\to\mathcal{B}_{w,0,\epsilon}$ such that
\begin{equation*}
\mathcal{P}_w(z)\mathcal{E}_w(z)=\Id,\;\;\;
\mathcal{E}_w(z)\mathcal{P}_w(z)=\Id
\end{equation*}
and $E_{w,-+}\in\Psi_{\Sigma_w,2/3}^{0,1,2}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))$.
\end{prop}
\begin{proof}
Let us begin with an approximate right inverse
\begin{equation*}
\tilde{\mathcal{E}}^R(z)=\mathcal{E}^R(z)
\left(
\begin{array}{ccc}
\tilde{\chi}(x_n/h^\epsilon) & 0 & 0\\
0 & \Id & 0\\
0 & 0 & \Id \\
\end{array}
\right)
+\left(
\begin{array}{ccc}
E_\epsilon(1-\tilde{\chi}(x_n/h^\epsilon)) & 0 & 0\\
0 & 0 & 0 \\
\end{array}
\right).
\end{equation*}
Here $\tilde{\chi}\in C^\infty([0,\infty))$ is supported in $\{\chi=1\}$. Then we can compute
\begin{equation*}
\mathcal{P}(z)\tilde{\mathcal{E}}^R(z)=\Id+\mathcal{K}^R(z)
\end{equation*}
where the remainder is given by
\begin{equation*}
\mathcal{K}^R(z)=\mathcal{R}^R(z)
\left(
\begin{array}{ccc}
\tilde{\chi}(x_n/h^\epsilon) & 0 & 0 \\
0 & \Id & 0 \\
0 & 0 & \Id \\
\end{array}
\right)+
\left(
\begin{array}{ccc}
K_\epsilon(1-\tilde{\chi}(x_n/h^\epsilon)) & 0 & 0 \\
\gamma E_\epsilon(1-\tilde{\chi}(x_n/h^\epsilon)) & 0 & 0 \\
R_+E_\epsilon(1-\tilde{\chi}(x_n/h^\epsilon)) & 0 & 0 \\
\end{array}
\right).
\end{equation*}
Since $E_\epsilon(1-\tilde{\chi})u$ is supported away from the boundary, we have $\gamma E_\epsilon(1-\tilde{\chi}(x_n/h^\epsilon))=0$. Moreover,
for any smooth $u$, since $(1-\tilde{\chi}(x_n/h^{\epsilon}))u$ is supported in $D(h^\epsilon)$, $E_\epsilon(1-\tilde{\chi}(x_n/h^{\epsilon}))u$ is supported in $D((1-\gamma)h^{\epsilon})$, so by the super-exponential decay of $e_{j,\mu}^{\lambda,\delta}$, we have
\begin{equation}
\tilde{e}_w^\delta(j)u(p,x_n)=\int_0^\infty h^{-1/3}\chi(x_n)e_{j,\mu}^{\lambda,\delta}(h^{-2/3}x_n)u(p,x_n)dx_n
=O(h^\infty)
\end{equation}
which gives $R_+E_\epsilon(1-\tilde{\chi}(x_n/h^\epsilon))=O(h^\infty)$.
Therefore we get $\mathcal{K}^R=O(h^\infty):\mathcal{H}_0\to\mathcal{H}_0$, and hence for $h$ small enough, $(\Id+\mathcal{K}^R)^{-1}=\Id+\mathcal{A}$ where $\mathcal{A}=O(h^\infty):\mathcal{H}_0\to\mathcal{H}_0$. We can now put
\begin{equation*}
\mathcal{E}(z)=\tilde{\mathcal{E}}^R(z)(\Id+\mathcal{A}(z)).
\end{equation*}
Suppose
\begin{equation*}
\mathcal{A}(z)=
\left(
\begin{array}{ccc}
A_{11}(z) & A_{12}(z) & A_{13}(z) \\
A_{21}(z) & A_{22}(z) & A_{23}(z) \\
A_{31}(z) & A_{32}(z) & A_{33}(z) \\
\end{array}
\right)
\end{equation*}
then from the formula of $\mathcal{K}^R$, we see it is lower triangular and thus the same is true for $\mathcal{A}$. Therefore
\begin{equation*}
E_{-+}(z)=E_{-+}^R(z)+E_{-+}^R(z)A_{33}(z).
\end{equation*}
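Indeed, writing $\tilde{\mathcal{E}}^R(z)=\left(
\begin{array}{ccc}
E^R & K^R & E^R_+ \\
E^R_- & K^R_- & E^R_{-+} \\
\end{array}
\right)$ and using that $\mathcal{A}(z)$ is lower triangular ($A_{12}=A_{13}=A_{23}=0$), the $(-,+)$ entry of $\tilde{\mathcal{E}}^R(z)(\Id+\mathcal{A}(z))$ is
\begin{equation*}
E^R_-A_{13}+K^R_-A_{23}+E^R_{-+}(\Id+A_{33})=E^R_{-+}+E^R_{-+}A_{33}.
\end{equation*}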
Here $A_{33}(z)\in\Psi^{-\infty,-\infty}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))$ since it comes entirely from $\mathcal{R}^R$. Therefore $E_{-+}(z)\in\Psi_{\Sigma_w,2/3}^{0,1,2}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))$ is essentially the same as $E_{-+}^R$ (and also as $E_{-+}^L$).
\end{proof}
\subsection{Reduction to $E_{-+}$}
Now we state the main result of this section.
\begin{thm}
\label{thm:e+-}
Assume that $W$ is a fixed compact subset of $(0,\infty)$ and $\epsilon\ll1$. For every $w\in W$ and $z\in\mathbb{C}$ such that $|\Re z|\ll1/\delta$, $|\Im z|\leqslant C_1$, there exist $N=N(C_1)$ and
\begin{equation}
E_{w,-+}(z)\in\Psi_{\Sigma_w,2/3}^{0,1,2},
\end{equation}
where
$\Sigma_w=\{p\in T^\ast\partial\mathcal{O}:R(p)=w\}$, such that for $0<h<h_0$ and some large $C>0$:
(i) The multiplicity of resonances is given by
\begin{equation}
\label{eq:mult}
m_\mathcal{O}(h^{-2}(w+h^{2/3}z))=\frac{1}{2\pi i}
\tr\oint_{|\tilde{z}-z|=\epsilon}E_{w,-+}(\tilde{z})^{-1}
\frac{d}{d\tilde{z}}E_{w,-+}(\tilde{z})d\tilde{z}
\end{equation}
(ii) If $E_{w,-+}^0(z;p,h)=\sigma_{\Sigma,h}(E_{w,-+}(z))(p;h)$, $p\in T^\ast\partial\mathcal{O}$, then
\begin{equation}
E_{w,-+}^0(z,p,h)=O(\langle\lambda-\Re z\rangle):\mathbb{C}^N\to\mathbb{C}^N,
\end{equation}
where $\lambda=h^{-2/3}(R(p)-w)$.
(iii) For $|\lambda|\leqslant1/(C\sqrt{\delta})$,
\begin{equation}
\|E_{w,-+}^0(z;p,h)-\diag(z-\lambda-e^{-2\pi i/3}
\zeta_j'(p))\|_{\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N)}
\leqslant\epsilon.
\end{equation}
Moreover, $\det E_{w,-+}^0(z;p,h)=0$ if and only if
\begin{equation}
z=\lambda+e^{-2\pi i/3}\zeta_j'(p)
\end{equation}
for some $1\leqslant j\leqslant N$, and all zeroes are simple. Here $\zeta_j'(p)=(2Q(p))^{2/3}\zeta_j'$.
(iv) For $|\lambda|\geqslant1/(C\sqrt{\delta})$, $E_{w,-+}^0$ is invertible and
\begin{equation}
E_{w,-+}^0(z,p,h)^{-1}=O(\langle\lambda-\Re z\rangle^{-1}):\mathbb{C}^N\to\mathbb{C}^N.
\end{equation}
\end{thm}
\begin{proof}
The statement (i) follows from the formula
\begin{equation*}
\left(
\begin{array}{c}
h^{-2/3}(P(h)-w)-z \\
\gamma \\
\end{array}
\right)^{-1}=(E_w(z),K_w(z))
-E_{w,+}(z)E_{w,-+}(z)^{-1}(E_{w,-}(z),K_{w,-}(z)).
\end{equation*}
The other statements follow directly from our construction of $\mathcal{E}_w$.
\end{proof}
\section{Proof of the theorem}
\label{sec:resfree}
\subsection{Resonance Bands}
We first prove Theorem \ref{thm:main1}. Under the pinched curvature condition, we have
\begin{equation*}
K\zeta_j'<\kappa\zeta_{j+1}',\;\;\; 1\leqslant j\leqslant j_0
\end{equation*}
which can be translated to
\begin{equation*}
\max_{p\in\Sigma}\zeta_j'(p)<\min_{p\in\Sigma}\zeta_{j+1}'(p),\;\;\; 1\leqslant j\leqslant j_0.
\end{equation*}
Suppose $\lambda$ is a resonance satisfying, for some $1\leqslant j\leqslant j_0$,
\begin{equation*}
K\zeta_j'(\Re\lambda)^{1/3}+C\leqslant-\Im\lambda
\leqslant\kappa\zeta_{j+1}'(\Re\lambda)^{1/3}-C.
\end{equation*}
Let $\zeta=\lambda^2=h^{-2}(1+h^{2/3}z)$ and $h=(\Re\lambda)^{-1}$, then we have
\begin{equation*}
K\zeta_j'h^{-1/3}+C\leqslant-\Im\lambda\leqslant\kappa\zeta_{j+1}'h^{-1/3}-C
\end{equation*}
and
\begin{equation*}
\Re z=h^{-2/3}(h^2\Re\zeta-1)=O(h^{2/3}).
\end{equation*}
\begin{equation*}
-\Im z=h^{-2/3}(-h^2\Im\zeta)=-2h^{1/3}\Im\lambda\in
[2K\zeta_j'+Ch^{1/3},2\kappa\zeta_{j+1}'-Ch^{1/3}].
\end{equation*}
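The two displays follow from a direct computation: with $\Re\lambda=h^{-1}$,
\begin{equation*}
h^2\Re\zeta-1=h^2\big((\Re\lambda)^2-(\Im\lambda)^2\big)-1=-h^2(\Im\lambda)^2,
\end{equation*}
so $\Re z=-h^{4/3}(\Im\lambda)^2=O(h^{2/3})$ since $\Im\lambda=O(h^{-1/3})$ in the band under consideration, while $\Im\zeta=2\Re\lambda\,\Im\lambda$ gives $-\Im z=-2h^{1/3}\Im\lambda$.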
Therefore for $p\in\Sigma_1$, i.e. $R(p)=1$,
\begin{equation*}
\Im[z-\lambda-e^{-2\pi i/3}\zeta_k'(p)]
=\Im z+\zeta_k'(2Q(p))^{2/3}\cos(\pi/6)
\in[\Im z+2\kappa\zeta_k',\Im z+2K\zeta_k']
\end{equation*}
thus for at most one $k\in\{j,j+1\}$ we only have the weaker bound
\begin{equation*}
|\Im[z-\lambda-e^{-2\pi i/3}\zeta_k'(p)]|\geqslant Ch^{1/3}
\end{equation*}
while for all other $k\in\{1,\ldots,j_0\}$,
\begin{equation*}
|\Im[z-\lambda-e^{-2\pi i/3}\zeta_k'(p)]|\geqslant \frac{1}{O(1)}.
\end{equation*}
Therefore we can decompose
\begin{equation*}
E_{-+}(z):=E_{1,-+}(z)=A(z)G_{-+}(z)B(z)
\end{equation*}
where
\begin{equation*}
A(z),B(z)\in\Psi_{\Sigma_1,2/3}^{0,0,0}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))
\end{equation*}
are invertible and
\begin{equation*}
G_{-+}(z)\in\Psi^{0,1,2}_{\Sigma_1,2/3}(\partial\mathcal{O};
\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))
\end{equation*}
has principal symbol $G_{-+}^0(z)$, such that, near $\Sigma_1$,
\begin{equation*}
\Im G_{-+}^0(z)\geqslant C_0h^{1/3}\Id_{\mathbb{C}^N}
\end{equation*}
while away from $\Sigma_1$,
\begin{equation*}
\Im G_{-+}^0(z)\geqslant \frac{1}{O(1)}h^{-2/3}\langle\xi\rangle^2.
\end{equation*}
Now we choose $C_0$ large enough; then the imaginary part of the total symbol of $G_{-+}(z)$ is bounded from below by a positive symbol in $S_{\Sigma_1,2/3}^{-1/3,0,2}$. The sharp G\aa rding inequality gives
\begin{equation*}
\|E_{-+}(z)u\|_{L^2}\geqslant C\|G_{-+}(z)u\|_{L^2}\geqslant Ch^{1/3}\|u\|_{L^2},\;\;\; \forall u\in C^\infty(\partial\mathcal{O};\mathbb{C}^N).
\end{equation*}
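Here the first inequality unwinds as follows, using the invertibility of $A(z)$ and $B(z)$ and applying the G\aa rding bound to $G_{-+}(z)$:
\begin{equation*}
\|E_{-+}(z)u\|_{L^2}=\|A(z)G_{-+}(z)B(z)u\|_{L^2}
\geqslant\frac{1}{O(1)}\|G_{-+}(z)B(z)u\|_{L^2}
\geqslant\frac{h^{1/3}}{O(1)}\|B(z)u\|_{L^2}
\geqslant\frac{h^{1/3}}{O(1)}\|u\|_{L^2}.
\end{equation*}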
Therefore $E_{-+}(z)$ is invertible for $0<h\leqslant h_0$, so if $\Re\lambda\geqslant C=h_0^{-1}$, then $\lambda$ cannot be a resonance.
\subsection{Weyl's Law}
In this part, we sketch the proof of Theorem \ref{thm:weyl}. See \cite[Sections 9--10]{SZ6} for the details of the proof.
Heuristically, we want to use the symbol of $E_{w,-+}(z)$ to compute its trace, and then use \eqref{eq:mult} to count the number of resonances. However, this operator is not of trace class. The first step is to construct an approximation $\tilde{E}_{w,-+}(z)\in\Psi_{\Sigma_w,2/3}^{0,1,2}
(\partial\mathcal{O};\mathcal{L}(\mathbb{C}^N,\mathbb{C}^N))$ which is invertible and such that
\begin{equation*}
\tilde{E}_{w,-+}(z)^{-1},\;\;(\Lambda_w^{-1}\tilde{E}_{w,-+}(z))^{-1},
\;\;\tilde{E}_{w,-+}(z)^{-1}E_{w,-+}(z)=O(1):
L^2(\partial\mathcal{O};\mathbb{C}^N)\to L^2(\partial\mathcal{O};\mathbb{C}^N)
\end{equation*}
where $\Lambda_w=\langle h^{-2/3}(-h^2\Delta_{\partial\mathcal{O}}-w)\rangle
\in\Psi_{\Sigma_w,2/3}^{0,1,2}$ is elliptic. Moreover, $E_{w,-+}(z)-\tilde{E}_{w,-+}(z)$ is independent of $z$ and of rank $M=O(Lh^{1-n+2/3})$. Microlocally, $\tilde{E}_{w,-+}$ differs from $E_{w,-+}$ only on the glancing region, where $E_{w,-+}$ is not invertible.
From this finite-rank approximation, we can solve another Grushin problem to reduce $E_{w,-+}$ to a finite matrix. More precisely, we consider
\begin{equation}
\label{2ndgru}
\mathcal{Q}_w(z)=\left(
\begin{array}{cc}
\Lambda^{-1}E_{w,-+}(z) & R_{w,-}(z) \\
R_{w,+}(z) & 0 \\
\end{array}
\right): L^2(\partial\mathcal{O};\mathbb{C}^N)\times\mathbb{C}^M
\to L^2(\partial\mathcal{O};\mathbb{C}^N)\times\mathbb{C}^M,
\end{equation}
with bounded inverse
\begin{equation*}
\mathcal{F}_w(z)=\left(
\begin{array}{cc}
F_w(z)\Lambda & F_{w,+}(z) \\
F_{w,-}(z) & F_{w,-+}(z) \\
\end{array}
\right): L^2(\partial\mathcal{O};\mathbb{C}^N)\times\mathbb{C}^M
\to L^2(\partial\mathcal{O};\mathbb{C}^N)\times\mathbb{C}^M.
\end{equation*}
The construction of the Grushin problem is as follows: Let $e_1,\ldots,e_M$ be an orthonormal basis of the image of $\Lambda_w^{-1}(E_{w,-+}(z)-\tilde{E}_{w,-+}(z))^\ast$, then we set
\begin{equation*}
R_{w,+}u(j)=\langle u,e_j\rangle,\;\;\;1\leqslant j\leqslant M;\;\;\;
R_{w,-}(z)u_-=\Lambda^{-1}\tilde{E}_{w,-+}(z)R_{w,+}^\ast u_-.
\end{equation*}
The inverse is given by
\begin{equation*}
\begin{split}
F_w(z)=&\;(I-R_{w,+}^\ast R_{w,+})\tilde{E}_{w,-+}(z)^{-1},\\
F_{w,+}(z)=&\;R_{w,+}^\ast-(I-R_{w,+}^\ast R_{w,+})\tilde{E}_{w,-+}(z)^{-1}E_{w,-+}(z)R_{w,+}^\ast,\\
F_{w,-}(z)=&\;R_{w,+}\tilde{E}_{w,-+}(z)^{-1},\\
F_{w,-+}(z)=&\;-R_{w,+}\tilde{E}_{w,-+}(z)^{-1}E_{w,-+}(z)R_{w,+}^\ast.
\end{split}
\end{equation*}
With these preparations, we can prove a local trace formula on the scale $1$ in the $z$ variable for every $w$. This corresponds to the scale $h^{2/3}$ for the semiclassical variable $w+h^{2/3}z$, which equals $h^2\lambda^2$, the rescaled square of the resonance. We remark that this is the largest scale that we can work with for each fixed $w$, since the whole microlocal framework is built exactly on this scale.
For the $j_0$-th band of the resonances, we consider a domain
\begin{equation*}
W=\left\{-\frac{1}{2}L<\Re z<\frac{1}{2}L, A_-<-\Im z<A_+\right\}
\end{equation*}
where
\begin{equation*}
2K\zeta_{j_0-1}'<A_-<2\kappa\zeta_{j_0}'\leqslant
2K\zeta_{j_0}'<A_+<2\kappa\zeta_{j_0+1}'.
\end{equation*}
Let $\partial W=\gamma=\gamma_1\cup\gamma_2\cup\gamma_3\cup\gamma_4$ be the boundary of $W$, where $\gamma_1$ and $\gamma_3$ are the horizontal segments while $\gamma_2$ and $\gamma_4$ are the vertical segments. If we write $\Res_w(h)=\{z:m_\mathcal{O}(h^{-2}(w+h^{2/3}z))>0\}$, then we have the local trace formula
\begin{equation}
\label{eq:localtr}
\begin{split}
\sum_{z\in\Res_w(h)\cap W}f(z)=&\;\sum_{j=1,3}\tr\frac{1}{2\pi i}
\int_{\gamma_j}f(z)\left[E_{w,-+}(z)^{-1}\frac{d}{dz}E_{w,-+}(z)\right.\\
&\;\;\;\left.-\tilde{E}_{w,-+}(z)^{-1}
\frac{d}{dz}\tilde{E}_{w,-+}(z)\right]dz
+O(Lh^{1-n+2/3})\\
\end{split}
\end{equation}
for any holomorphic function $f$ defined near $W$ such that $|f(z)|\leqslant1$ near $\gamma_2\cup\gamma_4$. (In fact, to make this argument work, we need to choose a slightly larger rectangular contour around $W$ and take $f$ holomorphic in an even larger domain. We also need the contour to avoid the poles of $E_{w,-+}^{-1}$. These technical issues are handled in \cite{SZ6}.)
The main idea in the proof of this local trace formula is to convert the trace of $E_{-+}^{-1}E_{-+}'-\tilde{E}_{-+}^{-1}\tilde{E}_{-+}'$ into the trace of $F_{-+}^{-1}F_{-+}'=(\log\det F_{-+})'$ by using the Grushin problem \eqref{2ndgru} constructed above. We observe that $F_{-+}$ is an $M\times M$ matrix which is $O(1):\mathbb{C}^M\to\mathbb{C}^M$ in the standard norm. This shows that $\log\det F_{-+}=O(M)=O(Lh^{1-n+2/3})$, and thus all the contributions from the two vertical segments can be controlled by $O(Lh^{1-n+2/3})$ using a lower modulus theorem for holomorphic functions. Notice that this characterization of resonances as the poles of $F_{-+}^{-1}$ also gives a local upper bound on the number of resonances
\begin{equation}
\label{eq:locupper}
\sum_{|\Re\zeta-1|\leqslant Ch^{2/3},0<-\Im\zeta<Ch^{2/3}}m_\mathcal{O}(\zeta)=O(h^{1-n+2/3}).
\end{equation}
In the local trace formula \eqref{eq:localtr}, we use the second microlocalization symbol calculus to compute the trace on the right-hand side and get
\begin{equation}
\label{eq:loccount}
\begin{split}
\sum_{z\in\Res_w(h)\cap W}f(z)=&\frac{h^{1-n+2/3}}{(2\pi)^{n-1}}
\int_{\Sigma_w\times\mathbb{R}}f(\lambda+e^{-2\pi i/3}\zeta_{j_0}'(q))
1_{I(q)}(s)L_{\Sigma_w}(dq)ds\\
&\;\;\;+O(Lh^{1-n+2/3})+O_{f,L}(h^{2-n})\\
\end{split}
\end{equation}
where $(q,s)\in\Sigma_w\times\mathbb{R}$ are local coordinates for a neighborhood of $\Sigma_w\subset T^\ast\partial\mathcal{O}$ such that $s|_{\Sigma_w}=0$, $L_{\Sigma_w}(dq)ds$ is the Liouville measure on $T^\ast X$, and
\begin{equation*}
I(q)=\{s\in\mathbb{R}:s+e^{-2\pi i/3}\zeta_{j_0}'(q)\in W\}.
\end{equation*}
For fixed $L$ (and say $f=1$), this does not give a better description of resonances than the upper bound \eqref{eq:locupper}. However, if we make $L$ large (which does not change the principal symbol in our construction, but may potentially affect the lower order terms) and choose $f$ suitably, we can get a better estimate than \eqref{eq:locupper}. The idea is to let $f$ be very large in $W$ away from $\gamma_2\cup\gamma_4$ but remain bounded ($|f|\leqslant1$, as required by the assumption in \eqref{eq:localtr}) near $\gamma_2\cup\gamma_4$. A standard choice is the family of Gaussian functions
\begin{equation*}
f_\epsilon(z)=((1+O(\epsilon L))e^{-\epsilon L^2/2})^{-1}e^{-\epsilon
(z-z_0)^2},\;\; z_0=-\frac{1}{2}i(A_-+A_+),\;\; \epsilon L\ll1,\; \epsilon L^2\gg\log\frac{1}{\epsilon}.
\end{equation*}
Then from \eqref{eq:loccount} we obtain
\begin{equation*}
\sum_{z\in\Res_w(h)\cap W}\sqrt{\frac{\epsilon}{2\pi}}e^{-\epsilon(\Re(z-z_0))^2/2}
=(1+O(\epsilon L))\frac{h^{1-n+2/3}}{(2\pi)^{n-1}}\int_{\Sigma_w}L_{\Sigma_w}(dq)+
O_{\epsilon,L}(h^{2-n}).
\end{equation*}
Finally, we let $L=\epsilon^{-2/3}$, so that $\epsilon L=\epsilon^{1/3}\ll1$ and $\epsilon L^2=\epsilon^{-1/3}\gg\log\frac{1}{\epsilon}$ as required, and integrate in $w$ to obtain the Weyl law in the semiclassical setting
\begin{prop}(see \cite[Proposition 10.1]{SZ6})
For $0<a<b$, let
\begin{equation*}
N_h([a,b];j)=\sum_{a<\Re z<b,2\kappa\zeta_j'h^{2/3}<-\Im z<2K\zeta_j'h^{2/3}}m_\mathcal{O}(h^{-2}z).
\end{equation*}
Then under the assumptions of Theorem~\ref{thm:main1}, we have
\begin{equation}
\label{semi}
N_h([a,b];j)=(1+O(\epsilon))\frac{h^{1-n}}{(2\pi)^{n-1}}
\int_{a\leqslant|\xi'|_{x'}^2\leqslant b}dx'd\xi'+O_\epsilon(h^{1-n+1/3})
\end{equation}
for any $1\leqslant j\leqslant j_0$ and $\epsilon>0$.
\end{prop}
Now the Weyl law \eqref{weyl} follows by decomposing the interval $|\lambda|\leqslant r$ dyadically and applying \eqref{semi} to each dyadic piece.
\section*{INTRODUCTION}
The parameters for a high luminosity, high energy muon
collider are summarized in Table~\ref{tb1}.
The design of an extreme low-beta interaction region for a muon
collider\cite{ref1} is non trivial and present a challenge in many ways similar
to the one encounter in the Next Linear Collider (NLC)\cite{ref2}. J.
Erwin\cite{ref3} and collaborators have designed the final focus system(FFS)
for the NLC with $\beta_x^*\approx 37\,{\rm mm},$ $\beta_y^*\approx
100\,\mu{\rm m}$ and transverse beam dimensions of $\sigma_x\approx 420\,{\rm
nm}$ and $\sigma_y\approx 2.5\,{\rm nm}$ for the 1 TeV center of mass case.
Similarly, the latest version of the CLIC\cite{ref4} FFS also
calls for $\sigma_x\approx 90\,{\rm nm},$ $\sigma_y\approx 8\,{\rm nm}$ and
beta functions $\beta_x^*\approx 2.2\,{\rm mm},$ $\beta_y^*\approx 0.157\,
{\rm mm}$ at the interaction point (IP) for 500 GeV in the center of mass.
Both these designs of an FFS follow the prescription proposed by
Brown\cite{ref5}; it consists of two telescopes with two chromatic correction
sections between them. The extremely low beta function at the IP results in the need for very strong quadrupoles, which generate large
chromaticity. This chromaticity must be corrected locally and this is achieved
with two strong pairs of non-interleaved sextupoles. One pair, situated at the
position of maximum $\beta_x$, corrects the horizontal chromaticity; the other
pair at maximum $\beta_y$ corrects the vertical chromaticity. The two
sextupoles of a pair are separated by a phase advance $\phi=\pi$ $(\Delta Q
=-0.5).$ This arrangement cancels the second-order geometric aberrations of the
sextupoles, thus reducing the second-order tune shift by several orders of
magnitude. The bandwidth of the system is limited by the third-order
aberrations and the
remaining second-order amplitude dependent tune shift,\cite{ref6}
\begin{eqnarray}
\Delta Q_x=& {\partial Q_x\over \partial \epsilon_x} \epsilon_x+
{\partial Q_x\over \partial \epsilon_y} \epsilon_y \nonumber \\
\Delta Q_y=& {\partial Q_y\over \partial \epsilon_x} \epsilon_x+
{\partial Q_y\over \partial \epsilon_y} \epsilon_y \label{eq1}
\end{eqnarray}
These
aberrations arise from: a) small phase errors between the
sextupoles and the final quadruplet; b) the finite length of the sextupoles.
The residual chromaticity at the IP could be reduced by adding a number of
sextupoles at locations with nonzero dispersion, as suggested by
Brinkmann\cite{ref7}, whose function is to correct locally the
chromaticity of each module. Finally, a system of octupoles could be designed
to correct the third-order aberrations. Overall, it is believed that a system
with a bandwidth of $\approx 1\,\%$ could be constructed.
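As a side illustration of the $-I$ cancellation mechanism invoked above, the following toy tracking sketch (our own illustration, not taken from the designs cited here) sends a particle through two thin-lens sextupoles separated by a pure $-I$ linear map. For thin kicks the cancellation of the geometric nonlinearity is exact; in a real lattice it is the phase errors and the finite sextupole length, items a) and b) above, that spoil it.
\begin{verbatim}
import numpy as np

def sext_kick(x, xp, y, yp, k2l):
    # Thin-lens sextupole: Dx' = -(k2l/2)(x^2 - y^2), Dy' = k2l*x*y
    return x, xp - 0.5*k2l*(x**2 - y**2), y, yp + k2l*x*y

def minus_I(x, xp, y, yp):
    # Linear transport with transfer matrix -I (phase advance pi)
    return -x, -xp, -y, -yp

k2l = 5.0                        # integrated strength, arbitrary units
z0 = (1e-3, 2e-4, 5e-4, -1e-4)   # initial (x, x', y, y')

z = sext_kick(*z0, k2l)
z = minus_I(*z)
z = sext_kick(*z, k2l)

# Net map is exactly -I: the sextupole kicks cancel pairwise
print([zi + z0i for zi, z0i in zip(z, z0)])   # -> all zeros
\end{verbatim}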
There have been several previous attempts to design the FFS for a muon
collider\cite{ref8}, which have been summarized and compared in
ref.\cite{ref9}.
\section {Results}
Following the above prescription, a design
by Napoly\cite{ref10} was taken as a starting point; the final doublet used by
Napoly was replaced by a quadruplet as the final telescope\cite{ref11}. Partial
optimization of the design has been performed with the code MAD, alpha VMS
version 81.6/1\cite{ref12}. Another important modification was to replace the
split sextupoles by single elements; this simple change reduced the
amplitude-dependent tune shift by an order of magnitude.
Starting from the interaction point (IP), there is an initial telescope with
magnification 3, ending in a focus $O_1$ (see Fig.\ref{fg1}).
\begin{figure}[tbh]
\centering
\epsfxsize=14.0cm \epsfysize=8.0cm \epsfbox{ffsfg1.ps}
\caption{Schematic of a FFS with extremely small beta function at the IP for a
muon collider}
\label{fg1}
\end{figure}
It is then followed by FODO
cells, each with a phase advance of exactly ${\pi \over 2}.$ Intermediate foci
are generated at $O_2$, $O_3.$ Approximately midway between these foci,
vertical correction sextupoles ($S_{y1}$ and $S_{y2}$) are introduced and are
near maxima in $\beta_y.$ Then follows another similar sequence of cells with
intermediate foci at $O_4$, $O_5,$ but in these cells the signs of all
quadrupoles have been reversed. The two following sextupoles, placed between
these foci now fall on maxima of $\beta_x$ and thus serve to correct the
horizontal chromaticity $({\partial Q \over \partial \delta}).$ The horizontal
bending magnets are introduced to achieve dispersion at the sextupoles; reverse
bends are also used to reduce the dispersion between the vertical correction
sextupoles and thus avoid otherwise excessive second order dispersion $(
x\approx \delta^2).$
The strengths of the sextupoles $(S_x$ and $S_y)$ are
adjusted to minimize the first order chromaticity while trim quadrupoles
$(TQ_x$ and $TQ_y)$ are used to minimize the second order chromaticity
$({\partial^2 Q \over \partial \delta^2}).$ The lattice is ended by a second
telescope, also with magnification 3, that could be used to match the
correction system into an arc lattice.
The final focus system, from the matching telescope to the IP\cite{ref11}, has
a very small residual chromaticity and should be chromatically transparent when
it is attached to the arc lattice.
The total length of the FFS is $\approx
475.8\,{\rm m}.$ The lattice consists of 44 quadrupoles, 14 sector dipoles and
4 sextupoles. The lattice does, however, include dipoles with excessive field and
with no space between most elements.
The variation of the tune shift at the IP as a function of $\delta={\delta
p\over p}$ is shown in Fig.\ref{fg2}.
\begin{figure}[tbh]
\centering
\epsfxsize=14.0cm \epsfysize=14.0cm \epsfbox{ffsfg2.ps}
\caption{Tune shift $Q_{x,y}$ vs ${\delta p\over p}$ }
\label{fg2}
\end{figure}
$Q_y$ is essentially flat over a
bandwidth of $\pm 0.4\%$; $Q_x$ has obvious non-linear components, although the
peak-to-peak tune variation is less than $0.03$ within a bandwidth of $\pm
0.3\,\%.$ Likewise, the relative $\beta^*$ variation $(\Delta
\beta^*={(\beta^*-3)\over 3})$ (Fig.\ref{fg3}) is negligible within a bandwidth
of $\pm 0.3\,\%.$
\begin{figure}[tbh]
\epsfxsize=14.0cm \epsfysize=14.0cm \epsfbox{ffsfg3.ps}
\caption{Beta function $\beta^*$ vs ${\delta p\over p}$}
\label{fg3}
\end{figure}
The remaining figures show the beta functions as a function of the
position z (Fig.\ref{fg4}a); the chromaticity (Fig.\ref{fg4}b) and
dispersion (Fig.\ref{fg4}c) as
function of position $z$ along the FFS and energy spread $\delta $
(Fig.\ref{fg4}d).
\begin{figure}[tbh]
\epsfxsize=14.0cm \epsfysize=14.0cm \epsfbox{ffsfg4.ps}
\caption{$\beta$ functions, chromaticity and dispersion vs z for different
momentum ($\pm 0.4\,\%$); lower right window: chromaticity vs
${\delta p\over p}$ }
\label{fg4}
\end{figure}
\section{Summary}
A {\it test model} of the FFS for a muon collider has been described. The design
satisfies the collider requirements, although it is not fully
realistic. Error and tolerance analyses are yet to be performed, as well as
tracking through the FFS to confirm the achievable luminosity.
In order to make this final focus realistic, spaces will have to be introduced
between elements and its length will have to be increased to achieve the
required dispersion without unrealistic dipole fields.
We would like to
emphasize the need for new levels of sophistication in the correction of
non-linear tune shifts, in both their amplitude and momentum dependence, in storage
rings with extremely low beta functions at the IP.
\section*{ACKNOWLEDGMENTS}
This research was supported by the U.S. Department of Energy
under Contract No. DE-AC02-76-CH00016. One of us (RBP) gratefully acknowledges
stimulating discussions with J. Irwin and O. Napoly; both authors thank K.-Y. Ng and
D. Trbojevic for their helpful comments and W. Graves for his contribution in
the early stage of this work.
\newpage
\section{Introduction}
Weak gravitational lensing has developed into a useful probe of the
Universe. It takes advantage of small
distortions (``shears'') of distant galaxies which are caused by
the deflection of light ray paths by intervening gravitational
fields, as predicted by General Relativity. While the matter component
of the Universe is dominated by dark matter which cannot directly be
seen, weak gravitational lensing allows us to directly map out the
total mass distribution including dark matter.
Many weak lensing studies focus on two-point statistics of the shear,
which is a direct observable of weak lensing analyses, such as cosmic
shear and tangential shear around galaxies and clusters of galaxies
\citep[e.g.,][]{bartelmann01,hoekstra08}. However, one can also
directly reconstruct the projected mass distribution from the observed
shear map \citep[e.g.,][]{kaiser93,seitz95,schneider96}. Such weak
lensing mass maps provide an important means of studying the
large-scale structure of the Universe as well as non-Gaussian features
of the matter density field. For example, massive clusters of galaxies
can be identified from peaks in mass maps
\citep[e.g.,][]{miyazaki02,miyazaki07,wittman06,shan12,utsumi14,liu15}.
Correlations of mass maps with light maps constructed from galaxies
and clusters of galaxies can reveal the connection between mass and
light, by constraining mass-to-light ratios and galaxy biases
\citep[e.g.,][]{hoekstra02,okabe10,jullo12,chang16,pujol16,utsumi16}.
These applications of mass maps may be enhanced
further by interpolation methods to recover mass maps in masked
regions \citep[e.g.,][]{pires09,vanderplas12}.
Weak lensing mass maps have been constructed in many surveys,
including the Cosmological Evolution Survey
\citep[COSMOS;][]{massey07}, Deep Lens Survey \citep[DLS;][]{kubo09},
the Canada-France-Hawaii Telescope Lensing Survey
\citep[CFHTLenS;][]{vanwaerbeke13},
the CFHT/MegaCam Stripe-82 Survey \citep[CS82;][]{shan14},
Dark Energy Survey Science Verification data
\citep[DES SV;][]{chang15,vikram15}, the Kilo-Degree Survey
\citep[KiDS;][]{kuijken15}, and the Red Cluster Sequence
Lensing Survey \citep[RCSLenS;][]{hildebrandt16}. It has been shown
that these mass maps correlate well with the light distributions
estimated from galaxies, which demonstrated the power of weak lensing
measurements for these surveys. These mass maps have also been used to
check residual systematics, by cross-correlating them with any
parameters related to Point Spread Function (PSF) and observing
conditions, as residual systematics in weak lensing measurements
can produce apparent correlations with these parameters.
While weak gravitational lensing for a fixed source redshift can only
probe the two-dimensional matter density field projected along the
line-of-sight, one can reconstruct the three-dimensional mass
distribution as well by combining weak lensing mass reconstructions in
different source redshift bins
\citep[e.g.,][]{taylor01,hu02,bacon03,simon09,vanderplas11,leonard14,bohm17}.
The idea has been applied to observations to obtain three-dimensional
mass maps \citep{taylor04,massey07,simon12}, although they have been
restricted to relatively small areas given the requirement of high
source galaxy densities. One can improve the accuracy of
three-dimensional mass reconstructions by using some information from
galaxy distributions as a prior \citep[e.g.,][]{amara12,szepietowski14},
although with this approach the resulting mass maps are no longer
independent of galaxy light distributions.
In this paper, we present two-dimensional and three-dimensional mass
maps from the Hyper Suprime-Cam (HSC) Subaru Strategic Program
\citep{aihara17a,aihara17b}, a wide-field imaging survey using the
Hyper Suprime-Cam \citep{miyazaki17a} mounted on the Subaru
8.2-meter telescope. Weak lensing analysis of commissioning data has
already demonstrated that the HSC is a powerful
instrument for weak lensing studies \citep{miyazaki15,utsumi16}.
The purpose of this paper is to construct weak lensing mass maps
to check the performance of weak lensing measurements in the HSC survey.
To do so, we cross-correlate weak lensing mass maps with the
distribution of stellar masses of red galaxies, which are known to
trace the large-scale structure of the Universe. We also
cross-correlate our mass maps with maps of various quantities such
as PSF and seeing sizes, to check for any possible
residual systematics in the reconstructed mass maps. Validating mass
maps is important for future applications of weak lensing mass maps in
the HSC survey, including the construction of mass-selected cluster
samples and cross-correlations of weak lensing maps with other surveys
such as ACTPol.
This paper is organized as follows. In Section~\ref{sec:data}, we
describe our source galaxy catalog for weak lensing analysis as well
as our photometric red galaxy sample used for constructing galaxy mass
maps. Our two-dimensional mass map analysis is presented in
Section~\ref{sec:2dmap}, whereas our three-dimensional mass map
analysis is presented in Section~\ref{sec:3dmap}. We summarize our
result in Section~\ref{sec:summary}. Unless otherwise specified, we
assume a flat $\Lambda$-dominated cosmology with the matter density
$\Omega_M=0.27$, the baryon density $\Omega_b=0.045$, cosmological
constant $\Omega_\Lambda=0.73$, dimensionless Hubble constant
$h=0.71$, the power spectrum tilt $n_s=0.96$, and the normalization of
the density fluctuation $\sigma_8=0.80$.
We note that our conclusion is insensitive to the
choice of the cosmological parameters.
\section{Data}\label{sec:data}
\subsection{Weak lensing shear catalog}\label{sec:shearcat}
Galaxy shape measurements and the resulting shear catalog in the HSC
S16A dataset are detailed in \citet{mandelbaum17}. In short, the
shapes of galaxies in the coadded $i$-band images are estimated using
the re-Gaussianization method \citep{hirata03}, and are calibrated using
simulated galaxy images that are similar to those used in GREAT3
\citep{mandelbaum15}. The image simulation includes
realistic HSC PSFs and is carefully designed to reproduce the
observed distribution of galaxy properties remarkably well, which
allows a reliable estimate of the shear calibration and additive biases
(see Mandelbaum et al., in prep.).
We use conservative cuts for selecting galaxies
with secure shape measurements, e.g., $S/N\geq 10$ and $i\leq 24.5$.
The shear catalog has been tested and shown to pass requirements for
cosmological studies. The shape catalog contains $\sim 12$~million
galaxies selected from 137~deg$^2$, giving an average raw
number density of galaxies $\bar{n}\sim 25$~arcmin$^{-2}$.
The HSC S16A dataset consists (mostly) of 6 patches; XMM, GAMA09H,
GAMA15H, HECTOMAP, VVDS, and WIDE12H. While we present mass maps for
these individual patches separately, we combine our results on
cross-correlations for all these 6 patches.
Accurate photometric redshifts for the shape catalog are important,
particularly for three-dimensional weak lensing mass reconstructions.
Thus we apply an additional cut to select galaxies with secure
photometric redshifts. We do so by selecting galaxies with the
standard deviation computed from the probability distribution function
(PDF) of the photometric redshift smaller than 0.3. This cut removes
$\sim 16$\% of the galaxies from the shape catalog. While photometric
redshifts are measured for the HSC galaxies using several different
techniques, throughout this paper we use the {\tt mlz} photometric
redshifts \citep{tanaka17}.
We use mock shear catalogs to estimate statistical uncertainties on
the mass maps. Details of the mock shear catalogs are given in
Appendix~1.
\subsection{Galaxy catalog}\label{sec:galcat}
We need a galaxy sample with reasonably accurate redshift information
in order to compare mass maps from weak lensing with galaxy
distributions. While the HSC survey footprint overlaps SDSS, the
redshift coverage of SDSS spectroscopic galaxies is limited. Following
redMaGiC \citep{rozo16}, in this paper we construct a photometrically
selected sample of luminous red galaxies (LRGs) from the HSC data by
taking advantage of the Stellar Population Synthesis (SPS) fitting method
developed for the CAMIRA algorithm \citep{oguri14,oguri17}.
The CAMIRA algorithm fits all galaxies with the SPS model of passively
evolving galaxies from \citet{bruzual03}, with careful corrections for
slight color differences between the model and observations using
spectroscopic galaxies, to compute the goodness-of-fit $\chi^2$ which
is used to construct a three-dimensional richness map for identifying
clusters of galaxies. The calibration for the HSC survey and the
resulting cluster catalog is presented in \citet{oguri17}, in which
$\sim 2000$ clusters of galaxies at $0.1<z<1.1$ selected from the area
of $\sim 230$~deg$^2$ are reported.
We use this SPS model calibrated in the HSC survey
\citep[see][]{oguri17} to select LRGs as follows. We fit all galaxies
with the SPS model, leaving redshift, stellar mass, and
metallicity as model parameters. In this model a single instantaneous
burst at the formation redshift $z_f=3$ is assumed, and a prior is
added to the metallicity \citep[see][]{oguri14}. Since we only consider
passively evolving galaxies in the SPS model, any galaxies that can be
fit well with the SPS model are red galaxies. Specifically, we select
galaxies with best-fit $\chi^2<10$ (3 degrees of freedom). In order to
construct a roughly volume-limited galaxy sample, we restrict the
redshift range to $0.05<z_{\rm photo}<1.05$, where $z_{\rm photo}$ is
the best-fit photometric redshift, and the stellar mass range to
$M_*>10^{10.3}M_\odot$, where the stellar mass is derived assuming the
\citet{salpeter55} initial mass function. From the HSC S16A Wide
dataset, we select 1,024,729 LRGs that satisfy these criteria.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig1.eps}
\end{center}
\caption{Comparison of photometric redshifts of LRGs $z_{\rm LRG}$ with
their spectroscopic redshifts $z_{\rm spec}$. We plot the scatter
$\sigma_z$ ({\it solid}) and bias $\delta_z$ ({\it dashed}) of the
residual $(z_{\rm LRG}-z_{\rm spec})/(1+z_{\rm spec})$ as a
function of redshift.}
\label{fig:lrg_zcomp}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig2.eps}
\end{center}
\caption{Stellar mass densities of LRGs $\rho_{\rm LRG}$ as a function
of photometric redshift, for LRGs with stellar masses
$M_*>10^{10.3}M_\odot$. }\label{fig:lrg_ngal}
\end{figure}
We cross match the LRGs with spectroscopic galaxies in the HSC
footprint \citep[see][and references therein]{oguri17} to check the
accuracy of the photometric redshifts of LRGs. Using 51,402 LRGs that
match the spectroscopic galaxy catalog, we derive the scatter
$\sigma_z$ and bias $\delta_z$ of the residual $(z_{\rm LRG}-z_{\rm
spec})/(1+z_{\rm spec})$. Figure~\ref{fig:lrg_zcomp} shows the
scatter and bias as a function of redshift. Here we apply $3\sigma$
clipping when estimating the scatter and bias. The resulting outlier
rate is $\sim 7\%$ for the whole sample. We find that the scatter
is $\sigma_z\sim 0.02$ for the whole redshift range of interest,
except for the lowest redshift $z\sim 0.1$ which is probably due to
relatively poor photometric accuracy of nearby, very bright galaxies in
the HSC survey. The relatively poor photometric redshifts at the
lowest redshift may also be due to the lack of $u$-band images.
We also note that the scatter is larger than the
scatter of photometric redshifts of CAMIRA clusters, $\sigma_z\lesssim
0.01$, because the cluster photometric redshifts are derived by
combining photometric redshifts of several cluster member galaxies.
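The clipping procedure itself is simple; a schematic version (variable names and the number of iterations are ours, and the actual analysis may differ in such details) is:
\begin{verbatim}
import numpy as np

def clipped_photoz_stats(z_photo, z_spec, nsigma=3.0, niter=10):
    # Bias and scatter of (z_photo - z_spec)/(1 + z_spec) with
    # iterative nsigma clipping; also returns the clipped fraction
    r = (z_photo - z_spec) / (1.0 + z_spec)
    keep = np.isfinite(r)
    for _ in range(niter):
        mu, sig = r[keep].mean(), r[keep].std()
        keep = np.abs(r - mu) < nsigma * sig
    return r[keep].mean(), r[keep].std(), 1.0 - keep.mean()
\end{verbatim}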
In Figure~\ref{fig:lrg_ngal}, we derive the stellar mass density of
LRGs by summing up the stellar masses of all the LRGs with $M_*>10^{10.3}M_\odot$.
The stellar mass density increases from $z=1$ to $0$, which is broadly
consistent with previous analysis of the evolution of early-type
galaxies \citep[e.g.,][]{bell04}.
\begin{figure*}
\begin{center}
\includegraphics[width=11.2cm]{fig3a.eps} \\
\hspace*{2.5mm} \includegraphics[width=11.5cm]{fig3b.eps}
\end{center}
\caption{Total mass ({\it upper}) and galaxy mass ({\it lower}) maps in the
XMM field. In the total mass map, we show the S/N of the
weak lensing reconstructed map, which is roughly proportional to the
convergence $\kappa$. In the galaxy mass map we directly show the
galaxy mass map value $\kappa_{\rm g}$ defined in
equation~(\ref{eq:kappag}). The smoothing scale is $\theta_s=2'$
(see equation~\ref{eq:Gaussian}). }
\label{fig:map_XMM}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=13.5cm]{fig4a.eps} \\
\hspace*{2.5mm} \includegraphics[width=13.9cm]{fig4b.eps}
\end{center}
\caption{Same as Figure~\ref{fig:map_XMM}, but for the GAMA09H field.}
\label{fig:map_G09}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=8.3cm]{fig5a.eps} \\
\hspace*{2.1mm} \includegraphics[width=8.55cm]{fig5b.eps}
\end{center}
\caption{Same as Figure~\ref{fig:map_XMM}, but for the WIDE12H field.}
\label{fig:map_W12}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=13.5cm]{fig6a.eps} \\
\hspace*{2.5mm} \includegraphics[width=13.9cm]{fig6b.eps}
\end{center}
\caption{Same as Figure~\ref{fig:map_XMM}, but for the GAMA15H field.}
\label{fig:map_G15}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=9.0cm]{fig7a.eps} \\
\hspace*{2.1mm} \includegraphics[width=9.25cm]{fig7b.eps}
\end{center}
\caption{Same as Figure~\ref{fig:map_XMM}, but for the HECTOMAP field.}
\label{fig:map_HEC}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{fig8a.eps} \\
\hspace*{2.5mm} \includegraphics[width=12.3cm]{fig8b.eps}
\end{center}
\caption{Same as Figure~\ref{fig:map_XMM}, but for the VVDS field.}
\label{fig:map_VVD}
\end{figure*}
\section{Two-dimensional mass maps}\label{sec:2dmap}
\subsection{Mass reconstruction technique}\label{sec:wlmap2d}
The construction of mass maps from weak lensing requires spatial
filtering to reduce noise. There are several possible choices of
spatial filters, and the choice depends on the application
of the mass maps. \citet{miyazaki17b} also construct
wide-field mass maps from the same HSC data, but they are interested
in identifying clusters of galaxies from peaks in mass maps. In
searching for clusters, it is beneficial to use spatial filters
that eliminate the large-scale power in order to reduce the scatter in
peak heights coming from the large-scale structure. In contrast,
since we are interested in the large-scale structure, in this paper we
use a Gaussian filter which retains the large-scale power.
Systematic tests with $B$-mode mass maps are also presented in
\citet{mandelbaum17} and \citet{miyazaki17b}.
In this paper we follow a mass reconstruction method proposed by
\citet{kaiser93}. Since we are interested in large-scale mass
distributions, in this paper we always consider the weak lensing
limit, $|\kappa| \ll 1$. First we smooth the shear field
$\gamma_\alpha(\boldsymbol{\theta})$ ($\alpha=1$, $2$) as
\citep{seitz95}
\begin{equation}
\hat{\gamma}_\alpha(\boldsymbol{\theta})=
\frac{\sum_i w_i\left[\gamma_\alpha
(\boldsymbol{\theta}_i)-c_{\alpha,i}\right] W(|\boldsymbol{\theta}-\boldsymbol{\theta}_i|)}
{\sum_i w_i (1+m_i)W(|\boldsymbol{\theta}-\boldsymbol{\theta}_i|)},
\end{equation}
where $w_i$ is the inverse variance weight for the $i$-th
galaxy given in the weak lensing shear catalog,
the shear $\gamma_\alpha(\boldsymbol{\theta}_i)$ is related to
the distortion $e_\alpha$
(the ellipticity defined by second moments of the galaxy image)
as $\gamma_\alpha(\boldsymbol{\theta}_i)=e_\alpha(\boldsymbol{\theta}_i)/2{\cal
R}$ with ${\cal R}$ being the shear responsivity
that connects the distortion and the shear, $W(\theta)$ is the
Gaussian smoothing kernel
\begin{equation}
W(\theta)=\frac{1}{\pi \theta_{\rm s}^2}\exp
\left(-\frac{\theta^2}{\theta_{\rm s}^2}\right),
\label{eq:Gaussian}
\end{equation}
and $m_i$ and $c_{\alpha,i}$ are the multiplicative and additive biases
for the $i$-th galaxy (see \citealt{mandelbaum17} for more details on
the shear responsivity and calibration factors).
We then convert the shear field to the convergence field via
\begin{equation}
\hat{\kappa}(\boldsymbol{\theta})=\frac{1}{\pi}\int d^2\theta'
\frac{\hat{\gamma}_t(\boldsymbol{\theta'}|\boldsymbol{\theta})}
{|\boldsymbol{\theta}-\boldsymbol{\theta'}|^2},
\end{equation}
where $ \gamma_t(\boldsymbol{\theta'}|\boldsymbol{\theta})$ is a
tangential shear at position $\boldsymbol{\theta'}$ computed with
respect to the reference position $\boldsymbol{\theta}$.
In practice we construct the mass map on a regular grid adopting a
flat-sky approximation. First, we create a pixelized shear map for each
of the 6 patches with a pixel size of $0\farcm5$, apply the Fast
Fourier Transform (FFT), and conduct the convolutions in the Fourier
space to obtain the smoothed convergence map, which is sometimes
referred to as an $E$-mode mass map. Since the FFT assumes a periodic
boundary condition, we apply zero padding beyond the boundary of the
image before FFT. The imaginary part of the reconstructed convergence
map represents a $B$-mode mass map, which is used to check for certain
types of residual systematics in our weak lensing measurements. In
\citet{mandelbaum17}, we show that the $B$-mode mass map PDF closely
follows a Gaussian distribution, as expected for weak lensing mass
maps without significant systematic errors in shape measurements. In
fact, the boundary effect induces small non-vanishing $B$-mode
signals, which can be estimated from our mock shape catalog which has
exactly the same geometry as our input HSC shape catalog \citep[see
also][]{mandelbaum17}.
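For concreteness, a minimal numpy sketch of this Fourier-space inversion is given below. It assumes the input shear grids are already weighted, calibrated, and smoothed as described above, and it ignores masks; the function name and interface are ours, and the production pipeline treats boundaries more carefully.
\begin{verbatim}
import numpy as np

def ks93(g1, g2, pad=2):
    # Kaiser-Squires inversion on a regular grid; returns (kappa_E,
    # kappa_B). Zero padding mitigates the periodic FFT boundary.
    ny, nx = g1.shape
    G1 = np.zeros((pad*ny, pad*nx)); G1[:ny, :nx] = g1
    G2 = np.zeros((pad*ny, pad*nx)); G2[:ny, :nx] = g2
    # Wave vectors; only the direction of ell matters, not its units
    l1 = np.fft.fftfreq(pad*nx)[np.newaxis, :]
    l2 = np.fft.fftfreq(pad*ny)[:, np.newaxis]
    lsq = l1**2 + l2**2
    lsq[0, 0] = 1.0            # the ell = 0 mode is unconstrained
    gh = np.fft.fft2(G1) + 1j*np.fft.fft2(G2)
    kh = (l1 - 1j*l2)**2 / lsq * gh
    kh[0, 0] = 0.0
    kappa = np.fft.ifft2(kh)
    # Real part: E-mode map; imaginary part: B-mode map
    return kappa.real[:ny, :nx], kappa.imag[:ny, :nx]
\end{verbatim}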
We also construct a noise map as follows. We randomly rotate the
orientations of individual galaxies, and construct a mass map using
the randomized galaxy catalog. We repeat this procedure to create
300 random mass maps from 300 realizations of randomized galaxy
catalogs. We then compute a standard deviation of each pixel from the
300 random mass maps to construct a ``sigma map'', a map showing the
spatial variation of the statistical noise of the reconstructed mass
map. The sigma map includes only the shape noise and measurement
error, and does not include cosmic shear. From the sigma map we can
define the signal-to-noise ratio (S/N) for each pixel simply from the
ratio of the $\kappa$ value of the reconstructed mass map to the
standard deviation of $\kappa$ from the sigma map.
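A schematic sketch of the sigma-map construction (names are ours; \texttt{make\_map} stands for the same reconstruction pipeline that is applied to the real catalog) is:
\begin{verbatim}
import numpy as np

def sigma_map(e, positions, make_map, n_real=300, seed=None):
    # e: complex ellipticities e1 + i*e2; make_map: (e, positions)
    # -> kappa map, i.e. the reconstruction used for the real data
    rng = np.random.default_rng(seed)
    maps = []
    for _ in range(n_real):
        phi = rng.uniform(0.0, 2.0*np.pi, size=len(e))
        # Random phase, as in Appendix 1; this erases the lensing
        # signal while preserving the noise properties
        maps.append(make_map(e * np.exp(1j*phi), positions))
    return np.std(maps, axis=0)
\end{verbatim}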
In real observations, there are several regions where data are
missing due to bright star masks and edges. Reconstructed mass maps in
and near those regions are noisy and are not suitable for analysis.
To determine the mask region for each mass map, we construct a number
density map of the input galaxy catalog by convolving the number
density in each pixel with the same smoothing kernel which was used in
constructing mass maps (equation~\ref{eq:Gaussian}). Then we derive the
mean of the number density map with 2.5$\sigma$ clipping. We adopt
clipping because the number density map has a non-Gaussian tail.
We mask all pixels with the {\it smoothed} number density less than
0.5 times the mean number density computed above, assuming that they
correspond to edges and regions that are affected by bright star
masks. In addition, we derive the mean of the sigma map with
2.5$\sigma$ clipping and mask all pixels with the noise value larger
than twice the mean value, although this additional cut removes only a
minor fraction of the survey area.
The criteria for these masking procedures are
determined empirically so that the degradation of the mass map at
the edge is not significant.
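In pseudocode, the masking criteria read as follows (a schematic sketch; function names and the number of clipping iterations are ours):
\begin{verbatim}
import numpy as np

def clipped_mean(a, nsigma=2.5, niter=10):
    keep = np.isfinite(a)
    for _ in range(niter):
        mu, sig = a[keep].mean(), a[keep].std()
        keep = np.abs(a - mu) < nsigma * sig
    return a[keep].mean()

def analysis_mask(n_smooth, sigma_map):
    # n_smooth: source density smoothed with the same kernel as kappa
    # sigma_map: noise map from the randomized catalogs
    good = n_smooth > 0.5 * clipped_mean(n_smooth)
    good &= sigma_map < 2.0 * clipped_mean(sigma_map)
    return good   # True for pixels kept in the analysis
\end{verbatim}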
We show the mass maps of the 6 HSC S16A patches in
Figures~\ref{fig:map_XMM}, \ref{fig:map_G09}, \ref{fig:map_W12},
\ref{fig:map_G15}, \ref{fig:map_HEC}, and \ref{fig:map_VVD}.
These mass maps are created using a relatively small smoothing scale
of $\theta_s=2'$. Here we show S/N maps which are similar to $\kappa$
maps except near the edges where the noise is slightly larger. In the
cross-correlation analysis below we use $\kappa$ maps rather than S/N maps.
The total area of unmasked regions in these mass maps is $\sim
167$~deg$^2$, which is larger than the total area of the regions where
the weak lensing shape catalog is defined, $\sim 137$~deg$^2$
\citep[see][]{mandelbaum17}, because of the non-local nature of the
weak lensing mass reconstruction.
\subsection{Galaxy mass maps}\label{sec:galmap2d}
The LRG sample constructed in Section~\ref{sec:galcat} is used to
create a galaxy mass map, a projected map of stellar masses of LRGs
with the same redshift weight as weak lensing. Specifically, we
compute a galaxy mass map value in each pixel as
\begin{equation}
\hat{\kappa}_{\rm
g}(\boldsymbol{\theta}_i)=
\sum_k\frac{M_{*,k}}{(D(z_k)\Delta\theta)^2\Sigma_{\rm crit}(z_k)},
\label{eq:kappag}
\end{equation}
where $k$ runs over LRGs that fall within a pixel centered at
$\boldsymbol{\theta}_i$, $M_{*,k}$ is the stellar mass of $k$-th LRG,
$D(z_k)$ is the angular diameter distance to the LRG photometric
redshift $z_k$, and $\Delta\theta=0\farcm5$ is the size of each
pixel. The inverse critical surface density $\Sigma_{\rm crit}^{-1}(z_k)$ is
computed as
\begin{equation}
\Sigma_{\rm crit}^{-1}(z_k)=\frac{4\pi G}{c^2} D(z_k)\int_{z_k}^\infty dz\,p(z)
\frac{D(z_k,z)}{D(z)},
\end{equation}
where $p(z)$ is the average PDF of photometric redshifts of source
galaxies used for the weak lensing analysis, and $D(z_k,z)$ and $D(z)$
are angular diameter distances from redshift $z_k$ to $z$ and from
redshift $0$ to $z$, respectively. Strictly speaking, the critical
surface density has some spatial variation from the large-scale
structure of source galaxies, which is not taken in account in the
following analysis. From equation~(\ref{eq:kappag}), we subtract the
mean value as $\hat{\kappa}_{\rm g}(\boldsymbol{\theta}_i)\rightarrow
\hat{\kappa}_{\rm g}(\boldsymbol{\theta}_i)-\bar{\hat{\kappa}}_{\rm
g}$, and apply the same Gaussian smoothing kernel
(equation~\ref{eq:Gaussian}) as used for the weak lensing mass map to
obtain the final galaxy mass map.
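A schematic implementation of equation~(\ref{eq:kappag}) for a single pixel might look as follows (names are ours; we use \texttt{astropy} for the distances, and in practice $\Sigma_{\rm crit}^{-1}$ would be tabulated on a redshift grid rather than recomputed per galaxy):
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u, constants as const

cosmo = FlatLambdaCDM(H0=71.0, Om0=0.27)   # cosmology of this paper

def inv_sigma_crit(z_l, z_src, p_src):
    # Source-averaged Sigma_crit^{-1}(z_l); (z_src, p_src) sample the
    # stacked photo-z PDF of the weak lensing source sample
    sel = z_src > z_l
    D_l = cosmo.angular_diameter_distance(z_l)
    D_s = cosmo.angular_diameter_distance(z_src[sel])
    D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_src[sel])
    w = np.trapz(p_src[sel] * (D_ls / D_s).value, z_src[sel])
    return (4.0*np.pi*const.G/const.c**2 * D_l * w).to(u.Mpc**2/u.Msun)

def kappa_g_pixel(m_star, z_lrg, z_src, p_src, dtheta=0.5*u.arcmin):
    # m_star: stellar masses [Msun] and z_lrg: photo-z of the LRGs
    # falling in this pixel
    kap = 0.0
    for m, z in zip(m_star, z_lrg):
        area = (cosmo.angular_diameter_distance(z)
                * dtheta.to(u.rad).value)**2
        kap += float(m*u.Msun * inv_sigma_crit(z, z_src, p_src) / area)
    return kap
\end{verbatim}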
We show the galaxy mass maps of the 6 HSC S16A patches in
Figures~\ref{fig:map_XMM}, \ref{fig:map_G09}, \ref{fig:map_W12},
\ref{fig:map_G15}, \ref{fig:map_HEC}, and \ref{fig:map_VVD}, which are
created using the same smoothing scale of $\theta_s=2'$ as for the
weak lensing mass maps.
\subsection{Cross-correlations of maps}\label{sec:cc2d}
We quantify the correlation between mass maps from weak lensing and
galaxy mass maps from photometric LRGs using the Pearson correlation
coefficient. For any two maps $\kappa_1(\boldsymbol{\theta}_i)$ and
$\kappa_2(\boldsymbol{\theta}_i)$ with zero means, $\langle
\kappa_i\rangle=0$, the correlation coefficient $\rho_{\kappa_1\kappa_2}$
is defined as
\begin{equation}
\rho_{\kappa_1\kappa_2}
=\frac{\sum_i\kappa_1(\boldsymbol{\theta}_i)\kappa_2(\boldsymbol{\theta}_i)}
{\left[\sum_i\left\{\kappa_1(\boldsymbol{\theta}_i)\right\}^2\right]^{1/2}
\left[\sum_i\left\{\kappa_2(\boldsymbol{\theta}_i)\right\}^2\right]^{1/2}},
\label{eq:pearson}
\end{equation}
where the summation runs over the pixels. The correlation coefficient
becomes $\rho_{\kappa_1\kappa_2}\sim 0$ if the two maps are
independent, whereas $\rho_{\kappa_1\kappa_2}\sim 1$ if the two maps
are highly correlated.
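In code, equation~(\ref{eq:pearson}) is essentially a one-liner over the unmasked pixels (a schematic sketch; we subtract the mean over the unmasked area so that the zero-mean assumption holds on the masked maps):
\begin{verbatim}
import numpy as np

def pearson(map1, map2, unmasked):
    a = map1[unmasked] - map1[unmasked].mean()
    b = map2[unmasked] - map2[unmasked].mean()
    return np.sum(a*b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
\end{verbatim}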
We cross-correlate $E$-mode and $B$-mode mass maps
(Section~\ref{sec:wlmap2d}) with galaxy mass maps
(Section~\ref{sec:galmap2d}). Since $E$-mode mass maps correspond to
the true matter distributions, we expect that the galaxy mass maps
correlate only with $E$-mode mass maps.
Figure~\ref{fig:plot_cormap_smoothing} shows the correlation
coefficients as a function of the smoothing size $\theta_s$ in
the Gaussian smoothing kernel. Here we combine the cross-correlation
results of all the 6 HSC S16A patches. For each patch, we
compute the cross-correlation coefficients and estimate their errors
using the 50 mock samples of the weak lensing shape catalog
(Section~\ref{sec:shearcat}).
We use the standard deviation of cross-correlation
coefficients for the 50 mock samples as our error estimate.
We then compute the inverse-variance
weighted average of the correlation coefficient of each map
combination which we show in Figure~\ref{fig:plot_cormap_smoothing}.
Figure~\ref{fig:plot_cormap_smoothing} indicates that the $E$-mode mass
maps indeed correlate well with the galaxy mass maps. The correlation
coefficients are consistent with zero for the $B$-mode mass maps.
We find that the correlation coefficients increase with increasing
smoothing size $\theta_s$, which is expected because larger smoothing
sizes reduce the statistical errors from the shot noise more
efficiently. It is worth noting that the HSC mass maps show
significant cross correlation ($\rho=0.34\pm 0.01$) even for the small
smoothing size of $\theta_s=2'$. This result should be compared with
previous wide-field mass maps constructed in CFHTLenS
\citep{vanwaerbeke13} and DES SV \citep{chang15,vikram15} for which
much larger smoothing sizes of $\theta_s\sim 7'$ are required to
obtain $\rho\sim 0.3$\footnote{Note that the definition of the
smoothing size in this paper differs from that in these previous works
by a factor of $\sqrt{2}$.}. This difference is
mainly due to the high density of the shape catalog for weak lensing
measurements in the HSC survey.
Our study demonstrates that the HSC
survey can generate mass maps at higher resolution than CFHTLenS and
DES SV, which is crucial for the construction of a mass-selected
cluster sample from weak lensing mass maps \citep{miyazaki17b}.
The increase of the correlation coefficients with smoothing scale is
understood as follows. The shot noise depends on the source number
density $\bar{n}$ and smoothing scale as $\sigma\propto
(\bar{n}\theta_s)^{-1}$, where in the range of our interest the
fluctuation of a smoothed mass map due to the large-scale structure
roughly scales as $\sigma_{\rm LSS}\propto \theta_s^{-0.4}$. Therefore
at large $\theta_s$ the shot noise becomes smaller than $\sigma_{\rm
LSS}$ that produces a correlation between mass map and
galaxy mass maps. This also suggests that the transition smoothing
scale beyond which we see large correlation coefficients is inversely
proportional to the source number density, which explains the
difference between our results and previous results from CFHTLenS and
DES SV. However, our result as well as previous results show that
correlation coefficients do not approach unity but saturate
at $\sim 0.5-0.6$ at very large $\theta_s$, which is presumably due
to the combination of several effects, including the limited redshift
and mass ranges of the LRG sample, errors in the stellar mass and
photometric redshift estimates, and the lack of blue galaxies in the
galaxy sample. Intrinsic alignments may also affect our weak lensing
mass maps, although the effect of intrinsic alignments on the
correlation coefficients is expected to be relatively minor.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig9.eps}
\end{center}
\caption{Pearson correlation coefficients (equation~\ref{eq:pearson})
between mass maps from weak lensing and galaxy mass maps from LRGs
as a function of the smoothing size $\theta_s$ in
equation~(\ref{eq:Gaussian}). Filled squares show cross-correlations
between the $E$-mode mass map ($\kappa_{\rm E}$) and the galaxy mass
map ($\kappa_{\rm g}$). Filled circles show the cross-correlation
between the $B$-mode mass map ($\kappa_{\rm B}$) and the galaxy mass
map ($\kappa_{\rm g}$). Errors are estimated from 50 mock samples
of the weak lensing shear catalog, which include cosmic variance
(see Appendix~1).}
\label{fig:plot_cormap_smoothing}
\end{figure}
\subsection{Systematics tests}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig10.eps}
\end{center}
\caption{Test of systematic effects in weak lensing mass maps from
cross-correlations of mass maps with various quantities that are
potentially a source of systematics \citep[see
also][]{vikram15}. We show results for smoothing sizes of both
$\theta_s=4'$ ({\it filled circles}) and $16'$ ({\it filled
squares}). For comparison, the rightmost points show the
cross-correlation coefficients between weak lensing and galaxy mass
maps presented in Figure~\ref{fig:plot_cormap_smoothing}, which represents
the physical cross correlation rather than the systematic test.
Errors are estimated from 50 mock samples of the weak lensing shear
catalog, which include cosmic variance (see Appendix~1).}
\label{fig:plot_cormap_all}
\end{figure}
Following \citet{vikram15}, we also examine cross-correlations between
weak lensing mass maps and various maps using parameters that
potentially act as a source of systematic effects in generating mass
maps. While a number of tests have been performed in
\citet{mandelbaum17}, systematics tests based on the mass maps presented
here serve as additional checks for validating the shear catalog.
The main source of systematics in weak lensing measurements comes from
the PSF. Imperfect corrections of PSFs in galaxy shape
measurements can generate artificial correlations between $E$- and
$B$-mode mass maps and PSF parameters. We construct star mass maps
$\kappa^{\rm star}_{\rm E}$ and $\kappa^{\rm star}_{\rm B}$ which use
star ellipticities $e_1^{\rm star}$ and $e_2^{\rm star}$ to construct
weak lensing mass maps with the same method as described in
Section~\ref{sec:wlmap2d}. The star catalog used for this analysis is
the same as the one used for various systematics tests in
\citet{mandelbaum17}. For this purpose, we use both the original
star ellipticities $e_i^{\rm star}$ as well as star ellipticities after
the PSF correction is applied, i.e., $e_i^{\rm star,cor}=e_i^{\rm
star}-e_i^{\rm PSF}$. We also create maps of star ellipticities
$e_1$ and $e_2$ themselves. These maps are
constructed by first deriving their average values in each pixel and
then convolving the maps of these average values with the Gaussian smoothing
kernel of equation~(\ref{eq:Gaussian}).
In addition, we create maps of seeing sizes, star densities $n_{\rm
star}$, and average galaxy sizes of the shape catalog $r_{\rm gal}$,
as these parameters may also produce systematic effects in weak
lensing shape measurements. Again, these maps are smoothed with the
same smoothing kernel.
Figure~\ref{fig:plot_cormap_all} shows the results for smoothing sizes
of both $\theta_s=4'$ and $16'$. Again, results for all the 6 HSC S16A
patches are combined. We find that cross-correlations between weak
lensing mass maps and the parameters considered above are small. All
the cross-correlations are consistent with zero at the $\sim
2\sigma$ level (given the large number of cross-correlations
considered here, we naturally expect that some of the points can
deviate more than $1\sigma$ by chance), which is in marked contrast to
the cross-correlations between mass maps and galaxy mass maps, which
are detected quite significantly. A possible exception is
cross-correlations between star weak lensing mass maps and maps with
star (PSF) ellipticities, although their cross-correlation
coefficients are much smaller than the cross-correlations between mass
maps and galaxy mass maps. This small deviation from zero is
presumably due to small residual PSF leakage and PSF modeling errors
that are also seen in other systematics tests
\citep[see][]{mandelbaum17}. We conclude that our weak lensing mass
maps constructed in the HSC survey are not significantly affected by
systematic effects.
\section{Three-dimensional mass maps}\label{sec:3dmap}
\subsection{Three-dimensional mass reconstruction}\label{sec:wlmap3d}
We can also reconstruct three-dimensional mass maps from weak lensing
by taking advantage of photometric redshift measurements for source
galaxies. We follow \citet{simon09} to use a linear algorithm with
the Wiener filtering for the three-dimensional mass reconstruction.
First we consider convergence $\kappa_l$ for the source redshift
bin $l$ at $z_{l,{\rm min}}<z<z_{l,{\rm max}}$. Since the convergence
is the projected matter density field, it can be described by a
weighted sum of the density fluctuation $\delta_k$ at redshift
$z_{k,{\rm min}}<z<z_{k,{\rm max}}$ as
\begin{eqnarray}
\kappa_l &\approx &\sum_k \left[\int_{z_{k,{\rm min}}}^{z_{k,{\rm max}}} dz
\frac{\bar{\rho}(z)}{H(z)(1+z)\Sigma_{{\rm
crit},l}(z)}\right]\delta_k \nonumber\\
&\equiv & \sum_k Q_{lk}\delta_k,
\label{eq:del2kap}
\end{eqnarray}
where
$H(z)$ is the Hubble parameter at redshift $z$ and
the inverse critical surface density $\Sigma_{{\rm crit},l}^{-1}(z)$ for the source redshift
bin $l$ is approximately given by
\begin{equation}
\Sigma_{{\rm crit},l}^{-1}(z)\approx \frac{4\pi G}{c^2} D(z)\frac{D(z,\bar{z}_l)}{D(\bar{z}_l)},
\end{equation}
with $\bar{z}_l=(z_{l,{\rm min}}+z_{l,{\rm max}})/2$. Given multiple
source and lens redshift bins, Equation~(\ref{eq:del2kap}) reduces to
a system of linear equations, which can be inverted easily to obtain
${\boldsymbol \delta}$ from lensing observables. In practice, however,
three-dimensional mass reconstruction is very noisy even with the high
source galaxy density of the HSC survey, and therefore an additional
regularization is essential. Here we adopt the Wiener filtering which
efficiently reduces the noise in the Fourier domain \citep{simon09}. We
assume that the noise is dominated by the shot noise. Then the noise
power between the $l$-th and $m$-th source redshift bins is given by
\begin{equation}
N_{lm}=\delta_{lm}\frac{\sigma_e^2}{\bar{n}_l},
\end{equation}
where
$\delta_{lm}$ is the Kronecker delta,
$\sigma_e$ is the root-mean-square of the source galaxy
ellipticity, and $\bar{n}_l$ is the mean number density of source
galaxies in the $l$-th bin, both of which are directly estimated from
the observation. On the other hand, the signal power in the
$k$-th and $n$-th lens redshift bins is given by
\begin{equation}
S_{kn}=\delta_{kn}C_\ell(z_k),
\end{equation}
\begin{equation}
C_\ell(z_k)=\frac{1}{(\Delta \chi_k)^2}\int_{z_{k,{\rm min}}}^{z_{k,{\rm max}}}
dz\frac{P^{\rm m}(k=\ell/\chi)}{H(z)\chi^2},
\end{equation}
where
$\Delta \chi_k \approx \Delta z_k/H(\bar{z}_k)$,
$\Delta z_k =z_{k,{\rm max}}-z_{k,{\rm min}}$, and $P^{\rm m}(k)$ is
the matter power spectrum, which is computed using the halofit model
\citep{smith03,takahashi12}. The use of this signal power corresponds
to the transverse Wiener filter in \citet{simon09}. Given the expected
signal and noise powers, the three-dimensional mass reconstruction with
Wiener filtering from the observed (pixelized) shear maps in different
source redshift bins, ${\boldsymbol \gamma}$, is expressed as
\begin{equation}
{\boldsymbol \delta}({\boldsymbol \ell})=\tilde{W}(\ell)D^*({\boldsymbol \ell})
\left[\alpha \mathbf{S}^{-1}+\mathbf{Q}^{\rm T}\mathbf{N}^{-1}\mathbf{Q}\right]^{-1}
\mathbf{Q}^{\rm T}\mathbf{N}^{-1}{\boldsymbol \gamma}({\boldsymbol \ell}),
\label{eq:3dreconst}
\end{equation}
where $D({\boldsymbol \ell})={\boldsymbol \ell}^2/\ell^2$ \citep{kaiser93},
and $\tilde{W}(\ell)$ is the Fourier transform of the Gaussian
smoothing kernel (equation~\ref{eq:Gaussian}).
The parameter $\alpha$ in equation~(\ref{eq:3dreconst}) is an
important parameter which tunes the strength of the Wiener
filtering. The larger value of $\alpha$ leads to better
signal-to-noise ratios, although it also induces a bias in the
redshift of the reconstructed matter structure \citep{simon09}.
We have tried several different values of $\alpha$, and based
on these trials,
in this paper we adopt $\alpha=0.03$, which appears to represent a good
compromise between the signal-to-noise ratio and small bias in the
redshift.
We need a large smoothing scale to reduce the shot noise in the
three-dimensional mass reconstruction. We adopt the pixel size of
$1'$, and the smoothing size of $\theta_{\rm s}=20'$ throughout this
section. We consider the source redshift range of $0.1<z<2.9$ with the
bin size of $\Delta z=0.1$, and the lens redshift range of
$0.05<z<1.05$ with the bin size of $\Delta z=0.1$.
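To make the linear algebra of equation~(\ref{eq:3dreconst}) explicit, the filter acts mode by mode in Fourier space; a schematic sketch for a single mode $\boldsymbol{\ell}$ is given below (all names are ours, and an actual implementation vectorizes over modes):
\begin{verbatim}
import numpy as np

def wiener_3d_mode(gamma_l, Q, sig_e, nbar, C_sig, alpha, W_l, D_l):
    # gamma_l: (n_src,) complex shear coefficients at this mode
    # Q: (n_src, n_lens) efficiency matrix Q_{lk}; nbar: (n_src,)
    # C_sig: (n_lens,) signal power C_l(z_k); W_l, D_l: smoothing
    # kernel and Kaiser-Squires factor at this mode
    N_inv = np.diag(nbar / sig_e**2)     # inverse noise power
    S_inv = np.diag(1.0 / C_sig)         # inverse signal power
    A = alpha * S_inv + Q.T @ N_inv @ Q
    delta = np.linalg.solve(A, Q.T @ N_inv @ gamma_l)
    return W_l * np.conj(D_l) * delta    # (n_lens,) density modes
\end{verbatim}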
\begin{figure*}
\begin{center}
\includegraphics[width=8.3cm]{fig11a.eps}
\includegraphics[width=8.3cm]{fig11b.eps}
\end{center}
\caption{Three-dimensional mass map from the VVDS region ({\it
Left}). We also show the corresponding three-dimensional galaxy
mass map from the photometric LRG sample ({\it Right}). The contours
are drawn from 2$\sigma$ to 6$\sigma$
with the 1$\sigma$ interval,
where $\sigma$ is the rms map value.}
\label{fig:map3d}
\end{figure*}
We show an example of reconstructed three-dimensional mass maps in
Figure~\ref{fig:map3d}.
\subsection{Three-dimensional galaxy mass maps}\label{sec:galmap3d}
Three-dimensional galaxy mass maps are also constructed from the LRG
sample presented in Section~\ref{sec:galcat}. The stellar mass density
of each pixel with the side length of $1'$ and $\Delta z=0.1$ is
simply computed as $\rho=\sum_k M_{*,k}/V$, where $V$ is the volume of the
pixel. We then apply the same Gaussian smoothing kernel as used in the
three-dimensional mass reconstruction in Section~\ref{sec:wlmap3d}.
For each redshift slice, we again subtract the mean value.
An example of three-dimensional galaxy mass maps is also shown in
Figure~\ref{fig:map3d}.
\subsection{Cross-correlation results}
Following Section~\ref{sec:cc2d}, we quantify the correlation between
the three-dimensional mass map (Section~\ref{sec:wlmap3d}) and the
three-dimensional galaxy mass map (Section~\ref{sec:galmap3d}) using
the Pearson correlation coefficient defined in equation~(\ref{eq:pearson}).
We cross-correlate mass maps in the same or different redshift bins.
In order to increase the signal-to-noise ratio further, we combine two
redshift bins to have 5 redshift slices at $0.05<z<1.05$ with the width
$\Delta z=0.2$. Thus, for each $E$- or $B$-mode mass map, we compute
$5\times 5=25$ correlation coefficients to check whether we successfully
recover the three-dimensional matter structure with weak lensing.
In the same manner as in the two-dimensional mass maps, we combine
results for all the 6 HSC S16A patches.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig12.eps}
\end{center}
\caption{Pearson correlation coefficients (equation~\ref{eq:pearson})
between the three-dimensional mass maps from weak lensing and
three-dimensional galaxy mass maps from LRGs. Here we show the diagonal
correlation coefficients (i.e., same redshift bins for mass
maps and galaxy mass maps) as a function of redshift. Both $E$-mode
({\it filled squares}) and $B$-mode ({\it filled circles}) mass map
results are shown. Errors are estimated from 50 mock samples
of the weak lensing shear catalog. }
\label{fig:plot_cov}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig13a.eps}
\includegraphics[width=8cm]{fig13b.eps}
\end{center}
\caption{Matrix of Pearson correlation coefficients for the same and
different redshift bins between three-dimensional $E$-mode ({\it
upper}) and $B$-mode ({\it lower}) mass maps and three-dimensional
galaxy mass maps. The typical statistical error on the correlation
coefficient is $\sim 0.02$.}
\label{fig:plot_covmat}
\end{figure}
Figure~\ref{fig:plot_cov} shows the diagonal part of the correlation
coefficients. We find that the cross-correlations are significantly
detected particularly at low redshifts, $z\lesssim 0.6$, which
indicates the successful three-dimensional mass reconstruction.
The correlation coefficients are not very large due to
the large effect of the shot noise in three-dimensional mass
reconstruction.
Although the $E$-mode correlation coefficients decrease at higher
redshifts, we find that this is partly due to the redshift bias in
reconstructed mass maps. This is obvious from
Figure~\ref{fig:plot_covmat}, which shows the full correlation matrix
of the three-dimensional mass maps and three-dimensional galaxy mass
maps. The figure indicates that the three-dimensional mass
reconstruction is indeed successful, but the redshift of the
reconstructed mass distribution is biased low particularly at high
redshift. As discussed in Section~\ref{sec:wlmap3d}, this is in fact
expected in our mass reconstruction method because the Wiener
filtering method used here is a biased estimator. The redshift bias of
$\Delta z\sim 0.2$ at $z\sim 0.9$ for our choice of the parameter
$\alpha=0.03$ is in good agreement with the expected bias estimated by
\citet{simon09}.
Since the bias is understood fairly well, in principle
we can construct the galaxy mass map with the expected redshift bias
of the Wiener filtering method to make a fairer comparison, which we
leave for future work.
Thus we conclude that we successfully reconstructed
three-dimensional mass map out to high redshift, which is made
possible thanks to the high number density of the weak lensing shape
catalog in the HSC survey.
\section{Summary}\label{sec:summary}
We have presented weak lensing mass maps from the HSC S16A dataset
covering 167~deg$^2$. We have cross-correlated projected
two-dimensional mass maps with two-dimensional galaxy mass maps
constructed from stellar masses of photometric LRGs that are also
selected from the HSC data. We have found that the $E$-mode mass maps
correlate with the galaxy mass maps significantly, even with relatively
small smoothing sizes of $\theta_s=2'$. More specifically, the
cross-correlation coefficients are $\rho=0.54\pm0.03$ for
$\theta_s=8'$ and $\rho=0.34\pm 0.01$ for $\theta_s=2'$. This finding
confirms the validity of our weak lensing measurements and weak
lensing mass reconstructions. We have also checked for potential
systematic effects in mass maps by cross-correlating the weak lensing
mass maps with maps of various parameters that can be a source of
systematic effects in mass maps, and found that the cross-correlations
are sufficiently small. Finally, we reconstructed three-dimensional
mass maps from weak lensing using photometric redshift measurements of
individual source galaxies. We have found that the three-dimensional
mass map correlates reasonably well with three-dimensional galaxy mass
map, which indicates that our three-dimensional weak lensing mass
reconstruction is successful.
Our work demonstrates the power of the HSC survey for weak lensing
studies. This is mainly due to the high number density of source
galaxies of $\bar{n}\sim 25$~arcmin$^{-2}$ for weak lensing analysis. In
particular, previous three-dimensional weak lensing mass reconstructions
have been limited to relatively small areas \citep[e.g.,][]{massey07},
and this work successfully applied the technique to a much wider area to
obtain wide-field three-dimensional mass maps. Given the validation of
mass maps presented in this paper, we plan to use HSC weak lensing
mass maps to study the large-scale structure of dark matter and
baryons, including the construction of a mass-selected cluster sample
\citep{miyazaki17b} and the correlation of dark matter and hot gas
from the cross-correlation of weak lensing mass maps and
Sunyaev-Zel'dovich maps \citep{osato17}.
\begin{ack}
We thank T. Hamana and K. Osato for useful discussions, and the
anonymous referee for useful comments.
This work was supported in part by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and JSPS KAKENHI Grant Numbers 26800093, 15H05887, 15H05892, and 15H05893.
MO acknowledges financial support from JST CREST Grant Number JPMJCR1414.
RM is supported by the US Department of Energy Early Career Award Program.
HM is supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE).
This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at http://dm.lsst.org.
Based in part on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by the Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan.
\end{ack}
\section*{Appendix 1. Mock shear catalogs}
We construct mock shear catalogs which take full account of the survey
geometry, the spatial inhomogeneity, and the redshift distribution of
galaxies. We do so by adopting a real shear catalog from the
observations and replacing the ellipticities of individual galaxies with
mock ellipticity values that include the cosmic shear from
ray-tracing simulations.
First we review the relation between the ellipticity and the shear in
our shear catalog. In this paper we adopt the re-Gaussianization
method \citep{hirata03}, which uses the second moments of the surface
brightness distribution of the source,
$Q_{ij}\propto \int d\vec{\theta} I(\vec{\theta})\theta_i\theta_j$,
where $I(\vec{\theta})$ is the surface brightness distribution and the
coordinate origin is set to the center of the source, to define the
ellipticity of each galaxy. Specifically, a complex ellipticity is
defined as $\epsilon=(Q_{11}-Q_{22}+2iQ_{12})/(Q_{11}+Q_{22})$.
In practice a weight function is included in the measurement of the
second moments; we ignore it here for simplicity.
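For concreteness, a minimal Python sketch of this unweighted moment measurement is given below (an illustration only; the function name is ours, and the actual pipeline uses weighted moments together with PSF corrections):
\begin{verbatim}
import numpy as np

def complex_ellipticity(image):
    """Unweighted second-moment ellipticity of a pixelised image:
    Q_ij ~ sum I(theta) theta_i theta_j about the centroid, and
    epsilon = (Q11 - Q22 + 2i Q12) / (Q11 + Q22)."""
    y, x = np.indices(image.shape, dtype=float)
    norm = image.sum()
    xc, yc = (image * x).sum() / norm, (image * y).sum() / norm
    dx, dy = x - xc, y - yc
    q11 = (image * dx * dx).sum() / norm
    q22 = (image * dy * dy).sum() / norm
    q12 = (image * dx * dy).sum() / norm
    return (q11 - q22 + 2j * q12) / (q11 + q22)
\end{verbatim}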
The intrinsic ellipticity $\epsilon^{\rm int}$ and the observed
ellipticity with a weak lensing effect $\epsilon^{\rm lens}$ are
related as \citep[e.g.,][]{seitz95}
\begin{equation}
\epsilon^{\rm lens}=\frac{\epsilon^{\rm int}+2g+g^2\epsilon^{\rm
int,*}}{1+|g|^2+2{\rm Re}[g\epsilon^{\rm int, *}]},
\label{eq:lensellip}
\end{equation}
where $g=\gamma/(1-\kappa)$ is the so-called reduced shear.
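As an aside, the transformation \eqref{eq:lensellip} is straightforward to implement with complex arithmetic; the following is a minimal Python sketch (with our own naming):
\begin{verbatim}
import numpy as np

def lensed_ellipticity(eps_int, gamma, kappa):
    """Apply the reduced-shear distortion of the equation above."""
    g = gamma / (1.0 - kappa)  # reduced shear
    num = eps_int + 2.0 * g + g**2 * np.conj(eps_int)
    den = 1.0 + abs(g)**2 + 2.0 * (g * np.conj(eps_int)).real
    return num / den

# A round source (eps_int = 0) acquires eps_lens = 2g/(1+|g|^2)
print(lensed_ellipticity(0j, 0.01 + 0.02j, 0.05))
\end{verbatim}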
In order to construct a mock shear catalog, we adopt the real shear
catalog from the HSC observation. In the mock catalog, the coordinates
of all the galaxies in the shear catalogs are kept unchanged, but we
simply replace the observed ellipticities of the individual galaxies,
$\epsilon^{\rm obs}$, with simulated values. To derive simulated
ellipticity values, first we randomly rotate each galaxy,
$\epsilon^{\rm ran}=e^{i\phi}\epsilon^{\rm obs}$, where $\phi$ is a
random number between $0$ and $2\pi$. We need to distinguish the
intrinsic ellipticity from the measurement error because they have
different impacts on the shear responsivity. For each galaxy, the
shear catalog has an estimate of the intrinsic rms ellipticity
$\sigma_{\rm int}$ (parameter {\tt
ishape\_hsm\_regauss\_derived\_rms\_e}) and the measurement
error $\sigma_{\rm sta}$ (parameter {\tt ishape\_hsm\_regauss\_derived\_sigma\_e}).
For each galaxy, we derive a randomized intrinsic ellipticity as
\begin{equation}
\epsilon^{\rm int}=f\epsilon^{\rm ran},
\end{equation}
\begin{equation}
f=\frac{\sigma_{\rm int}}{\sqrt{\sigma_{\rm int}^2+\sigma_{\rm sta}^2}}.
\end{equation}
We then add weak lensing effects via equation~(\ref{eq:lensellip}) to
convert $\epsilon^{\rm int}$ to lensed galaxy ellipticity
$\epsilon^{\rm lens}$. For weak lensing, we take all-sky weak lensing
maps presented in \citet{takahashi17}. The cosmological model
is from the best-fit result of the Wilkinson Microwave Anisotropy Probe
nine year data \citep{hinshaw13} with $\Omega_M=0.279$,
$\Omega_b=0.046$, $\Omega_\Lambda=0.721$, $h=0.7$, $n_s=0.97$, and
$\sigma_8=0.82$. \citet{takahashi17} created all-sky weak
lensing maps at 38 source redshift slices from $z=0.05$ to $5.3$,
which are stored in a HEALPix format \citep{gorski05}. Although there
are realizations with different angular resolutions, we use a low
resolution version with NSIDE equal to 4096, which roughly corresponds
to a pixel size of $\sim 1$~arcmin. For each galaxy, we randomly
assign its redshift following the photometric redshift PDF of that
galaxy \citep[see][]{tanaka17}, and obtain values of the convergence
$\kappa$ and complex shear $\gamma$ from the all-sky weak lensing maps
by linearly interpolating the map values at the two adjacent redshift
slices. The value of $\gamma$ is rescaled by a factor of $(1+m)$ for
each galaxy in order to account for the multiplicative bias
\citep[see][]{mandelbaum17}. After adding weak lensing effects from
the all-sky ray-tracing simulations, we add a random measurement
noise, $\epsilon^{\rm mock}=\epsilon^{\rm lens}+(N_1+iN_2)$, where
$N_i$ is a random value drawn from a normal distribution with a
standard deviation of $\sigma_{\rm sta}$. From this procedure we
create a list of mock ellipticities $\epsilon^{\rm mock}$ for the weak
lensing shear catalog, which properly include the effect of the cosmic
shear. When generating different realizations, we randomly rotate the
all-sky weak lensing map before assigning weak lensing effects to
randomized galaxies so that each realization draws its cosmic shear
from a different patch of the all-sky weak lensing map.
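In summary, the per-galaxy steps described above amount to the following Python sketch (a schematic illustration; the lensing fields $\kappa$ and $\gamma$ and the multiplicative bias $m$ are assumed to have been interpolated from the all-sky maps beforehand):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def mock_ellipticity(eps_obs, sigma_int, sigma_sta, gamma, kappa, m):
    """One mock ellipticity realisation for a single galaxy."""
    # (i) random rotation of the observed ellipticity
    eps_ran = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi)) * eps_obs
    # (ii) rescale to isolate the intrinsic part from measurement noise
    f = sigma_int / np.hypot(sigma_int, sigma_sta)
    eps_int = f * eps_ran
    # (iii) lensing distortion, with gamma rescaled by (1 + m)
    g = (1.0 + m) * gamma / (1.0 - kappa)
    eps_lens = (eps_int + 2.0 * g + g**2 * np.conj(eps_int)) \
        / (1.0 + abs(g)**2 + 2.0 * (g * np.conj(eps_int)).real)
    # (iv) Gaussian measurement noise on each component
    return eps_lens + rng.normal(0.0, sigma_sta) \
        + 1j * rng.normal(0.0, sigma_sta)
\end{verbatim}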
\section{Introduction}
While the currently observable universe is isotropic to a very high degree, this is not a generic feature of spacetimes near singularities --- rather the opposite. In fact, within general relativity and under certain assumptions in the matter sector, the most generic approaches to singularities (such as cosmological big bang or big crunch singularities, but also black hole singularities) are highly anisotropic. They display a chaotic mixmaster behaviour \cite{Misner:1969hg}, with infinite chaotic oscillations on a finite time interval. This behaviour proceeds in epochs, the beginning and end of which are well approximated by the Kasner metric \cite{Kasner:1921zz}. This is known as the Belinski-Khalatnikov-Lifshitz (BKL) singularity \cite{Belinsky:1970ew} (see, e.g., \cite{Belinski:2017fas,Belinski:2009wj,Belinski:2014kba} for reviews).
In the context of the current paradigm of very early universe cosmology, anisotropies from the big bang initial singularity would be quickly washed away by a period of accelerated expansion (i.e.~inflation). However, our knowledge of possible pre-inflationary physics is very scarce, and the question of what happened near the big bang remains of fundamental interest. In particular, semi-classical general relativity most likely does not hold anymore at such high energy scales, and what was the nature of the initial big bang singularity (if there was one) remains an open question, especially whether it was of BKL type or rather isotropic. Within string theory, chaotic anisotropies are expected (e.g., \cite{Damour:2000wm,Damour:2000hv,Damour:2002tc,Damour:2002et}), while some semi-classical higher-derivative theories of gravity have stable isotropic cosmological singularities (e.g., \cite{Middleton:2008rh}) or can limit the growth of anisotropies \cite{Barrow:2005qv,Barrow:2006xb}. Other theories of modified gravity can similarly bound shear anisotropies \cite{Sakakihara:2020rdy} or screen them \cite{Starobinsky:2019xdp,Galeev:2021xit}. Moreover, in a gravitational ultraviolet-complete theory such as quadratic gravity \cite{Stelle:1976gc}, requiring semi-classical cosmological transition amplitudes from the big bang to today to be well defined and finite severely constrains anisotropic singularities \cite{Lehners:2019ibe,Jonas:2021xkx}.
In the context of alternative very early universe scenarios such as models of bouncing cosmology, the question of the evolution of anisotropies also plays an important role, even well before the approach to the high-curvature big crunch/bounce singularity (or before a non-singular bounce occurs). For instance, in a Bianchi type-I universe, the contribution from shear anisotropies to the total energy density budget is proportional to $1/a^6$, where $a$ is the spatially averaged scale factor. While this decays very rapidly in an expanding universe, it conversely grows much faster than for other known matter types (e.g., pressureless dust and radiation), thus representing an instability to standard isotropic contracting models. This is not an issue for ekpyrotic cosmology since the isotropic background scaling solution arises from a scalar field with energy density growing as $1/a^{2\epsilon}$ with $\epsilon>3$, thus effectively diluting anisotropies. As such, the ekpyrotic scenario has been shown to be very robust with respect to dynamically producing an isotropic universe, as demonstrated by analytic and numerical studies \cite{Erickson:2003zm,Garfinkle:2008ei,Cook:2020oaj,Ijjas:2020dws,Ijjas:2021gkf,Ijjas:2021wml}. In fact, formally, ekpyrosis implies a no-hair theorem stating that the future big crunch is a stable isotropic singularity \cite{Lidsey:2005wr}. The theorem does not hold, however, if instead of an ekpyrotic scalar field one has an imperfect fluid with anisotropic pressures that satisfy ekpyrotic equations of state \cite{Barrow:2015wfa} (deviations from perfect fluids will be further discussed below). It is also to be noted that single-field ekpyrosis predicts a blue spectrum of scalar perturbations. This is resolved in the case of two-field ekpyrosis at the cost of introducing an additional degree of freedom.
In the context of matter bounce cosmology, where a scale-invariant power spectrum of adiabatic curvature perturbations is generated during a matter-dominated contracting phase \cite{Wands:1998yp,Finelli:2001sr,Brandenberger:2012zb}, the growth of anisotropies represents a serious problem \cite{Levy:2016xcl} (in contrast to ekpyrotic cosmology), which prevents the simplest models from being viable. Among the very few possible resolutions to this problem, we can mention the hypothetical possibility of promoting the graviton to a massive spin-2 field with mass larger than the Hubble scale in the contracting phase \cite{Lin:2017fec}. Therefore, in matter bounce cosmology and in the more general context of a bouncing or cyclic universe (not ekpyrotic), the question of how the universe could become isotropic enough for some structure formation scenario to successfully work and/or for a non-singular bounce to be achieved\footnote{Most realisations of a non-singular bounce usually simply rely on the presumption of isotropy. However, non-singular bounces with sizable anisotropies are possible (see, e.g., \cite{Bramberger:2019zez,Anabalon:2019equ,Kumar:2021mgc,Rajeev:2021yyl}), but anisotropies should be at least small enough after the bounce at the onset of radiation-dominated expansion to match later observational constraints from the cosmic microwave background \cite{Planck:2018jri}. Furthermore, if we include curvature in our bouncing model, significant anisotropies close to the bounce may prevent the universe from re-expanding and may create a singularity in the Weyl curvature tensor.} remains mostly unsolved.
Most approaches to cosmology from an effective field theory point of view often assume the matter content to be represented by minimally coupled scalar fields or perfect fluids. However, non-viscous fluids are an approximation to more realistic fluid dynamics models. For instance, scalar fields non-minimally coupled to gravity \cite{Faraoni:2021lfc,Giusti:2021sku}, neutrinos that are free streaming \cite{Misner:1967zz,Misner:1967uu,Stewart:1968,Matzner:1969,Weinberg:1971mx,Matzner:1972b,Weinberg:2003ur}, or any realistic interacting fluid all exhibit some form of viscosity. Therefore, the influence of viscosity on early- and late-time cosmologies has been studied in Refs.~\cite{Hawking:1966qi,Misner:1967zz,Misner:1967uu,Stewart:1968,Stewart:1969,Matzner:1969,Weinberg:1971mx,Matzner:1972,Matzner:1972b,Parnovskii:1977,Belinskii:1979,Gron:1990ew,Weinberg:2003ur,Hervik,Brevik:review,Brevik:2019yma,Anand:2017wsj,Goswami:2016tsu,Lu:2018smr,Atreya:2017pny,Natwariya:2019fif,Mishra:2020onx}, among many others. In particular, non-singular solutions have been found in the context of bulk viscosity (see, e.g., \cite{Brevik:review}), and the effect of shear viscosity has been studied as an isotropisation mechanism \cite{Belinski:2013jua,Belinski:2017fas,Ganguly:2019llh,Ganguly:2020daq}. However, the formulation of the shear viscosity term in analytic form --- while accounting for relativistic effects --- is challenging. Eckart \cite{Eckart:1940te} and Landau-Lifshitz \cite{LandauLifshitz} formulated a hydrodynamic relativistic theory of shear viscosity for models whose characteristic motion timescales are much larger than the relaxation time of the system to equilibrium. Close to a singularity, most characteristic motion should cease, so this approximation would not apply. This situation is applicable to the case of a contracting universe close to a bounce, when the anisotropy energy density would grow the fastest in the absence of any other isotropising mechanism. Moreover, the theory formulated by Eckart and by Landau-Lifshitz allows for the superluminal propagation of viscous excitations. The Israel-Stewart \cite{Israel:1979wp} theory rids the formalism of this problem. For the purposes of this work, we will be using a restricted version of the Israel-Stewart formalism to model the shear viscosity term: we assume that the relaxation time to equilibrium for the fluid under consideration is very small. There exists no closed form for the viscosity term for non-zero relaxation times.
With these considerations, one can arrive at a phenomenological model for the coefficient of viscosity $\eta$ as a power law of the energy density $\rho$, i.e., $\eta \propto \rho^n$. This was the approach of previous studies, e.g., \cite{Belinski:2013jua,Belinski:2017fas,Ganguly:2019llh,Ganguly:2020daq} in the context of approaches to singularities\footnote{The literature of phenomenological studies of viscosity in \emph{general} cosmological contexts is too vast to mention here.}, but this still remains a phenomenological model. There is no microscopic model --- analogous to the kinetic theory picture of colliding hard spheres --- of the origin of this viscosity for a cosmological model. It is our intention in this work to provide the beginnings of such a microscopic realisation for viscosity embedded in concrete cosmological scenarios.
Two main avenues will be explored: an interacting scalar field in a thermal bath and black holes. The former has been extensively studied in quantum field theory (QFT), with sophisticated techniques to compute the viscosity coefficient (see, e.g., \cite{Jeon:1994if,Jeon:1995zm,Kapusta:2006pm}). Also, strongly interacting QFTs often have gravity duals (in a holographic description), from which computations have led to a viscosity bound conjecture (see, e.g., \cite{Policastro:2001yc,Kovtun:2004de,Son:2007vk}). This conjecture implies that realistic, interacting fluids always have a minimal amount of viscosity, at least of the order of their entropy density. All of this motivates us to consider a simple QFT in a cosmological background as a first microphysical realisation of viscous cosmology.
The second avenue involves black holes, which are often ubiquitous in cosmological scenarios involving a phase of contraction prior to a bounce. Indeed, black holes could form from direct collapse of inhomogeneities \cite{Banks:2002fe,Quintin:2016qro,Chen:2016kjx} or already exist from preexisting structures (as in a cyclic universe). Such black holes are expected to potentially dominate the universe near a big crunch or bounce (except possibly in regions which could undergo ekpyrotic contraction \cite{Lehners:2008qe,Lehners:2009eg}), and as such, a dense `gas' of black holes has been proposed as a state of matter at very high energies, as studied in string theory (see, e.g., \cite{Masoumi:2014vpa,Masoumi:2015sga,Masoumi:2014nfa,Mathur:2020ivc}, as well as \cite{Banks:2001px,Banks:2003ta,Banks:2004cw,Banks:2004vg} for the related holographic scenario and \cite{Veneziano:2003sz,Quintin:2018loc} for string-size black holes). In the context of black holes forming in a contracting universe, there is a serious possibility that such black holes could persist through a bounce, thus transitioning into our expanding universe as primordial black holes \cite{Carr:2011hv,Clifton:2017hvg,Carr:2017wkz,Coley:2020ykx} or remnants thereof \cite{Rovelli:2018hbk,Rovelli:2018hba,Barrau:2021spy}. The important novelty of this work is in realising that black holes, due to their gravitational attraction and intrinsic non-deformability \cite{LeTiec:2020spy,Chia:2020yla,Charalambous:2021mea}, can be treated collectively as a non-perfect fluid with shear viscosity. Therefore, under certain approximations where the hydrodynamical approximation is valid, dissipative effects form, which tend to isotropise the cosmology.
\paragraph*{Outline} We shall begin in Sec.~\ref{sec:review} by reviewing the concepts of stress, shear, viscosity, and their phenomenological implications for anisotropic cosmologies, with an emphasis on the models that are later studied in this paper. We then demonstrate in Sec.~\ref{sec:viscomani} some microphysical examples of shear viscosity: the case of an interacting scalar field theory at finite temperature and a gravitationally interacting gas of black holes, both in its dilute and dense limit. We study the effect of the viscosity coefficients derived in these scenarios on the small and the large anisotropy limits of the background universe in Sec.~\ref{sec:evo}. We briefly comment on the implications for gravitational waves in Sec.~\ref{sec:GWs}, and finally in Sec.~\ref{sec:conclusions}, we present our conclusions.
\paragraph*{Notation} Throughout this paper, we use the mostly plus metric signature $(-,+,+,+)$. Latin indices at the beginning of the alphabet run over spacetime coordinates ($a,b,c,d,\ldots\in\{0,\ldots,3\}$), while Latin indices from roughly the third of the alphabet run over spatial coordinates only ($i,j,k,\ldots\in\{1,2,3\}$). We also work with units where the speed of light, the reduced Planck constant, and the Boltzmann constant are set to unity ($c=\hbar=k_\mathrm{B}=1$), and $M_\mathrm{Pl}^2:=1/(8\pi G_\mathrm{N})$ defines the reduced Planck mass in terms of the Newtonian constant of gravitation $G_\mathrm{N}$.
\section{Review of stress, shear and viscosity}\label{sec:review}
\subsection{The definition of the shear and stress-energy tensors and the meaning of viscosity}\label{subsec:microscopic_def}
Spatially homogeneous, anisotropic models can be investigated using the orthonormal frame formalism from dynamical systems analysis (see, e.g., \cite{Ehlers:1993gf,ellis_maartens_maccallum_2012}). The geometry is split into a fluid moving orthogonally to the homogeneous spatial hypersurface, with the timelike fluid 4-velocity $u^a$ being equal to the unit normal vector of the spatial hypersurface, hence $g_{ab}u^au^b=-1$. In the spirit of the $3+1$ decomposition of the spacetime manifold, the fluid velocity vector that defines the foliation can be used to find a projection tensor,
\begin{equation}
h_{ab}=g_{ab}+u_au_b\,,
\end{equation}
which represents the induced metric on the spatial hypersurface. The corresponding extrinsic curvature of the spatial hypersurface is then given by
\begin{equation}
K_{ab}=h_a{}^c h_b{}^d\nabla_du_c=:\mathrm{D}_bu_a\,,
\end{equation}
where the last equality defines the spatial covariant derivative, i.e., the spacetime covariant derivative projected on the spatial hypersurface.
With simple tensorial algebra, the above can be used to show that the extrinsic curvature tensor can also be written as
\begin{equation}
K_{ab}=\nabla_bu_a+u_bu^c\nabla_cu_a=\nabla_bu_a+u_b\dot u_a\,,
\end{equation}
where the time derivative of the fluid velocity $\dot u_a:=u^c\nabla_cu_a$ defines the acceleration of the fluid.
The extrinsic curvature tensor can be decomposed into an expansion tensor and a vorticity tensor as $K_{ab}=\Theta_{ab}+\omega_{ab}$, which are respectively symmetric ($\Theta_{ab}=K_{(ab)}$) and anti-symmetric ($\omega_{ab}=K_{[ab]}$). We assume throughout that the spacetime has no vorticity, so we set $\omega_{ab}\equiv 0$.
The expansion tensor can be further decomposed as
\begin{equation}\label{eq:defsheartensot}
\Theta_{ab}=\frac{1}{3}\Theta h_{ab}+\sigma_{ab}\,,
\end{equation}
where $\Theta:=g^{ab}\Theta_{ab}=\mathrm{D}_au^a=\nabla_au^a$ is the trace part known as the expansion scalar, while the traceless part defines the shear tensor $\sigma_{ab}$ (so $g^{ab}\sigma_{ab}=0$). Gathering the above, the shear tensor can be written fully in terms of the fluid velocity as
\begin{equation}
\sigma_{ab}=\mathrm{D}_{(b}u_{a)}-\frac{1}{3}h_{ab}\Theta\label{eq:sigmaabeq1}
\end{equation}
or alternatively as
$\sigma_{ab}=\nabla_{b}u_{a}+u_{b}\dot u_{a}-h_{ab}\Theta/3$
under the no-vorticity assumption. Although the shear tensor is traceless, a useful scalar characterisation of the shear is defined as $\sigma^2:=\sigma_{ab}\sigma^{ab}/2$, and this is used throughout this work.
The symmetric energy-momentum (or stress-energy) tensor for a generic fluid can be written as
\begin{equation}
T_{ab}=\rho u_au_b+ph_{ab}+2q_{(a}u_{b)}+\pi_{ab}\,,
\end{equation}
where $\rho=u^au^bT_{ab}$ is the energy density, $p=h^{ab}T_{ab}/3$ is the pressure, $\pi_{ab}=h_{(a}{}^ch_{b)}{}^dT_{cd}-ph_{ab}$ is the anisotropic stress tensor (the components are also known as the anisotropic pressures), and $q_a=-h_a{}^bu^cT_{bc}$ is the heat conduction vector measured by an observer comoving with the fluid. For the purposes of this work, we ignore heat transfer ($q_a\equiv 0$).
The extra term for a non-perfect fluid represented by the anisotropic stress, which will soon be related to shear viscosity, has to satisfy the
constraints $\pi_{ab}=\pi_{ba}$, $g^{ab}\pi_{ab}=0$, and $u^a\pi_{ab}=0$,
by virtue of being the projected (symmetric) traceless part of the energy-momentum tensor.
Upon introducing a dissipative term in the energy-momentum tensor such as $\pi_{ab}$, one has to be aware that the fluid may deviate from its thermodynamic equilibrium, and the relaxation time $\tau$ to the equilibrium state (a.k.a.~the collision time or Maxwell time) may generally be non-zero. In such a case, there is no closed analytic expression for these viscous anisotropic pressures. Instead, they are defined via a differential equation \cite{Israel:1976tn,Belinski:2017fas},
\begin{equation}\label{eq:equation_shear_stress}
\pi_{ab}+\tau h_a{}^ch_b{}^d\dot\pi_{cd}=-2\eta\sigma_{ab}\,,
\end{equation}
where $\eta$ is known as the viscosity coefficient.
Then, the entropy density, considering only shear viscous terms and setting the bulk modulus to zero, can be expanded as \cite{Belinskii:1979} (see also \cite{ellis_maartens_maccallum_2012})
\begin{equation}
s=s_0+\frac{\pi_{ab}\pi^{ab}}{2\eta T}\tau+\mathcal{O}(\tau^2)=s_0+\frac{4\eta\sigma^2}{T}\tau+\mathcal{O}(\tau^2)\,,
\end{equation}
where $s_0$ represents the entropy density at equilibrium and $T$ is the fluid equilibrium temperature.
In order for a fluid description of the system to still be valid, we have to assume that the system is fairly close to the equilibrium state. This can thus be quantified by the following approximation,
\begin{equation}\label{eq:mfp_approx}
\tau\ll\frac{2\eta Ts_0}{\pi_{ab}\pi^{ab}}\simeq\frac{T s_0}{4\eta\sigma^2}\,,
\end{equation}
where the approximate equality on the right-hand side precisely holds when $\tau$ is small.
For our purposes, we neglect $\tau$ in comparison to the equilibrium entropy density, and so the differential equation \eqref{eq:equation_shear_stress} collapses to the simpler expression
\begin{equation}
\pi_{ab}=-2\eta\sigma_{ab}\,,\label{eq:defeta1}
\end{equation}
as can be seen in standard textbooks (e.g., \cite{LandauLifshitz,LandauLifshitz2}).
Since $\pi_{ab}$ is the (projected) symmetric traceless part of $T_{ab}$, it is natural for it to be proportional to the other projected symmetric traceless tensor defined above, namely the shear tensor.
This theory with $\tau \to 0$, though standard, may be plagued by a superluminal propagation of shear excitations --- the velocity of this propagation is given by \cite{Belinski:2017fas,LandauLifshitz2}
\begin{equation}
c_\mathrm{s}\sim\sqrt{\frac{\eta}{\rho\tau}}\,.\label{eq:csshear}
\end{equation}
This issue is discussed further below.
Deviations from a perfect fluid are sometimes written as (ignoring heat transfer)
\begin{equation}
T_{ab}=\rho u_au_b+\bar ph_{ab}+\Pi_{ab}\,,
\end{equation}
where $\bar p$ now denotes the perfect fluid pressure or average pressure. The deviation from a perfect fluid can then generally be written as a linear combination of the trace and traceless parts of the expansion tensor as
\begin{equation}
\Pi_{ab}=-2\eta\sigma_{ab}-\zeta\Theta h_{ab}\,,
\end{equation}
which implies that the energy-momentum tensor becomes
\begin{equation}
T_{ab}=\rho u_au_b+(\bar p-\zeta\Theta)h_{ab}-2\eta\sigma_{ab}\,,
\end{equation}
and hence the `total pressure' is $p=\bar p-\zeta\Theta$.
As shown in, e.g., Refs.~\cite{Weinberg:1971mx,Ehlers:1993gf,ellis_maartens_maccallum_2012}, the proportionality coefficients $\eta$ and $\zeta$ have the thermodynamical interpretation of shear viscosity and bulk viscosity, respectively. Bulk viscosity has the effect of modifying the pressure term. In the case of a flat universe, the bulk viscosity term, which is proportional to the volume expansion rate $\Theta$, can be expressed as a non-linear equation of state (EoS) $p=p(\rho)$, with the pressure being a quadratic function of energy density. Quadratic equations of state have been shown to admit non-singular bouncing solutions \cite{Bozza:2009jx,Ananda:2005xp,Ananda:2006gf,Ganguly:2019llh}. Given the relevance of anisotropic stress and shear anisotropies for this work, we are mostly concerned by the shear viscosity entering in \eqref{eq:defeta1}, hence we assume no bulk viscosity throughout ($\zeta\equiv 0$, so the `total pressure' and `perfect fluid pressure' have the same meaning, i.e., $\bar p=p$). The resulting energy-momentum tensor, $T_{ab}=\rho u_au_b+ph_{ab}-2\eta\sigma_{ab}$, is the same as motivated in the previous paragraph.
To gain some intuition about shear viscosity, let us consider a Minkowski background for the time being.
We can do this without loss of generality to derive the viscosity coefficient since it is an intrinsic property of the fluid (just like an EoS).
In other words, by the equivalence principle of general relativity, we are free to consider a locally Minkowski space to derive the properties of the fluid and later apply such properties to a curved spacetime.
Equations \eqref{eq:sigmaabeq1} and \eqref{eq:defeta1} for a Minkowski metric tell us that
\begin{equation}
\pi_{ij}=-2\eta\sigma_{ij}=-2\eta\left(\partial_{(j}u_{i)}-\frac{1}{3}\delta_{ij}\partial_ku^k\right)\,,
\end{equation}
where we specialise to the spatial components.\footnote{In fact, if we consider a frame where the fluid 4-velocity is constant, e.g., $u^a=(1,\vec{0})$, then $\sigma_{0b}=\sigma_{a0}=0$, i.e., the shear tensor is purely spatial. This is to be expected in complete generality since, as is explicit from Eq.~\eqref{eq:sigmaabeq1}, the shear is a tensor that is fully projected onto the spatial hypersurface.}
Let us further consider a simplified setup in $(2+1)$ dimensions, where one has a fluid in between two infinite-dimensional plates. Let the fluid move in the $+x$ direction (in Cartesian coordinates), with a velocity that only depends on the $y$ direction, i.e., $u^i(x,y,z)=(u^x(y),0,0)$. This is the typical setup to derive the heuristic expression for a fluid's viscosity from kinetic theory first principles (see, e.g., \cite{ChapmanCowling,LeBellac,Burshtein}). In this setup, it is clear that the above relation between stress, viscosity and shear reduces to
\begin{equation}
\pi^x{}_y=-\eta\partial_yu^x\,,
\end{equation}
hence the viscosity coefficient is the proportionality factor that relates the net momentum flux through a constant-$y$ surface to the velocity gradient of the fluid. In other words, viscosity is a measure of the rate of momentum diffusion in the fluid.
The mean distance between interactions among the fluid's microscopic constituents is characterised by the mean free path $\ell_\mathrm{mfp}$, and thus, the velocity difference of particles moving through a constant-$y$ surface is proportional to $-\ell_\mathrm{mfp}\partial_yu^x$. This assumes that the mean free path is much smaller than the overall size of the system, i.e., $\ell_\mathrm{mfp}\ll L$, where $L$ is the distance between the two plates in this simplified setup.
The momentum flux is then proportional to multiplying $-\ell_\mathrm{mfp}\partial_yu^x$ by the energy density $\rho$ and the mean propagation speed of the particles or the root-mean-square speed for a given statistical distribution; for the purpose of our work, as an approximation, we will simply associate this speed with the sound speed $c_\mathrm{s}$.
Combining the above, we arrive at the expression
\begin{equation}
\eta=\alpha c_\mathrm{s}\rho\ell_\mathrm{mfp}\sim\frac{c_\mathrm{s}E}{\sigma_\mathrm{cs}}\,,\label{eq:etakin}
\end{equation}
where $\alpha$ is a proportionality constant of $\mathcal{O}(1)$ whose precise value depends on the exact microphysics at play, the statistical distribution, etc.; moreover, it shall encapsulate the uncertainty in our choice of mean velocity.
In the second equality above (up to a proportionality factor of order unity, hence the sign $\sim$), we used the fact that we can write the mean free path as $\ell_\mathrm{mfp}=1/(\beta n\sigma_\mathrm{cs})$ in terms of the number density $n$, related to the energy density by the energy of the individual particles $E$ via $\rho=En$, and the cross sectional area $\sigma_\mathrm{cs}$, which is a measure of the interaction probability among particles. The constant proportionality factor $\beta$ of order unity again depends on the exact statistical distribution.
From the above, we see that the smaller the interaction probability (i.e., the smaller the cross section), the farther a particle travels before interacting with another one (i.e., the larger the mean free path), the easier the momentum transfer, and therefore the larger the viscosity is. However, one needs to be careful since it would appear the limit $\sigma_\mathrm{cs}\to 0$ implies infinite viscosity, when one would rather believe that a fluid with no interactions should be viscous-free. Indeed, the issue with the vanishing cross section limit is that it implies an infinite mean free path, hence the assumption $\ell_\mathrm{mfp}\ll L$ is broken. In cosmology, the size of the system of interest can be associated with the Hubble radius, $L\sim|H|^{-1}$. We shall thus be particularly careful with this assumption throughout this work in order to remain in the regime of validity for the expression \eqref{eq:etakin} to hold. Nevertheless, if $\ell_\mathrm{mfp}\sim L$, it does not mean that viscosity goes away. In fact, the approximation can often be pushed to that limit within order 1 corrections that slightly reduce the viscosity (see, e.g., \cite{ChapmanCowling}). However, when $\ell_\mathrm{mfp}\gg L$, the above expression for viscosity definitely breaks down, and one generally expects $\eta\to 0$ as $\ell_\mathrm{mfp}\to\infty$.
Let us mention that the mean free path and the relaxation time (the average time between collisions) are related by the average velocity: $\ell_\mathrm{mfp}\sim c_\mathrm{s}\tau$. Hence, one can see that \eqref{eq:csshear} and \eqref{eq:etakin} are consistently related. This allows us to re-express the approximation \eqref{eq:mfp_approx} as an upper bound on the mean free path,
\begin{equation}
\ell_\mathrm{mfp}\lesssim\sqrt{\frac{Ts_0}{\rho\sigma^2}}=:\ell_\mathrm{max}\,.\label{eq:deflmax}
\end{equation}
Moreover, one could demand the speed of propagation not to surpass the speed of light, which from \eqref{eq:csshear} amounts to a lower bound on the mean free path. Combining those, and from the discussion of the previous paragraph, we arrive at the following regime of validity:
\begin{equation}
\frac{\eta}{\rho}\lesssim\ell_\mathrm{mfp}\lesssim\mathrm{min}\left\{\ell_\mathrm{max},|H|^{-1}\right\}\,.\label{eq:validitymfp}
\end{equation}
Therefore, given a model for which one can compute the viscosity thanks to Eq.~\eqref{eq:etakin}, the above lower and upper bounds essentially tell us the regime of validity of that expression in terms of the size of the fluid's mean free path. This shall be the basis of our consistency checks throughout this work.
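As a simple illustration, the estimate \eqref{eq:etakin} and the window \eqref{eq:validitymfp} can be bundled into a single consistency check (a minimal sketch in natural units, with the order-one factors $\alpha$ and $\beta$ set to unity; the function name is ours):
\begin{verbatim}
import numpy as np

def viscosity_and_window(cs, rho, ell_mfp, H, T, s0, sigma2):
    """Kinetic-theory viscosity eta = cs * rho * ell_mfp and the
    validity window eta/rho < ell_mfp < min(ell_max, 1/|H|)."""
    eta = cs * rho * ell_mfp
    ell_max = np.sqrt(T * s0 / (rho * sigma2))  # near-equilibrium bound
    lower = eta / rho                   # subluminal shear excitations
    upper = min(ell_max, 1.0 / abs(H))  # smaller of ell_max, Hubble radius
    return eta, lower <= ell_mfp <= upper
\end{verbatim}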
\subsection{The effect of shear viscosity in anisotropic cosmology}\label{sec:viscoanicosmo}
In order to study the effect of shear viscosity of the form \eqref{eq:defeta1} in cosmology, let us write down
the Einstein equations with no cosmological constant, $G_{ab}=M_\mathrm{Pl}^{-2}T_{ab}$, as follows when $q_a=\omega_{ab}=\dot u_a=0$ \cite{ellis_maartens_maccallum_2012},
\begin{subequations}\label{eq:EFE-orthonormal}
\begin{align}
\frac{1}{3}\Theta^2&=\frac{\rho}{M_\mathrm{Pl}^2}-\frac{1}{2}{}^{(3)}\!R+\sigma^2\,,\\
\dot\Theta+\frac{1}{3}\Theta^2&=-\frac{1}{2M_\mathrm{Pl}^2}(\rho+3p)-2\sigma^2\,,\\
\dot\rho+\Theta(\rho+p)&=-\pi^{ab}\sigma_{ab}\,,\\
\dot\sigma_{ab}+\Theta\sigma_{ab}&=\frac{1}{M_\mathrm{Pl}^2}\pi_{ab}-{}^{(3)}\!R_{ab}+\frac{1}{3}{}^{(3)}\!Rh_{ab}\,,
\end{align}
\end{subequations}
where ${}^{(3)}\!R_{ab}$ and ${}^{(3)}\!R$ are, respectively, the 3-dimensional Ricci curvature tensor and scalar on the spatial hypersurface. This system has within it cosmologies containing anisotropies in the expansion, i.e.~different expansion rates in the $3$ different spatial directions, as well as containing anisotropies in the $3$-curvature.
In order to study the effects of anisotropic pressure on an anisotropic universe, let us specialise to a simple flat anisotropic universe. This is known as the Bianchi type-I universe. It represents the case of maximal anisotropy when it is empty, in which case it is called the Kasner solution. It also only has expansion anisotropy, instead of anisotropy in the $3$-curvature as well. The metric can be represented as
\begin{equation}\label{eq:BIMetric}
g_{ab}\dd x^a\dd x^b=-\dd t^2+a(t)^2e^{2\beta_{(i)}(t)}\delta_{ij}\dd x^i\dd x^j\,,
\end{equation}
with the constraint $\sum_{i=1}^3\beta_{(i)}(t)=0$, where $\beta_{(i)}(t)$ denotes the anisotropy in direction $x^i$ and $a(t)$ denotes the spatially averaged scale factor (in the sense that $\ln a=\langle\ln a_{(i)}\rangle$ with $a_{(i)}=ae^{\beta_{(i)}}$). From this, $H(t):=\dot a/a$ defines the spatially averaged Hubble parameter, and the hypersurface geometry is characterized by ${}^{(3)}\!R_{ab}={}^{(3)}\!R=0$, $\Theta=3H$, and
\begin{equation}
\sigma_{ij}=a^2e^{2\beta_{(i)}}\dot\beta_{(i)}\delta_{ij}\,,\qquad\sigma_i{}^j=\dot\beta_{(i)}\delta_i{}^j\,.
\end{equation}
In particular, $\sigma^2=(1/2)\sum_{i=1}^3\dot\beta_{(i)}^2$. The resulting equations of motion (EOMs) are
\begin{subequations}\label{eq:BIall}
\begin{align}
3M_\mathrm{Pl}^2H^2&=\rho+\rho_\sigma\,,\label{eq:BIconstrgen}\\
2M_\mathrm{Pl}^2\dot H&=-(\rho+p)-2\rho_\sigma\,,\label{eq:BIHEOM}\\
\dot\rho+3H(\rho+p)&=-\pi^{ab}\sigma_{ab}\,,\label{eq:BIrhoEOM}\\
\dot\sigma_{ab}+3H\sigma_{ab}&=M_\mathrm{Pl}^{-2}\pi_{ab}\,,\label{eq:BIsigmaEOM}
\end{align}
\end{subequations}
where $\rho_\sigma:=M_\mathrm{Pl}^2\sigma^2$ defines the energy density in shear anisotropies.
In the presence of a perfect fluid, the stress tensor vanishes, and we recover the shear equation\footnote{Note that, while $\dot f=\partial_tf$ for any scalar-valued function $f$, we have $\dot\sigma_{ij}=u^a\nabla_a\sigma_{ij}=\partial_t\sigma_{ij}-2(H+\dot\beta_{(i)})\sigma_{ij}$ for a rank-2 tensor in the above Bianchi type-I spacetime. Also, recall $\sigma_{ab}$ is purely spatial, so in particular $\sigma^2=\sigma_{ab}\sigma^{ab}/2=\sigma_{ij}\sigma^{ij}/2$.}
\begin{equation}
\partial_t\sigma_i{}^j+3H\sigma_i{}^j=0\implies\sigma_i{}^j\propto\frac{1}{a^3}\,,
\end{equation}
and so $\rho_\sigma\propto 1/a^6$, according to which shear anisotropies essentially contribute to the Friedmann equations as a perfect fluid with stiff EoS $p_\sigma=\rho_\sigma$. In particular, one recovers the result that anisotropies typically dominate the energy budget of the universe near cosmological singularities since, as $a\to 0$, $\rho_\sigma\propto 1/a^6$ grows faster than $\rho\propto 1/a^{3(1+w)}$ for a background perfect fluid with EoS $w:=p/\rho\in(-1,1)$. As a result, the spacetime near the singularity is well approximated by the anisotropic Kasner metric, and the approach to the singularity is of BKL type, as mentioned in the Introduction.
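Indeed, taking the ratio of the two scalings makes the domination explicit:
\begin{equation}
\frac{\rho_\sigma}{\rho}\propto\frac{a^{-6}}{a^{-3(1+w)}}=a^{-3(1-w)}\,,
\end{equation}
which diverges as $a\to 0$ for any $w<1$.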
An immediate loophole is if the matter EoS satisfies $w>1$, which is known as an ultra-stiff ekpyrotic EoS, in which case the background energy density dominates as the scale factor goes to small values, hence isotropising the universe such that it becomes well approximated by a Friedmann-Lema\^itre-Robertson-Walker (FLRW) metric.
In the same situation, the general existence of anisotropic stresses acts as a positive source, and the shear equation of motion is modified according to Eq.~\eqref{eq:BIsigmaEOM}.
This causes the energy density in the anisotropies to grow faster than $a^{-6}$, and hence an ekpyrotic fluid can no longer be reliably expected to isotropise the universe. Extending this study to anisotropic cosmologies with anisotropic $3$-curvature as well as expansion anisotropies (for example, Bianchi type IX), one finds that anisotropic stresses, even if they are ultra-stiff on average, fail to isotropise the cosmology. In fact, a bounce fails to occur as the geometry approaches an anisotropic singularity \cite{Barrow:2015wfa}.
Let us now try to gain some intuition about how shear viscosity might change this picture. This was discussed initially in the context of neutrino viscosity and its effects on isotropisation \cite{Misner:1967uu}. The discussion was extended to derive a possible phenomenological form of such a shear viscous term in \cite{Belinski:2017fas}. There, one postulates a shear viscosity coefficient in an anisotropic stress of the form \eqref{eq:defeta1} that scales as a power law of the energy density, $\eta\propto\rho^{1/2}$; this particular power allows for isotropisation and an attractor behaviour towards a Friedmann singularity \cite{Belinski:2017fas}. This form of the viscous anisotropic stress, though, allows for the propagation of super-luminal excitations, which is the price to pay for the viscous stresses having a closed form.
If the shear viscosity enters the stress tensor as in Eq.~\eqref{eq:defeta1}, then the matter conservation equation and the shear EOM are generally modified as follows:
\begin{subequations}\label{eq:mattershearEOMs}
\begin{align}
\dot\rho+3H(\rho+p)&=4\eta\sigma^2\,;\\
\partial_t\sigma_i{}^j+3H\sigma_i{}^j&=-2M_\mathrm{Pl}^{-2}\eta\sigma_i{}^j\,.\label{eq:sigmaijBI}
\end{align}
\end{subequations}
Together with Eqs.~\eqref{eq:BIconstrgen}--\eqref{eq:BIHEOM}, those are typically coupled, first-order ordinary differential equations (not necessarily linear), for which analytic solutions can be found only in special cases.
Moreover, the viscosity coefficient is in general time dependent (i.e., it may depend on background quantities such as $a$, $\rho$, $H$, etc.).
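When no closed-form solution is available, the system is straightforward to integrate numerically. The following is a minimal Python sketch (in units $M_\mathrm{Pl}=1$, for a radiation background, and with the viscosity law \texttt{eta} left as a placeholder for any of the examples below):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

W = 1.0 / 3.0  # background equation of state (radiation)

def eta(a, rho, H):
    # placeholder viscosity law; here a constant (kappa in the text)
    return 0.1

def rhs(t, y, sign=-1.0):
    # y = (ln a, rho, sigma^2); sign = -1 picks a contracting branch
    lna, rho, sigma2 = y
    H = sign * np.sqrt((rho + sigma2) / 3.0)  # Friedmann constraint
    e = eta(np.exp(lna), rho, H)
    drho = -3.0 * H * (1.0 + W) * rho + 4.0 * e * sigma2
    dsigma2 = -(6.0 * H + 4.0 * e) * sigma2
    return [H, drho, dsigma2]

# integrate towards (but not into) the crunch
sol = solve_ivp(rhs, (0.0, 0.8), [0.0, 1.0, 1e-4], rtol=1e-8)
\end{verbatim}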
Let us first consider the simplest case where it is simply a constant, i.e., $\eta=\mathrm{constant}=:\kappa$, with mass dimension $3$. We use a different variable $\kappa$ here to denote the constant viscosity since we will use such a positive, dimensionful\footnote{The dimensionality of $\kappa$ depends on the expression; it may not always be the same.} constant of proportionality for the viscosity coefficient throughout, i.e., it will serve as a reference scale in the time-dependent examples below.
Accordingly, the shear EOM becomes
\begin{equation}
\partial_t\sigma_i{}^j+3\left(\frac{\dot a}{a}\right)\sigma_i{}^j+2\left(\frac{\kappa}{M_\mathrm{Pl}^2}\right)\sigma_i{}^j=0\,,
\end{equation}
whose general solution can be written in the form
\begin{equation}
\rho_\sigma\propto\frac{1}{a^6}\exp\left(-\frac{4\kappa t}{M_\mathrm{Pl}^2}\right)\,,\label{eq:rhosigmaconstanteta}
\end{equation}
as was already found by Misner \cite{Misner:1967zz}.
We notice that the $1/a^6$ behaviour is modified due to the constant viscosity coefficient $\kappa$ by an exponential factor in time.
Applying this to our physical considerations of interest, we note that the BKL approach to a singularity\footnote{Note that the BKL singularity is related to a singularity in the Weyl tensor which is directly related to $\sigma_{ij}$.} is probably not affected too much since, as $a\to 0$ and $t\to 0$, one gets very close to the situation where $\rho_\sigma\sim 1/a^6\to\infty$. Nevertheless, the exact growth rate of the shear anisotropies is modified in the approach to a singularity, but its exact value can only be recovered provided a solution for $a(t)$ is also found, which requires additional input.
While a constant viscosity coefficient may not isotropise a singularity, it might still dilute anisotropies over an intermediate timescale thanks to the above exponential suppression in $\rho_\sigma$. Such an example will be explored in greater detail in the subsequent section.
As a second example, let us explore the possibility that $\eta=\kappa/a^3$, which we will motivate in the next section. In fact, this will appear as a possible scaling of the viscosity coefficient in the context of an interacting scalar field theory in a radiation bath. In such a context, the shear EOM can be rewritten as
\begin{equation}
H\left(a(\sigma_i{}^{j})'+3\sigma_i{}^j\right)+2\left(\frac{\kappa}{M_\mathrm{Pl}^2a^3}\right)\sigma_i{}^j=0\,,
\end{equation}
where a prime here denotes a derivative with respect to $a$. Assuming the background to be radiation dominated, one has $a(t)=\sqrt{t/t_0}$, and so $H(a)=1/(2t_0a^2)$ is positive for $t_0>0$ (expansion) and negative for $t_0<0$ (contraction). As a result, one can solve the above differential equation, and the evolution of shear anisotropies is modified as
\begin{equation}
\rho_\sigma\propto\frac{1}{a^6}\exp\left(\frac{8\kappa t_0}{M_\mathrm{Pl}^2a}\right)\,.\label{eq:shearsoletaam3}
\end{equation}
Interestingly, if $t_0<0$ (contraction), one finds that $\rho_\sigma\to 0$ as $a\to 0$, and so it appears that anisotropies have been fully washed out by the time of a big crunch. Alternatively, if $t_0>0$ (expansion), one finds that $\rho_\sigma$ badly blows up in the backward approach to the big bang, exponentially more severely than in the BKL case. Equivalently, it means the anisotropies very quickly decay under forward time evolution in an expanding universe. However, in both instances (contraction and expansion), the meaning of viscosity near the singularity might be lost, as will be discussed in greater detail in the next section.
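These closed-form solutions are easy to verify symbolically. For instance, the solution underlying Eq.~\eqref{eq:shearsoletaam3} follows from a quick \texttt{sympy} check (a sketch; the variable names are ours):
\begin{verbatim}
import sympy as sp

a = sp.symbols('a', positive=True)
kappa, M, t0 = sp.symbols('kappa M t_0')
sigma = sp.Function('sigma')

# H(a) = 1/(2 t0 a^2) for a(t) = sqrt(t/t0); shear EOM of the text
ode = sp.Eq((a * sigma(a).diff(a) + 3 * sigma(a)) / (2 * t0 * a**2)
            + 2 * kappa / (M**2 * a**3) * sigma(a), 0)
print(sp.dsolve(ode, sigma(a)))
# -> sigma(a) ~ C1 * a**-3 * exp(4*kappa*t0/(M**2*a)),
#    hence rho_sigma ~ a**-6 * exp(8*kappa*t0/(M**2*a))
\end{verbatim}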
As a last example for this section, let us consider the possibility that $\eta=\kappa|H|$ (in this case, $\kappa$ has mass dimension $2$), which will be further motivated in the next section.\footnote{We note that this corresponds to the case $\eta\sim\sqrt{\rho}$ in a regime where $\rho\gg\rho_\sigma$ according to Eq.~\eqref{eq:BIconstrgen}. This is the scaling that was noticed to lead to perfect isotropisation \cite{Belinski:2017fas,Ganguly:2020daq}.} For simplicity, let us rewrite this expression as $\eta=\varepsilon\kappa H$ with $\varepsilon=+1$ for $H>0$ (expansion) and $\varepsilon=-1$ for $H<0$ (contraction). The shear EOM in this case reduces to
\begin{equation}
\partial_t\sigma_i{}^j+\left(3+2\varepsilon\frac{\kappa}{M_\mathrm{Pl}^2}\right)H\sigma_i{}^j=0\,,
\end{equation}
whose solution is immediately read off to be
\begin{equation}
\rho_\sigma\propto\frac{1}{a^{6+4\varepsilon\kappa/M_\mathrm{Pl}^2}}\,.\label{eq:rhosigmaDBHG}
\end{equation}
Interestingly, the growth rate of the shear anisotropies is modified in such a case by adding a correction to the power of the $1/a^6$ scaling; in fact, one can write it as $\rho_\sigma\propto 1/a^{6+\delta}$ with $\delta=4\varepsilon\kappa/M_\mathrm{Pl}^2$. One then notices that for $\varepsilon=-1$ (contraction), the shear anisotropies grow less fast than the typical behaviour in the approach to a big crunch\footnote{In fact, one even finds that $\rho_\sigma\to 0$ as $a\to 0$ if $\kappa>3M_\mathrm{Pl}^2/2$, meaning that anisotropies would be completely damped out by the time of the crunch in such a case.}, while for $\varepsilon=+1$ (expansion), they grow faster in the (backward) approach to the big bang; they also correspondingly decay faster under forward time evolution out of the big bang.
This analysis can be extended to cases where there is both expansion and curvature anisotropy. One such example is the Bianchi type-IX universe, which is taken to be the generic approach to the singularity according to the BKL analysis \cite{Belinsky:1970ew}. Due to the presence of the anisotropic $3$-curvature, this cosmology on contraction shows infinite chaotic mixmaster oscillations on a finite time interval (when the lower limit of that time interval is $0$), which is an attractor behaviour. A successful isotropisation mechanism should be able to remove this attractor behaviour in the form of chaotic oscillations. In the absence of anisotropic pressures, numerical studies by \cite{Garfinkle:2008ei}, among others, show that ekpyrosis is successful in doing this. In a separate work \cite{Ganguly:2019llh}, a viscosity coefficient of the form $\eta=\kappa\rho^{1/2}$, for some constant $\kappa$ of mass dimension $1$, was shown to successfully isotropise a Bianchi type-IX universe as well as mitigate the mixmaster chaos.
\section{Some examples of microphysical manifestations of viscosity}\label{sec:viscomani}
\subsection{Interacting scalar field theory at finite temperature}\label{subsec:finite-temp-intro}
Let us consider a finite-temperature scalar field theory with action of the form
\begin{equation}
\label{eq:scalar}
S=\int\dd^4x\,\sqrt{-g}\left(\frac{M_\mathrm{Pl}^2}{2}R-\frac{1}{2}g^{ab}\nabla_a\phi\nabla_b\phi-V(\phi)\right)\,,
\end{equation}
with potential
\begin{equation}\label{eq:potential_finiteTemp}
V(\phi)=\frac{1}{2}m^2\phi^2+\frac{\lambda}{4!}\phi^4\,,
\end{equation}
where $m>0$ and $0<\lambda\ll 1$ are the mass and the self-interaction coupling constant, respectively.
The physical mass of the field is well approximated by $m(1+\mathcal{O}(\lambda))\simeq m$ at weak coupling (after renormalization, up to radiative corrections and at zero temperature).
Denoting $\mu:=m/M_\mathrm{Pl}$, one could imagine having the following hierarchy of scales: $0<\mu\ll\mu/\lambda\ll\lambda\ll 1$. This allows one to distinguish various regimes where the system behaves very differently as a function of the temperature $T$ of the thermal bath \cite{Jeon:1995zm}.
Let us emphasize two such regimes (schematically summarised in the sketch after the list):
\begin{itemize}
\item When $0<T/M_\mathrm{Pl}\ll\mu$, the system is effectively composed of a non-relativistic, dust-like scalar field. Indeed, the potential is dominated by the zero-temperature mass, i.e., $V(\phi)\simeq m^2\phi^2/2$. In an FLRW background, as long as $|H|\ll m$ (or $|H|/M_\mathrm{Pl}\ll\mu$ in dimensionless units), the field is coherently oscillating with vanishing time-averaged effective pressure, i.e., the EoS is that of dust. If one explores the limit of the universe getting smaller, the Hubble scale and the temperature of the thermal bath both grow as $|H|\sim\rho^{1/2}\sim a^{-3/2}$ and $T\propto a^{-1}$, so radiation with $\rho\propto a^{-4}$ will quickly become dominant. Nevertheless, the mass term in the Lagrangian remains important in intermediate regimes within $\mu\lesssim T/M_\mathrm{Pl}\lesssim\mu/\lambda$.
\item When $T/M_\mathrm{Pl}\gg\mu/\lambda$, the mass term becomes negligible, so the potential is dominated by the interaction term, i.e.~the $\lambda\phi^4$ term. In this regime, the EoS is that of radiation, $p=\rho/3$, with energy density growing as $\rho\propto T^4\propto a^{-4}$. What is crucial is that in this regime the interactions imply a scattering cross section already at the level of the $2\to 2$ tree diagram, and consequently, the fluid should have shear viscosity. We will expand on this below.
\end{itemize}
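Schematically, this classification reads as follows (a rough sketch with order-of-magnitude boundaries only; the function name is ours, and $T$ and $\mu$ are in units of $M_\mathrm{Pl}$):
\begin{verbatim}
def regime(T, mu, lam):
    """Rough temperature regimes of the lambda*phi^4 field."""
    if T < mu:
        return "dust-like: mass dominated, coherent oscillations"
    if T > mu / lam:
        return "radiation-like: interaction dominated (viscous)"
    return "intermediate: mass term still important"
\end{verbatim}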
In an anisotropic background, the anisotropies would quickly begin to dominate in the approach to a singularity for such a scalar field model, ignoring viscous effects. This is most simply seen in the case of Bianchi I, which contains only expansion anisotropies and in which the energy density in the anisotropies grows as $a^{-6}$.
However, the thermal bath would remain, and thanks to the rising temperature, the regime $T/M_\mathrm{Pl}\gg\mu/\lambda$ would be reached. From then on, the presence of the interaction term implies the appearance of viscosity, which may alter the evolution of anisotropies even if those are \textit{a priori} dominant and large. How efficient viscosity may be at damping the anisotropies will be addressed later.
Let us now consider the scalar field above dominated by its self-interaction potential of the form $\lambda\phi^4$ in a thermal bath with temperature $T$. (This follows the example discussed in Ref.~\cite{Son:2007vk}).
Then, this scalar field in a thermal bath behaves like radiation with background energy density, number density, and temperature scaling as $\rho\propto a^{-4}$, $n\propto a^{-3}$, and $T\propto a^{-1}$, respectively; in particular, $n\propto T^3$. Additionally, QFT at finite temperature gives a cross section\footnote{Some intuition for this goes as follows \cite{Jeon:1994if}: the typical cross section of a $\lambda\phi^4$ theory goes as $\sigma_\mathrm{cs}\sim\lambda^2/s$, where $s$ is the square of the center-of-mass energy here. In the limit of interest, in particular for $T\gg m$, one can argue that the only relevant energy scale is the temperature, hence $s\sim T^2$ and $\sigma_\mathrm{cs}\sim\lambda^2/T^2$.} $\sigma_\mathrm{cs}\sim\lambda^2/T^2$. Putting everything together, the mean free path is
\begin{equation}\label{eq:mfp_def}
\ell_\mathrm{mfp}\sim\frac{1}{\lambda^2T}\,.
\end{equation}
As viscosity must be measured on scales much larger than the mean free path, it follows that one cannot take the decoupling limit, $\lambda\rightarrow 0$, at which the mean free path goes to infinity.
We recall that one should demand $\ell_\mathrm{mfp}\lesssim|H|^{-1}$ on cosmological scales. If we are in a radiation-dominated background, we have $M_\mathrm{Pl}^2H^2\sim\rho\sim a^{-4}\sim T^4$, and this would mean $T/M_\mathrm{Pl}\lesssim\lambda^2$. We would thus have to be in the regime $\mu/\lambda\ll T/M_\mathrm{Pl}\lesssim\lambda^2\ll\lambda\ll 1$, so one cannot take $\lambda$ too small for the regime to exist in the first place.
Of course, once viscosity is taken into account in the Einstein equations, the background evolution is expected to be modified, and one has to find the proper regime of validity then.
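To make the allowed window concrete, these inequalities can be evaluated numerically (a rough sketch in reduced Planck units, dropping all order-one factors; the function name is ours):
\begin{verbatim}
def finite_T_window(lam, mu):
    """Window of T (in units of M_Pl) where the lambda*phi^4 fluid
    is interaction dominated (T > mu/lam) and its mean free path
    1/(lam^2 T) fits inside the Hubble radius M_Pl/T^2 (T < lam^2),
    assuming radiation domination."""
    T_min, T_max = mu / lam, lam**2
    return (T_min, T_max) if T_min < T_max else None

print(finite_T_window(lam=1e-2, mu=1e-8))  # -> (1e-06, 0.0001)
\end{verbatim}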
Another aspect that must be considered for the above to hold is thermalization. Indeed, the finite-temperature interaction cross section only holds if the scalar field is in thermal equilibrium with the thermal bath, which is the case as long as the interaction rate $\Gamma=n\sigma_\mathrm{cs}\langle v\rangle$ is greater than the Hubble rate $|H|$. In the high-temperature relativistic limit discussed above, the field is relativistic with unit average velocity $\langle v\rangle$, hence the interaction rate is simply equal to the inverse mean free path, $\Gamma=1/\ell_\mathrm{mfp}$. Correspondingly, the requirement for thermal equilibrium is the same as the one for the kinetic theory viscosity derivation to hold, i.e., $\ell_\mathrm{mfp}<|H|^{-1}$, which we discussed above and which we will check explicitly when solving the full set of equations in the next section. Certainly, since $\Gamma\sim T$ and $|H|\sim T^2$ in a radiation-dominated universe, one does not expect thermal equilibrium to hold up to arbitrarily high energy scales.\footnote{Other effects, however, might come into play and improve thermalization. For example, \cite{1972JETP...34.1159Z} shows that thermalization is stable to particle production as long as the universe isotropises before the minima of contraction.}
A caveat to keep in mind, however, is that this only applies as a toy model, where the scalar field $\phi$ does not couple to any other fields. In a more realistic context, the physics becomes more complicated regarding thermalization. Indeed, a gauge singlet scalar field can couple to other degrees of freedom, such as the standard-model Higgs, fermions, etc. If so, as $\phi$ acquires a large vacuum expectation value (VEV), the standard-model fields would obtain large, VEV-dependent masses, which would suppress the amplitude of the scattering processes, hence delaying chemical equilibrium and thermalization. Such discussions can be found in supersymmetry (e.g., \cite{Allahverdi:2005mz} and references therein). In our context, this implies that further analysis is certainly needed.
Since the scalar field in \eqref{eq:scalar} has a canonical kinetic term, its sound speed is unity, and correspondingly, the viscosity coefficient can be evaluated according to \eqref{eq:etakin} as
\begin{equation}
\eta\sim\frac{T^3}{\lambda^2}\,.\label{eq:etafiniteT}
\end{equation}
The exact coefficient of proportionality is difficult to estimate, but to leading order in small $\lambda$ and small $m/T$, it is expected to be in the $\mathcal{O}(1-10^3)$ regime (see, e.g., \cite{Jeon:1994if,Jeon:1995zm,Kapusta:2006pm}).
With this expression in hand, one can then solve for the Einstein equations to determine how the shear viscosity arising from the self-interacting scalar field affects the evolution of the shear anisotropies. Equation \eqref{eq:etafiniteT} suggests $\eta\propto a^{-3}$, which as we saw in Sec.~\ref{sec:viscoanicosmo}, yields the solution \eqref{eq:shearsoletaam3} upon assuming a radiation-dominated background, according to which the universe isotropises to the future. In the next section, we will solve the equations in more generality by means of numerical methods. This will also allow us to comment more specifically on the regime of validity \eqref{eq:validitymfp} over which Eq.~\eqref{eq:etafiniteT} applies.
Since the entropy density goes as $s\propto a^{-3}\propto T^3$ for a radiation bath, we notice that \eqref{eq:etafiniteT} implies
\begin{equation}
\frac{\eta}{s}\sim\frac{1}{\lambda^2}=\mathrm{constant}\,.
\end{equation}
For $\lambda$ at least $\lesssim 1$, this means there is a constant lower bound on the ratio of the viscosity coefficient over the entropy density, $\eta/s\gtrsim 1$. This is reminiscent of the viscosity bound conjecture (e.g., \cite{Policastro:2001yc,Kovtun:2004de,Son:2007vk}), claiming $\eta/s\geq 1/(4\pi)$, which comes from anti-de Sitter black hole solutions in various gravitational theories that are holographically dual to strongly interacting QFTs (non-perfect fluids with shear viscosity). It is thus an interesting observation that many theories suggest a lower bound on the viscosity coefficient, further motivating the investigation of this work.
\subsection{Dilute gas of black holes}
In a contracting universe that is not perfectly homogeneous, perturbative inhomogeneities grow under contraction in a similar manner to anisotropies. This growth of inhomogeneities could lead us to an endpoint where a pre-bounce early universe could be populated by a gas of black holes. It was shown in \cite{Quintin:2016qro,Chen:2016kjx} that a perfect fluid with quantum vacuum initial conditions in the asymptotic past or thermal initial conditions at a finite time inevitably ends up collapsing into Hubble-size black holes at a scale that is determined by the smallness of the fluid's sound speed. If structures already exist when the universe starts contracting (such as in a cyclic context), smaller black holes form first. Initially, these black holes are dilute --- this has been modelled as each black hole being situated on a lattice in \cite{Clifton:2017hvg,Coley:2020ykx}. As the universe contracts, these black holes become denser with a contracting Hubble radius. There is then a dense limit where the Schwarzschild radius $R$ is of the size of the Hubble radius, $R\sim|H|^{-1}$. This case will be dealt with in the next subsection.
In this subsection, we shall be interested in the possible dilute case where $R\ll |H|^{-1}$, far in the contracting phase when the universe is still very large, i.e., very far away from the putative ultimate crunching singularity. We shall model a dilute gas of black holes as a set of hard balls of radius $R$ that nevertheless attract one another gravitationally.
To that level of approximation, these could in fact just be any astrophysical objects (small elliptical galaxies, stars, etc.), which might as well populate the universe in this scenario.
Interactions
between these hard spheres would give rise to a viscous drag, very similar to that derived in kinetic theory.
Although the black holes in the dilute gas would have to coalesce into larger black holes to ultimately enter the dense limit $R \sim |H|^{-1}$, the effects of black holes (or other astrophysical objects) coalescing are not taken into account in this dilute-limit analysis.
A word of caution is thus in order: we do not know exactly when the perturbations become too large for the approximations to be trusted. While the approximations might hold initially, it is unclear how long they last, and this has to be kept in mind when drawing conclusions.
We start by noting that the cross section for two black holes as described above to interact is given by (see, e.g., \cite{Loeb:2020lwa})
\begin{equation}
\sigma_\mathrm{cs}\sim\left(\frac{R}{c_\mathrm{s}^2}\right)^2\,,
\end{equation}
where the sound speed $c_\mathrm{s}$ of the gas represents the average velocity of the distribution\footnote{One would generally expect a distribution of masses/radii and velocities for the gas of black holes. Here we are thus referring to $R$ and $c_\mathrm{s}$ as the mean radius and velocity, respectively. We are not making any assumption about the distribution since too many factors come into play in the formation of such a gas of black holes. Beyond idealised analytical estimates as in \cite{Quintin:2016qro,Chen:2016kjx}, this would potentially require numerical simulations, which would nevertheless be very dependent on the chosen initial conditions. Therefore, we remain agnostic about exact values for $R$ and $c_\mathrm{s}$ and treat them as free parameters.} of black holes.
We note that in the relativistic limit where $c_\mathrm{s}\to 1$ the expression for the cross section reduces to that for non-interacting hard spheres, $\sigma_\mathrm{cs}\sim R^2$, as expected. Alternatively, in the pressureless limit $c_\mathrm{s}\to 0$, the cross section tends to infinity. This is understood from the fact that if all the black holes in the gas were perfectly static (say at some initial time), they would inevitably merge in some finite time due to the infinite-range gravitational attraction between them, hence a collision probability of unity.
Making use of the relation between the Schwarzschild radius and mass (upon specialising our attention to black holes), $R\sim M/M_\mathrm{Pl}^2$, the number density of the gas is related to its energy density via $n\sim\rho/(RM_\mathrm{Pl}^2)$, from which we can read the mean free path $\ell_\mathrm{mfp}\sim(n\sigma_\mathrm{cs})^{-1}$ as
\begin{equation}
\ell_\mathrm{mfp}\sim\frac{c_\mathrm{s}^4M_\mathrm{Pl}^2}{\rho R}\,.\label{eq:mfpdiluteBHG}
\end{equation}
The viscosity can then be evaluated as
\begin{equation}
\eta\sim\frac{c_\mathrm{s}^5M_\mathrm{Pl}^2}{R}\,,\label{eq:etafindBHG}
\end{equation}
which is just a constant since in the limit $R\ll|H|^{-1}$ one does not expect the Schwarzschild radius to be affected much by the cosmological background under the present approximations.
Let us comment on the regime of validity of the above expression for viscosity, recalling \eqref{eq:validitymfp}. The inequality $\eta/\rho\lesssim\ell_\mathrm{mfp}$, which followed from demanding a sub-luminal propagation speed of shear viscosity excitations, is satisfied provided $c_\mathrm{s}\lesssim 1$. This is not a surprise as we expect the sound speed of the dilute gas of black holes to precisely be the propagation speed of viscosity excitations, and this sound speed is certainly expected to be subluminal.
The inequality on the right-hand side of \eqref{eq:validitymfp} is less trivial though. To tackle it, let us first observe that, for a gas of black holes to form at all, one generally has to be in a background that is relatively close to isotropy. This can certainly be envisioned in the context of a cyclic universe, where a prior expanding phase can efficiently isotropise the universe.
Thus, we can assume here that shear is initially subdominant, or at most of the order of the fluid's energy density, i.e., $\sigma^2\lesssim\rho/M_\mathrm{Pl}^2\sim H^2$. How the shear subsequently evolves, given the viscosity \eqref{eq:etafindBHG}, will be addressed in the following section, but we already saw that a constant viscosity coefficient can lead to temporary exponential suppression of the shear [recall \eqref{eq:rhosigmaconstanteta}]. Under the assumption that shear is subdominant, the mean free path \eqref{eq:mfpdiluteBHG} can be written as $\ell_\mathrm{mfp}\sim c_\mathrm{s}^4/(RH^2)$. The requirement that this is smaller than the Hubble radius thus reads
\begin{equation}
c_\mathrm{s}^4\lesssim R|H|\,,\label{eq:smallcsconstr}
\end{equation}
where the right-hand side is expected to be much smaller than unity since we are considering $R\ll|H|^{-1}$. Therefore, Eq.~\eqref{eq:etafindBHG} for viscosity is expected to apply only if the sound speed is very small. While $c_\mathrm{s}$ remains at the level of a free parameter given the uncertainties stipulated earlier, one certainly does not expect black holes to have large peculiar velocities upon formation from gravitational collapse, so the above inequality does not appear unreasonable.
The other requirement from the right-hand side inequality of \eqref{eq:validitymfp}, which comes from the small Maxwell time assumption, is generally found to be less stringent than \eqref{eq:smallcsconstr} as long as the shear remains subdominant. To see this, let us express the temperature of the dilute black hole gas assuming a Maxwell-Boltzmann distribution of velocities, such that $T\sim c_\mathrm{s}^2 M$. The entropy density can also be read from the sum of the black holes' individual entropies, $s\sim n(RM_\mathrm{Pl})^2\sim\rho R$, which dominates over the `ideal gas' entropy in this context. Putting those together, we arrive at $\sqrt{Ts/(\rho\sigma^2)}\sim c_\mathrm{s}RM_\mathrm{Pl}/\sigma$, and thus the mean free path \eqref{eq:mfpdiluteBHG} is smaller than $\ell_\mathrm{max}$ as long as $c_\mathrm{s}^3\lesssim R^2\rho/(M_\mathrm{Pl}\sigma)$. If $\sigma^2\ll\rho$, this is not a very severe constraint. Even if the shear is of the order of the energy density, the constraint reduces to $c_\mathrm{s}^3\lesssim R^2|H|M_\mathrm{Pl}$, which is generally no more restrictive than \eqref{eq:smallcsconstr} since we expect to be in a deeply sub-Planckian cosmological regime ($|H|\ll M_\mathrm{Pl}$).
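As a rough numerical illustration of these validity checks, consider the following minimal sketch (in Planck units, with all order-unity factors dropped; the sample values of $H$, $R$, and $c_\mathrm{s}$ are purely illustrative assumptions, not derived quantities):
\begin{verbatim}
import numpy as np

# Order-of-magnitude check of the dilute-black-hole-gas viscosity and its
# regime of validity (Planck units, M_Pl = 1, all O(1) factors dropped).
# The sample values of H, R and c_s are illustrative assumptions only.
H   = -1e-50              # Hubble rate deep in the contracting phase
R   = 1e-10 / abs(H)      # Schwarzschild radius, well inside Hubble radius
c_s = 1e-4                # mean velocity of the black-hole distribution
rho = 3 * H**2            # shear-subdominant background: rho ~ 3 M_Pl^2 H^2

ell_mfp = c_s**4 / (rho * R)   # mean free path, cf. Eq. (eq:mfpdiluteBHG)
eta     = c_s**5 / R           # viscosity, cf. Eq. (eq:etafindBHG)

# Left inequality of the validity window: eta/rho <= ell_mfp  <=>  c_s <= 1
print("eta/rho <= ell_mfp:", eta / rho <= ell_mfp)
# Right inequality: ell_mfp <= 1/|H|  <=>  c_s^4 <~ R |H|
print("ell_mfp |H| =", ell_mfp * abs(H), "(must be < 1)")
# Maxwell-time condition when sigma^2 ~ rho: c_s^3 <~ R^2 |H|
print("c_s^3 =", c_s**3, "vs R^2 |H| =", R**2 * abs(H))
\end{verbatim}
With these sample numbers, all three conditions are comfortably satisfied, while raising $c_\mathrm{s}$ towards unity makes the Hubble-radius condition \eqref{eq:smallcsconstr} fail first, as argued above.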
\subsection{Dense gas of black holes}
A contracting universe that evolves isotropically with a dilute gas of black holes will arrive at a phase where the black holes' separation is of the order of their Schwarzschild radius.
As the black holes are pushed closer together, we arrive at the dense black hole gas picture (e.g., \cite{Banks:2002fe}). In this picture, the EoS resembles that of a stiff fluid, $p=\rho$, a result derived from thermodynamic considerations below.
In the dense black hole gas picture, every Hubble patch can be thought of as being filled by a Hubble-size black hole, i.e., $R=|H|^{-1}$. As the universe keeps contracting, one expects some form of quantum instability that allows black holes to `continuously' bifurcate into smaller black holes such that the relation $R=|H|^{-1}$ holds as a function of time (this is forbidden classically \cite{Hawking:1973uf}). Though such a phase is highly hypothetical, it does not violate the second law of thermodynamics, as we will see below, and it might well occur if black holes at high densities are to be replaced by stringy counterparts (see, e.g., \cite{Veneziano:2003sz,Quintin:2018loc,Masoumi:2014vpa,Masoumi:2015sga,Masoumi:2014nfa,Mathur:2020ivc}).
At the level of semi-classical gravity, such a phase would still inevitably lead to a collapse of the whole universe into a singularity, but again, this is poorly studied and new physics might well come into play.
Let us consider a region of physical volume $V$ containing $N$ black holes of Schwarzschild radius $R$, so $N\sim V/R^3$.
The total energy in the volume is then given by
$E\sim NM\sim VM_\mathrm{Pl}^2/R^2$, where $M\sim M_\mathrm{Pl}^2R$ is the Schwarzschild mass of the black holes.
One must keep in mind the following: we assume that we can describe the black holes by their usual Schwarzschild mass and radius coming from the Schwarzschild metric of a single black hole embedded in Minkowski space (i.e., asymptotically flat). This might not hold in a universe that has a possibly infinite number of black holes and that could be dynamical, but we have no good prescription in that situation, so we will stick with the usual Schwarzschild description --- more comments are to be given in the discussion section.
Then, if the entropy of each black hole is given by the Bekenstein-Hawking entropy,
the total entropy in the volume is given by $S\sim NM_\mathrm{Pl}^2R^2\sim VM_\mathrm{Pl}^2/R$.
These relations can be combined to yield $S\sim M_\mathrm{Pl}\sqrt{EV}$, or in terms of densities,
\begin{equation}
s\sim M_\mathrm{Pl}\sqrt{\rho}\,.
\end{equation}
Using standard thermodynamic relations such as $1/T=\partial_ES$ and $p=T\partial_VS$, one finds a temperature $T\sim\sqrt{\rho}/M_\mathrm{Pl}\sim M_\mathrm{Pl}^2/M$ and a pressure $p=\rho$. It is in that sense that the dense black hole gas picture is akin to a stiff fluid.
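For completeness, the intermediate steps read, keeping only the parametric scalings and dropping order-unity factors,
\begin{align*}
\frac{1}{T}&=\partial_ES\sim M_\mathrm{Pl}\sqrt{\frac{V}{E}}\quad\Longrightarrow\quad T\sim\frac{1}{M_\mathrm{Pl}}\sqrt{\frac{E}{V}}\sim\frac{\sqrt{\rho}}{M_\mathrm{Pl}}\,,\\
p&=T\,\partial_VS\sim T\,M_\mathrm{Pl}\sqrt{\frac{E}{V}}\sim\frac{E}{V}=\rho\,.
\end{align*}
We note in passing that $T\sim\sqrt{\rho}/M_\mathrm{Pl}\sim M_\mathrm{Pl}^2/M$ is nothing but the Hawking temperature of the individual black holes.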
To then get the viscosity (which has never been considered before for a dense black hole gas), let us estimate the interaction cross section by $\sigma_\mathrm{cs}\sim R^2$, where in analogy with the dilute gas of the previous subsection the propagation speed of fluctuations is essentially taken to be unity for a stiff fluid. The mean free path follows as $\ell_\mathrm{mfp}\sim 1/(n\sigma_\mathrm{cs})\sim R$ since $n=N/V\sim R^{-3}$. Already, we see that the mean free path is of the order of the Hubble radius by construction. Indeed, if black holes are expected to fill each Hubble patch, then it takes a distance $R\sim|H|^{-1}$ before black holes interact with one another. As such, we expect the na\"ive kinetic expression \eqref{eq:etakin} for viscosity to be only a rough order of magnitude estimate. Nevertheless, it should convey the right scaling as a function of energy density. The above mean free path implies $\eta\sim\rho R$, but the energy density is actually related to the black holes' radius as $\rho\sim(M_\mathrm{Pl}/R)^2$ as we saw above, hence we finally obtain
\begin{equation}
\eta\sim\frac{M_\mathrm{Pl}^2}{R}\sim M_\mathrm{Pl}^2|H|\sim M_\mathrm{Pl}\sqrt{\rho}\,.\label{eq:etaBHG}
\end{equation}
It is interesting to notice that, from the results above, the ratio of viscosity to entropy density is constant (and of order unity), as was the case for the finite-temperature interacting scalar field.
As already mentioned, we might indeed expect the conjectured bound $\eta/s\geq 1/(4\pi)$ to hold.
In fact, by assuming the conjecture, the above result has already been guessed and consequences thereof explored in \cite{Masoumi:2014nfa}.
Another interesting observation is that \eqref{eq:etaBHG} implies $M_\mathrm{Pl}^2H^2\sim\rho$, which is the Friedmann constraint equation with no anisotropies. This is perhaps not a surprise since the derivation essentially assumes the universe to be isotropic enough for the dense black hole gas to form in the first place. However, it seems to suggest already that no anisotropy is allowed to form when the universe is dominated by such matter. As we saw from Sec.~\ref{sec:viscoanicosmo}, a viscosity coefficient of the form of \eqref{eq:etaBHG} does indeed lead to isotropisation, i.e., the energy density in anisotropies always remains subdominant compared to the energy density of a stiff fluid (the dense black hole gas in this case).
Therefore, this picture of a dense black hole gas represents the only microphysical origin known to the authors of a stiff fluid with viscosity given by the scaling $\eta\propto\rho^{1/2}$, which was previously phenomenologically understood to perfectly isotropise the universe \cite{Belinski:2017fas,Belinski:2013jua,Ganguly:2020daq}, i.e., leading to a Friedmann singularity if taken all the way to a big crunch.
\section{The evolution of anisotropies in various scenarios}\label{sec:evo}
In the previous section, we presented three fluids for which one can derive a viscosity coefficient from a microphysical perspective. The dilute and dense black hole gases can in fact be viewed as a single fluid in opposite limits, while the finite-temperature interacting scalar field is unambiguously different in nature. In deriving the properties of the black hole gas (in both the dilute and dense limits), we had to resort to the assumption that the background cosmology was isotropic to a good approximation in the first place. The question of how anisotropies (even if small initially) can evolve subsequently remains well posed.
The goal of this section is thus to explore the evolution of anisotropies for the fluids described in the previous section, first under the assumption of small anisotropies initially (which can apply to both black hole gases and the scalar field example), and then conversely, in the limit of large initial anisotropies (which can only be applied to the scalar field model).
\subsection{Small anisotropy limit}
For a black hole gas in the dense limit (where $\eta\propto\rho^{1/2}$), the evolution of anisotropies is already known from the analytical solution \eqref{eq:rhosigmaDBHG} and previous works \cite{Belinski:2017fas,Belinski:2013jua,Ganguly:2020daq}, which confirm the isotropising power of such a fluid. In the dilute limit (where $\eta$ is constant), we also already obtained an analytical solution in \eqref{eq:rhosigmaconstanteta}, but a solution for the background scale factor $a(t)$ is needed to fully quantify the evolution of anisotropies in such a case. This is where assuming small anisotropies initially (so approximately FLRW) can be useful analytically.
Under the assumption of a FLRW metric initially, we can solve for the evolution of a Bianchi-I metric for a wide class of viscous fluids. For the sake of generality, let us consider a phenomenological parametrisation of the viscosity coefficient as
\begin{equation}\label{eq:defeta}
\eta=\kappa\left(\frac{\rho}{M_\mathrm{Pl}^4}\right)^nM_\mathrm{Pl}^3\,,
\end{equation}
where the constant $n$ determines how viscosity scales as a function of $\rho$ (e.g., $n=0$ for a dilute black hole gas, while $n=1/2$ for a dense black hole gas), and $\kappa\geq 0$ is the proportionality factor, whose exact value can be derived from the microphysics of the fluid.
The powers of $M_\mathrm{Pl}$ in the above are chosen such that $\kappa$ is a dimensionless constant.
With this parametrisation of viscosity, let us rewrite the background EOMs \eqref{eq:BIall} as follows,
\begin{subequations}
\begin{align}
3M_\mathrm{Pl}^2H^2&=\rho\left(1+\Omega_\sigma\right)\,,\\
2M_\mathrm{Pl}^2\dot H&=-\rho\left(1+w+2\Omega_\sigma\right)\,,\\
\dot\rho+3H\rho(1+w)&=\kappa M_\mathrm{Pl}^{1-4n}\rho^{1+n}\Omega_\sigma\,,\\
\partial_t\sigma_i{}^j+3H\sigma_i{}^j&=-\kappa M_\mathrm{Pl}^{1-4n}\rho^n\sigma_i{}^j\,,\label{eq:sigmamunurhon}
\end{align}
\end{subequations}
where $w:=p/\rho$ defines the matter EoS and where we defined
\begin{equation}
\Omega_\sigma:=\frac{\rho_\sigma}{\rho}=\frac{M_\mathrm{Pl}^2\sigma^2}{\rho}
\end{equation}
to be the ratio of the shear energy density to the matter energy density, which we dub the shear-to-matter ratio.
The logic for solving the above analytically is the following: consider a contracting universe in which the energy density in anisotropies initially contributes at most as much as the matter content, i.e., the ratio $\Omega_\sigma=\rho_\sigma/\rho$ is at most of order 1 initially.
Then, one can say that, initially, $3H^2\simeq\rho/M_\mathrm{Pl}^2$ and $\dot\rho+3H(\rho+p)\simeq 0$ (provided $\kappa$ is also not too large) as a rough approximation.
In other words, one assumes that the spacetime is approximately FLRW at the onset of the analysis and checks whether that approximation remains valid under time evolution (i.e., whether it improves or worsens).
Practically speaking, this means checking whether or not $\Omega_\sigma$ remains $\leq 1$.
With no viscosity, we already saw that $\rho_\sigma$ grows as $a^{-6}$, so even if we start out with small anisotropies, the contraction will cause these anisotropies to grow to such an extent that the universe quickly becomes anisotropy dominated, certainly more than allowed for a successful bounce to occur or for a structure formation scenario to be realised.
We are now asking the question whether the inclusion of shear viscosity from a fluid that can reasonably be expected to be present can mitigate the growth of these anisotropies.
Our question is thus whether $\rho_\sigma$ may remain subdominant, and in fact, how much it may decay as the universe contracts under the influence of shear viscosity.
The solution to the EOM for $\sigma_i{}^j$, Eq.~\eqref{eq:sigmamunurhon}, reads
\begin{equation}
\sigma_i{}^j(t)=\sigma_i{}^j(t_\mathrm{i})\,\mathrm{exp}\left[-\int_{t_\mathrm{i}}^t\mathrm{d}\tilde t\,\Big(3H(\tilde t)+\kappa M_\mathrm{Pl}^{1-4n}\rho(\tilde t)^n\Big)\right]\,,\label{eq:sigmaint}
\end{equation}
where $t_\mathrm{i}$ is the time at which the initial conditions are set.
Given the approximate FLRW background, we have
\begin{equation}
H(t)\simeq\frac{2}{3(1+w)t}\,,\qquad\rho(t)\simeq\frac{4M_\mathrm{Pl}^2}{3(1+w)^2t^2}\,,
\end{equation}
where we shall be looking at the regime where $t<0$ for a period of contraction.
Performing the integral in \eqref{eq:sigmaint}, it follows that
\begin{align}
&\Omega_\sigma(t)=\Omega_\sigma(t_\mathrm{i})\left(\frac{t_\mathrm{i}}{t}\right)^{\frac{2(1-w)}{1+w}}\nonumber\\
&~\times\exp\left[-\frac{2^{1+2n}\kappa (M_\mathrm{Pl}|t_\mathrm{i}|)^{1-2n}}{3^n(1-2n)(1+w)^{2n}}\left(1-\left(\frac{t}{t_\mathrm{i}}\right)^{1-2n}\right)\right]\,,\label{eq:rhosigmarhogen}
\end{align}
as long as $n\neq 1/2$ (one has to treat the $n=1/2$ case separately) and where we used the fact that we have $t^{1-2n}=t(t^2)^{-n}<0$ for $t<0$, hence $t^{1-2n}=-|t|^{1-2n}$.
A first thing to notice from \eqref{eq:rhosigmarhogen} is that with no viscosity ($\kappa=0$), one is left with $\Omega_\sigma\propto|t|^{-2(1-w)/(1+w)}$, and therefore, one recovers the usual result that anisotropies grow, are constant, or decay compared to the background energy density as $t\rightarrow 0^-$ if $w<1$, $w=1$, or $w>1$, respectively (assuming the fluid's EoS parameter is always at least greater than $-1$).
Then, reinserting viscosity with $\kappa>0$, we note that if $n>1/2$,
the term in the exponential becomes dominated by $-(2n-1)(t_\mathrm{i}/t)^{2n-1}$, which goes to $-\infty$ as $t\to 0^-$.
Consequently, anisotropies are (exponentially) infinitely suppressed, and the BKL instability is resolved in this regime.
However, we do not know of a realistic fluid, which would have a well-defined viscosity all the way to high energy scales with $n>1/2$.\footnote{In fact, if a fluid has viscosity satisfying the relation $\eta\propto\rho^n$ with $n>1/2$ such that it fully isotropises the BKL singularity, it has been shown that it would necessarily imply superluminal propagation of viscous excitations \cite{Belinskii:1979,Belinski:2013jua,Belinski:2017fas}.}
The case $n=1/2$ is treated separately later, so let us focus on the cases when $n<1/2$. One can see from \eqref{eq:rhosigmarhogen} that as $t\to 0^-$, the factor in the exponential only goes to a finite negative constant, so while the anisotropies are exponentially suppressed compared to the non-viscous solution, the approach to the singularity remains highly anisotropic for $w<1$. The exponential suppression remains interesting though, especially in a context where the universe might not reach a singularity or even Planckian scales. Indeed, it might be interesting to see if there could be significant isotropisation before a bounce occurs. To explore this question, let us demand that the time derivative of \eqref{eq:rhosigmarhogen} be negative at the initial time $t_\mathrm{i}$, i.e., $\dot\Omega_\sigma(t_\mathrm{i})<0$, so that the universe is initially isotropising. We find that the shear-to-matter ratio $\Omega_\sigma$ is initially decaying as long as
\begin{equation}
\kappa>\frac{3^{1-n}}{2}(1-w)\left(\frac{|H_\mathrm{i}|}{M_\mathrm{Pl}}\right)^{1-2n}=:\kappa_\mathrm{min}\,,\label{eq:kappamin}
\end{equation}
assuming $w>-1$, and where $H_\mathrm{i}=2/(3(1+w)t_\mathrm{i})$ is the initial value of the Hubble parameter.
What this shows is that, for $w=1$, any positive non-zero viscosity coefficient suffices to start isotropisation, i.e., for the shear-to-matter ratio $\Omega_\sigma$ to start decreasing.
When $w<1$, however, the viscosity coefficient cannot be arbitrarily small; it needs to be larger than a minimal value, dubbed $\kappa_\mathrm{min}$. The smaller the EoS parameter, the larger $\kappa$ needs to be. Also, the higher the initial energy scale, the larger the viscosity coefficient must be to be able to begin isotropisation. Vice versa, when the universe is initially very large (small initial energy scale), the viscosity coefficient can be smaller.
This dependence on the initial Hubble scale is most important the closer $n$ is to $0$, but it becomes less important for $n$ closer to $1/2$.
Provided isotropisation starts, we can then check at what point there is a turnaround, i.e., a point where the shear-to-matter ratio $\Omega_\sigma$ starts growing again. By solving $\dot\Omega_\sigma=0$, we find that this occurs at an energy scale
\begin{equation}
\frac{|H_\star|}{M_\mathrm{Pl}}=\left(\frac{2\kappa}{3^{1-n}(1-w)}\right)^{\frac{1}{1-2n}}\,.\label{eq:endofisoscale}
\end{equation}
This expression only applies for $w<1$; for $w\geq 1$, $\Omega_\sigma$ always decreases until a big crunch or a bounce is reached.
In fact, the larger $w$ is, the higher the energy scale at which isotropisation stops.
The same applies for the viscosity coefficient, as expected.
Additionally, the closer $n$ is to the value $1/2$, the more efficient isotropisation is.
In the cases where isotropisation stops, $\Omega_\sigma$ starts growing again, and one can approximate its subsequent evolution by the usual power-law scaling without viscosity,
\begin{equation}
\Omega_\sigma(t)\approx\Omega_{\sigma,\star}\left(\frac{t}{t_\star}\right)^{\frac{2(1-w)}{1+w}}\,,
\end{equation}
where $t_\star=2/(3(1+w)H_\star)$ is the end-of-isotropisation time following from \eqref{eq:endofisoscale}, and $\Omega_{\sigma,\star}:=\Omega_\sigma(t_\star)$ is the corresponding value of the shear-to-matter ratio at that time.
The time at which the ratio reaches unity,
$t_\mathrm{c}=t_\star\Omega_{\sigma,\star}^{(1+w)/(2(1-w))}$,
represents the moment when anisotropies start dominating again, and so the moment when the initial assumption breaks down and the above solutions do not apply anymore.
Past that point, we essentially expect the universe to reach its chaotic mixmaster behaviour towards the big crunch.
Let us explore the above timescales in a specific model of interest.
Let us consider the case of a constant viscosity coefficient corresponding to $n=0$, which was already solved in \eqref{eq:rhosigmaconstanteta}. If we now assume the background to be approximately FLRW with matter having the EoS of dust ($w=0$), we can write this sub-case of \eqref{eq:rhosigmarhogen} as
\begin{equation}
\frac{\Omega_\sigma}{\Omega_{\sigma,\mathrm{i}}}=\left(\frac{a_\mathrm{i}}{a}\right)^3\exp\left[-\frac{4\kappa}{3}\frac{M_\mathrm{Pl}}{|H_\mathrm{i}|}\left(1-\left(\frac{a}{a_\mathrm{i}}\right)^{3/2}\right)\right]\,.
\end{equation}
This is thus the solution for the evolution of anisotropies in the example of a contracting universe containing a dilute gas of black holes with effective EoS $w=0$, which is initially isotropic to a good approximation.
The evolution of $\Omega_\sigma$ in this case is shown in the top plot of Fig.~\ref{fig:smallani} as a function of the $e$-folding number defined according to
\begin{equation}
\mathcal{N}:=\ln\left(\frac{aH}{a_\mathrm{i}H_\mathrm{i}}\right)\,.
\end{equation}
The bottom plot of Fig.~\ref{fig:smallani} shows similar computations, but applying \eqref{eq:rhosigmarhogen} for some phenomenological\footnote{Such arbitrary values could potentially correspond to some intermediate regime, in between a dilute and a dense black hole gas for instance, or in the case of the scalar field example, in between the matter- and radiation-dominated regimes.} case with $w=1/12$ and $n=1/6$.
Curves of different color show different values of the viscosity coefficient of proportionality $\kappa$, whose value as a fraction of the minimal isotropising coefficient $\kappa_\mathrm{min}$ can be read off from the color bar.
There, we can see that for $\kappa$ close to $\kappa_\mathrm{min}$ (the curves with lighter color), the universe does start by isotropising, but this is not very efficient, and $\Omega_\sigma$ quickly turns over and grows as a power law beyond $\Omega_\sigma=1$, indicating a future shear-dominated universe. It is only for values of $\kappa$ that are about 1 or 2 orders of magnitude larger than $\kappa_\mathrm{min}$ that we start seeing long-lasting isotropisation (of the order of tens of $e$-folds). In those cases (darker curves), isotropisation is extremely efficient (exponential as expected) for the first few $e$-folds before turnaround and power-law growth. However, since $\Omega_\sigma$ shrinks to exponentially small values at first, it takes several tens of $e$-folds before shear becomes dominant again.
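For reference, the top-plot curves can be generated directly from the closed-form solution above; a minimal sketch (ours, in Planck units, with the same $t_\mathrm{i}$ as in the figure and a few sample values of $\kappa/\kappa_\mathrm{min}$) reads:
\begin{verbatim}
import numpy as np

# Closed-form shear-to-matter ratio for w = 0, n = 0 (Planck units).
# t_i matches the figure (fig:smallani); the kappa values are samples.
t_i   = -1e80                  # initial time, in Planck times
H_i   = 2.0 / (3.0 * t_i)      # w = 0  =>  H = 2/(3t) < 0
k_min = 1.5 * abs(H_i)         # Eq. (eq:kappamin) with w = n = 0

def Omega_sigma(x, kappa, Omega_i=1.0):
    """x = a/a_i (<= 1 in contraction)."""
    return Omega_i * x**-3 * np.exp(-(4.0 * kappa / (3.0 * abs(H_i)))
                                    * (1.0 - x**1.5))

for f in (2.0, 10.0, 100.0):           # kappa / kappa_min
    kappa  = f * k_min
    H_star = 2.0 * kappa / 3.0         # turnaround, Eq. (eq:endofisoscale)
    x_star = (abs(H_i) / H_star)**(2.0/3.0)   # |H| ~ a^(-3/2) for w = 0
    print(f"kappa/kappa_min = {f:5.0f}: "
          f"min(Omega_sigma) = {Omega_sigma(x_star, kappa):.3e}")
\end{verbatim}
For $\kappa$ only slightly above $\kappa_\mathrm{min}$, the minimum of $\Omega_\sigma$ remains of order unity, whereas for $\kappa/\kappa_\mathrm{min}\sim\mathcal{O}(10^2)$ it reaches exponentially small values, in line with the behaviour just described.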
\begin{figure}
\centering
\includegraphics[scale=0.6]{omegaSigmaVsCalN2.pdf}
\caption{Plots of the shear-to-matter ratio $\Omega_\sigma=M_\mathrm{Pl}^2\sigma^2/\rho$ as a function of the $e$-folding number $\mathcal{N}\sim\ln(a|H|)$. The top plot shows the case of a dust-like EoS $w=0$ and constant viscosity coefficient ($n=0$), while the bottom plot shows an example for non-zero values with $w=1/12$ and $n=1/6$. The color coding, as indicated by the color bars, shows the value of the viscosity coefficient of proportionality $\kappa$ as a ratio of the minimal isotropising value $\kappa_\mathrm{min}$ derived in \eqref{eq:kappamin}. The initial conditions are set at a time $t_\mathrm{i}=-10^{80}\,t_\mathrm{Pl}$, and the initial shear-to-matter ratio is set to the threshold value $\Omega_{\sigma,\mathrm{i}}=1$. This value is highlighted by the horizontal dotted grey line.}
\label{fig:smallani}
\end{figure}
The problem with the above description in the case of the physically motivated dilute black hole gas is that large viscosity coefficients $\kappa/\kappa_\mathrm{min}\sim\mathcal{O}(10^2)$ are not expected to respect the previously discussed approximations.
To see this, let the constant viscosity coefficient $\eta=\kappa M_\mathrm{Pl}^3$ be given according to \eqref{eq:etafindBHG} for a dilute black hole gas. Together with \eqref{eq:kappamin} when $w=n=0$, we thus find
\begin{equation}
\frac{\kappa}{\kappa_\mathrm{min}}\sim\frac{c_\mathrm{s}^5}{R|H_\mathrm{i}|}\,,
\end{equation}
which needs to be at the very least greater than $1$ for isotropisation to work, i.e., one needs $c_\mathrm{s}^5>R|H_\mathrm{i}|$. However, this is clearly incompatible with the requirement that the mean free path has to be smaller than the Hubble radius, cf.~\eqref{eq:smallcsconstr}. Therefore, we conclude that a dilute black hole gas is not viscous enough for isotropisation to start, even less so for an isotropic background to be sustained.
The evolution of anisotropies in the case of an interacting scalar field at high temperature in the small anisotropy limit was already found in Sec.~\ref{sec:viscoanicosmo}. Indeed, assuming the background to be FLRW and radiation dominated and taking the viscosity coefficient to be $\eta\propto a^{-3}$ in accordance with \eqref{eq:etafiniteT}, one finds the solution \eqref{eq:shearsoletaam3}, which depicts isotropisation as $a\to 0$. This solution is equivalent to \eqref{eq:rhosigmarhogen} with $w=1/3$ and $n=3/4$. As mentioned earlier, $n>1/2$ immediately implies isotropisation in this limit, but approximations most likely break down before reaching a singularity in this case. For this reason, this requires greater scrutiny, and so we defer the analysis of this scenario to the next subsection, where we look at the large anisotropy limit numerically, making no approximation about the background.
To end this subsection, we come back to the special case of $n=1/2$. In such a case, the integral \eqref{eq:sigmaint} yields
\begin{equation}
\Omega_\sigma(t)=\Omega_{\sigma,\mathrm{i}}\left(\frac{t}{t_\mathrm{i}}\right)^{\frac{2}{1+w}\left(\frac{2}{\sqrt{3}}\kappa-(1-w)\right)}\,.
\end{equation}
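To make the origin of this power law explicit, note that for $n=1/2$ the approximate FLRW background gives $\kappa M_\mathrm{Pl}^{-1}\sqrt{\rho}=2\kappa/(\sqrt{3}(1+w)|t|)$, so the integral in \eqref{eq:sigmaint} evaluates to
\begin{equation*}
\sigma_i{}^j\propto\left(\frac{t}{t_\mathrm{i}}\right)^{\frac{2}{1+w}\left(\frac{\kappa}{\sqrt{3}}-1\right)}\,,
\end{equation*}
and combining this with $\Omega_\sigma\propto\sigma^2t^2$ (since $\rho\propto t^{-2}$) reproduces the quoted exponent.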
Isotropisation thus occurs as $t\to 0$ only if
\begin{equation}
\kappa>\frac{\sqrt{3}}{2}(1-w)\,.
\end{equation}
In the case of a stiff fluid with $w=1$, we see that any non-zero positive viscosity coefficient of proportionality leads to isotropisation, in accordance with the expectations previously mentioned.
We note that for a general EoS such a lower bound on the viscosity coefficient of proportionality has already been derived in \cite{Ganguly:2020daq}.
In fact, there it is found that for $n=1/2$, whenever
\begin{equation}
\kappa>3(1-w)\,,
\end{equation}
the future crunching singularity is a stable Friedmann singularity (i.e., the universe fully isotropises by then). This has been derived for all Bianchi classes, and thus, it may explain the more stringent proportionality factor of 3 compared to $\sqrt{3}/2$ found in our simplified Bianchi-I analysis under the assumption of small anisotropies.
\subsection{Large anisotropy limit}
\subsubsection{Bianchi I}
As mentioned in the previous subsection, the case of an interacting scalar field at finite temperature has strong potential isotropising power, although this remained at the level of assuming small anisotropies initially. The strength of this model, though, lies in the fact that one does not have to make any assumption about the `formation' of the fluid or its previous history. In other words, even if the universe is highly anisotropic to start with, we would still reasonably expect a $\lambda\phi^4$ scalar field in a thermal bath to exhibit viscosity, and consequently, given its contribution to the coupled Einstein equations, affect the subsequent evolution of the anisotropies. This would even be true if one started in a maximally anisotropic homogeneous flat universe, also known as a Kasner universe. We shall be interested in this initial limit in this subsection, i.e., when anisotropies are dominant over everything else.
Let us first restrict ourselves to the case of flat spatial sections, i.e., to a Bianchi type-I metric (the case with curvature anisotropy, Bianchi IX, is treated separately later). We now seek to solve the corresponding equations \eqref{eq:BIall} numerically, where in the case of the field theory model introduced in Sec.~\ref{subsec:finite-temp-intro}, we can use \eqref{eq:etafiniteT} for the viscosity coefficient together with the usual scaling of temperature $T\propto 1/a$. In this case, the matter and shear EOMs \eqref{eq:mattershearEOMs} reduce to
\begin{subequations}\label{eq:EOMsfiniteT}
\begin{align}
\dot\rho+4\frac{\dot a}{a}\rho&=\frac{4\alpha T_0^3}{\lambda^2}\left(\frac{a_0}{a}\right)^3\sigma^2\,,\\
\partial_t\sigma_i{}^j+3\frac{\dot a}{a}\sigma_i{}^j&=-\frac{2\alpha T_0^3}{\lambda^2M_\mathrm{Pl}^2}\left(\frac{a_0}{a}\right)^3\sigma_i{}^j\,,
\end{align}
\end{subequations}
assuming the matter EoS $w=1/3$ for the radiation bath, and where we denote the (expected order 1) constant of proportionality in the viscosity coefficient \eqref{eq:etafiniteT} by $\alpha$. For the purpose of the numerical analysis, we simply set $\alpha=1$, and the scalar field self-interaction coupling constant is taken to be $\lambda=10^{-3}$. Other numerical values have been explored, but we focus our attention here on the free parameter $T_0$, which sets the initial temperature of the thermal bath at the initial scale factor value $a_0$. Exploring a range of values for $T_0$ encapsulates different choices for the combination of parameters $\alpha T_0^3/\lambda^2$.
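To give a concrete idea of the implementation, a minimal sketch of such an integration could read as follows (this is only schematic and makes our own convenience choices: a standard stiff integrator, and a reduction to the variables $\ln\rho$ and $\ln\sigma^2$ evolved against $N=\ln(a_0/a)$, with $|H|$ eliminated via the constraint $3M_\mathrm{Pl}^2H^2=\rho+M_\mathrm{Pl}^2\sigma^2$):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Schematic Bianchi-I integration with eta = alpha*T0^3*(a0/a)^3/lambda^2
# (Planck units, M_Pl = 1). Variables: u = (ln rho, ln sigma^2), evolved
# against N = ln(a0/a) e-folds of contraction.
alpha, lam = 1.0, 1e-3
T0, H0, Om0 = 1e-19, -1e-50, 1e15   # one sample initial temperature

rho0  = 3.0 * H0**2 / (1.0 + Om0)   # from the Friedmann constraint
sig20 = Om0 * rho0

def rhs(N, u):
    rho, sig2 = np.exp(u)
    absH = np.sqrt((rho + sig2) / 3.0)               # 3H^2 = rho + sigma^2
    eta  = alpha * T0**3 * np.exp(3.0 * N) / lam**2
    dlnrho  = 4.0 + 4.0 * eta * sig2 / (absH * rho)  # shear heating
    dlnsig2 = 6.0 - 4.0 * eta / absH                 # viscous damping
    return [dlnrho, dlnsig2]

sol = solve_ivp(rhs, (0.0, 60.0), [np.log(rho0), np.log(sig20)],
                method="LSODA", rtol=1e-8, atol=1e-12, dense_output=True)

for N in np.linspace(0.0, 60.0, 7):
    lnrho, lnsig2 = sol.sol(N)
    print(f"N = {N:4.0f}: log10(Omega_sigma) = "
          f"{(lnsig2 - lnrho) / np.log(10.0):8.2f}")
\end{verbatim}
A scan over $T_0$ with such a scheme gives the qualitative behaviour described next; the curves shown in Fig.~\ref{fig:BIfiniteT} come from the full system \eqref{eq:BIHEOM} and \eqref{eq:EOMsfiniteT}.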
\begin{figure}
\centering
\includegraphics[scale=0.6]{BIfiniteT.pdf}
\caption{Plots of the shear-to-matter ratio (top plot), ratio of the mean free path over the Hubble radius (middle plot) and ratio of the mean free path over its maximal allowed value (bottom plot) as functions of the $e$-folding number $\mathcal{N}$. The curves of different color show different choices for the initial temperature $T_0$, as shown by the top color bar. The horizontal dotted grey line always indicates where the ratios cross unity. Successful isotropisation (with all approximations under control) is achieved when the curves are under this line.}
\label{fig:BIfiniteT}
\end{figure}
We are now in a position to numerically solve the set of coupled ordinary differential equations \eqref{eq:BIHEOM} and \eqref{eq:EOMsfiniteT}, which respect the constraint \eqref{eq:BIconstrgen}. Solutions are shown in Fig.~\ref{fig:BIfiniteT} for the shear-to-matter ratio $\Omega_\sigma$ (in the top plot) as a function of the $e$-folding number $\mathcal{N}$. Initial conditions are picked at the Hubble scale $H_0=-10^{-50}\,M_\mathrm{Pl}$ such that the initial shear-to-matter ratio is $\Omega_{\sigma,0}=10^{15}$, i.e., we want the anisotropies to be dominant over the matter at the initial time and see how this changes under time evolution. The curves of different color show different values of the initial thermal bath temperature $T_0$, ranging from colder ($10^{-25}\,M_\mathrm{Pl}$, blue) to warmer ($10^{-17}\,M_\mathrm{Pl}$, red).
Starting from the colder temperatures in blue in Fig.~\ref{fig:BIfiniteT}, we see that $\Omega_\sigma$ first grows as the usual power law in a Kasner universe, before starting to saturate. In fact, after about 10 $e$-folds, $\Omega_\sigma$ reaches a constant, already showing that viscosity has started becoming effective in mitigating the otherwise unbounded growth of anisotropies. For the lighter shades of blue and the green/yellow curves ($T_0\sim\mathcal{O}(10^{-23}-10^{-20})\,M_\mathrm{Pl}$), we can see that $\Omega_\sigma$ starts by decreasing, demonstrating isotropisation, but the exponential damping does not last. Rather, $\Omega_\sigma$ saturates at some constant value greater than $1$, meaning that the universe remains anisotropy dominated (though with bounded shear).
The situation changes once we consider initial temperatures warmer than about $10^{-19.5}\,M_\mathrm{Pl}$. For the darker orange curves, we see that there is initial isotropisation followed by saturation, but then there is a second phase of exponential isotropisation, which brings $\Omega_\sigma$ to exponentially small values, well below unity, such that the universe is isotropic to a very good approximation. For the red curves, this isotropisation occurs all at once, with no intermediate saturation phase, and the universe becomes isotropic within a few $e$-folds (or even a fraction of an $e$-fold for the temperatures closer to $10^{-17}\,M_\mathrm{Pl}$ and above).
In all of these cases, it is important to consider whether the viscosity approximation is valid though (and whether thermal equilibrium holds). In its simplest iteration, this can be stated as the situation when the mean free path $\ell_\mathrm{mfp}$ remains less than the characteristic length scale of the system. In our case, the characteristic length scale is given by the size of the horizon $|H|^{-1}$, where $H$ is the average expansion rate as before. For this reason, we show the ratio $\ell_\mathrm{mfp}/|H|^{-1}$ in the middle plot of Fig.~\ref{fig:BIfiniteT}. For our purposes, we use the expression for the mean free path in this field theory, derived in Eq.~\eqref{eq:mfp_def}, with the constant of proportionality set to $1$. We see that in all cases considered the approximation remains valid for at least 40 $e$-folds. For the higher initial temperatures that successfully lead to an isotropic universe, we see that the approximation remains valid even longer, up to at least $60$ $e$-folds. Once the mean free path becomes of the order of the Hubble radius and even surpasses it, the expression used for viscosity does not apply anymore. In fact, one would rather expect viscosity to go to zero as thermal equilibrium is lost in the limit where the averaged volume shrinks to zero. Therefore, one cannot realistically expect isotropisation to remain effective all the way to a crunching singularity. Rather, a Kasner singularity is anticipated. Yet, considering the efficiency of the exponential damping of shear within the regime of validity of the theory initially when $T_0$ is high enough, even if one were to turn off viscosity altogether once $\ell_\mathrm{mfp}\sim|H|^{-1}$, it would take several hundreds of $e$-folds (if not more) before the power-law growth in anisotropies would bring $\Omega_\sigma$ back to values greater than unity. Therefore, it is expected that in any realistic scenario where the universe would undergo a non-singular bounce at some high-curvature scale such that a singularity is never reached, the universe would still be highly isotropic at the onset of the transition from contraction to expansion.
Lastly, let us point out that according to \eqref{eq:validitymfp} another approximation should be satisfied, namely $\ell_\mathrm{mfp}/\ell_\mathrm{max}\lesssim 1$, where $\ell_\mathrm{max}$ is defined in Eq.~\eqref{eq:deflmax}. This can be computed using the usual radiation entropy relation $s\propto T^3$. The result is shown in the bottom plot of Fig.~\ref{fig:BIfiniteT}, where it can be seen that the ratio $\ell_\mathrm{mfp}/\ell_\mathrm{max}$ remains well below unity throughout the evolution and for any initial temperature in the given range. In fact, the approximation improves under time evolution and for warmer initial temperatures. Therefore, the requirement that the Maxwell relaxation time be small enough does not represent a threat to the validity of the viscosity approximations in this context.
\subsubsection{Bianchi IX}
The above discussion only concerns expansion anisotropies, where the underlying geometry is a flat anisotropic universe. However, the generic approach to a singularity and, in our case, the endpoint of contraction is the closed anisotropic universe described by the Bianchi type-IX metric (see, e.g., \cite{Kiefer:2018uyv} and references therein). In this case, the anisotropy energy density is not just stored in the expansion tensor, but there is also an anisotropy `potential'. This potential is nothing but the anisotropic $3$-curvature terms that arise in the closed Bianchi type-IX universe, and which are responsible for the chaotic mixmaster oscillations on approach to a singularity.
The Bianchi-IX metric takes the general form of a homogeneous spacetime as follows,
\begin{equation}
g_{ab}\mathbf{d}x^a\otimes\mathbf{d}x^b=-\mathbf{d}t\otimes\mathbf{d}t+h_{ij}\bm{\sigma}^i\otimes\bm{\sigma}^j\,.
\end{equation}
Here, $h_{ij}$ is the spatial metric, and the $\bm{\sigma}^i$'s are one-forms, which take the simple Cartesian form $\mathbf{d}x^i$ in the case of a flat anisotropic universe [Bianchi I, cf.~\eqref{eq:BIMetric}]. In the case of homogeneous spacetimes with non-trivial curvature, they can always be chosen so that $h_{ij}$ remains strictly a function of time. In the case of the Bianchi-IX universe, these one-forms take the following shape,
\begin{align}
\bm{\sigma}^1&=-\sin\psi\,\mathbf{d}\theta+\cos\psi\sin\theta\,\mathbf{d}\varphi\,,\nonumber\\
\bm{\sigma}^2&=\,\cos\psi\,\mathbf{d}\theta+\sin\psi\sin\theta\,\mathbf{d}\varphi\,,\nonumber\\
\bm{\sigma}^3&=\,\cos\theta\,\mathbf{d}\varphi+\mathbf{d}\psi\,,
\end{align}
which are the differential forms on a 3-sphere with coordinate ranges $0\leq\theta\leq\pi$, $0\leq\varphi\leq 2\pi$, and $0\leq\psi\leq 4\pi$. In the frame in which the metric $h_{ij}$ is diagonal and strictly a function of time, it takes the form
\begin{equation}
h_{ij}=a^2\,\mathrm{diag}\left(e^{2\beta_++2\sqrt{3}\beta_-},\,e^{2\beta_+-2\sqrt{3}\beta_-},\,e^{-4\beta_+}\right)\,.
\end{equation}
The volume-averaged expansion is given by the scale factor $a(t)$, while the variables $\beta_\pm(t)$ are the Misner variables used to parametrise the anisotropies.
The shear anisotropy has only two independent components as the anisotropic shear is traceless. In this formalism, the three-dimensional curvature ${}^{(3)}\!R$ on spatial hypersurfaces of constant coordinate time is given by
\begin{equation}
{}^{(3)}\!R=-\frac{2}{a^2}U(\beta_+,\beta_-)\,,
\end{equation}
where the curvature potential $U(\beta_+,\beta_-)$ is given by
\begin{align}
U(\beta_+,\beta_-)=&~\frac{1}{4}e^{-8\beta_+}-e^{-2\beta_+}\cosh\left(2\sqrt{3}\beta_-\right)\nonumber\\
&+e^{4\beta_+}\sinh^2\left(2\sqrt{3}\beta_-\right)\,.
\end{align}
The Einstein equations \eqref{eq:EFE-orthonormal} in this formalism become (we set $M_\mathrm{Pl}=1$ for the rest of this subsection)
\begin{subequations}\label{eq:BIXeveq}
\begin{align}
&3H^2=\rho+3\left(\dot\beta_+^2+\dot\beta_-^2\right)+\frac{1}{a^2}U(\beta_+,\beta_-)\,,\label{eq:constrBIX}\\
&-2\dot H=\rho+p+6\left(\dot\beta_+^2+\dot\beta_-^2\right)+\frac{2}{3a^2}U(\beta_+,\beta_-)\,,\\
&\dot\rho+3H(\rho+p)=12\eta\left(\dot\beta_+^2+\dot\beta_-^2\right)\,,\\
&\ddot\beta_\pm+3H\dot\beta_\pm+\frac{1}{6a^2}\partial_{\beta_\pm}U=-2\eta\dot\beta_\pm\,,
\end{align}
\end{subequations}
where the shear energy density is $\sigma^2=3(\dot\beta_+^2+\dot\beta_-^2)$.
\begin{figure*}
\centering
\includegraphics[scale=0.6]{BIXfiniteTfracE.pdf}
\caption{Plots of the fractional energy density ($X/(3H^2)$) for the different contributions $X$ to the total energy budget under forward time evolution [as a function of $N=-\ln(a/a_0)$]. The different contributions are: $X=\rho$ (matter [radiation] in orange), $X=\sigma^2$ (shear in purple), and $X=-{}^{(3)}\!R/2=U/a^2$ (curvature in olive). The sum of the three contributions always adds up to $3H^2$ in accordance with the constraint equation \eqref{eq:constrBIX}, hence the total fractional energy density is $1$, as depicted by the top horizontal dotted grey line. The bottom horizontal dotted grey line at $0$ indicates an exponentially small contribution. The left plot shows the standard evolution without viscosity, while the right plot shows an example once viscosity is taken into account.}
\label{fig:BIXfiniteT}
\end{figure*}
If one ignores the presence of viscosity and simply sets $\eta\equiv 0$, then one recovers the usual chaotic mixmaster behaviour in the approach to a singularity. To see this, let us numerically solve the above set of ordinary differential equations when the matter content is radiation-like ($p=\rho/3$). The initial conditions are set in a contracting phase with $H_0=-10^{-40}$, $\beta_{+,0}=10^{-10}$, $\beta_{-,0}=-10^{-1}$, $\beta_{+,0}'=-1$, $\beta_{-,0}'=0$, and $a_0\approx 9.1\times 10^{39}$, where a prime here denotes a derivative with respect to the $e$-folding number $N:=-\ln(a/a_0)$, which turns out to be an easier time variable\footnote{The $e$-folding numbers $\mathcal{N}$ and $N$ only differ by a factor of $(1+3w)/2$ for a power-law solution $a(t)\propto|t|^{2/(3(1+w))}$. In particular, they are equal when $w=1/3$, and $\mathcal{N}$ ticks twice as fast as $N$ when $w=1$.} to work with in Bianchi IX, numerically speaking. Such values are chosen such that, initially, $\rho_0/(3H_0^2)=1/10$, $\sigma_0^2/(3H_0^2)=1$, and $-{}^{(3)}\!R_0/(6H_0^2)=-1/10$. Physically, this means that we choose shear anisotropies to be dominant over matter (radiation) initially since $\sigma_0^2/\rho_0=10$, but we want curvature anisotropies to be small. Taking $|\beta_\pm|\ll 1$ initially, the anisotropy potential is negative (the potential minimum is $-3/4$), indicating positive spatial curvature (${}^{(3)}\!R>0$), and the spatial curvature is made small by taking $a_0$ large. In other words, we want to start in a large universe relatively close to a flat Bianchi-I spacetime in this example.
The result of the evolution is shown in the left plot of Fig.~\ref{fig:BIXfiniteT}. There, we see that, without viscosity, the radiation contribution (orange curve) rapidly goes to $0$, while anisotropies dominate. In fact, there is a chaotically oscillatory exchange between shear anisotropies (purple curve) and curvature anisotropies (olive curve), representative of the mixmaster dynamics as the universe approaches a BKL singularity.
When viscosity is introduced, the situation changes, as can be seen in the right plot of Fig.~\ref{fig:BIXfiniteT}. There, we numerically solve the same previous set of equations with the same initial conditions, except now the viscosity coefficient is taken to be $\eta=\alpha T_0^3(a_0/a)^3/\lambda^2$ as in the previous subsection. Numerical values for this example are taken to be $\alpha=1$, $\lambda=10^{-3}$, and $T_0=10^{-16}$.
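Schematically, our integration of \eqref{eq:BIXeveq} can be set up as follows (sketch only; we rescale to units where $M_\mathrm{Pl}=1$ and time is measured in $1/|H_0|$, in which case the above parameter values correspond to $\eta_0=\alpha T_0^3/(\lambda^2|H_0|)\sim 10^{-2}$; the rescaling and integrator settings are our own choices):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Schematic Bianchi-IX integration for radiation (w = 1/3) with
# eta = eta0*(a0/a)^3; units M_Pl = 1 and time in 1/|H0| (choices ours).
C = 2.0 * np.sqrt(3.0)
w, eta0 = 1.0/3.0, 1e-2   # eta0 = alpha*T0^3/(lambda^2*|H0|) for the values
                          # alpha = 1, lambda = 1e-3, T0 = 1e-16, |H0| = 1e-40

def U(bp, bm):            # curvature potential U(beta+, beta-)
    return (0.25*np.exp(-8*bp) - np.exp(-2*bp)*np.cosh(C*bm)
            + np.exp(4*bp)*np.sinh(C*bm)**2)

def dU(bp, bm):           # (dU/dbeta+, dU/dbeta-)
    dUp = (-2*np.exp(-8*bp) + 2*np.exp(-2*bp)*np.cosh(C*bm)
           + 4*np.exp(4*bp)*np.sinh(C*bm)**2)
    dUm = C*np.sinh(C*bm)*(2*np.exp(4*bp)*np.cosh(C*bm) - np.exp(-2*bp))
    return dUp, dUm

H0, bp0, bm0 = -1.0, 1e-10, -1e-1
vp0, vm0 = -1.0, 0.0            # d(beta±)/dt; equals beta' at t=0 (|H0| = 1)
rho0 = 0.3                      # rho0/(3H0^2) = 1/10
a0 = np.sqrt(-U(bp0, bm0)/0.3)  # fixes U/a0^2 = -(3H0^2)/10 (our
                                # normalisation; O(1) conventions may differ)

def rhs(t, y):
    a, H, rho, bp, bm, vp, vm = y
    eta = eta0 * (a0/a)**3
    dUp, dUm = dU(bp, bm)
    dH   = -0.5*rho*(1 + w) - 3*(vp**2 + vm**2) - U(bp, bm)/(3*a**2)
    drho = -3*H*rho*(1 + w) + 12*eta*(vp**2 + vm**2)
    dvp  = -3*H*vp - dUp/(6*a**2) - 2*eta*vp
    dvm  = -3*H*vm - dUm/(6*a**2) - 2*eta*vm
    return [a*H, dH, drho, vp, vm, dvp, dvm]

stop = lambda t, y: y[0] - a0*np.exp(-10.0)  # halt after 10 e-folds
stop.terminal = True

sol = solve_ivp(rhs, (0.0, 1.0), [a0, H0, rho0, bp0, bm0, vp0, vm0],
                method="LSODA", rtol=1e-8, atol=1e-12, events=stop)
rho_f, vp_f, vm_f = sol.y[2, -1], sol.y[5, -1], sol.y[6, -1]
print("Omega_sigma after 10 e-folds:", 3*(vp_f**2 + vm_f**2)/rho_f)
\end{verbatim}
The fractional energy densities plotted in Fig.~\ref{fig:BIXfiniteT} are then read off as $\rho/(3H^2)$, $3(\dot\beta_+^2+\dot\beta_-^2)/(3H^2)$, and $U/(3a^2H^2)$ along the solution.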
For the first $e$-fold or so, the evolution with and without viscosity is very similar. However, as the scale factor decreases, the temperature rises and so does the viscosity coefficient. Accordingly, the radiation component is not diluted with respect to the anisotropies; rather, it remains more or less constant and starts growing after a few $e$-folds. In turn, the contribution from shear starts decreasing already after about $2$ $e$-folds. By $\approx 8.98$ $e$-folds, radiation becomes dominant over shear and curvature anisotropies ($\Omega_\sigma$ becomes smaller than unity). From then on, the spacetime becomes more and more isotropic, with anisotropies decaying to exponentially small values, and with the chaotic oscillations in the anisotropies stopping. Through this evolution, the mean free path $\ell_\mathrm{mfp}$ is found to remain smaller than the Hubble radius, up to approximately $35.66$ $e$-folds. By then, $\log_{10}\Omega_\sigma\approx -67.71$. As discussed in the previous subsection, beyond this point one cannot fully trust the approximations leading to the viscosity coefficient, which should in fact start decreasing. Nevertheless, even if viscosity were to suddenly become negligible again, it would take more than about $78$ $e$-folds before anisotropies would become dominant again. Therefore, we can say that the model is isotropic to a very good approximation for more than $100$ $e$-folds in total, and the warmer the initial temperature $T_0$ of the thermal bath, the longer the isotropic phase, in the same spirit as seen in Fig.~\ref{fig:BIfiniteT} for Bianchi I. In the end, it seems that isotropisation due to viscosity in a finite-temperature field theory is robust against curvature anisotropies, i.e., the same qualitative results hold whether the spacetime is of Bianchi type I or IX.
\section{Implications for gravitational waves}\label{sec:GWs}
So far, we have discussed the process of isotropisation in a contracting universe. We have shown that the addition of shear viscous anisotropic stress leads, in many instances, to a reduction of the fractional contribution of the shear anisotropies. The shear anisotropies studied so far have been in spatially homogeneous cosmological settings --- they are non-perturbative by definition. The perturbative limit of this situation is a homogeneous and isotropic universe containing gravitational wave perturbations. The concept of an isotropic universe sourced by gravitational waves being equivalent to an anisotropic universe is not a new one. For example, in an open or flat isotropic Friedmann model, gravitational waves superimposed upon the background leave homogeneity untouched, and hence reproduce the corresponding spatially homogeneous, anisotropic cosmology, only when their wavelength is infinitely long \cite{lukashGW}. This preserves the assumption of homogeneity, which the periodicity of a propagating wave of finite wavelength would actively violate. There are some exceptions to this rule, such as the case of circularly polarised gravitational waves \cite{lukashGW}, where the averaged quantities coincide with those of a homogeneous isotropic cosmological model sourced by gravitational-wave anisotropies. The approach to a singularity also becomes quasi-isotropic and resembles the Friedmann solution. In general, one can assume that whatever physical effect modifies the propagation of shear will also correspondingly affect the propagation of gravitational waves (and vice versa). Examples include massive gravity \cite{Lin:2017fec}, neutrinos, and more (e.g., \cite{Weinberg:2003ur,Pritchard:2004qp,Watanabe:2006qe,Stefanek:2012hj,Dent:2013asa,Baym:2017xvh,Kite:2021yoe,Brevik:2019yma,Goswami:2016tsu,Lu:2018smr}).
As there appears to be an inexorable link between the shear anisotropies we have been studying and gravitational waves, it would be interesting to see how a characteristic spectrum of gravitational waves would be affected at the end of a phase of shear-viscosity-driven contraction. For the purposes of this computation, we shall restrict ourselves to a flat background, i.e., to the case of Bianchi I. In fact, we can even assume the background to be flat FLRW, as the only anisotropies present are in the expansion and can be written as part of the energy density. This is a very simple example of the general idea that perturbative shear anisotropies on a homogeneous background can be represented as gravitational waves on an isotropic background.
The general equations of motion in a homogeneous background given in \eqref{eq:EFE-orthonormal}, written in terms of the electric and magnetic parts of the Weyl curvature tensor denoted by $E_{ab}$ and $H_{ab}$, are given by \cite{ellis_maartens_maccallum_2012}
\begin{subequations}
\begin{align}
\dot{\sigma}_{ab}+2H\sigma_{ab}+E_{ab}=&~\frac{1}{2M_\mathrm{Pl}^2}\pi_{ab}\,,\label{eq:dotsigmaEB}\\
\dot{E}_{ab}+3HE_{ab}-\mathrm{curl}~H_{ab}=&-\frac{1}{2M_\mathrm{Pl}^2}\Big[(\rho+p)\sigma_{ab}\nonumber\\
&\qquad+\dot{\pi}_{ab}+H\pi_{ab}\Big]\,,\label{eq:dotEEB}\\
\dot{H}_{ab}+3HH_{ab}+\mathrm{curl}~E_{ab}=&~\frac{1}{2M_\mathrm{Pl}^2}\mathrm{curl}~\pi_{ab}\,.
\end{align}
\end{subequations}
These equations are written assuming linear perturbations around a flat background and follow the covariant and gauge-invariant approach to perturbation theory outlined in \cite{ellis_maartens_maccallum_2012}. They are easily generalisable to the fully non-linear case as for example in \cite{ellis_maartens_maccallum_2012, 1992ApJ...395...34B}.
This is different from the metric-perturbation approach, where one linearises the metric around a background and then traces the time evolution of the metric perturbations through the perturbed Einstein equations. The disadvantage of the metric-perturbation approach is of course that it is hard to generalise to non-linear perturbations. The relative advantage of the covariant approach is that one starts out from the full non-linear equations \eqref{eq:EFE-orthonormal} and then linearises around a given background, in our case the FLRW background. Gravitational wave perturbations in the metric-perturbation approach are the tensor modes born out of perturbations to the $ij$ components of the metric tensor and then traced through the Einstein equations. In contrast, in the covariant, gauge-invariant approach, gravitational wave perturbations are expressed as curvature perturbations that propagate and manifest themselves in the evolution of the electric and magnetic parts of the Weyl tensor. Pure tensor modes must be transverse and tracefree, and therefore the divergences of the electric and magnetic parts of the Weyl tensor vanish, $\mathrm{D}_bE_a{}^b=0$ and $\mathrm{D}_bH_a{}^b=0$, as do the divergences of the shear anisotropy tensor and the anisotropic stress, $\mathrm{D}_b \sigma_a{}^b=0$ and $\mathrm{D}_b \pi_a{}^b=0$.
In the linearised limit around FLRW and the presence of anisotropic stress of the form of \eqref{eq:defeta1}, one can take a time derivative of \eqref{eq:dotsigmaEB} and use \eqref{eq:dotEEB} to derive a wave equation for the shear anisotropies $\sigma_{ij}$ \cite{ellis_maartens_maccallum_2012}, reminiscent of the wave equation obeyed for gravitational waves,
\begin{align}
&\ddot{\sigma}_{ij}+\left(5H+\frac{2\eta}{M_\mathrm{Pl}^2}\right)\dot{\sigma}_{ij}\nonumber\\
&+\left(\frac{1}{M_\mathrm{Pl}^2}\Big((\rho-3p)+2(\dot\eta+2H\eta)\Big)-\frac{\partial^2}{a^2}\right)\sigma_{ij}=0\,.\label{eq:sigmawaveeq}
\end{align}
In fact, perturbing the spatial metric as
\begin{equation}
g_{ij}=h_{ij}=a^2(\delta_{ij}+\gamma_{ij})\,,
\end{equation}
where $\gamma_{ij}$ is the transverse and traceless tensor perturbation corresponding to the gravitational wave perturbation, the shear anisotropy tensor is related to the metric tensor perturbation as follows (e.g., \cite{Pereira:2019mpp}),
\begin{equation}\label{eq:relhsig}
\sigma_{ij}=\frac{1}{2}a^2 \partial_t\gamma_{ij}\,,\qquad\sigma_i{}^j=\frac{1}{2}\partial_t\gamma_i{}^j\,.
\end{equation}
This is because the shear tensor ultimately is the traceless part of the expansion tensor defined by \eqref{eq:defsheartensot}, which is related to the time derivative of the metric variables in a homogeneous spacetime. In drawing the equivalence between the metric-perturbation approach to perturbation theory and the covariant, gauge-invariant approach, this relation allows us to recover the familiar evolution equation for the metric tensor modes $\gamma_{ij}$ through Eq.~\eqref{eq:dotsigmaEB}.\footnote{We can also see this by noting that the electric part of the Weyl tensor $E_{ab}$ is related to the traceless part of the $3$-Ricci tensor denoted by ${}^{(3)}\!R_{\langle ab \rangle}$ as
\begin{equation*}
{}^{(3)}\!R_{\langle ab \rangle} =E_{ab} + \frac{1}{2M_\mathrm{Pl}^2}\pi_{ab} - H\sigma_{ab}+\sigma_{c\langle a}\sigma_{b\rangle}{}^c\,.
\end{equation*}
This relation is taken to be in the absence of vorticity, as in all of this work. The full equations are found in the Appendix of \cite{ellis_maartens_maccallum_2012}.}
The corresponding equation is of the form
\begin{equation}
\partial_t^2\gamma_i{}^j+\left(3H+\frac{2\eta}{M_\mathrm{Pl}^2}\right)\partial_t\gamma_i{}^j-\frac{\partial^2}{a^2}\gamma_i{}^j=0\,,\label{eq:gammawaveeq}
\end{equation}
agreeing with, e.g., \cite{Fanizza:2021ngq,Goswami:2016tsu}.
In the infrared limit, i.e., on large super-Hubble scales where $\partial^2/a^2\to 0$, the equation becomes
\begin{equation}
\partial_t^2\gamma_i{}^j+\left(3H+\frac{2\eta}{M_\mathrm{Pl}^2}\right)\partial_t\gamma_i{}^j\simeq 0\,.\label{eq:gammawaveeqIR}
\end{equation}
This can also be found by substituting \eqref{eq:relhsig} into the previously derived equation \eqref{eq:sigmaijBI}, which makes the connection between anisotropies and long-wavelength gravitational waves explicit.
A key aspect of the above, either viewed through \eqref{eq:sigmawaveeq} or \eqref{eq:gammawaveeq}, is that shear and equivalently gravitational waves receive a damping factor (in the form of a friction term) due to the presence of viscosity with $\eta>0$. The negativity of the Hubble parameter in a contracting universe typically implies the growth of shear and of gravitational waves (most easily seen on super-Hubble scales).\footnote{This is a problem, for instance, in the context of matter bounce cosmology, where a scale-invariant power spectrum of tensor perturbations is amplified to the same extent as scalar perturbations, resulting in an order unity tensor-to-scalar ratio (see, e.g., \cite{Quintin:2015rta,Li:2016xjb,Lin:2017fec}).} The viscosity coefficient can counterbalance this effect though, such that anisotropies are damped (resulting in isotropisation) and so are gravitational waves. In fact, in the FLRW limit, one can solve \eqref{eq:gammawaveeqIR} for the long-wavelength $\partial_t\gamma_i{}^j$ in the same way we solved for $\sigma_i{}^j$ in \eqref{eq:sigmaint}, from which we can translate the results. For a constant viscosity coefficient and a pressureless EoS, one finds an exponential damping initially [in the form of \eqref{eq:rhosigmaconstanteta}, where we should think of $\rho_\sigma$ being replaced by $\rho_\mathrm{GW}:=(M_\mathrm{Pl}^2/8)\partial_t\gamma_i{}^j\partial_t\gamma_j{}^i$]. A similar result was derived in \cite{Hawking:1966qi} for a constant coefficient of viscosity. In the context of matter bounce cosmology, this damping would not realistically resolve the large tensor-to-scalar ratio problem if the viscosity is coming from a dilute gas of black holes (for the same reason it could not realistically lead to isotropisation within the regime of validity of the approximations). For an interacting field theory at finite temperature with $\eta\propto 1/a^3$ and a radiation EoS, one recovers exponential damping in the form of \eqref{eq:shearsoletaam3}. For a dense black hole gas with $\eta=\kappa |H|$ (when $H<0$) and a stiff EoS, one finds in a similar way to \eqref{eq:rhosigmaDBHG} that $\rho_\mathrm{GW}\propto 1/a^{2(3-2\kappa/M_\mathrm{Pl}^2)}$, and hence gravitational waves are completely damped out by the time $a\to 0$ if $\kappa>3M_\mathrm{Pl}^2/2$.
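For the reader's convenience, the last scaling can be verified in one line: with $\eta=\kappa|H|$ and $|H|=-\dot a/a$ in contraction,
\begin{equation*}
\partial_t\gamma_i{}^j\propto\exp\left[-\int\mathrm{d}t\left(3H+\frac{2\kappa|H|}{M_\mathrm{Pl}^2}\right)\right]\propto a^{-\left(3-2\kappa/M_\mathrm{Pl}^2\right)}\,,
\end{equation*}
whence $\rho_\mathrm{GW}\propto(\partial_t\gamma)^2\propto a^{-2(3-2\kappa/M_\mathrm{Pl}^2)}$, as stated.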
\section{Discussion and conclusions}\label{sec:conclusions}
Bouncing cosmologies present an alternative to traditional expanding cosmologies by avoiding an initial singularity. This comes at the expense of hypothesising some new physics at the bounce, which causes the universe to re-expand after an initial phase of contraction. However, there are a few problems regarding the growth of anisotropies and inhomogeneities in the contracting phase itself. Traditionally, a phase of ekpyrosis, where a fast-rolling scalar field mediates a slow contraction, exhibits an effective EoS $p \gg \rho$ and is able to dominate over the anisotropies and inhomogeneities.
Other dissipative mechanisms, such as particle creation and other quantum effects (e.g., \cite{1972JETP...34.1159Z,1974JETP...39..742L,Hu:1978zd,Hartle:1980nn,Calzetta:1986ey}), a non-linear EoS (e.g., \cite{Bozza:2009jx,Ganguly:2019llh}), and the introduction of shear viscosity have been studied in the context of anisotropy reduction. In this work, we have studied possible microphysical realisations of such a dissipative model of shear viscosity. We have studied this in the context of a gas of black holes, both in the dilute and the dense limit. We find that the coefficient of viscosity remains constant and is temporarily effective in suppressing anisotropies in the dilute limit. However, the viscosity approximation is violated unless the viscosity coefficient is small enough, in which case isotropisation cannot occur. In the dense black hole gas case (which is considerably more speculative), we have the beginnings of a microphysical picture of how a coefficient of viscosity that scales with energy density as $\eta \propto \rho^{1/2}$ can be realised and, as has been seen in the literature, give rise to successful isotropisation and lead to a Friedmann singularity (if allowed to evolve to a crunch) even in the most general of anisotropic spatially homogeneous universes.
Another microphysical example that we have studied is the case of a $\lambda \phi ^4$ interacting scalar field theory at finite temperature. The effective evolution of the background is that of a radiation-dominated universe. We studied the evolution of anisotropies in the case of a flat Bianchi type-I universe containing only expansion anisotropies, as well as in the case of a spatially curved closed anisotropic Bianchi type-IX universe. We found that in both cases the viscous damping dissipates the energy density in the anisotropy into radiation. The viscosity approximation itself remains valid in both cases, at least for enough $e$-folds for the exponential suppression of anisotropies to be effective, under assumptions of high initial temperature and, for the case of Bianchi IX, a universe that does not start out curvature dominated deep in the contracting phase. Similar results have been found using different analyses, in the context of particle creation and semi-classical gravity \cite{Calzetta:1986ey}. Finally, as the anisotropy tensor itself is related to the time derivative of the tensor modes, the effect of the shear dissipation is equivalent to a damping of the amplitude of long-wavelength gravitational waves (see, e.g., \cite{Loeb:2020lwa,Mottola:1985ee} for additional implications of this principle).
While the $\lambda\phi^4$ model is an interesting toy model, which successfully manifests isotropisation, it does not constitute a complete theory of the very early universe. In particular, it cannot explain the formation of structures, i.e., it does not generate a nearly scale-invariant spectrum of curvature perturbations on large scales by itself. The addition of a spectator field (e.g., \`a la curvaton \cite{Cai:2011zx}) could potentially resolve this issue, but this would require further investigation, especially with regard to the competition between quantum and thermal fluctuations in such a model. Alternatively, a contracting $\lambda\phi^4$ model could be part of a larger scenario that includes a period of inflation (e.g., \cite{Qiu:2015nha,Graham:2019bfu,Ji:2021mvg}), which takes care of generating the right perturbations.
For the matter bounce scenario, where scale-invariant curvature perturbations are generated during a phase of matter-dominated contraction, it appears that viscosity can serve as an isotropising mechanism to keep the model close enough to FLRW. However, this remains phenomenological since viscosity is actually hard to generate in a fluid that, by definition, interacts only weakly. For example, we showed in this paper that a dilute gas of black holes could not realistically provide sufficient viscosity to keep the universe isotropic. Thus, unless one modifies the gravitational theory, e.g., with a graviton mass \cite{Lin:2017fec}, which suppresses both anisotropies and gravitational waves, or with a specific non-minimal coupling to gravity (e.g., \cite{Nandi:2019xag,Nandi:2020sif,Nandi:2020szp}, but see also \cite{Akama:2019qeh}), the matter bounce scenario remains unviable.
In any more realistic bouncing scenario hoping to explain the origin of the cosmic microwave background, one has to be aware that requiring isotropy with $\Omega_\sigma<1$ for a certain number of $e$-folds might not be sufficient. Indeed, $\Omega_\sigma$ might have to be several orders of magnitude below unity for the bounce to be achievable and for cosmological perturbations not to receive significant contributions from the shear. This is due to the fact that shear enters as a source term in the scalar, vector, and tensor perturbations of an anisotropic universe such as Bianchi I (see, e.g., \cite{Pereira:2007yy}). Therefore, one expects an upper bound on the size that $\sigma^2$ may be allowed to reach in any given scenario \cite{Ed}.
Another aspect that needs to be taken into consideration in a more realistic scenario is the presence of shear due to quantum fluctuations in addition to the classical anisotropies discussed in this work. For instance, stochastic fluctuations of a scalar field could produce an anisotropic stress sourcing shear. However, when the background EoS satisfies $w\geq 0$ as studied in this work, the resulting quantum shear only becomes dominant near the Planck scale \cite{Grain:2020wro}. Therefore, any `low-energy' bounce could evade this issue, though it remains an important contribution to shear that needs to be considered seriously in light of the previous paragraph.
Let us end by commenting on the dense black hole gas. As we mentioned, this remains the only known model resulting in $\eta\propto\sqrt{\rho}$ and thus in full isotropisation within the approximations. Such a gas remains a fairly exotic toy model though. To start, the possible formation channels of such a gas remain hand-wavy; dealing with large inhomogeneities and their collapse into black holes would certainly have to be tackled numerically as in, e.g., \cite{Clifton:2017hvg,deJong:2021bbo}. Also, there is a great lack of understanding of the evolution of black holes embedded in cosmological backgrounds (apart from approximately Schwarzschild-de Sitter and McVittie spacetimes --- see, e.g., \cite{Bousso:1997wi,Gregory:2018ghc,Kaloper:2010ec,Faraoni:2012gz,Faraoni:2013aba}), and refining the corresponding approximations made on that front would definitely improve the description of the dense black hole gas. Nevertheless, if such a gas could really exist in nature in the very early universe (near a crunching singularity for instance), it remains interesting to ask the question of what could be the possible subsequent evolution of the gas. Could the black holes pass through a bounce and become primordial black holes as suggested in \cite{Carr:2011hv,Clifton:2017hvg,Carr:2017wkz,Coley:2020ykx} or evaporate into remnants accounting for dark matter \cite{Rovelli:2018hba,Rovelli:2018hbk,Barrau:2021spy}? Could the black holes become stringy in nature at high energies and be part of a greater string-cosmology scenario \cite{Veneziano:2003sz,Quintin:2018loc}? Or could the black holes evaporate and emit specific electromagnetic signals or merge and emit specific gravitational-wave signals \cite{Barrau:2017ukm,Papanikolaou:2020qtd}? All those questions deserve closer scrutiny and could open up the path to a new understanding of the physics near the highest cosmological energy scales.
\begin{acknowledgments}
The authors acknowledge the stimulating atmosphere at McGill University, Dartmouth College and Nordita while this project was initiated and prepared over the years and thank Robert Brandenberger for insightful discussions and encouragement to pursue this project in the first place. This project also progressed thanks to discussions following the program Physics of the Early Universe --- An Online Precursor (code: ICTS/peu2020/08) of the International Centre for Theoretical Sciences (ICTS). J.\,Q.\ further thanks the Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge for kind hospitality while this work was prepared and Jean-Luc Lehners, Edward Wilson-Ewing, and Maurizio Gasperini for valuable discussions. C.\,G.\ would like to thank the Cambridge Philosophical Society for the Henslow Fellowship. They would also like to thank Wolfson College, Cambridge and DAMTP, University of Cambridge for hosting them for the duration of the fellowship. Through the completion of this work, research at the Albert Einstein Institute has been supported by the European Research Council (ERC) in the form of the ERC Consolidator Grant CoG 772295 ``Qosmology'', and J.\,Q.\ further acknowledges financial support in part from the \textit{Fond de recherche du Qu\'ebec --- Nature et technologies} postdoctoral research scholarship and the Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.
\end{acknowledgments}
\bibliographystyle{JHEP2}
A significant part of the information perceived by a person and required for making even the simplest everyday decisions is presented in multiple modalities, that is, with the help of different types of ``input information'', requiring the use of various senses and types of knowledge. Visual information requires visual perception, processing natural language texts presupposes the knowledge of the language, auditory information implies the perception and analysis of sound, and so on. Each of these modalities is handled by separate, sometimes overlapping areas of machine learning and artificial intelligence: computer vision, natural language processing, speech processing, video processing, and so on. \blfootnote{*Both authors contributed equally to this research.}
However, a successful solution to emerging problems often cannot be obtained by analyzing data coming from only one modality, just as it is not always sufficient for a human being to use only sight or only hearing to make a rational decision. In such cases, information required to solve such problems can be divided into several ``input types'', called data modalities, all of which should be taken into consideration to make successful decisions.
Multi-task learning has a long history, mostly in the natural language processing domain. One of the possible reasons is that having the correct representation and thus ``understanding'' of a text passage, one can solve many downstream tasks: sentiment analysis, question answering, language translation, etc. One of the most widely used approaches here is to have the lower (encoding) layers shared for all tasks, while having the upper layers (also called ``heads'') task-specific and learned separately \cite{liu2019multi}.
It is only recently that scientists have proposed to combine multi-modality and multi-tasking in one model, taking a joint approach: using different encoders for different modalities, then combining the different types of information during intermediate processing, and completing the process with task-specific heads - e.g. the UniT \cite{hu2021unit} approach, where visual and textual modalities are used, and 7 tasks from the computer vision (e.g. object detection), text processing (e.g. sentiment analysis) and vision-and-language (e.g. visual question answering) fields are solved.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{show_scheme_v7.png}
\caption{Concept of the multi-modal and multi-task architecture Fusion Brain. The tasks here are C2C -- Code2code Translation, HTR -- Handwritten Text Recognition, ZsOD -- Zero-shot Object Detection, VQA -- Visual Question Answering, AEC -- Audio Emotion Classification, and TEC -- Text Emotion Classification}
\label{fig:conceptfusionbrain}
\end{figure}
The problem of training large pretrained multi-modal and multi-task models can be separated into 2 subtasks: 1) How to combine modalities, and 2) How to combine tasks.
As for the first question, the current state-of-the-art research in multi-modal processing mostly focuses on the questions of the stage at which modalities should be fused (``early'', ``middle'' or ``late'' fusion) and the ways to implement this fusion (through iterative processing or by a modality bottleneck) \cite{liang2018multimodal,li2019visualbert,das2020detecting,savchenko2020ad}. The important approaches for modality fusion are Perceiver \cite{jaegle2021perceiver} and Perceiver IO \cite{jaegle2021perceiverio}, where the modality-specific information serves as the key-value for iterative cross-attention and is later processed by a GPT-2-like \cite{radford2019language} transformer. Another interesting and promising example of sharing the modality information is the so-called multimodal bottleneck transformer (MBT) \cite{nagrani2021attention}, where the fusion of the modalities is done: a) close to the top of the transformer layers; b) only through a very small number $B$ of multimodal neurons (in the work $B = 4$ is used), making the cross-modality sharing happen only through a small bottleneck, which proves to be very efficient. Finally, incorporation of different modalities (like RGB and OpticalFlow) inside a single model via mutual modality learning can be used \cite{komkov2020mutual}.
The combination of tasks can also be implemented in different ways. An approach similar to the above-mentioned UniT is the so-called frozen pretrained transformer (FPT) technique \cite{lu2021pretrained}, which is a source of inspiration for our proposed baseline. However, such a multi-task pipeline, where different tasks/modalities are processed through separate heads, is not the only one. More interesting approaches use more sophisticated ways of dealing with multiple tasks: for instance, they incorporate either task-specific adapters \cite{houlsby2019parameter,pfeiffer2020adapterfusion} between the frozen layers or a fully learnable (trainable) task representation (embedding) that can be propagated in a non-trivial way through the major part of the model (see Perceiver IO, HyperGrid \cite{tay2020hypergrid} or the conditionally adapted approach \cite{pilault2020conditionally}).
The corresponding research in the field of information retrieval (IR) is also worth mentioning. For now, however, it seems that quite straightforward solutions are used for IR, e.g. the combination of all task-specific datasets for training an NLP model for multiple tasks \cite{maillard2021multi}, or the processing of multi-modal data with a single transformer using the representations obtained by modality-specific encoders as inputs for multi-modal retrieval \cite{gabeur2020multi, dzabraev2021mdmmt}.
We aim to promote the development of such a promising and challenging field as multi-modal and multi-task research. Our main contributions are the following:
\begin{itemize}
\item prepared the data, task statement and leaderboard for the Fusion Brain Challenge;
\item proposed the specialized as well as the overall metric to evaluate the models;
\item created a simple yet efficient baseline which combines the multi-modal and the multi-task approaches.
\end{itemize}
\section{Tasks}
Within the competition we proposed to solve 4 subtasks:
\begin{enumerate}
\item Code2code translation (C2C),
\item Handwritten text recognition (HTR),
\item Zero-shot object detection (ZsOD),
\item Visual question answering (VQA).
\end{enumerate}
In the following subsections we will discuss each of these subtasks in more details.
\subsection{Subtask 1 - Code2code Translation}
Among the various problems within the ML4Code field, the task of translating code snippets from one programming language (PL) to another was chosen. Even though source code can be attributed to the text modality, it is definitely more structured than natural language, and thus we would like to distinguish between them. The proposed task not only adds a ``code modality'' to the challenge but also imposes the requirement for the model to be multilingual, since it has to understand and generate code in two PLs.
Our C2C task requires a model to translate code snippets from Java to Python. The choice of such a pair of PLs adds extra complexity to the problem since translation between statically- and dynamically-typed languages is more intricate than translation between PLs with the same type checking.
For training we proposed to use a dataset presented in~\cite{avatar}. AVATAR is a parallel corpus that consists of solutions written in Java and Python for 8,506 programming problems collected from competitive programming sites, online platforms, and open source repositories. We used solutions of 6,807 tasks from AVATAR for training, leaving 1,699 examples for the public part of the test set. The private test dataset was designed as follows: at first, Python snippets with a length corresponding to that of the 90th percentile of the AVATAR test set part written in Python (up to 282 tokens obtained after tokenization \cite{pytok}) were retrieved from the CodeNet~\cite{codenet} dataset; these code snippets were translated to Java by three annotators and then cross-checked; at the final stage, Java functions (not longer than 356 tokens, which matches the 90th percentile of the public test requests' lengths) were back-translated to Python and cross-checked as well to ensure that the Python snippets generate the same outputs as the source functions when given the same inputs. The resulting number of Java-Python pairs is 322.
CodeBLEU~\cite{CodeBLEU} is selected as an evaluation metric for this task.
\subsection{Subtask 2 - Handwritten Text Recognition}
Handwritten Text Recognition is the task that naturally combines image and text modalities; the model is given an image with a handwritten piece of text in Russian or English and is required to transcribe it into digital text as an output. The dataset for this task was manually collected and annotated; it is composed of examples from school notebooks. The training data consist of 66,599 images of words written in Russian (participants of the Challenge may use open datasets with forms of handwritten English text, e.g., the IAM Handwriting Database \cite{htrdb}). The public test set includes 14,973 images: 5,973 in English and 9,000 in Russian. The private test part consists of 12,556 images, 5,494 of which are in English and 7,062 – in Russian.
In total, our new handwritten dataset contains 82,661 images of Russian words, which makes it the largest Russian handwritten dataset in the world so far. We have also released this dataset \cite{htrdatasets} for the benefit of the research community.
The evaluation metric for this task is string accuracy - the proportion of cases in which the predicted text (string) coincides with the ground truth transcription.
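For concreteness, the metric reduces to a few lines of Python (a minimal sketch; whether any normalisation is applied to the predicted strings before comparison is an assumption on our side):
\begin{verbatim}
def string_accuracy(predictions, references):
    # fraction of exact matches between predicted and
    # ground truth transcriptions
    assert len(predictions) == len(references)
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

# string_accuracy(["cat", "dgo"], ["cat", "dog"]) -> 0.5
\end{verbatim}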
\subsection{Subtask 3 - Zero-shot Object Detection}
The ZsOD task poses the following problem: the model should accurately predict bounding boxes for various objects depicted in the images, given descriptions of these objects in natural language \cite{xiuye2021ZsOD}. In our case, the common computer vision task of object detection is complicated by the fact that there is no set of predefined classes to choose from – a model is expected to detect classes not present in the training set (i.e. in a zero-shot regime). During inference, a model receives image-query pairs; a query is formatted as a list of textual descriptions (in Russian or English) of objects to detect. The query may contain entities that are absent in the image; a model should predict an empty list as a bounding box for such objects.
The public test dataset is formed from a part of the VisualGenome~\cite{visualgenome} dataset (1,000 examples); the set of classes in it was hidden from the participants. Region descriptions from VisualGenome are used as positive classes (descriptions are normalized: reduced to lowercase; non-printable characters are removed, etc.; boxes related to the same entity are combined under a single description); negative classes are formed by replacing some objects/attributes in the description with those that are missing in the photo. For example, ``a grey chair'' is replaced by ``a \textit{pink} chair''. Also, descriptions of objects belonging to the same domain as the correct classes are used as negative examples: if the photo shows a street, then negative examples may include descriptions such as ``tall green bricks wall'', ``shingled home in distance'', ``food stand in the street'' (provided, of course, that the described objects are not in the photo). The images for the private test set were either extracted from the YFCC100M dataset~\cite{yfcc} or crawled from the Internet. In total, 827 images were annotated with positive (descriptions of objects which are present in the photo) and negative (descriptions of missing objects) labels by 10 annotators. The number of positive classes varies from 7 to 10; the same holds true for the negative ones. For a specific image, descriptions can be either in English or in Russian. There can be more than one bounding box for a particular description in the queries; a perfect model should predict all of them.
The F1-score metric is used for evaluation. Refer to the section~\ref{sec:f1-zsod} for more details.
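While the exact matching protocol is given in section~\ref{sec:f1-zsod}, the following Python sketch illustrates the general shape of such an evaluation; the greedy matching and the IoU threshold of 0.5 are our assumptions, not necessarily the official protocol:
\begin{verbatim}
def iou(a, b):  # boxes as (x, y, w, h)
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def f1_score(pred_boxes, gt_boxes, thr=0.5):
    # greedily match each prediction to an unmatched ground truth
    matched, tp = set(), 0
    for p in pred_boxes:
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(pred_boxes) - tp, len(gt_boxes) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
\end{verbatim}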
\subsection{Subtask 4 - Visual Question Answering}
VQA is a classical multi-modal task that requires the model to understand a textual question and generate an answer to it based on the corresponding image. The peculiarity of the problem is that the questions are not homogeneous: a correct answer can either consist of several words, or be monosyllabic (a ``yes/no'' answer) or be a number. It is assumed that only one answer per question is required. As with other tasks, the model should be bilingual in order to perform well, since questions can be expressed in both English and Russian and the answer is expected to be in the same language except when the question concerns the text on the image. For example, when the question is ``What is written on the T-shirt?'' the answer should be in the same language in which the text is written.
The public test dataset consists of questions in both Russian and English: the Russian-language part comprises translated examples from the first 10 thousand samples of the validation part of the VQA v2 dataset, while the English part contains the next 10 thousand original samples from the same dataset. The public test set size is 5,446 examples. The private test set was compiled similarly to the one for the ZsOD task, except for the nature of the annotation: for each image (in total, 1,000 images), 6 questions in Russian or English and corresponding answers were formulated, resulting in 6,000 samples. The intersection with the private test set for the ZsOD task is 724 images.
The evaluation metric for this task is accuracy. Each question has a list of possible correct answers; if the prediction matches at least one of the ground truth answers, it is considered true positive.
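In code, the check can be expressed as below (lowercasing and whitespace stripping are assumed normalisations; the official evaluation may differ in such details):
\begin{verbatim}
def vqa_accuracy(predictions, answer_lists):
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) in {norm(a) for a in answers}
               for p, answers in zip(predictions, answer_lists))
    return hits / len(predictions)

# vqa_accuracy(["two", "cat"], [["2", "two"], ["dog"]]) -> 0.5
\end{verbatim}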
\section{Baseline}
\begin{figure*}
\includegraphics[width=\textwidth]{FBC_Baseline_final.jpg}
\caption{Baseline architecture}
\label{baselinearch}
\end{figure*}
We provide a concept \cite{fbconcept} of a single model that is trained on several tasks related to different modalities (visual, audio and text). The concept is inspired by a work \cite{lu2021pretrained} that examines the ability of pretrained language models based on the Transformer architecture to form qualitative representations of arbitrary data sequences – thus generalizing to other modalities with minimal finetuning. The basis of the architecture proposed in the concept is the pretrained GPT-2 \cite{radford2019language} language model; experiments are carried out both with a ``frozen'' model (with only the output layer being finetuned), and with a model in which all layers are trained on three modalities simultaneously.
We build our baseline solution also on top of Frozen Pretrained Transformer. The overall architecture can be seen on Figure \ref{baselinearch}. The core, the ``shared brain'' of the whole pipeline is GPT-2 Large, pretrained on natural language; each type of data for a particular task undergoes its specific transformations in order to match the GPT-2's input format, and also has its specific head to generate predictions in accordance with the task. The input and output layers for each of the subtasks are described below.
It is worth mentioning that one can use any of the so-called foundation models (see, e.g., the in-depth report \cite{foundationmodels}) instead of GPT-2 as the Fusion Brain Core (see Figure \ref{fig:conceptfusionbrain}). Following the researchers from Stanford University CRFM, we define foundation models as models trained on broad data at scale such that they can be adapted to a wide range of downstream tasks. Notable examples of such models are BERT \cite{devlin2019bert}, BART \cite{lewis2019bart}, T5 \cite{raffel2020exploring}, GPT-3 \cite{brown2020language}, CLIP \cite{clip_open_ai}, DALL-E \cite{ramesh2021zeroshot}.
\subsection{C2C (code)}
As code is similar to natural language (although it is certainly more structured; the problem of choosing the best representation of source code goes beyond the scope of this work), no major transformations are needed in order to prepare the data for processing with GPT-2. The task is solved in a decoder-only machine translation manner: during training, the source sequence (code snippet in Java) is concatenated with the target one (in Python) through the SEP token; the resulting sequence is fed into the GPT-2 with an LM head on top in order to minimize the Categorical Cross-Entropy (CCE) loss \cite{Rubinstein99thecross-entropy}. When trained, the model auto-regressively generates Python code given a Java function.
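A minimal sketch of one training step with the HuggingFace API is given below; reusing the EOS token as the SEP token and masking the loss on the source side are our assumptions, and the snippets are placeholders:
\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

java_snippet = "static int add(int a, int b){ return a + b; }"
python_snippet = "def add(a, b):\n    return a + b"

java_ids = tokenizer(java_snippet)["input_ids"]
py_ids = tokenizer(python_snippet)["input_ids"]
sep = [tokenizer.eos_token_id]  # EOS reused as SEP (assumption)

input_ids = torch.tensor([java_ids + sep + py_ids + sep])
labels = input_ids.clone()
labels[0, :len(java_ids) + 1] = -100  # CCE loss on the target only

loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
\end{verbatim}
At inference, \verb|model.generate()| is called on the Java snippet plus the separator to produce the Python translation auto-regressively.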
\subsection{HTR (image)}
It is somewhat remarkable that images can also be processed using a language model and the proposed method. At first, raw images are subjected to smart resizing with proportions being preserved and empty space being padded; these resized images are then converted into vertical patches with full height and width equal to 8 pixels: $3 \times H_0 \times W_0 \rightarrow 3 \times 128 \times 512 \rightarrow 64 \times (128 \times 8 \times 3)$. Image patch features are extracted with a linear projection layer in order to match the size of the GPT-2 embedding space (1280) before being processed with GPT-2. The transformer outputs are then passed through LSTM and linear layers. The training process is based on the Connectionist Temporal Classification (CTC) loss \cite{ctc} that shows high performance in handwritten text recognition task \cite{shonenkov2021stackmix,de2019no,michael2019evaluating,DBLP:journals/corr/abs-2103-09354}.
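The patching itself is a pure tensor reshape; a Python sketch (batch size 1, PyTorch) could look as follows, where the use of \verb|unfold| is our choice of implementation, not necessarily the one used in the baseline code:
\begin{verbatim}
import torch
import torch.nn as nn

x = torch.rand(1, 3, 128, 512)       # resized and padded image
patches = x.unfold(3, 8, 8)          # -> 1 x 3 x 128 x 64 x 8
patches = patches.permute(0, 3, 1, 2, 4).reshape(1, 64, 3 * 128 * 8)

proj = nn.Linear(3 * 128 * 8, 1280)  # match the GPT-2 embedding size
embeddings = proj(patches)           # 1 x 64 x 1280, fed into GPT-2
# GPT-2 outputs then go through LSTM + linear and nn.CTCLoss
\end{verbatim}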
\subsection{VQA and ZsOD (image + text)}
The proposed pipelines for solving VQA and ZsOD tasks are similar. Raw images are resized, processed with a small convolutional backbone (ResNet-18) \cite{he2015deep}, Conv2D layer with a kernel size equal to $1$ and Flatten layer in order to match the size of the embedding space before processing with GPT-2:
$3 \times H_0 \times W_0 \rightarrow 3 \times 224 \times 224 \rightarrow (7 \times 7) \times 512 \rightarrow (7 \times 7) \times 1280$. Texts are converted to tokens with the pretrained GPT-2 tokenizer and processed with token and position embeddings. The transformer outputs (both for text tokens and the image feature map) are then projected with a linear layer to a shared semantic space using the InfoNCE loss \cite{oord2019representation}, as in CLIP \cite{clip_open_ai}. The interaction of the projected multimodal features takes place in the Multi-Modality Cross-Attention (MMCA) mechanism \cite{Wei_2020_CVPR}. The processing described above, as well as the weights, is common to both tasks, but the InfoNCE loss is used only for text-image pairs from the ZsOD input.
In the case of VQA, text projections are used as queries (Q), while image feature map projections are used as keys (K) and values (V). The output of the MMCA blocks is passed through a linear layer in order to get a projection corresponding to the dimension of the vocabulary. The CCE loss is used when adjusting model weights during training. The answer is generated auto-regressively.
In the case of ZsOD, it is vice versa: image feature map projections are used as Q, and text projections are used as K and V. The output of the MMCA blocks is passed through an adaptive max pooling layer to reduce the number of resulting bounding boxes per text query to 8 items. The bounding box predictions in the format (x, y, w, h, probability score) are generated using MLP layers with the Binary Cross-Entropy (BCE) loss \cite{Rubinstein99thecross-entropy}, the Generalized Intersection over Union loss \cite{Rezatofighi_2018_CVPR}, and the L1 loss \cite{mae_article}.
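As a stand-in for the MMCA blocks, plain multi-head cross-attention already conveys the asymmetry between the two tasks (the number of heads below is an arbitrary assumption):
\begin{verbatim}
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=1280, num_heads=8,
                             batch_first=True)
text = torch.rand(1, 20, 1280)   # projected text tokens
image = torch.rand(1, 49, 1280)  # projected 7 x 7 feature map

# VQA: text queries attend over the image
fused_vqa, _ = attn(query=text, key=image, value=image)

# ZsOD: image queries attend over the text
fused_zsod, _ = attn(query=image, key=text, value=text)
\end{verbatim}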
\section{Experiments}
The main goal of our experiments is to compare metrics of models trained separately for each task and a model trained on all tasks at once (Fusion). In the Fusion experiments, a task-type balance sampler is used to avoid unbalanced learning, and samples from different tasks can appear in the same mini-batch before performing the back-propagation step. The AdamW optimizer and the OneCycleLR scheduler are used for optimization. All parameters for all experiments (single and fusion tasks) are equal: warmup 0.1, initial lr 4e-6, max lr 4e-5, final lr 2e-7, weight decay 1e-2, beta coefficients (0.9,0.999), 10 million samples, batch size 4, 16xV100 32Gb GPUs.
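In PyTorch terms, this setup maps onto the following configuration; the translation of the initial/final learning rates into \verb|div_factor| and \verb|final_div_factor|, and the step count, are our reading of the parameters above:
\begin{verbatim}
import torch

# model: the multi-task baseline of Figure 2
opt = torch.optim.AdamW(model.parameters(), lr=4e-6,
                        betas=(0.9, 0.999), weight_decay=1e-2)
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt,
    max_lr=4e-5,
    total_steps=10_000_000 // 4,  # 10M samples, batch size 4
    pct_start=0.1,                # warmup 0.1
    div_factor=10,                # initial lr = 4e-5 / 10 = 4e-6
    final_div_factor=20,          # final lr = 4e-6 / 20 = 2e-7
)
\end{verbatim}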
The results of our experiments are introduced in Table~\ref{tab:privatescores}. The total score is the sum of the scores for the four subtasks, with the exception of the CodeBLEU metric, which is multiplied by 0.01 (refer to the section~\ref{sec:overall} for more details). An interesting observation is that the Fusion experiment exhibited fewer over-fitting problems.
\begin{table}[H]
\centering
\small
\begin{tabularx}{0.47\textwidth}{@{\extracolsep{\fill}}lccccc}
\toprule[1.2pt]
\addlinespace[0.5em]
\shortstack{training \\ setup} & \shortstack{\textbf{C2C} \\ \textbf{CodeBLEU}} & \shortstack{\textbf{HTR} \\ \textbf{Acc}} & \shortstack{\textbf{ZsOD} \\ \textbf{F1}} & \shortstack{\textbf{VQA} \\ \textbf{Acc}} &
\textbf{Overall} \\
\midrule
\addlinespace[0.5em]
Single-task & 0.34 & \textbf{0.63} & 0.17 & 0.25 & 1.39\\
\addlinespace[0.5em]
\hline
\addlinespace[0.7em]
\shortstack{Fusion} & \textbf{0.39} & 0.61 & \textbf{0.21} & \textbf{0.30} & \textbf{1.51}\\
\addlinespace[0.5em]
\bottomrule[1.2pt]
\end{tabularx}
\caption{Private scores for different training strategies \label{tab:privatescores}}
\end{table}
\subsection{Emissions reduction}
Recently, reporting energy and carbon metrics of training deep learning models has become common practice to promote energy-efficient research \cite{reportco2henderson, carbonpatterson}. In \cite{mlco2} the Machine Learning Emissions Calculator (ML CO2) is proposed, which estimates carbon emissions based on GPU type, hours spent on training, cloud provider, and region. This approach is very useful as it does not require reproducing the training process \cite{aigambit}. According to ML CO2, we estimate (see Table~\ref{tab:co2}) that training the model in the fusion setup generates almost one third less CO2eq (carbon-dioxide equivalent) than when training in a single-task regime, thus proving multi-task learning to be more energy-efficient and climate-friendly.
\begin{table}[H]
\centering
\small
\begin{tabularx}{0.47\textwidth}{@{\extracolsep{\fill}}lccc}
\toprule[1.2pt]
\addlinespace[0.5em]
\shortstack{training \\ setup} & \shortstack{\textbf{Training} \\ \textbf{time (hours)}} & \shortstack{\textbf{Training} \\ \textbf{params}} & \textbf{CO2 (kg)}\\
\midrule
\addlinespace[0.5em]
Single-task & 215.0 & 3,283,978,882 & 39.34\\
\addlinespace[0.5em]
\hline
\addlinespace[0.7em]
\shortstack{Fusion} & \textbf{150.5} & \textbf{988,272,474} & \textbf{27.45}\\
\addlinespace[0.5em]
\bottomrule[1.2pt]
\end{tabularx}
\caption{Training time, trainable parameters (summed over all 4 tasks) and estimated CO2 emissions
\label{tab:co2}}
\end{table}
\section{Conclusion}
In this paper we have presented the AI Journey 2021 Challenge called Fusion Brain \cite{fbchallenge} -- a competition dedicated to the creation of a unified architecture that could deal with different modalities and solve 4 tasks for vision, language and programming code: Code2code Translation, Handwritten Text Recognition, Zero-shot Object Detection, and Visual Question Answering. To test the participants' submissions, datasets for each task were created; we have also described how the data were prepared. To date, the Russian part of the proposed dataset for the HTR task is the largest Russian handwritten dataset in the world. We also came up with a task statement and created a leaderboard for the Fusion Brain Challenge. In total, 41 teams took part in the competition and made at least one submission, with 513 submissions overall (refer to \cite{dsworks} and section \ref{sec:private}). Moreover, according to our estimations, the proposed multi-task fusion approach proves to be more energy-efficient and therefore provides a CO2 emissions reduction.
\section{Acknowledgments}
We would like to thank Sber and SberCloud for granting the GPU-resources to experiment with different architectures and for supporting the Fusion Brain Challenge.
\bibliographystyle{plain}
Although the idea of inflation \cite{Starobinsky:1980te, Sato:1980yn, Guth:1980zm} (for a review, see \cite{Lyth:2009zz}) started from solving the problems (like the horizon and flatness problems) of the old hot big bang model, the key to distinguishing different inflation models lies in the primordial curvature perturbation $\zeta$, which provides the seeds for structure formation. Inflation inevitably generates some primordial curvature perturbation since we are living in a quantum world.
The primordial curvature perturbation generated from simple (single-field slow-roll) inflation models is adiabatic, almost scale invariant, and Gaussian. However, if the inflaton is responsible for generating the primordial curvature perturbation, the constraint of CMB normalization (i.e. $P_\zeta^{1/2} \simeq 5 \times 10^{-5}$) is so strong that we usually need to fine-tune the parameter(s) when building an inflation model. Furthermore, future experiments (like the PLANCK satellite) may detect large non-Gaussianity and hence rule out simple scenarios of inflation. This would point to a nontrivial way of generating the primordial curvature perturbation, and one of the promising ideas to generate large non-Gaussianity is through a curvaton \cite{Lyth:2001nq, Enqvist:2001zp, Moroi:2001ct}. Non-Gaussianity generated from the curvaton scenario can be described by the nonlinear parameter $f_{NL}$, which takes the form
\begin{equation}
\zeta=\zeta_g+\frac{3}{5}f_{NL}\zeta^2_g+\cdots,
\end{equation}
where $\zeta_g$ denotes the Gaussian part of $\zeta$. Currently the upper bound of $f_{NL}$ is roughly given by ($2\sigma$) \cite{Komatsu:2010fb}
\begin{equation}
|f_{NL}| \lesssim 100.
\end{equation}
In the near future, the PLANCK satellite will reduce the bound to $|f_{NL}|<5$ if non-Gaussianity is not detected.
The curvaton is supposed to be light\footnote{In the context of cosmology, ``light" means mass smaller than the Hubble parameter.} and subdominant during inflation. Because it is light, it can produce sizable quantum fluctuations which, when stretched outside the horizon during inflation, become classical perturbations. Because it is subdominant, the perturbations should be regarded as isocurvature perturbations after inflation when the curvaton field starts to oscillate. The curvaton is supposed to decay after inflaton decay and at the same time transform its isocurvature perturbation into curvature perturbation. If the curvature perturbation of the universe is from the curvaton, it could relax the constraints on inflation and lower the scale of inflation \cite{Dimopoulos:2002kt}. It would be interesting if we could identify a field from particle physics as the curvaton, and one of the candidates is a right-handed (RH) sneutrino in the framework of supersymmetry (SUSY). The idea of a right-handed sneutrino curvaton has already been mentioned in the original paper of the curvaton \cite{Lyth:2001nq} and has been considered in \cite{McDonald:2003xq, McDonald:2004by, Moroi:2002vx, Postma:2002et, Mazumdar:2004qv, Lin:2009yn, Lin:2009fk}. In this case, however, it is possible that we have more than one generation of the right-handed sneutrino and more than one curvaton. The calculation for two-curvaton decay has been considered in \cite{Assadullahi:2007uw}; however, the parameters used in that paper are not of direct use for particle physicists seeking possible candidates for the curvatons. In this paper we consider the decay width of the right-handed sneutrino curvaton in terms of the Yukawa couplings and masses. It turns out that there are six parameters in addition to the Hubble parameter. We then numerically scan the parameter space in order to find solutions.
This paper is organized as follows.
In section \ref{sec1}, we review the result of \cite{Lin:2009fk} in order to compare single- and two-curvaton decays in the succeeding sections. In section \ref{sec2}, we present the formalism we use for calculating two-curvaton decay. In section \ref{sec3}, we specify the parameter space and present our numerical results. Section \ref{sec4} is our conclusion. For completeness, there is also an Appendix at the end of the paper where we summarize the equations adopted from \cite{Assadullahi:2007uw}.
\section{Single Right-Handed Sneutrino Curvaton}
\label{sec1}
It is well known that in the standard model of particle physics, there is no good candidate to play the role of a curvaton. However, if we go beyond the standard model by imposing SUSY, there are plenty of scalar fields. In this section we review the idea of a right-handed sneutrino playing the role of the curvaton. We focus on the case in which the curvaton is subdominant in the energy density of the universe when it decays \cite{Lin:2009fk}, because large non-Gaussianity can be produced in this region of parameter space.
The superpotential of the RH neutrino is
given by
\begin{equation}
W_\nu=\lambda_\nu\Phi H_u L+\frac{m \Phi^2}{2},
\label{revision1}
\end{equation}
where $\Phi$ is the RH neutrino superfield, $H_u$ and $L$ are the
MSSM Higgs and lepton doublet superfields, and $m$ is the RH
neutrino mass. The canonical type-I seesaw mechanism gives the mass relation between light and heavy Majorana neutrino masses as
\begin{eqnarray}\label{numass}
m_{\nu} \sim \frac{\lambda^2_{\nu}v^2_{u}}{m}
\label{yukawac}
\end{eqnarray}
with $v_{u} \sim 10^2 \mbox{GeV} $ denoting the vacuum expectation value of $H_{u}$.
The mass squared differences revealed by the neutrino oscillation data, $\Delta m^2_{12} = 7.59^{+0.19}_{-0.21}\times10^{-5} {\rm eV^2}$ and $|\Delta m^2_{32}| = 2.43\pm0.13\times10^{-3} {\rm eV^2}$~\cite{pdg}, indicate that if we consider three generations of RH sneutrinos, the lightest left-handed neutrino mass can be very small while the masses of the other two generations are fixed due to the oscillation data. This means in principle we can have one Yukawa coupling arbitrarily small ($m_3 \sim 0$), while the other two Yukawa couplings should result in neutrino masses around $m_{1,2} \sim 0.1\mbox{eV}$ (with a mass difference $\Delta m_{12} \sim 10^{-2}\mbox{ eV}$, namely, ``inverted ordering''). On the other hand, we can also have ``normal ordering'', which implies $m_1 \sim 0$, $m_2 \sim 0.01\mbox{ eV}$, and $m_3 \sim 0.1\mbox{ eV}$. From Eq.~(\ref{yukawac}) we can see that for the case $m_\nu \sim 0.1\mbox{ eV}$, if we have $m \sim 10^{-6}M_P$ ($m \sim 10^{-8}M_P$), we need $\lambda_\nu \sim 10^{-1}$ ($\lambda_\nu \sim 10^{-2}$).
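As a quick numerical check of these estimates (taking the reduced Planck mass $M_P \simeq 2.4\times10^{18}\mbox{ GeV}$, which is our assumption about the convention used), Eq.~(\ref{numass}) gives for $m_\nu \sim 0.1\mbox{ eV}$ and $m \sim 10^{-6}M_P \simeq 2.4\times10^{12}\mbox{ GeV}$
\begin{equation*}
\lambda_\nu \sim \frac{\sqrt{m_\nu m}}{v_u} = \frac{\sqrt{(10^{-10}\mbox{ GeV})(2.4\times10^{12}\mbox{ GeV})}}{10^{2}\mbox{ GeV}} \simeq 0.15 \sim 10^{-1},
\end{equation*}
in agreement with the estimate above.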
The potential of the RH sneutrino $\sigma$ can be expressed as
\begin{equation}
V(\sigma)=\frac{1}{2}m^2\sigma^2.
\end{equation}
For simplicity, here we do not consider the Hubble-induced mass term. It is possible that the mechanism which suppresses the Hubble-induced mass term for the inflaton also suppresses that for the curvaton. For example, this is the case for D-term hybrid inflation \cite{Binetruy:1996xj}.
The decay rate for the RH sneutrino is
\begin{equation}
\Gamma=\frac{\lambda^2_\nu}{4\pi}m.
\end{equation}
The spectrum is given by
\begin{equation}
P^{1/2}_{\zeta_\sigma}=\frac{1}{3\pi}\Omega_{\sigma,D}\frac{H_\ast}{\sigma_\ast} \simeq 5 \times 10^{-5}
\label{eq1}
\end{equation}
We assume that at the time $t_o$ of curvaton oscillation, with
energy density $\rho_\sigma(t_o)=m^2\sigma^2_\ast/2$, the universe
is dominated by radiation (the decay products of inflaton) with
energy density $\rho_R(t_o)=3m^2 M_P^2$. At the time of
curvaton decay $t_D$, the energy density of the universe is given by
$\rho_R(t_D)=3\Gamma^2M_P^2=\rho_R(t_o)(a(t_o)/a(t_D))^4$. Therefore
$a(t_D)/a(t_o)=(m/\Gamma)^{1/2}$ and $\Omega_{\sigma, D}$ is given
by
\begin{equation}
\Omega_{\sigma,D} \equiv \left( \frac{\rho_\sigma}{\rho_{tot}} \right)_D = \frac{1}{6} \left(\frac{\sigma_\ast}{M_P}\right)^2 \left(\frac{m}{\Gamma}\right)^{1/2}=\frac{1}{6} \left(\frac{\sigma_\ast}{M_P}\right)^2 \frac{\sqrt{4 \pi}}{\lambda_\nu}.
\label{eq2}
\end{equation}
Throughout this paper, we always use a subscript ``$\ast$" to denote horizon exit.
When $\Omega_{\sigma,D}$ is small, the nonlinear parameter is given by
\begin{equation}
f_{NL}=\frac{5}{4 \Omega_{\sigma,D}}.
\label{eq3}
\end{equation}
Note that here $f_{NL}$ can never be negative because we assume the curvaton is subdominant when it decays. As we will see in the following section, this is not true in the two-curvaton case.
By using Eqs.~(\ref{eq1}), (\ref{eq2}), and (\ref{eq3}), we can obtain
\begin{equation}
\lambda_\nu M_P^2 =3.9 \times 10^3 \sigma_\ast H_\ast
\end{equation}
and
\begin{equation}
f_{NL}=(8.25 \times 10^3)\frac{H_\ast}{\sigma_\ast}.
\end{equation}
It is interesting to note that here the spectrum does not constrain the mass of the right-handed sneutrino. However, as we will see in the next section, the results do depend on the masses when we consider the two-curvaton case.
\section{Two Right-Handed Sneutrino Curvatons}
\label{sec2}
In our setup, we consider three generations of right-handed sneutrinos for the type-I seesaw model and for simplicity we assume the heaviest right-handed sneutrino (with a mass we denote as $m_c$) does not play any role in cosmology, namely $m_c>H_\ast$.\footnote{If gauge non-singlet scalar fields like a Higgs are light during inflation, the fields can establish an expectation value through quantum fluctuations. Through a Yukawa coupling, this gives an effective mass to the right-handed sneutrino field, as can be seen in Eq.~(\ref{revision1}). The effect is most significant for the heaviest right-handed sneutrino due to the largest Yukawa coupling. Even if $m_c<H$, it is possible that the heaviest right-handed sneutrino obtains an effective mass larger than the Hubble scale during inflation while the other right-handed sneutrinos have masses lighter than the Hubble scale.} Here we consider the case where the lighter two right-handed sneutrinos are curvatons (we call them curvaton $a$ and curvaton $b$) and investigate the effects on the primordial curvature perturbation.
The decay rates of the two curvatons are
\begin{equation}
\Gamma_a=\frac{\lambda_a^2}{4 \pi}m_{a} \quad \mbox{and} \quad \Gamma_b=\frac{\lambda_b^2}{4 \pi}m_{b}
\end{equation}
where $\lambda_a$ and $\lambda_b$ are the Yukawa couplings and $m_a$ and $m_b$ are their masses.
Quite generally, we consider
\begin{equation}
H_\ast > m_a > m_b > \Gamma_a > \Gamma_b.
\label{eq9}
\end{equation}
We assume that when both of the curvatons decay, the energy density of the universe is dominated by radiation, so that we can compare this with the single-curvaton case. A calculation of the energy density ratio $\Omega$ similar to Eq.~(\ref{eq2}) can be done for radiation $\gamma$, curvaton $a$, and curvaton $b$.
At curvaton $a$ decay (which is denoted by subscript ``1" throughout this paper),
\begin{eqnarray}
\Omega_{\gamma_0 1} &\simeq& 1, \nonumber \\
\Omega_{a1} &=& \frac{1}{6}\left(\frac{a_\ast}{M_P}\right)^2\left(\frac{m_a}{\Gamma_a}\right)^{1/2} = \frac{1}{6}\left(\frac{a_\ast}{M_P}\right)^2 \frac{\sqrt{4 \pi}}{\lambda_a}, \nonumber \\
\Omega_{b1} &=& \frac{1}{6}\left(\frac{b_\ast}{M_P}\right)^2\left(\frac{m_b}{\Gamma_a}\right)^{1/2} = \frac{1}{6}\left(\frac{b_\ast}{M_P}\right)^2 \left(\frac{4 \pi m_b}{\lambda_a^2 m_a}\right)^{1/2}.
\label{eq14}
\end{eqnarray}
where the subscript $\gamma_0$ denotes pre-existing radiation just before curvaton $a$ decay.
At curvaton $b$ decay (which is denoted by subscript ``2" throughout this paper),
\begin{eqnarray}
\Omega_{\gamma_1 2} &\simeq& 1 + \frac{1}{6}(\frac{a_{*}}{M_P})^2\frac{\sqrt{4\pi}}{\lambda_a}, \nonumber \\
\Omega_{b2}&=&\frac{1}{6}\left(\frac{b_\ast}{M_P}\right)^2\left(\frac{m_b}{\Gamma_b}\right)^{\frac{1}{2}}= \frac{1}{6}\left(\frac{b_\ast}{M_P}\right)^2 \frac{\sqrt{4 \pi}}{\lambda_b}.
\label{eq15}
\end{eqnarray}
where the subscript $\gamma_1$ denotes radiation just before
curvaton $b$ decay.
To linear order, the spectrum is given by \cite{Assadullahi:2007uw}
\begin{equation}
P_{\zeta(1)}=A^2P_{\zeta_{a(1)}}+B^2P_{\zeta_{b(1)}}
\end{equation}
where the parameters $A$ and $B$ can be found in the Appendix and
\begin{equation}
P^{1/2}_{\zeta_{a(1)}}=\frac{1}{3 \pi}\frac{H_{\ast}}{a_\ast} \;\;\; \mbox{and } \;\;\;P^{1/2}_{\zeta_{b(1)}}=\frac{1}{3 \pi}\frac{H_{\ast}}{b_\ast}.
\label{revision2}
\end{equation}
Here we use subscript ``(1)" to denote ``first order" and ``(2)" will be used to denote ``second order".
If we define $\beta \equiv a_\ast/b_\ast$, we obtain
\begin{equation}
P^{1/2}_{\zeta(1)}=\left[A^2+\beta^2 B^2\right]^{1/2}\frac{1}{3\pi}\frac{H_\ast}{a_\ast}.
\end{equation}
We can write the spectrum explicitly by inserting $A$ and $B$ to obtain
\begin{eqnarray}
P^{1/2}_{\zeta(1)}
&=&P^{1/2}_{\zeta(1)}(H_{*},a_{*},b_{*},m_a,m_b,\lambda_a,\lambda_b)
\nonumber\\&=& \frac{H_{*}}{3\pi}\sqrt\frac{\left[12M_P^2\sqrt{\pi}\frac{a_{*}}{\lambda_a}+4\pi\frac{{a_{*}}^3}{\lambda_a^2}+3\pi\sqrt{\frac{m_b}{m_a}}(\frac{a_{*}}{\lambda_a})(\frac{{b_{*}}^2}{\lambda_a})
\right]^2+\left[12M_P^2\sqrt{\pi}(\frac{b_{*}}{\lambda_b})+\pi{b_{*}}\left[\frac{a_{*}^2}{\lambda_a}(\frac{1}{\lambda_a}\sqrt{\frac{m_b}{m_a}}+\frac{3}{\lambda_b})+3\sqrt{\frac{m_b}{m_a}}\frac{b_{*}^2}{\lambda_a\lambda_b}\right]
\right]^2}{\left[4M_P^2+\sqrt{\pi}(\frac{a_{*}^2}{\lambda_a}+\sqrt{\frac{m_b}{m_a}}\frac{b_{*}^2}{\lambda_a})
\right]^2\left[12M_P^2+\sqrt{\pi}(4\frac{a_{*}^2}{\lambda_a}+3\frac{b_{*}^2}{\lambda_b})\right]^2}. \nonumber \\
\end{eqnarray}
This is subject to CMB normalization $P^{1/2}_{\zeta(1)} \sim 5 \times 10^{-5}$ at horizon exit.
To second order, the curvature perturbation is given by \cite{Assadullahi:2007uw}
\begin{equation}
\zeta \equiv \zeta_2=\zeta_{2(1)}+\frac{1}{2}\zeta_{2(2)}=\left[A\zeta_{a(1)}+B\zeta_{b(1)}\right]+\frac{1}{2}\left[C\zeta^2_{a(1)}+D\zeta^2_{b(1)}+E\zeta_{a(1)}\zeta_{b(1)}\right]
\label{revision3}
\end{equation}
where $\zeta_2$ is the total curvature perturbation after the second curvaton decay. As in Eq.~(\ref{revision2}), we use subscript ``(1)" to denote ``first order" and ``(2)" to denote ``second order". The first order part $\zeta_{2(1)}$ is Gaussian and the second order part $\zeta_{2(2)}$ is non-Gaussian.
The parameters $C$, $D$, $E$ can be found in the Appendix
and the nonlinear parameter is
\begin{equation}
f_{NL}=\frac{5}{6}\frac{CA^2+\frac{1}{2}\beta^2 EAB+\beta^4 D B^2}{(A^2+\beta^2 B^2)^2}.
\end{equation}
\section{Numerical Results}
\label{sec3}
There are six parameters: $a_{*},b_{*},m_a,m_b,\lambda_a,\lambda_b$ in addition to the Hubble parameter $H_{*}$, and one constraint (CMB normalization). We tackle the problem numerically by scanning the parameter space. First of all, we assume the inflaton contribution to the curvature perturbation is small; this implies
\begin{eqnarray}
P^{\frac{1}{2}}_{\zeta_{inf}}=\frac{1}{2\sqrt{2}\pi}\frac{H_*}{\sqrt{\epsilon_H}M_P}<5\times10^{-5},
\end{eqnarray}
where $\epsilon_H \equiv -\dot{H}/H^2$. For a typical value of $\epsilon_H\sim0.01$ at horizon exit, we have the bound for the Hubble parameter
\begin{eqnarray}
H_\ast<\sqrt{2}\pi\times10^{-5}M_P.
\end{eqnarray}
We will numerically find solutions by making plots for the two cases $H_\ast=10^{-5}M_P$ and $H_\ast=10^{-6}M_P$ in this paper.
The ranges of the curvaton field values at Hubble exit are chosen to be
\begin{equation}
H_\ast<a_\ast<M_P \quad \mbox{and} \quad H_\ast<b_\ast<M_P.
\end{equation}
The lower bound is from the requirement that the classical field value is larger than its fluctuation and the upper bound is from the requirement that the curvaton would not drive a second stage of inflation.
The masses of the curvatons are
\begin{equation}
10^{-15}M_P<m_a<H_\ast \quad \mbox{and} \quad 10^{-15}M_P<m_b<m_a.
\end{equation}
The lower bound is from the fact that the mass of the RH sneutrino should be larger than its soft mass, which is assumed to be at the TeV scale, and we also use the constraint from Eq.~(\ref{eq9}). The decay rate is also constrained by Eq.~(\ref{eq9}),
\begin{eqnarray}
\Gamma_a=\frac{\lambda_a^2m_a}{4\pi}<m_b,
\end{eqnarray}
therefore we choose
\begin{eqnarray}
10^{-10}<\lambda_a<\min\{\sqrt{\frac{m_b}{m_a}4\pi},1\}.
\end{eqnarray}
And again we require
\begin{eqnarray}
\Gamma_a=\frac{\lambda_a^2}{4\pi}m_a>\Gamma_b=\frac{\lambda_b^2}{4\pi}m_b,
\end{eqnarray}
therefore we choose
\begin{eqnarray}
10^{-10}<\lambda_b<\min\{\lambda_a\sqrt{\frac{m_a}{m_b}},1\}.
\end{eqnarray}
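A minimal Python sketch of the scan is given below (in units $M_P=1$); it transcribes the first-order spectrum $P^{1/2}_{\zeta(1)}$ written out above and keeps parameter points within a $5\%$ window of the CMB normalization $5\times10^{-5}$. The log-uniform sampling and the acceptance tolerance are our choices, not necessarily those used for the figures:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H = 1e-5  # Hubble parameter in units of M_P

def loguniform(lo, hi):
    return np.exp(rng.uniform(np.log(lo), np.log(hi)))

def spectrum(a, b, ma, mb, la, lb):
    r, sp = np.sqrt(mb / ma), np.sqrt(np.pi)
    n1 = 12*sp*a/la + 4*np.pi*a**3/la**2 + 3*np.pi*r*a*b**2/la**2
    n2 = 12*sp*b/lb + np.pi*b*(a**2/la*(r/la + 3/lb)
                               + 3*r*b**2/(la*lb))
    d1 = 4 + sp*(a**2 + r*b**2)/la
    d2 = 12 + sp*(4*a**2/la + 3*b**2/lb)
    return H/(3*np.pi)*np.sqrt(n1**2 + n2**2)/(d1*d2)

solutions = []
while len(solutions) < 100:
    a, b = loguniform(H, 1), loguniform(H, 1)
    ma = loguniform(1e-15, H)
    mb = loguniform(1e-15, ma)
    la = loguniform(1e-10, min(np.sqrt(4*np.pi*mb/ma), 1))
    lb = loguniform(1e-10, min(la*np.sqrt(ma/mb), 1))
    if abs(spectrum(a, b, ma, mb, la, lb)/5e-5 - 1) < 0.05:
        solutions.append((a, b, ma, mb, la, lb))
\end{verbatim}
The nonlinear parameter $f_{NL}$ is then evaluated on the accepted points from the second-order coefficients given in the Appendix.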
We plot our results in Figs.~\ref{fig1}-\ref{fig3} for $H_\ast=10^{-5}M_P$ and Figs.~\ref{fig4}-\ref{fig6} for $H_\ast=10^{-6}M_P$. For comparison, we also plot the results from the single-curvaton case. Naively one may think it is possible that one of the curvatons plays no role at all and the results would be dominated by a single curvaton. However, those plots show that both of the curvatons can play some role in generating the curvature perturbation.
In all of the plots, each point represents a solution of all six parameters subjected to CMB normalization. We pick a few points and list them in Table~\ref{output}. In the plots, $f_{NL}$ is an output obtained after imposing CMB normalization. We notice that even though we assume the energy density of the curvatons is subdominant when they decay, we can still get small or negative (as well as large) $f_{NL}$. In addition, we also find that one of the Yukawa couplings can be as large as $\sim 0.1$ while the other coupling is relatively small. This is interesting because it is consistent with the neutrino oscillation data, which we discussed below Eq.~(\ref{yukawac}), under the assumption that the curvatons are RH sneutrinos. For example, in the second row of Table~\ref{output}, we would have the light neutrino masses corresponding to the heavy neutrino masses $m_a$ and $m_b$ as $m_{\nu a} \sim m_1 \sim 0.1\mbox{ eV}$ and $m_{\nu b} \sim m_3 \sim 10^{-9}\mbox{ eV}$ respectively from Eq.~(\ref{yukawac}). We can easily choose an $m_c>m_a$ with a larger Yukawa coupling to make $m_{\nu c} \sim m_2 \sim 0.11 \mbox{ eV}$ and obtain an inverted ordering of neutrino masses compatible with neutrino oscillation. As another example, for the seventh row of Table~\ref{output}, $m_{\nu a} \sim m_2 \sim 0.01\mbox{ eV}$ can be obtained, and by a similar argument we can obtain a normal ordering of neutrino masses. We would like to emphasize here that our goal is not to show that all of our parameter space is compatible with the neutrino oscillation data, because our results can generically apply to other two-curvaton models with similar decay rates. However, it is interesting enough if some of the points are indeed compatible with our assumption that two RH sneutrinos play the role of curvatons.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth, angle=-90]{Lambda_H_5.eps}
\caption{$\lambda_a$ versus $\lambda_b$.
The points outside the range of the single-curvaton bound represent the effect of considering two curvatons. As we can see here, one of the Yukawa couplings can be quite large.}
\label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth, angle=-90]{Field_H_5.eps}
\caption{$a_\ast$ versus $b_\ast$.}
\label{fig2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth, angle=-90]{Mass_H_5.eps}
\caption{$m_a$ versus $m_b$. The apparent slope is due to the assumption that $m_a>m_b$.}
\label{fig3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth, angle=-90]{Lambda_H_6.eps}
\caption{$\lambda_a$ versus $\lambda_b$. We found fewer points because it is more difficult to find solutions by our numerical computation for smaller $H_\ast$.}
\label{fig4}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth, angle=-90]{Field_H_6.eps}
\caption{$a_\ast$ versus $b_\ast$.}
\label{fig5}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth, angle=-90]{Mass_H_6.eps}
\caption{$m_a$ versus $m_b$.}
\label{fig6}
\end{figure}
\begin{center}
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|c|c|r|} \hline
$\frac{H_{*}}{M_P}$ & $\frac{a_{*}}{M_P}$ & $\frac{b_{*}}{M_P}$ & $\frac{m_a}{M_P}$ & $\frac{m_b}{M_P}$ & $\lambda_a$ & $\lambda_b$ & $f_{NL}$ \\ \hline \hline
$10^{-5}$ &$1.99\times10^{-2}$&$3.59\times10^{-4}$&$7.02\times10^{-6}$&$4.13\times10^{-6}$&$2.61\times10^{-5}$&$9.60\times10^{-6}$&$-0.95$ \\
$10^{-5}$ &$2.33\times10^{-2}$&$5.49\times10^{-3}$&$4.13\times10^{-6}$&$1.46\times10^{-6}$&$6.05\times10^{-1}$&$3.76\times10^{-5}$&$2.88$ \\
$10^{-5}$ &$2.87\times10^{-1}$&$4.39\times10^{-3}$&$4.06\times10^{-6}$&$9.38\times10^{-7}$&$1.53\times10^{-1}$&$2.46\times10^{-5}$&$4.15$ \\
$10^{-5}$ &$3.21\times10^{-1}$&$2.83\times10^{-3}$&$7.05\times10^{-6}$&$3.84\times10^{-6}$&$1.61\times10^{-1}$&$1.69\times10^{-5}$&$7.69$ \\
$10^{-5}$ &$3.10\times10^{-3}$&$6.38\times10^{-4}$&$6.07\times10^{-7}$&$4.12\times10^{-7}$&$3.17\times10^{-2}$&$6.14\times10^{-6}$&$42.0$ \\
$10^{-6}$ &$1.95\times10^{-3}$&$2.55\times10^{-4}$&$6.07\times10^{-8}$&$5.85\times10^{-9}$&$9.39\times10^{-8}$&$1.79\times10^{-7}$&$-1.12$ \\
$10^{-6}$ &$2.34\times10^{-2}$&$5.41\times10^{-4}$&$8.41\times10^{-8}$&$3.32\times10^{-8}$&$1.81\times10^{-2}$&$3.55\times10^{-7}$&$2.83$ \\
$10^{-6}$ &$1.83\times10^{-1}$&$5.23\times10^{-4}$&$6.30\times10^{-7}$&$6.12\times10^{-7}$&$7.12\times10^{-1}$&$3.72\times10^{-7}$&$3.31$ \\
$10^{-6}$ &$1.04\times10^{-1}$&$3.91\times10^{-4}$&$1.87\times10^{-7}$&$6.92\times10^{-8}$&$1.49\times10^{-2}$&$2.04\times10^{-7}$&$4.82$ \\
$10^{-6}$ &$2.72\times10^{-2}$&$5.17\times10^{-5}$&$6.72\times10^{-7}$&$6.44\times10^{-7}$&$2.36\times10^{-3}$&$4.20\times10^{-8}$&$52.0$ \\ \hline
\end{tabular}
\caption{\label{output} List of a few points of the numerical solutions.}
\end{table}
\end{center}
\section{Conclusion and Discussion}
\label{sec4}
We have explored the parameter space for generating the primordial curvature perturbation via two right-handed sneutrino curvaton decays in a framework with three generations of RH sneutrinos. We compared the results with the single-curvaton case and found that an additional RH sneutrino curvaton can have some effect and cannot be neglected. Notably, we may still get small or even negative $f_{NL}$ in the case that the energy density of the curvatons is subdominant when they decay.
As can be seen from Fig.~\ref{fig2} and Fig.~\ref{fig5}, the field values of the solutions may not be very small compared to the Planck mass (although they have to be smaller than the Planck mass in order to avoid driving a second stage of inflation). Therefore we have to suppress the nonrenormalizable terms in the superpotential. This can be achieved, for example, by judiciously assigning R-charge to the RH sneutrinos \cite{McDonald:2004by, Lin:2006xta}.
\input{sections/introduction}
\section{Related work}
\label{sec:related-work}
\input{sections/related-work}
\section{Fourier Transform}
\label{sec:fourier_transform}
\input{sections/fourier-transform}
\section{Proposed method}
\label{sec:proposed_method}
\input{sections/proposed-method}
\section{Experiments}
\label{sec:experiments}
\input{sections/experiments}
\section{Results and Discussion}
\label{sec:results_and_discussion}
\input{sections/results-and-discussion}
\section{Conclusions}
\label{sec:conclusions}
\input{sections/conclusions}
\section*{Acknowledgments}
The authors would like to thank the Scientific Computing Group (SCG) from the São Carlos Institute of Physics for the bright-field microscopy lab. This work was supported by the Brazilian research agency Conselho Nacional de Desenvolvimento Científico e Tecnológico - Brasil (CNPq), process 132795/2018-3. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
\bibliographystyle{plainnat}
\subsection{Image dataset}
Three light microscopy datasets were acquired with the ZEISS SteREO
Discovery.v20 and the ZEISS AxioLab A1 stereo microscopes from the Scientific
Computing Group (SCG) at São Carlos Institute of Physics (IFSC). Samples of blurred and sharp images from all three datasets are shown in Figure \ref{fig:datasets}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{images/datasets.png}
\caption{Examples of the proposed dataset images: blurred \textit{Callisia} \textbf{(a)}, sharp \textit{Callisia} \textbf{(b)}, blurred \textit{Tradescantia} \textbf{(c)}, sharp \textit{Tradescantia} \textbf{(d)} and blurred \textit{Cthenante} \textbf{(e)}, sharp \textit{Cthenante} \textbf{(f)}.}
\label{fig:datasets}
\end{figure}
The datasets contain images from leaf histological samples of the plants \emph{Callisia repens}, \emph{Tradescantia zebrina} and \textit{Cthenante oppenheimiana}, acquired with different focal planes and with different magnification levels.
In order to validate the results with correlation to a subjective quality index, each image was labeled as sharp or blurred, which respectively translates to \emph{eligible} and \emph{negligible} for the fusion process. The relevant properties of the datasets are summarized in Table \ref{tab:dataset_info}.
\begin{table}[ht]
\centering
\begin{tabular}{lcccc}
\toprule
Dataset & Images & Mag. ($\times$) & Sharp & Sequence\\
\midrule
\textit{Callisia} & 56 & 50 & 9 & 41 - 49\\
\textit{Tradescantia} & 66 & 200 & 2 & 50 - 51\\
\textit{Cthenante} & 55 & 100 & 16 & 30 - 45\\
\bottomrule
\end{tabular}
\caption{Information about the proposed datasets.}
\label{tab:dataset_info}
\end{table}
\subsection{Validation metrics}
Three objective metrics were chosen to evaluate the performance of the proposed method. The classification of the images on our datasets can be considered as a subjective quality score, and therefore the objective metrics for comparison should relate to it. According to \citet{wang2011information}, evaluation metrics such as the Pearson Linear Correlation Coefficient (PLCC), the Spearman's Rank Correlation Coefficient (SRCC) and the Kendall's Rank Correlation Coefficient (KRCC) are suitable for the case. For all correlation coefficients, higher values yield higher reliability to the objective IQA metric.
\subsection{Image Degradation and the Fourier Spectrum}
The Dirac delta function is an idealized impulse: an infinitely large value concentrated within an infinitesimally small interval. The continuous two-dimensional Dirac delta may be written as
\begin{equation}
\label{eqn:dirac_delta_function}
\delta^{2}(x,y)=
\begin{cases}
\infty, & \text{if } x^{2} + y^{2} =0\\
0, & \text{if } x^{2} + y^{2} \neq 0
\end{cases}
\end{equation}
\noindent subject to the constraint
\begin{equation}
\label{eqn:dirac_delta_constraint}
\int_{-\infty}^{\infty}
\int_{-\infty}^{\infty}
\delta^{2}(x,y)\,dx\,dy = 1.
\end{equation}
\noindent The discrete version of the Dirac delta function replaces the integral with an infinite sum. It is also useful to define the concept of Point Spread Function (PSF) of an imaging device: the response of the device to a two-dimensional impulse from a point light source, i.e. a point-shaped white object. The PSF appears as an extended blob in the image describing a single point object, and can be modeled mathematically as a low-pass kernel.
Digital images are created using imaging devices, e.g. an optical microscope. Those are capable of capturing information from the continuous scene and create a discrete representation, by means of sampling and quantization. The process of digital image formation can be represented by
\begin{equation}
\label{eqn:image_formation}
g(x,y) = f(x,y) \ast h(x,y) + \eta(x,y),
\end{equation}
\noindent where $f(x,y)$ is the original image (without any degradation), $g(x,y)$ is the image after all the degradation processes, $h(x,y)$ is the PSF of the imaging device and $\eta(x,y)$ is a function which describes the noise conditions in which the image was taken. The symbol $\ast$ denotes the convolution operation, which is the process of flipping a filter mask by $180^\circ$, moving it along the image and computing the sum of the products at each location \cite{gonzalez2006digital}. The convolution operation in equation \ref{eqn:image_formation} is defined by
\begin{equation}
\label{eqn:2d_discrete_convolution}
f(x,y) \ast h(x,y) =
\sum_{m=-a}^{a}
\sum_{n=-b}^{b}
f(m,n)h(x-m,y-n),
\end{equation}
\noindent where $a = (m-1)/2$ and $b = (n-1)/2$, given that the function $h(x,y)$ is considered to be a two-dimensional filter of size $m \times n$.
The Fourier spectrum describes the contribution of each frequency component, over a discrete range of frequencies, in the form of a distribution. After the transformation, the resulting Fourier spectrum of the image consists of a matrix of complex coefficients with the low-frequency components at its four corners. Usually, the applications require a shift of the first and third quadrants, and of the second and fourth quadrants, to the center of the matrix. The unshifted and shifted Fourier transforms of a grayscale airplane test image are shown in Figures \ref{fig:airplane_fft_shift}.(c) and \ref{fig:airplane_fft_shift}.(d), respectively.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{images/airplane_fft_shift.png}
\caption{Original image \textbf{(a)}, luminance grayscale converted image \textbf{(b)}, unshifted Fourier spectrum of the grayscale image \textbf{(c)} and shifted Fourier spectrum of the grayscale image \textbf{(d)}.}
\label{fig:airplane_fft_shift}
\end{figure}
The frequency profile can be efficiently computed by \linebreak zero-padding the grayscale image before the transform so that the resulting image is a square matrix with power-of-two dimensions. The Fast Fourier Transform (FFT) is a \emph{divide and conquer} algorithm to reduce the computational complexity of the DFT from $\mathcal{O}(n^{2})$ to $\mathcal{O}(n\log{}n)$, which needs a power-of-two input sample size \cite{gonzalez2006digital}. Subsequently, each quadrant of the resulting matrix with the coefficients was shifted in order to achieve the same configuration of Figure \ref{fig:airplane_fft_shift}.(d). Let the matrix of DFT coefficients be represented as a square of side $L = \max{(m,n)}$, $k = L / 2$ be the maximum radius value for circles within the square and $C = (k,k)$ be the center of the infinite set of concentric circles inscribed in the square. Each circle represents a mask above the spectrum and stands for a frequency band, which starts as zero in the center of the matrix and increases together with the radius, as shown in Figure \ref{fig:spectrum_bands}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.43]{images/spectrum_bands.png}
\caption{Frequency bands as rings of radius $\{r_{i}: i\in\mathbb{N}^{*}\}$ drawn over the 2D spectrum.}
\label{fig:spectrum_bands}
\end{figure}
Therefore, complex coefficients within the concentric circles of small radius tend to have more energy than those with radius values closer to $k$. Circles of increasing radius, i.e. $\{r_{i} : i \in \{1,2,\ldots,k\}\}$, cover all the frequency information of the image. The circles of radii $\{r_{i} : i \in \{k+1,k+2,\ldots\}\}$ comprise a very small area of the spectrum, and therefore may be disregarded. Blurred images, for example, exhibit more low-frequency components than high-frequency ones, since blur may be understood as a filtering procedure with a low-pass kernel.
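As an illustration of the steps above, the following is a minimal sketch, assuming NumPy, of how the padded and shifted spectrum can be computed; the function name and the use of \texttt{numpy.fft} are our choices, not necessarily those of the original implementation.
\begin{verbatim}
import numpy as np

def shifted_spectrum(gray):
    # Zero-pad the grayscale image to a power-of-two square,
    # compute the 2-D FFT and shift the DC term to the center.
    m, n = gray.shape
    L = 1 << int(np.ceil(np.log2(max(m, n))))
    padded = np.zeros((L, L))
    padded[:m, :n] = gray
    return np.fft.fftshift(np.fft.fft2(padded))
\end{verbatim}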
\subsection{Pre-processing}
\label{subsec:pre_processing}
The color space is a crucial feature
for the application since it
synthesizes the information from the image in a one-dimensional element.
In our method, the image undergoes a grayscale conversion with the luminance method \cite{ponti2016image}: a linear combination of the three channels of an image in a trichromatic space such as RGB, given by
\begin{equation}
\label{eqn:luminance}
I_{luminance} = 0.299R + 0.587G + 0.114B,
\end{equation}
\noindent where $I_{luminance}$ is the matrix that represents the grayscale converted image, $R$, $G$ and $B$ represent the matrices of the red, green and blue channels, respectively.
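A minimal NumPy sketch of equation \ref{eqn:luminance}; the channel ordering is an assumption.
\begin{verbatim}
import numpy as np

def luminance(rgb):
    # rgb: (H, W, 3) array with channels in R, G, B order.
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # (H, W) grayscale image
\end{verbatim}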
Next, the resolution of the resulting grayscale image is reduced. The images in our dataset have high resolution (2560 $\times$ 1920), which would make the subsequent processing unfeasible at full size. The resizing procedure consists of a bilinear interpolation, which uses the four nearest neighbors to estimate the intensity at a given location \cite{gonzalez2006digital}. It can be described by
\begin{equation}
\label{eqn:bilinear_interpolation}
I_{resized}(x,y) = ax + by + cxy + d.
\end{equation}
\noindent where the coefficients $a$, $b$, $c$ and $d$ are determined from the intensities of the four nearest neighbors.
The pre-processing step ends with image enhancement. Microscopy images are, in general, acquired in different illumination conditions as a result of the operator's setup, focus adjustment by moving the objective or the stage and even physical properties of the microscope itself like transmitted or reflected light. To overcome this and deliver a uniform image for the Fourier Transform, a mapping function is used to perform contrast enhancement. The chosen algorithm for this task is the Contrast Limited Adaptive Histogram Equalization (CLAHE). It consists of computing several local histograms and distributing the gray levels along the regions from where the histograms were computed. The distribution, in this case, is done within a threshold to keep homogeneous areas and to reduce the noise amplification that occurs in a standard Adaptive Histogram Equalization \cite{zuiderveld1994constrast}.
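For reference, CLAHE is available in common image libraries; the sketch below uses OpenCV, and the clip limit and tile grid shown are illustrative defaults, not the settings used in our experiments.
\begin{verbatim}
import cv2

def enhance_contrast(gray_u8):
    # CLAHE on an 8-bit grayscale image; parameters are
    # illustrative, not the values used in this work.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_u8)
\end{verbatim}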
\subsection{Fourier Spectrum Sampling}
\label{subsec:fourier_spectrum_sampling}
As described in section \ref{sec:fourier_transform}, concentric circles along the shifted Fourier spectrum may be drawn in order to retrieve information from each frequency band. This approach is rather theoretical since the number of masks that may be applied to the spectra is finite. Taking into account the pixel resolution of the input images, it makes sense to sample the information, otherwise the computational complexity and running times
of the algorithm for one image alone would be impractical.
To comprise as much information about each frequency band as possible, we propose to sample the spectrum by means of radial lines as masks, i.e. white antialiased lines are drawn over a matrix of zeros, which are then element-wise multiplied by the spectrum. The lines are created from the $(x_{c},y_{c})$ center of the spectrum to points in an approximate radial position, which is calculated by
\begin{equation}
\label{eqn:points_on_radii}
P(x,y) =
\left(
x_{c} + r_{j} \cos{a_{j}},\;
y_{c} + r_{j} \sin{a_{j}}
\right)
\end{equation}
\noindent with the set of angles $\{a_{j}\}$ in the radian form, computed as
\begin{align}
\label{eqn:angles}
\left\{
a_{j} : a_{j} =
\frac{j \pi}{180}
\right\}
&& j = \{0,5,...,100\}.
\end{align}
\noindent The outcome of equation
\ref{eqn:points_on_radii} is a
floating-point ordered pair, which is
rounded to the nearest integer value. The
antialiasing is achieved with a Gaussian filtering process. One example of all the generated lines is shown in figure \ref{fig:radial_masks}:
\begin{figure}
\centering
\includegraphics[
scale=0.275]
{images/radial_masks.png}
\caption{Final mask of radial lines.}
\label{fig:radial_masks}
\end{figure}
\noindent After the element-wise multiplication, the radial vectors result in arrays of complex coefficients that represent samples of the frequency profile of the image. The radial lines in Figure \ref{fig:radial_masks} have different lengths, hence the lengths of the resulting arrays differ, even with antialiasing. Therefore, the shortest length among all the vectors is taken as a limit: every vector is truncated to that length and the exceeding entries are discarded. The truncated vectors then go through an element-wise average, which results in a one-dimensional descriptor of the frequency spectrum. Hence, if $L$ is the side of the zero-padded square image, the dimension of the final feature vector is about $L/2$.
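The sampling step can be sketched as follows, assuming NumPy and omitting the Gaussian antialiasing for brevity; taking the magnitude of the complex coefficients is also an assumption. Here all profiles share the same length, so the truncation step is implicit.
\begin{verbatim}
import numpy as np

def radial_descriptor(spectrum):
    # spectrum: shifted square spectrum of side L
    L = spectrum.shape[0]
    c = L // 2                    # center of the spectrum
    radii = np.arange(c)          # one sample per radius unit
    profiles = []
    for j in range(0, 101, 5):    # angles a_j = j*pi/180
        a = np.deg2rad(j)
        xs = np.rint(c + radii * np.cos(a)).astype(int)
        ys = np.rint(c + radii * np.sin(a)).astype(int)
        profiles.append(np.abs(spectrum[ys, xs]))
    return np.mean(profiles, axis=0)   # length about L/2
\end{verbatim}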
\subsection{Statistical Analysis}
As a result of the feature extraction process described in sections \ref{subsec:pre_processing} and \ref{subsec:fourier_spectrum_sampling}, we have a low-dimensional and concise representation of the image that captures blur information. We propose a set of steps to analyze the dataset which relies on statistical tools and the mathematical properties of the Fourier spectrum.
Note that each feature vector is a distribution, with values in the range $[0,\infty)$. To use them as probability distributions suitable for techniques such as descriptive statistics and Bayesian inference, they must be mapped onto the probability space $[0,1]$. Hence, we apply the normalization operator $T$, written as
\begin{align}
\label{eqn:probability_operator}
T(x)_{i} = \frac{x_{i}}{\sum_{j=0}^{n-1}x_{j}}
&& i = \{0,1,...,n-1\},
\end{align}
\noindent where each value $x_{i}$ of the descriptor is mapped onto the probability $T(x)_{i}$.
Information embedded in the low-frequency components of the descriptor, which corresponds to the Dirac delta distribution within the point spread function of the imaging system, should be discarded. However, it should be done with caution, so that the remaining information is enough to represent the blur profile of the image. In order to properly discard the Dirac delta components from each descriptor, we propose to find an optimal threshold that allows the data to be ``cropped'', i.e. a subset of it will be taken as the final representation of the blur profile. The threshold is chosen to maximize the difference between the maximum and the minimum among a set of kurtosis values that represent each descriptor.
The optimal threshold is computed as follows. We start with a crop size equal to zero by computing the kurtosis of the entire set $\{x_{1},x_{2},...,x_{n}\}$ of each descriptor. The crop size is then incremented by 1, yielding the subset $\{x_{2},x_{3},...,x_{n}\}$. This process is repeated until the kurtosis of all crop sizes
is computed. The kurtosis is one of the probability
distribution shape statistics: a measure of how large the "tail" of the distribution is, such that smaller absolute values indicate that the distribution tends to be uniform. Kurtosis (Eq. \ref{eqn:kurtosis}) is defined as the ratio of the fourth moment (equation \ref{eqn:rth_moment} with $r = 4$) by the square of the variance (also equation \ref{eqn:rth_moment} with $r = 2$) \cite{zwillinger1999crc}
\begin{equation}
\label{eqn:rth_moment}
m_{r} =
\sum_{i=1}^{n}p_{i}(x_{i} - \bar{x})^{r}
\end{equation}
\begin{equation}
\label{eqn:kurtosis}
g_{2} = \frac{m_{4}}{(m_{2})^{2}} - 3.
\end{equation}
\noindent The $-3$ constant is due to Fisher's convention, in which the kurtosis of a normal distribution is zero. Algorithm \ref{alg:kurtosis_array} denotes this pre-processing step:
\begin{algorithm}
\caption{Kurtosis computation}
\label{alg:kurtosis_array}
\begin{algorithmic}[1]
\State // $X_{c \times n}$: dataset of $n$ descriptors with size $c \in C$, where \\ // $C = \{0,1,...,size(descriptor)\}$
\\
\State // $T(X)$: linear operator from equation \ref{eqn:probability_operator} to map the \\ // descriptors onto probability distributions
\\
\State $X \gets T(X)$
\State $A \gets zeros(c, n)$
\For {\textbf{each} crop size $c$ in $C$}
\For{\textbf{each} descriptor $i$ in $\{1,2,...,n\}$}
\State $A[crop][i] \gets$ \textbf{kurtosis}$\left(X[i].subset(0, crop)\right)$
\EndFor
\EndFor
\\
\Return $A$
\end{algorithmic}
\end{algorithm}
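A sketch of Algorithm \ref{alg:kurtosis_array} in Python, assuming SciPy; the descriptors are rows of the array \texttt{X} and are assumed to be already normalized by $T$. We stop one entry short of the full descriptor length so that the kurtosis is always computed on a non-trivial subset.
\begin{verbatim}
import numpy as np
from scipy.stats import kurtosis

def kurtosis_matrix(X):
    # X: (n, d) array, one normalized descriptor per row.
    n, d = X.shape
    A = np.full((d - 1, n), np.nan)
    for c in range(d - 1):        # crop the first c entries
        A[c] = kurtosis(X[:, c:], axis=1, fisher=True)
    return A
\end{verbatim}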
\noindent Next, we propose a procedure to compute the optimal threshold, described in algorithm \ref{alg:cut_threshold}. The best crop size is chosen such that the range of kurtosis values across the dataset is maximum. This allows the low-frequency information associated with the point spread function to be discarded without loss of blur information.
\begin{algorithm}[ht]
\caption{Find the optimal dataset variability threshold}
\label{alg:cut_threshold}
\begin{algorithmic}[1]
\State // $A_{c \times n}$: matrix with kurtosis values for all $n$ descriptors // that were computed at every crop size $c \in C$, where \\ // $C = \{0,1,...,size(descriptor)\}$
\\
\State $threshold \gets 0$
\State $maximum \gets 0$
\For {\textbf{each} crop size $c$ in $C$}
\State $row \gets \{A_{c,1},A_{c,2},...,A_{c,n}\}$
\State $a \gets$ \textbf{max}$(row)$
\State $b \gets$ \textbf{min}$(row)$
\\
\If{$a < 0$ or $b < 0$}
\State \textbf{continue}
\EndIf
\\
\State $range \gets a - b$
\\
\If{$range > maximum$}
\State $maximum \gets range$
\State $threshold \gets c$
\EndIf
\EndFor
\\
\Return $threshold$
\end{algorithmic}
\end{algorithm}
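The same procedure as a Python sketch, with the accumulator initialized to zero as discussed above.
\begin{verbatim}
import numpy as np

def optimal_threshold(A):
    # A: matrix from Algorithm 1, one row per crop size.
    threshold, best = 0, 0.0
    for c in range(A.shape[0]):
        hi, lo = np.nanmax(A[c]), np.nanmin(A[c])
        if hi < 0 or lo < 0:
            continue              # skip rows with negatives
        if hi - lo > best:
            best, threshold = hi - lo, c
    return threshold
\end{verbatim}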
With the optimal threshold, we can crop each descriptor without losing important frequency information. For each resultant vector, we
compute the interquartile range (IQR), which gives a measure of the spread for any distribution, whether or not it has a mean or variance. In a nutshell, it is the length of the interval that contains the middle half of the distribution
\cite{degroot2012probability}. Mathematically, it is the difference between the third ($Q_{3}$) and the first ($Q_{1}$) quartiles: the values that separate the lowest $25\%$ of the data from the highest $75\%$, and the highest $25\%$ from the lowest $75\%$, respectively \cite{devore2015probability}. The IQR is given by
\begin{equation}
\label{eqn:iqr}
IQR = Q_{3} - Q_{1}.
\end{equation}
\noindent Next, for each distribution, we have a measure of its variability, which represents the sharpness metric of the corresponding image. In the scope of this work, higher variability means a higher amount of details (high frequencies) in the image. Therefore, the images with a higher interquartile range value should be classified as relatively sharp in accordance with a threshold.
This is achieved through another transformation, named the $z$-score, which describes the location of an observation relative to the mean in units of the standard deviation. Given an arbitrary element $x$ of the distribution, a negative $z$-score shows that $x$ lies to the left of the mean, while a positive $z$-score indicates that $x$ lies to the right of the mean \cite{mendenhall2016statistics}. The $z$-score is given by
\begin{equation}
\label{eqn:z-score}
z = \frac{x - \mu}{\sigma},
\end{equation}
\noindent where $\mu$ is the mean and $\sigma$ is the standard deviation of the distribution. Finally, the threshold to classify the images as sharp or blurred comes from the $z$-score, which measures how far an observation is from the mean of the dataset in terms of standard deviation units.
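Putting the last two steps together, a hedged sketch of the final classification; the zero cut-off on the $z$-score is an assumption, not a value prescribed in the text.
\begin{verbatim}
import numpy as np

def classify_sharpness(descriptors, z_cut=0.0):
    # descriptors: (n, d) array of cropped descriptors.
    q1 = np.percentile(descriptors, 25, axis=1)
    q3 = np.percentile(descriptors, 75, axis=1)
    iqr = q3 - q1                       # variability per image
    z = (iqr - iqr.mean()) / iqr.std()  # z-score of each IQR
    return z > z_cut                    # True -> sharp
\end{verbatim}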
\section{Introduction}
A famous exercise of \cite{St1} proposes to the reader to show
that every item of a long list of combinatorial structures
provides a possible interpretation of the well-known sequence of
Catalan numbers. In addition, since its appearance, many new
combinatorial instances of Catalan numbers (in part due to Stanley
as well \cite{St2}) have been presented by several authors
(\cite{BEM,Cl,MM,MaSh,MaSe}, to cite only a few). What makes
Stanley's exercise even more scary is the request for an explicit
bijection for each couple of structures: even the more skillful
and bold student will eventually give up, frightened by such a
long effort.
The motivation of the present work lies in the attempt of making
the above job as easier as possible. We propose yet another
instance of Catalan numbers, by showing that they count pairs of
binary relations satisfying certain axioms. Of course this is not
the first interpretation of Catalan numbers in terms of binary
relations. For instance, a well-known appearance of Catalan
numbers comes from considering the so-called \emph{similarity
relations}; these have been introduced by Fine \cite{F} and
further studied by several authors \cite{GP,M,Sh}. However, what
we claim to be interesting in our setting is that fairly every
known Catalan structure (or, at least, most of the main ones) can
be obtained by suitably interpreting our relations in the
considered framework. From the point of view of our student, this
approach should result in a quicker way to find bijections:
indeed, it will be enough to guess the correct translation of any
two Catalan structures in terms of our binary relations to get, as
a bonus, the desired bijection. We hope to make this statement
much clearer in section \ref{instances}, where, after the
definition of a \emph{Catalan pair} and the proofs of some of its
properties (pursued in section \ref{def-prop-enum}), we
explicitly describe some representations of Catalan pairs in terms
of well-known combinatorial objects.
The rest of the paper is devoted to show that Catalan pairs are
indeed a concept that deserves to be better investigated. In
section \ref{ReS} we show that any Catalan pair is uniquely
determined by its second component, and we also provide a
characterization of such a component in terms of forbidden
configurations (which, in our case, are forbidden posets). In
addition, we look at what happens when the second component of a
Catalan pair has some specific properties, namely when it
determines a connected posets or a (possibly distributive)
lattice. We also observe that the first component of a Catalan
pair does not uniquely determine the pair itself, and we give a
description of Catalan pairs having the same first component.
Finally, we propose some generalizations of Catalan pairs: in
section \ref{gener1} we see how to modify the axioms in order to
obtain pairs of relations associated with other important integer
sequences, such as Schr\"oder numbers and central binomial
coefficients; moreover we propose a slight, and very natural,
modification of the crucial axiom in the definition of a Catalan
pair and give an account on what this fact leads to.
Throughout the paper, the reader will find a (not at all
exhaustive) series of open problems. We hope they can serve to
stimulate future research on these topics.
\section{Catalan pairs}\label{def-prop-enum}
In what follows, given any set $X$, we denote
$\mathcal{D}=\mathcal{D}(X)$ the \emph{diagonal} of $X$, that is
the relation $\mathcal{D}=\{ (x,x)\; |\; x\in X\}$. Moreover, if
$\theta$ is any binary relation on $X$, we denote by
$\overline{\theta}$ the \emph{symmetrization} of $\theta$, i.e.
the relation $\overline{\theta}=\theta \cup \theta^{-1}$.
\subsection{Basic definitions}
Given a set $X$ of cardinality $n$, let $\mathcal{O}(X)$ be the
set of strict order relations on $X$. By definition, this means
that $\theta \in \mathcal{O}(X)$ when $\theta$ is an irreflexive
and transitive binary relation on $X$. In symbols, this means that
$\theta \cap \mathcal{D}=\emptyset$ and $\theta \circ \theta
\subseteq \theta$.
Now let $(S,R)$ be an ordered pair of binary relations on $X$. We
say that $(S,R)$ is a \emph{Catalan pair} on $X$ when the
following axioms are satisfied:
\begin{itemize}
\item[(i)] $S\in \mathcal{O}(X)$; \hfill (\textbf{ord S})
\item[(ii)] $R\in \mathcal{O}(X)$; \hfill (\textbf{ord R})
\item[(iii)] $\overline{R} \cup \overline{S}=X^2 \setminus
\mathcal{D}$; \hfill (\textbf{tot})
\item[(iv)] $\overline{R} \cap \overline{S}=\emptyset$; \hfill
(\textbf{inters})
\item[(v)] $S\circ R\subseteq R$; \hfill (\textbf{comp})
\end{itemize}
\bigskip
\emph{Remarks.}\begin{enumerate}
\item Observe that, since $S$ and $R$ are both strict order
relations, the two axioms (\textbf{tot}) and (\textbf{inters}) can
be explicitly described by saying that, given $x,y\in X$, with $x
\neq y$, exactly one of the following holds: $xSy$, $xRy$, $ySx$,
$yRx$.
\item Axiom (\textbf{comp}) could be reformulated by using strict
containment, i.e. $S\circ R\subset R$. In fact, it is not
difficult to realize that equality cannot hold since $X$ is
finite. However we prefer to keep our notation, thus allowing to
extend the definition of a Catalan pair to the infinite case.
\item From the above axioms it easily follows that $S\cap
S^{-1}=\emptyset$.
\end{enumerate}
\bigskip
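For small ground sets, the axioms can be checked mechanically. The following is a brute-force Python sketch, where relations are encoded as sets of ordered pairs and we read $x(S\circ R)z$ as $xSyRz$ for some $y$, in accordance with the proofs below.
\begin{verbatim}
from itertools import product

def is_strict_order(T, X):
    irrefl = all((x, x) not in T for x in X)
    trans = all((x, z) in T for (x, y) in T
                for (w, z) in T if w == y)
    return irrefl and trans

def is_catalan_pair(S, R, X):
    Sb = S | {(y, x) for (x, y) in S}       # symmetrization
    Rb = R | {(y, x) for (x, y) in R}
    off = {(x, y) for x, y in product(X, X) if x != y}
    comp = all((x, z) in R for (x, y) in S  # (comp)
               for (w, z) in R if w == y)
    return (is_strict_order(S, X) and is_strict_order(R, X)
            and Sb | Rb == off and not (Sb & Rb) and comp)
\end{verbatim}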
In a Catalan pair $(S,R)$, $S$ (resp. $R$) will be referred to as
the \emph{first} (resp. \emph{second}) \emph{component}. Two
Catalan pairs $(S_1 ,R_1 )$ and $(S_2 ,R_2 )$ on the (not
necessarily distinct) sets $X_1$ and $X_2$, respectively, are said
to be \emph{isomorphic} when there exists a bijection $\xi$ from
$X_1$ to $X_2$ such that $xS_1 y$ if and only if $\xi (x)S_2 \xi
(y)$ and $xR_1 y$ if and only if $\xi (x)R_2 \xi (y)$. As a
consequence of this definition, we say that a Catalan pair has
\emph{size} $n$ when it is defined on a set $X$ of cardinality
$n$. The set of isomorphism classes of Catalan pairs of size $n$
will be denoted $\mathcal{C}(n)$. We will be mainly interested in
the set $\mathcal{C}(n)$, even if, in several specific cases, we
will deal with ``concrete" Catalan pairs. However, in order not to
make our paper dull reading, we will use the term
``Catalan pair" when referring both to a specific Catalan pair and
to an element of $\mathcal{C}(n)$. In the same spirit, to mean
that a Catalan pair has size $n$, we will frequently write
``$(S,R)\in \mathcal{C}(n)$", even if $\mathcal{C}(n)$ is a set of
isomorphism classes. In each situation, the context will clarify
which is the exact meaning of what we have written down.
\bigskip
As an immediate consequence of the definition of a Catalan pair
(specifically, from the fact that all the axioms are universal
propositions), the following property holds.
\begin{prop}\label{substr} Let $(S,R)$ be a Catalan pair on $X$. For any $\widetilde{X}\subseteq
X$, denote by $\widetilde{S}$ and $\widetilde{R}$ the restrictions
of $S$ and $R$ to $\widetilde{X}$, respectively. Then
$(\widetilde{S},\widetilde{R})$ is a Catalan pair on
$\widetilde{X}$.
\end{prop}
\subsection{First properties of Catalan pairs}
In order to get trained with the above definition, we start by
giving some elementary properties of Catalan pairs. All the
properties we will prove will be useful in the rest of the paper.
\begin{prop} Given a Catalan pair $(S,R)$, the following
properties hold:
\begin{enumerate}
\item $S\circ R^{-1}\subseteq R^{-1}$;
\item $R\circ S\subseteq R\cup S$;
\end{enumerate}
\end{prop}
\emph{Proof.}
\begin{enumerate}
\item If $xSyR^{-1}z$, then $xSy$ and $zRy$. Since $x$ and $z$ are
necessarily distinct (this follows from axiom (\textbf{inters})),
it must be either $zRx$, $xRz$, $zSx$ or $xSz$. It is then easy to
check that the three cases $xRz$, $zSx$, $xSz$ cannot hold. For
instance, if $xRz$, then $xRzRy$, whence $xRy$, against
(\textbf{inters}) (since, by hypothesis, $xSy$). Similarly, the
reader can prove that both $zSx$ and $xSz$ lead to a
contradiction. Thus $zRx$, i.e. $xR^{-1}z$.
\item Suppose that $xRySz$. Once again, observe that the elements
$x$ and $z$ are necessarily distinct, thus it must be either
$xRz$, $xSz$, $zRx$ or $zSx$. Similarly as above, it can be shown
that neither $zRx$ nor $zSx$ can hold. For instance, in the first
case, from $zRxRy$ we deduce $zRy$, but we have $ySz$ by
hypothesis. The case $zSx$ can be similarly dealt with.\hfill $\blacksquare$\bigskip
\end{enumerate}
\emph{Remark.} As a consequence of this proposition, we have that,
in the definition of a Catalan pair, axiom (\textbf{comp}) can be
replaced by:
\begin{equation}\label{overl}
S\circ \overline{R}\subseteq \overline{R}.
\end{equation}
The above property will be useful in the sequel, when we will
investigate the properties of the relation $R$.
\begin{prop}\label{compstar} Let $(S,R)$ be a pair of binary relations on $X$
satisfying axioms (\textbf{ord S}), (\textbf{ord R}),
(\textbf{tot}) and (\textbf{inters}). Then axiom (\textbf{comp})
is equivalent to:
\begin{displaymath}
\overline{S}\circ R\subseteq R\cup S^{-1}.\qquad \qquad
(\textbf{comp*})
\end{displaymath}
\end{prop}
\emph{Proof.}\quad Assume that axiom (\textbf{comp}) holds and let
$x\overline{S}yRz$. Since $x\overline{S}y$, we have two
possibilities: if $xSy$, then $xSyRz$ and $xRz$. Instead, if
$ySx$, then, being also $yRz$, we get that both the cases $xSz$
and $zRx$ cannot occur. Therefore it must be either $zSx$ or
$xRz$, which means that $(x,z)\in R\cup S^{-1}$.
Conversely, assume that condition (\textbf{comp*}) holds, and
suppose that $xSyRz$. We obviously deduce $x\overline{S}yRz$, and
so we have either $xRz$ or $zSx$. If $zSx$, then $zSxSy$, whence
$zSy$, against the hypothesis $yRz$. Therefore it must be
$xRz$.\hfill $\blacksquare$\bigskip
\subsection{Catalan pairs are enumerated by Catalan numbers}
To show that the cardinality of $\mathcal{C}(n)$ is given by the
$n$-th Catalan number $C_n$ we will provide a recursive
decomposition for the structures of $\mathcal{C}(n)$. We recall that the
sequence $C_n$ of Catalan numbers starts $1,1,2,5,14,42,\ldots$
(sequence A000108 in \cite{Sl}) and has generating function $\frac{1-\sqrt{1-4x}}{2x}$.
\bigskip
Given two Catalan pairs, say $(S,R)\in \mathcal{C}(n)$ and
$(S',R')\in \mathcal{C}(m)$, suppose that $S$ and $R$ are defined
on $X=\{ x_1 ,\ldots ,x_n \}$, whereas $S'$ and $R'$ are defined
on $Y=\{ y_1 ,\ldots ,y_m \}$, with $X\cap Y=\emptyset$. We define
the \emph{composition} of $(S,R)$ with $(S',R')$ to be the pair of
relations $(S'',R'')$ on the set $\{ z\} \cup X\cup Y$ of
cardinality $n+m+1$, defined by the following properties:
\begin{itemize}
\item[(i)] $S''$ and $R''$, when restricted to $X$, coincide with
$S$ and $R$, respectively;
\item[(ii)] $S''$ and $R''$, when restricted to $Y$, coincide
with $S'$ and $R'$, respectively;
\item[(iii)] for every $x\in X$ and $y\in Y$, it is $xR''y$;
\item[(iv)] for every $x\in X$, it is $xS''z$;
\item[(v)] for every $y\in Y$, it is $zR''y$;
\item[(vi)] no further relation exists among the elements of $\{
z\} \cup X\cup Y$.
\end{itemize}
\noindent For the composition we will use the standard notation, so that
$(S'',R'')=(S,R)\circ (S',R')$.
\bigskip
\noindent\emph{Remarks.} \begin{enumerate}
\item The above definition of composition can be clearly given in
a more compact form by setting $S''=S\cup S'\cup (X\times \{ z\}
)$ and $R''=R\cup R'\cup ((X\cup \{ z\} )\times Y)$.
\item From the above definition it follows that $S''$ is a strict
order relation on $\{ z\} \cup X\cup Y$ and $z$ is a maximal
element of $S''$. Indeed, if $zS''t$, for some $t$, then
necessarily $t\in Y$ (from (iv)), but from (v) we would also have
$zR''t$, against (vi). Similarly, it can be proved that $R''$ is a
strict order relation on $\{ z\} \cup X\cup Y$ and $z$ is a
minimal element of $R''$.
\end{enumerate}
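Following the compact form in remark 1, the composition admits a direct implementation; a sketch, assuming $X$, $Y$ disjoint sets and $z$ a fresh element:
\begin{verbatim}
from itertools import product

def compose(S, R, X, S2, R2, Y, z):
    # (S'', R'') on {z} | X | Y, as in remark 1 above.
    # X, Y: disjoint sets of elements; z not in X | Y.
    S3 = S | S2 | {(x, z) for x in X}
    R3 = R | R2 | set(product(X | {z}, Y))
    return S3, R3
\end{verbatim}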
\begin{prop}\label{uno} Let $\alpha =(S,R)\in \mathcal{C}(n)$ and
$\beta =(S',R')\in \mathcal{C}(m)$ be two Catalan pairs as above.
Then $\alpha \circ \beta =(S'',R'')\in \mathcal{C}(n+m+1)$.
\end{prop}
\emph{Proof.}\quad The fact that $S'',R''\in \mathcal{O}(\{ z\}
\cup X\cup Y)$ is stated in remark 2 above. Moreover, if $t,w\in
\{ z\} \cup X\cup Y$, with $t\neq w$, then the
following cases are possible:
\begin{itemize}
\item both $t$ and $w$ belong to $X$ or $Y$: in this case $(t,w)$
belongs to exactly one among the relations
$S,S^{-1},R,R^{-1},S',(S')^{-1},R',(R')^{-1}$.
\item $t$ belongs to $X$ and $w$ belongs to $Y$: then $tR''w$, and
no further relation exists between $t$ and $w$; the case $t\in Y$
and $w\in X$ can be treated analogously.
\item $t=z$ and $w\in X$: then the only relation between $t$ and
$w$ is $t(S'')^{-1}w$; and similarly, if $w\in Y$, we have only
$tR''w$.
\end{itemize}
As a consequence, we can conclude that $\overline{R''}\cup
\overline{S''}=(\{ z\} \cup X\cup Y)^2 \setminus \mathcal{D}$ and
$\overline{R''}\cap \overline{S''}=\emptyset$.
Finally, suppose that $t(S''\circ R'')w$. If $t,w$ both belong to
$X$ or else to $Y$, then it is immediate to see that $tR''w$.
Otherwise, suppose that $t$ and $w$ are both different from $z$:
then necessarily $t\in X$ and $w\in Y$, and so $tR''w$. Finally,
the cases $t=z$ and $w=z$ cannot occur, as a consequence of remark
2 above. Thus, we can conclude that, in every case, $tR''w$,
whence $S''\circ R'' \subseteq R''$.\hfill $\blacksquare$\bigskip
\begin{lemma}\label{S} Given a Catalan pair $(S,R)$ on $X$, let $x,y$ be
two distinct (if any) maximal elements of $S$. Then there exists
no element $t\in X$ such that $tSx$ and $tSy$.
\end{lemma}
\emph{Proof.}\quad If not, since $x$ and $y$ are maximal for $S$,
then necessarily $x\overline{R}y$. If there were an element $t\in
X$ such that $tSx$ and $tSy$, then, from $tSx\overline{R}y$, we
would get $t\overline{R}y$, against the fact that $tSy$.\hfill $\blacksquare$\bigskip
Lemma \ref{S} essentially states that the principal ideals
generated by the maximal elements of $X$ (with respect to $S$) are
mutually disjoint.
\begin{prop}\label{due} Let $\gamma =(S'',R'')$ be a Catalan pair of size
$l\geq 1$. Then there exist unique Catalan pairs $\alpha =(S,R)$
and $\beta =(S',R')$ such that $\gamma =\alpha \circ \beta$.
\end{prop}
\emph{Proof.}\quad Suppose that $\gamma$ is defined on $X_l$ of
cardinality $l$ and let $M(S'')$ be the set of the maximal
elements of $S''$. It is clear that $M(S'')\neq \emptyset$, since
$X_l$ is finite. Define the set $\Phi$ to be the set of all
elements of $M(S'')$ which are minimal with respect to $R''$. We
claim that $|\Phi |=1$. Indeed, since the elements of $M(S'')$ are
an antichain of $S''$, then necessarily they constitute a chain of
$R''$, and so the minimum of such a chain is the only element of
$\Phi$. Setting $\Phi =\{ x_0 \}$, we can split $X_l$ into three
subsets, $\{ x_0 \}$, $X$ and $Y$, where $X=\{ x\in X_l \; |\;
xS''x_0 \}$ and $Y=\{ x\in X_l \; |\; x_0 R''x \}$. The reader can
easily check that the above three sets are indeed mutually
disjoint. To prove that their union is the whole $X_l$, let $x\in
X_l$ and suppose that $xS''x_0$ does not hold. Since $x_0$ is
maximal for $S''$, then necessarily $x_0\overline{R''}x$. Suppose,
ab absurdo, that $xR''x_0$. Denoting by $y$ the unique (by the
above lemma) element of $M(S'')$ for which $xS''y$, we would have
$xR''x_0 R''y$, and so $xR''y$, a contradiction. Thus we can
conclude that $x_0 R''x$, as desired. Finally, define
$\alpha=(S,R)$ and $\beta=(S',R')$ as the restrictions of
$(S'',R'')$ to the sets $X$ and $Y$, respectively. The fact that
$\alpha$ and $\beta$ are Catalan pairs follows from proposition
\ref{substr}, whereas the proof that $\alpha \circ \beta =\gamma$
is left to the reader. The uniqueness of the above described
decomposition follows from the fact that $|\Phi |=1$, i.e. there
is only one possibility of choosing $x_0$ so that it satisfies the
definition of composition of Catalan pairs. \hfill $\blacksquare$\bigskip
\begin{prop} For any $n\in \mathbf{N}$, we have:
\begin{equation}\label{Catalan}
|\mathcal{C}(n+1)|=\sum_{k=0}^{n}|\mathcal{C}(k)|\cdot
|\mathcal{C}(n-k)|.
\end{equation}
Since $|\mathcal{C}(0)|=1$, we therefore have that
$|\mathcal{C}(n)|=C_n$, the $n$-th Catalan number.
\end{prop}
\emph{Proof.}\quad By proposition \ref{due}, giving a Catalan pair
of size $n+1$ is the same as giving two Catalan pairs of sizes $k$
and $n-k$, for a suitable $k$. On the other hand, by proposition
\ref{uno} any two Catalan pairs of sizes $k$ and $n-k$ can be
merged into a Catalan pair of size $n+1$. These arguments
immediately imply formula (\ref{Catalan}).\hfill $\blacksquare$\bigskip
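Formula (\ref{Catalan}) can be checked numerically; the short Python sketch below reproduces the first Catalan numbers.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n):
    # |C(n)| via the recurrence above.
    if n == 0:
        return 1
    return sum(c(k) * c(n - 1 - k) for k in range(n))

# [c(n) for n in range(6)] == [1, 1, 2, 5, 14, 42]
\end{verbatim}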
\section{Combinatorial interpretations of Catalan pairs}\label{instances}
In this section we wish to convince the reader that fairly every
combinatorial structure counted by Catalan numbers can be
interpreted in terms of Catalan pairs. More precisely, we deem
that any Catalan structure can be described using a suitable
Catalan pair $(S,R)$, where $S$ and $R$ are somehow naturally
defined on the objects of the class. To support this statement, we
will take into consideration here five examples, involving rather
different combinatorial objects, such as matchings, paths,
permutations, trees and partitions. For each of them, we will
provide a combinatorial interpretation in terms of Catalan pairs.
\subsection{Perfect noncrossing matchings and Dyck paths}\label{matchings}
Our first example will be frequently used throughout all the
paper. Given a set $A$ of even cardinality, a \emph{perfect
noncrossing matching} of $A$ is a noncrossing partition of $A$
having all the blocks of cardinality 2. There is an obvious
bijection between perfect noncrossing matchings and well formed
strings of parentheses.
A graphical device to represent a perfect noncrossing matching of
$A$ consists of drawing the elements of $A$ as points on a
straight line and join with an arch each couple of corresponding
points in the matching. Using this representation, we can define
the following relations on the set $X$ of arches of a given
perfect noncrossing matching:
\begin{itemize}
\item for any $x,y\in X$, we say that $xSy$ when $x$ is included
in $y$;
\item for any $x,y\in X$, we say that $xRy$ when $x$ is on the
left of $y$.
\end{itemize}
The reader is invited to check that the above definition yields a
Catalan pair $(S,R)$ on the set $X$.
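This construction is easy to implement; the sketch below encodes each arch by its endpoints and yields the pair $(S,R)$, and it can be checked against the example that follows.
\begin{verbatim}
def pair_from_matching(arches):
    # arches: list of (left, right) endpoint positions.
    S, R = set(), set()
    for i, (li, ri) in enumerate(arches):
        for j, (lj, rj) in enumerate(arches):
            if lj < li and ri < rj:
                S.add((i, j))   # arch i included in arch j
            elif ri < lj:
                R.add((i, j))   # arch i to the left of arch j
    return S, R
\end{verbatim}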
\bigskip
\emph{Example.}\quad Let $X=\{ a,b,c,d,e,f,g\}$, and let $S$ and
$R$ be defined as follows:
\bigskip
$S=\{ (b,a),(f,e),(f,d),(e,d),(g,d)\}$
\smallskip
$R=\{ (a,c),(a,d),(a,e),(a,f),(a,g),(b,c),(b,d),(b,e),(b,f),(b,g),\\
\phantom{\indent R=\{}(c,d),(c,e),(c,f),(c,g),(e,g),(f,g)\} .$
\bigskip
It is easy to check that $(S,R)$ is indeed a Catalan pair on $X$
of size 7, which can be represented as in figure \ref{esempio}(a).
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.5]{movo.eps}
\end{center}
\caption{The graphical representation of a Catalan pair in terms
of a noncrossing matching, and the associated Dyck
path.}\label{esempio}
\end{figure}
\bigskip
An equivalent way to represent perfect noncrossing matchings is to
use Dyck paths: just interpret the leftmost element of an arch as
an up step and the rightmost one as a down step. For instance, the
matching represented in figure \ref{esempio}(a) corresponds to the
Dyck path depicted in figure \ref{esempio}(b). Coming back to
Catalan pairs, the relations $S$ and $R$ are suitably interpreted
using the notion of tunnel. A \emph{tunnel} in a Dyck path
\cite{E} is a horizontal segment joining the midpoints of an up
step and a down step, remaining below the path and not
intersecting the path anywhere else. Now define $S$ and $R$ on the
set $X$ of the tunnels of a Dyck paths by declaring, for any
$x,y\in X$:
\begin{itemize}
\item $xSy$ when $x$ lies above $y$;
\item $xRy$ when $x$ is completely on the left of $y$.
\end{itemize}
See again figure \ref{esempio} for an example illustrating the
above definition.
\subsection{Pattern avoiding permutations}
Let $n,m$ be two positive integers with $m\leq n$, and let $\pi
=\pi (1)\cdots \pi (n)\in S_n$ and $\nu =\nu (1)\cdots \nu (m)\in
S_m$. We say that $\pi$ {\em contains} the pattern $\nu$ if there
exist indices $i_1 <i_2 <\ldots <i_m$ such that $(\pi ({i_1 }),\pi
({i_2 }),\ldots ,\pi ({i_m }))$ is in the same relative order as
$(\nu (1),\ldots ,\nu(m))$. If $\pi$ does not contain $\nu$, we
say that $\pi$ is {\em $\nu$-avoiding}. See \cite{B} for plenty of
information on pattern avoiding permutations. For instance, if
$\nu =123$, then $\pi =524316$ contains $\nu$, while $\pi =632541$
is $\nu$-avoiding.
We denote by $S_n (\nu )$ the set of $\nu$-avoiding permutations
of $S_n$. It is known that, for each pattern $\nu \in S_3$, $|S_n
(\nu )|=C_n$ (see, for instance, \cite{B}).
\bigskip
It is possible to give a description of the class of 312-avoiding
permutations by means of a very natural set of Catalan pairs. More
precisely, let $[n]=\{ 1,2,\ldots ,n\}$; for every permutation
$\pi \in S_n$, define the following relations $S$ and $R$ on
$[n]$:
\begin{itemize}
\item $iSj$ when $i<j$ and $(j,i)$ is an inversion in $\pi$ (see,
for instance, \cite{B} for the definition of inversion);
\item $iRj$ when $i<j$ and $(i,j)$ is a noninversion in $\pi$.
\end{itemize}
\begin{prop} The permutation $\pi \in S_n$ is 312-avoiding if and
only if $(S,R)$ is a Catalan pair of size $n$.
\end{prop}
\emph{Proof.}\quad The axioms (i) to (iv) in the definition of a
Catalan pair are satisfied by $(S,R)$ for any permutation $\pi$,
as the reader can easily check. Moreover, $\pi$ is 312-avoiding if
and only if, given any three positive integers $i<j<k$, it can
never happen that both $(j,i)$ and $(k,i)$ are inversions and
$(j,k)$ is a noninversion. This happens if and only if $S\circ R$
and $S$ are disjoint. But, from the above definitions of $S$ and
$R$, it must be $S\circ R\subseteq R\cup S$, whence $S\circ
R\subseteq R$.\hfill $\blacksquare$\bigskip
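The relations $S$ and $R$ of the proposition can be computed directly from a permutation; combined with the axiom checker sketched in section \ref{def-prop-enum}, this yields a (naive) 312-avoidance test.
\begin{verbatim}
def pair_from_permutation(perm):
    # perm: list of the values pi(1), ..., pi(n).
    pos = {v: k for k, v in enumerate(perm)}
    n = len(perm)
    S = {(i, j) for i in range(1, n + 1)
         for j in range(i + 1, n + 1) if pos[j] < pos[i]}
    R = {(i, j) for i in range(1, n + 1)
         for j in range(i + 1, n + 1) if pos[i] < pos[j]}
    return S, R   # Catalan pair iff perm is 312-avoiding
\end{verbatim}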
The present interpretation in terms of 312-avoiding permutations
can be connected with the previous ones using Dyck paths and
perfect noncrossing matchings, giving rise to a very well-known
bijection, whose origin is very hard to be traced back (see, for
instance, \cite{P}). We leave all the details to the interested
reader.
\subsection{Plane trees}\label{trees}
By means of the well-known bijection between perfect noncrossing
matchings and plane trees \cite{St1}, the previous example allows
us to give an interpretation of Catalan pairs in terms of plane
trees. The details are left to the reader.
\subsection{Noncrossing partitions}
Let $\mathcal{P}_n$ be the set of noncrossing partitions on the
linearly ordered set $X_n =\{ x_1 ,x_2 ,\ldots ,x_n \}$. Each
$p\in \mathcal{P}_n$ determines an equivalence relation $\sim_p$
on $X_n$. Given a generic element $x\in X_n$, we will denote its
equivalence class with $[x]_{\sim_p }$.
Given $x\in X_n$, we set $u(x)=\max_{y<[x]_{\sim_p }}y$. Thus
$u(x)$ is the element of $X_n$ immediately preceding the minimum
of $[x]_{\sim_p }$. Observe that $u(x)$ need not be defined
for all $x$.
Given $p\in \mathcal{P}_n$, define two relations $S$ and $R$ as
follows:
\begin{itemize}
\item $S$ is the transitive closure of the relation $\{ (x,u(x))\;
|\; x\in X_n \}$;
\item $xRy$ when $x<y$ and $(x,y)$ is not in $S$.
\end{itemize}
Then the pair $(S,R)$ is indeed a Catalan pair on $X_n$, and it
induces an obvious bijection between noncrossing partitions and
plane trees. Figure~\ref{noncross} depicts the noncrossing
partition corresponding to the Catalan pair $(S,R)$ represented in
figure~\ref{esempio}.
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=noncross.eps,width=3in,clip=}}}
\caption{The noncrossing partition corresponding to the Catalan
pair represented in figure~\ref{esempio}.} \label{noncross}
\end{center}
\end{figure}
\section{Properties of the posets defined by $S$ and $R$}\label{ReS}
In the present section we investigate some features of the posets
associated with the (strict) order relations $R$ and $S$. In the
sequel, a poset will be denoted using square brackets, e.g.
$[X,R]$ and $[X,S]$. An immediate observation which follows
directly from the definition of a Catalan pair is the following,
which we state without proof.
\begin{prop} Given a finite set $X$, consider the graphs $X_1$ and
$X_2$ determined by the Hasse diagrams of the posets $[X,R]$ and
$[X,S]$. Then $X_1$ and $X_2$ are edge-disjoint subgraphs of the
complete graph $K(X)$ on $X$ whose union gives the whole $K(X)$.
\end{prop}
\subsection{The poset defined by $R$}
From the point of view of Catalan pairs, it turns out that the
strict order relation $R$ completely defines a Catalan pair. To
prove this, we first need a technical definition which will be
useful again later.
\bigskip
Given a strict order relation $R$ on $X$, define the relation
$\sim_R$ on the set $X$ by declaring $x \sim_R y$ when, for all
$z$, it is $z \overline{R} x$ if and only if $z\overline{R}y$. It
is trivial to show that $\sim_R$ is an equivalence relation. In
what follows, the equivalence classes of $\sim_R$ will be denoted
using square brackets.
\begin{lemma}\label{y}
\begin{enumerate}
\item[(i)] If $x \sim_R y$, then $x\! \! \not{\! R}y$.
\item[(ii)] It is $x \sim_R y$ if and only if, for all $z$, $zRx$
iff $zRy$ and $xRz$ iff $yRz$.
\item[(iii)] If $(S,R)$ is a Catalan pair, then, for all $x,y \in
[z]_{\sim_R}$, it is $xSy$ or $ySx$, i.e. S is a total order on
each equivalence class of $\sim_R$.
\item[(iv)] Suppose $(S,R)$ is a Catalan pair. If $xSy$ and $x\!
\! \not{\! \sim_R}y$, then there exists $a \in X$ such that $a
\overline{R} x$ and $aSy$.
\item[(v)] For all $x,y \in X$, it is $xRy$ iff $[x]_{\sim_R} R
[y]_{\sim_R}$ (where the extension of ${\sim_R}$ to sets has an
obvious meaning).
\end{enumerate}
\end{lemma}
\emph{Proof.} \begin{enumerate}
\item[(i)] Just observe that, if $x\sim_R y$, then $x \overline{R}
y$ would imply $x\overline{R}x$, which is false.
\item[(ii)] Notice that, given that $x\sim_R y$, if $zRx$, then
obviously $z\overline{R}x$, whence $z\overline{R}y$. If we had
$yRz$, then, since $zRx$, it would also be $yRx$, which is
impossible thanks to the preceding statement $(i)$. The fact that
$xRz$ implies $yRz$ can be dealt with analogously.
\item[(iii)] Obvious after $(i)$.
\item[(iv)] From $x\! \! \not{\! \sim_R}y$ it follows, by
definition, that either there exists $a\in X$ such that
$a\overline{R}x$ and $a\! \! \not{\! \overline{R}}y$, or there
exists $b\in X$ such that $b\! \! \not{\! \overline{R}}x$ and
$b\overline{R}y$. The second possibility cannot occur since, if
such an element $b$ existed, then, from the hypothesis $xSy$ and
from (\ref{overl}), we would have $x\overline{R}b$, a
contradiction. Thus an element $a\in X$ with the above listed
properties exists. In particular, since $a\! \! \not{\!
\overline{R}}y$, it must be $a\overline{S}y$. If we had $ySa$,
then, from $xSy$, it would follow $xSa$, a contradiction.
Therefore it must be $aSy$, as desired.
\item[(v)] Suppose that $xRy$. If $a\sim_R x$, applying $(ii)$ it
follows that $aRy$. Now, if it is also $b\sim_R y$, applying
$(ii)$ once more yields $aRb$, which implies the thesis.\hfill $\blacksquare$\bigskip
\end{enumerate}
\begin{teor} If $(S_1 ,R),(S_2,R)$ are two Catalan pairs on $X$,
then they are isomorphic.
\end{teor}
\emph{Proof.}\quad From lemma \ref{y}$(iii)$, each equivalence
class of the relation $\sim_R$ is linearly ordered by the order
relations $S_1$ and $S_2$.
Define a function $F$ mapping $X$ into itself such that, if $x \in
X$ and there are exactly $k\geq0$ elements in $[x]_{\sim_R}$ less
than $x$ with respect to the total order $S_1$, then $F(x)$ is
that element in $[x]_{\sim_R}$ having exactly $k$ elements before
it in the total order given by $S_2$.
It is trivial to see that $F$ is a bijection. Since $x \sim_R
F(x)$, using lemma~\ref{y}$(v)$, we get that $xRy$ iff
$F(x)RF(y)$.
To prove that $xS_1y$ implies $F(x)S_2F(y)$ it is convenient to
consider two different cases. First suppose that $x \sim_R y$; in
this case our thesis directly follows from the definition of $F$.
On the other hand, if $x\! \! \not{\! \sim_R}y$, using lemma
\ref{y}$(iv)$, there exists an element $a \in X$ such that $a
\overline{R} x$ and $aS_1y$. Thus, considering the Catalan pair
$(S_2,R)$, it cannot be $F(x) \overline{R} F(y)$, since this would
imply (by lemma \ref{y}$(v)$) that $x \overline{R}y$, against
$xS_1y$. Therefore it must be $F(x) \overline{S_2} F(y)$. More
precisely, we get $F(x)S_2F(y)$, since, from
$F(y)S_2F(x)\overline{R}a$, we would derive $F(y)\overline{R}a$
and so $y\overline{R}a$, which is impossible. With an analogous
argument, we can also prove that $F(x)S_2 F(y)$ implies $xS_1 y$,
which concludes the proof that $F$ is an isomorphism between $(S_1
,R)$ and $(S_2 ,R)$.\hfill $\blacksquare$\bigskip
For the rest of the paper, we set $\mathbf{R}(n)=\{ [X,R]\; |\;
(\exists S)(S,R)\in \mathcal{C}(n)\}$.
\bigskip
The posets $[X,R]\in \mathbf{R}(4)$ are those depicted in figure
\ref{erre}.
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=erre2.eps,width=4in,clip=}}}
\caption{The 14 posets of $\mathbf{R}(4)$.} \label{erre}
\end{center}
\end{figure}
Among the possible 16 nonisomorphic posets on 4 elements, the two
missing posets are those shown in figure \ref{errenot}. They are
respectively the poset $\mathbf{2}+\mathbf{2}$ (i.e. the direct
sum of two copies of the 2-element chain) and the poset $Z_4$,
called \emph{fence of order 4} (see, for instance,
\cite{C,MZ,St1}).
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=errenot.eps,width=1.5in,clip=}}}
\caption{The two posets not belonging to $\mathbf{R}(4)$. }
\label{errenot}
\end{center}
\end{figure}
The rest of this section is devoted to proving that the absence of
the two posets $\mathbf{2}+\mathbf{2}$ and $Z_4$ is not an
accident.
\bigskip
\begin{prop} If $[X,R]\in \mathbf{R}(n)$, then $[X,R]$ does not contain any
subposet isomorphic to $\mathbf{2}+\mathbf{2}$ or $Z_4$.
\end{prop}
\emph{Proof.}\quad Let $(S,R)\in {\mathcal C}(n)$ and suppose, ab
absurdo, that $\mathbf{2}+\mathbf{2}$ is a subposet of $[X,R]$.
Then, denoting with $x,z$ and $y,t$ the minimal and maximal
elements of an occurrence of $\mathbf{2}+\mathbf{2}$ in $[X,R]$,
respectively, and supposing that $xRy$ and $zRt$, we would have,
for instance, $t\overline{S}xRy$. By proposition \ref{compstar},
since $t\! \! \not{\! R}y$, it is $ySt$. However, we also have
$y\overline{S}zRt$ and $y\! \! \not{\! R}t$, whence $tSy$, which
yields a contradiction with the previous derivation.
Similarly, suppose that $Z_4$ is a subposet of $[X,R]$. Then,
supposing that $xRy$, $xRt$ and $zRt$, we have $z\overline{S}xRy$,
whence, by proposition \ref{compstar}, $ySz$. However, it is also
$ySzRt$, which implies $yRt$, and this is false.\hfill $\blacksquare$\bigskip
We will now prove that the converse of the above proposition is
also true, thus providing an order-theoretic necessary and
sufficient condition for a strict order relation $R$ to be the
second component of a Catalan pair.
\begin{prop}\label{sdir} Let $R\in \mathcal{O}(X)$ such that $[X,R]$ does not
contain subposets isomorphic to $\mathbf{2}+\mathbf{2}$ or $Z_4$.
Then $[X,R]\in \mathbf{R}(n)$.
\end{prop}
\emph{Proof.}\quad Given $X=\{ x_1 ,\ldots ,x_n \}$, we define a
binary relation $S=S(R)$ on $X$ by making use of the equivalence
relation $\sim_R$ defined at the beginning of this section. More
precisely:
\begin{enumerate}
\item[-] if $x_i \sim_R x_j$ and $i<j$, set $x_i Sx_j$;
\item[-] if $x\nsim_R y$ and $x\! \! \not{\! \overline{R}}y$, set:
\begin{description}
\item{i)} $xSy$, when there exists $z\in X$ such that
$z\overline{R}x$ and $z\! \! \not{\! \overline{R}}y$;
\item{ii)} $ySx$, when there exists $z\in X$ such that $z\! \!
\not{\! \overline{R}}x$ and $z\overline{R}y$.
\end{description}
\end{enumerate}
We claim that $(S,R)\in \mathcal{C}(n)$.
It is trivial to show that axioms (\textbf{tot}) and
(\textbf{inters}) in the definition of a Catalan pair are
satisfied.
Next we show that axiom (\textbf{comp}) holds. Indeed, suppose
that $xSyRq$ and $x\! \! \! \! \not{\! \!R}q$. From lemma
\ref{y}$(ii)$, it would follow that $x \nsim_R y$. Thus, from
$xSy$ and the definition of $S$, we deduce that there is an
element $z$ such that $z\overline{R}x$ and $z\! \! \! \not{\! \!
\overline{R}}y$. The reader can now check that the four elements
$x,y,q,z$ determine a subposet of $[X,R]$ isomorphic either to
$\mathbf{2}+\mathbf{2}$ or $Z_4$, which is not allowed.
Using an analogous argument, it can be shown that $S \circ R^{-1}
\subseteq R^{-1}$, fact that will be useful below.
Finally, it remains to prove axiom (\textbf{ord S}), i.e. that
$S\in \mathcal{O}(X)$. The fact that $S$ is irreflexive is evident
from its definition. To prove the transitivity of $S$, we first
need to prove that, given $x,y\in X$, the two relations $xSy$ and
$ySx$ cannot hold simultaneously. Indeed, if $x,y\in X$ were such
that $xSy$ and $ySx$, then it could not be $x \sim_R y$ and so, by
definition, there would exist two elements $z,q\in X$ such that
$z\overline{R}x$, $z\! \! \not{\! \overline{R}}y$, $q\! \! \not{\!
\overline{R}}x$ and $q\overline{R}y$. It is not difficult to prove
that the four elements $x,y,z,q$ have to be all distinct (using
the irreflexivity of $R$ and $S$). Now, if we consider the poset
determined by these four elements, in all possible cases a
forbidden poset comes out, and we have reached a contradiction.
Now suppose to have $xSySt$: we want to prove that necessarily
$xSt$. The cases in which we have $x \sim_Ry$ and/or $y \sim_R t$
can be dealt with using the definition of $S$. Moreover, if
$x\nsim_R y$ and $y\nsim_R t$, let $z,q$ such that
$z\overline{R}x$, $z\! \! \! \not{\! \overline{R}}y$, $q\! \! \!
\not{\! \overline{R}}t$ and $q\overline{R}y$. Thanks to the first
part of this proof (namely axiom (\textbf{comp}) and the fact that
$S \circ R^{-1} \subseteq R^{-1}$), from $xSy\overline{R}q$ it
follows that $q\overline{R}x$. On the other hand, if we had
$x\overline{R}t$, since it is $x\! \! \! \not{\! \!
\overline{R}}y$ and $t\! \! \not{\! \overline{R}}y$, it would be
$tSy$ (by the definition of $S$), which is impossible since, by
hypothesis, $ySt$, and we have just shown that the last two
relations lead to a contradiction. Therefore we must have $x\! \!
\not{\! \overline{R}}t$, which, together with $q\! \! \not{\!
\overline{R}}t$ and $q\overline{R}x$, implies that $xSt$, as
desired.\hfill $\blacksquare$\bigskip
\begin{figure}[htb]
\begin{center}
\epsfig{file=sr1.eps, width=4in} \caption{A poset in
$\mathbf{R}(9)$, together with the representation of the
associated Catalan pair as a matching.}\label{s(r)}
\end{center}
\end{figure}
In order to clarify the construction of $S(R)$ given in the proof
of proposition \ref{sdir}, consider the poset $R\in \mathbf{R}(9)$
shown in figure \ref{s(r)}(a). It is $x_1 \sim_R x_2$, hence $x_1
Sx_2$. Similarly we get $x_5 Sx_6$. Moreover, for any fixed
$i=1,\ldots ,8$, we have $x_9 \nsim _R x_i$, and there exists
$x_j$, $j\neq i$, such that $x_i \overline{R}x_j$, so we have $x_i
Sx_9$. Similarly we have $x_2 Sx_4$, $x_3 Sx_4$, $x_7 Sx_5$, $x_7
Sx_6$, $x_8 Sx_5$, $x_8 Sx_6$, and we finally obtain the Catalan
pair $(S,R)$ represented by the matching depicted in figure
\ref{s(r)}(b).
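The construction of $S(R)$ in the proof of proposition \ref{sdir} is effective and can be sketched as follows; elements of $X$ are taken in their indexed order.
\begin{verbatim}
def S_from_R(R, X):
    # X: ordered list; R: set of ordered pairs on X.
    Rb = R | {(y, x) for (x, y) in R}
    idx = {x: k for k, x in enumerate(X)}
    sim = lambda x, y: all(((z, x) in Rb) == ((z, y) in Rb)
                           for z in X)   # the relation ~_R
    S = set()
    for x in X:
        for y in X:
            if x == y or (x, y) in Rb:
                continue
            if sim(x, y):
                if idx[x] < idx[y]:
                    S.add((x, y))
            elif any((z, x) in Rb and (z, y) not in Rb
                     for z in X):
                S.add((x, y))
    return S
\end{verbatim}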
\bigskip
\emph{Remark.}\quad Observe that, as a byproduct of the last
proposition, we have found a presumably new combinatorial
interpretation of Catalan numbers: $C_n$ counts nonisomorphic
posets which are simultaneously $(\mathbf{2}+\mathbf{2})$-free and
$Z_4$-free.
\bigskip
\textbf{Open problem 1.}\quad We have shown that $(S,R)$ is a
Catalan pair if and only if $[X,R]$ contains neither
$\mathbf{2}+\mathbf{2}$ nor $Z_4$. The class of
$\mathbf{2}+\mathbf{2}$-free posets has been deeply studied, see
for example \cite{Fis} or the more recent paper \cite{BMCDK}. What
about $Z_4$-free posets?
\bigskip
\textbf{Open problem 2.}\quad Can we define some interesting (and
natural) partial order relation on the set $\mathbf{R}(n)$? Maybe
some of the combinatorial interpretations of Catalan pairs can
help in this task.
\subsection{Imposing some combinatorial conditions on the posets in $\mathbf{R}(n)$}
In this section we impose some conditions on the relation $R$ and
provide the corresponding combinatorial descriptions in terms of
noncrossing matchings and/or Dyck paths and/or 312-avoiding
permutations.
\begin{itemize}
\item[a)] \emph{Connected posets.}\quad First of all notice that a
generic $[X,R]\in \mathbf{R}(n)$ necessarily has at most one
connected component of cardinality greater than one (this follows
at once from the poset avoidance conditions found in the previous
section). It is not difficult to see that, in the interpretation
by means of noncrossing matchings, the fact that $[X,R]$ is
connected means that 1 and $2n$ are not matched. From this
observation it easily follows that $[X,R]$ corresponds to a
non-elevated Dyck path and to a 312-avoiding permutation not ending
with 1. This also gives immediately the enumerations of the
Catalan pairs $(S,R)\in \mathcal{C}(n)$ such that $[X,R]$ is
connected. Indeed, elevated Dyck paths of semilength $n$ are known
to be enumerated by the sequence $C_{n-1}$ of shifted Catalan
numbers, whence we get immediately that the number of connected
posets belonging to $\mathbf{R}(n)$ is given by $c_n =C_n
-C_{n-1}$ when $n\geq 2$, whereas $c_0 =c_1 =1$. The resulting
generating function is therefore
\begin{displaymath}
\frac{1-x+2x^2 -(1-x)\sqrt{1-4x}}{2x}.
\end{displaymath}
\begin{figure}[htb]
\begin{center}
\epsfig{file=conn.eps, width=3in} \caption{A connected poset
$[X,R]$, and the corresponding perfect matching.}\label{conn}
\end{center}
\end{figure}
\item[b)] \emph{Lattices.}\quad In order to enumerate those posets
of $\mathbf{R}(n)$ which are also lattices, it is convenient to
interpret Catalan pairs as Dyck paths. The following proposition
then holds (where $U$ and $D$ denote up and down steps of Dyck
paths, respectively).
\begin{prop} Let $[X,R]\in \mathbf{R}(n)$ and $P$ be its associated
Dyck path. Then $[X,R]$ is a lattice if and only if $P$ starts and
ends with a peak and does not contain the pattern $DDUU$.
\end{prop}
\emph{Proof.}\quad The fact that $P$ must have a peak both at the
beginning and at the end stems from the fact that a finite lattice
must have a minimum and a maximum. If $P$ contains the pattern
$DDUU$, then denote by $x,y,z,t$ the four tunnels associated with
the four steps of the pattern. It is immediate
to see that $z$ and $t$ are two distinct minimal upper bounds of
$x$ and $y$ in $[X,R]$,
which implies that such a poset is not a lattice. Now suppose that
$P$ does not contain the pattern $DDUU$. Given $x,y\in X$
incomparable with respect to $R$, then, in the associated path
$P$, $x$ and $y$ are represented by two tunnels lying one above
the other (say, $x$ above $y$). Consider the down step $D_y$
belonging to $y$. It is obvious that $D_y$ is not isolated, i.e.,
it is either followed or preceded by at least one other down step.
Now take the first up step coming after $D_y$. Since $P$ avoids
the pattern $DDUU$, such an up step must be followed by a down
step, thus originating a tunnel $z$. It is not difficult to show
that $z$ is the least upper bound of $x$ and $y$. Thus, since any
two elements of $X$ have a least upper bound, we can conclude that
$[X,R]$ is a lattice, as desired.\hfill $\blacksquare$\bigskip
\begin{figure}[htb]
\begin{center}
\epsfig{file=lat.eps, width=4in} \caption{A lattice $[X,R]$, and
the corresponding perfect matching.}\label{lat}
\end{center}
\end{figure}
As a consequence of the last proposition, we are now able to
enumerate Catalan pairs $(S,R)$ such that $[X,R]$ is a lattice.
Indeed, the sequence counting Dyck paths avoiding the pattern
$DDUU$ is A025242 in \cite{Sl} (see also \cite{STT}).
\bigskip
\textbf{Open problem 3.}\quad It seems to be a quite difficult
task to provide a purely order-theoretic characterization of the
lattices $[X,R]$ arising in this way.
\bigskip
\item[c)] \emph{Distributive lattices.}\quad Understanding when
$R$ gives rise to a distributive lattice is undoubtedly a much
easier task. Indeed, in order for $[X,R]$ to be a distributive
lattice, it is necessary and sufficient that it contains neither
of the two lattices $M_3$ and $N_5$ as a sublattice~\cite{DP}; both are shown in figure~\ref{dis}.
This means that, in the associated matching, at most two arches
can be nested and no consecutive sets of nested arches can occur.
Equivalently, the associated Dyck path has height\footnote{The
\emph{height} of a Dyck path is the maximum among the ordinates of
its points.} at most 2, and no consecutive factors of height 2 can
occur. Therefore, an obvious argument shows that the sequence
$d_n$ counting distributive lattices in $\mathbf{R}(n)$ satisfies
the recurrence $d_n =d_{n-1}+d_{n-3}$, with $d_0 =d_1 =d_2 =1$,
having generating function $\frac{1}{1-x-x^3}$, whence $d_n
=\sum_i {n-2i\choose i}$ (sequence A000930 in \cite{Sl}). In this
case, we can also give a structural characterization of
distributive lattices in $\mathbf{R}(n)$: they are all those
expressible as
\begin{displaymath}
\left( \bigoplus_{i=1}^{r}\underline{n_i} \oplus \underline{2}^2
\right)\oplus \underline{n_{r+1}},
\end{displaymath}
where $\oplus$ denotes the \emph{linear} (or \emph{ordinal})
\emph{sum} of posets, $\underline{n}$ is the $n$-element chain and
$\underline{2}^2$ is the Boolean algebra having 4 elements (see
\cite{DP} for basic notions and notations on posets).
\begin{figure}[htb]
\begin{center}
\epsfig{file=dis.eps, width=3in} \caption{The lattices $M_3$ and $N_5$, and the corresponding perfect matchings.}\label{dis}
\end{center}
\end{figure}
\end{itemize}
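\bigskip
\emph{A computational check.}\quad The following Python sketch
(ours, not part of the original argument, and assuming the factor
reading of the pattern $DDUU$) enumerates Dyck words by brute
force; it verifies the counts of connected posets of item a),
prints the number of lattices of item b), and checks that the
recurrence of item c) agrees with the stated closed form.
\begin{verbatim}
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def dyck_words(n):
    # all words over {U, D} with n U's, n D's and nonnegative prefix heights
    def rec(word, height, ups):
        if len(word) == 2 * n:
            yield word
            return
        if ups < n:
            yield from rec(word + "U", height + 1, ups + 1)
        if height > 0:
            yield from rec(word + "D", height - 1, ups)
    yield from rec("", 0, 0)

def is_elevated(w):
    # elevated: the path returns to the x-axis only at its very end
    h = 0
    for step in w[:-1]:
        h += 1 if step == "U" else -1
        if h == 0:
            return False
    return True

for n in range(2, 9):
    words = list(dyck_words(n))
    # item a): connected posets <-> non-elevated Dyck paths
    connected = sum(1 for w in words if not is_elevated(w))
    assert connected == catalan(n) - catalan(n - 1)
    # item b): lattices <-> paths starting and ending with a peak
    # and avoiding the factor DDUU
    lattices = sum(1 for w in words if w.startswith("UD")
                   and w.endswith("UD") and "DDUU" not in w)
    print(n, connected, lattices)

# item c): the recurrence d_n = d_{n-1} + d_{n-3} (d_0 = d_1 = d_2 = 1)
# agrees with the closed form d_n = sum_i binomial(n - 2i, i)
d = [1, 1, 1]
for n in range(3, 20):
    d.append(d[n - 1] + d[n - 3])
assert all(d[n] == sum(comb(n - 2 * i, i) for i in range(n // 2 + 1))
           for n in range(20))
\end{verbatim}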
\subsection{The poset defined by $S$}
Similarly to what has been done for $R$, we can define the set
$\mathbf{S}(n)=\{ [X,S]\; |\; (\exists R)(S,R)\in
\mathcal{C}(n)\}$. The posets in $\mathbf{S}(n)$ have an
interesting combinatorial characterization, which is described in
the next proposition.
\begin{prop} If $[X,S]\in \mathbf{S}(n)$, then the Hasse diagram
of $[X,S]$ is a forest of rooted trees, where the roots of the
trees are the maximal elements of $S$ and $xSy$ if and only if $x$
is a descendant of $y$ in one of the trees of the forest.
\end{prop}
\emph{Proof.}\quad First observe that, thanks to lemma \ref{S},
the poset $[X,S]$ has $k$ connected components, where $k$ is the
number of its maximal elements. Now take $x,y$ belonging to the
same connected component and suppose that $x\! \! \not{\! S}y$. We
claim that the set of all lower bounds of $\{ x,y\}$ is empty.
Indeed, if we had $z$ such that $zSx$ and $zSy$, then, supposing
(without loss of generality) that $xRy$, it would be $zRy$, a
contradiction. Thus, the Hasse diagram of each connected component
of $[X,S]$ is a connected acyclic graph, that is, a tree, rooted at
its maximum element, and this concludes our proof.\hfill $\blacksquare$\bigskip
As a consequence of the previous proposition, we have the
following result.
\begin{cor} There is a bijection between $\mathbf{S}(n)$ and the
set of rooted trees with $n+1$ nodes.
\end{cor}
\emph{Proof.}\quad Just add to the Hasse diagram of each element
$[X,S]$ of $\mathbf{S}(n)$ a new root, linking such a root with an
edge to the maximum of each connected component.\hfill $\blacksquare$\bigskip
Below, the rooted tree on 6 nodes associated with the poset
$[X,S]\in \mathbf{S}(5)$ is shown, where $S=\{ (x_2 ,x_1 ),(x_4
,x_3 ),(x_5 ,x_3 )\}$.
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=s.eps,width=1.5in,clip=}}}
\end{center}
\end{figure}
The above corollary implies that $|\mathbf{S}(n)|$ is given by the
number of rooted trees having $n+1$ nodes, which is sequence
A000081 in \cite{Sl}.
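\bigskip
The values of A000081 are easily computed with the classical
recurrence for unlabelled rooted trees; the short Python sketch
below (ours, not taken from \cite{Sl}) reproduces the first values
of $|\mathbf{S}(n)|$.
\begin{verbatim}
def rooted_trees(N):
    # a[n] = number of unlabelled rooted trees on n nodes (A000081),
    # via a(n+1) = (1/n) * sum_{k=1..n} (sum_{d|k} d*a(d)) * a(n-k+1)
    a = [0] * (N + 1)
    a[1] = 1
    for n in range(1, N):
        total = 0
        for k in range(1, n + 1):
            s = sum(d * a[d] for d in range(1, k + 1) if k % d == 0)
            total += s * a[n - k + 1]
        a[n + 1] = total // n  # the division is always exact
    return a

a = rooted_trees(11)
# |S(n)| = number of rooted trees on n + 1 nodes
print([a[n + 1] for n in range(11)])
# -> [1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842]
\end{verbatim}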
\bigskip
Unlike what happens with $R$, the order relation $S$ does not
uniquely determine a Catalan pair. This should be clear by
examining the following two perfect noncrossing
matchings, which are associated with the same $S$, but
determine a different $R$.
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=msames.eps,width=3.5in,clip=}}}
\end{center}
\end{figure}
This fact is of course an obvious consequence of our last result,
since Catalan pairs are enumerated by Catalan numbers. Recall that
a rooted tree can be seen as a graph-isomorphism class of plane
rooted trees. Since we have shown in section \ref{trees} that
Catalan pairs can be interpreted by using plane rooted trees, it
easily follows that, given $S\in \mathbf{S}(n)$, the set of
Catalan pairs $(S,R)$ can be interpreted as the set of all plane
rooted trees which are isomorphic (as graphs) to the Hasse diagram
of $[X,S]$. Figure \ref{tris} gives an illustration of this
situation, by showing the rooted tree $T$ associated with a given
$S$ and all the plane rooted trees representing the associated
Catalan pairs, together with the alternative representation as
perfect noncrossing matchings.
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=tris.eps,width=4in,clip=}}}
\caption{Representation of the Catalan pairs associated with a
given $S$.} \label{tris}
\end{center}
\end{figure}
\section{Generalizations of Catalan pairs}\label{gener1}
In this section we see how a slight modification of the axioms
defining Catalan pairs determines some combinatorial structures
and number sequences, mostly related to permutations. In
particular, we focus our attention on axiom (\textbf{comp}).
We notice that axiom (\textbf{comp}) is the reason why Catalan
pairs can be represented using perfect noncrossing matchings. If
we relax such a condition, we are able to represent some classes
of permutations which, in general, include $312$-avoiding ones.
Consider all pairs of relations $(S,R)$ on a set $X$ satisfying
axioms (\textbf{ord S}), (\textbf{ord R}), (\textbf{tot}) and
(\textbf{inters}). In this situation, we call $(S,R)$ a
\emph{factorial pair} on $X$. The set of all factorial pairs on
$X$ will be denoted $\mathcal{F}(X)$. As we did for Catalan pairs,
we work up to isomorphism, and $\mathcal{F}(n)$ will denote the
set of isomorphism classes of factorial pairs on a set $X$ of $n$
elements.
\bigskip
Each pair $(S,R) \in \mathcal{F}(X)$ can be graphically
represented using perfect matchings, extending the encoding given
in section \ref{matchings}. In the matching determined by a
factorial pair, however, two distinct arches can cross, as shown
in figure \ref{movi+}.
The interpretation of the first component of a factorial pair,
$S$, is the same as for Catalan pairs, and corresponds to
\emph{inclusion} of arches. The second component $R$ still
describes the reciprocal position of two arches but, more
generally, we have to consider the reciprocal positions $l(x)$
(left) and $r(x)$ (right) of the two vertices of an arch $x$.
Specifically, we have $xRy$ if and only if $l(x)$ lies on the left
of $l(y)$ and $r(x)$ lies on the left of $r(y)$.
\begin{figure}[htb]
\begin{center}
\centerline{\hbox{\psfig{figure=movi+.eps,width=2.5in,clip=}}}
\caption{The perfect matching whose associated permutation is
$53124$.}\label{movi+}
\end{center}
\end{figure}
\bigskip
\emph{Example.}\quad Let $(S,R) \in {\cal F}(5)$ be the pair represented in
figure \ref{movi+}. Using the notations of figure \ref{movi+}, on
the set of arches $\{ x,y,z,t,w\}$ we have $S=\{
(z,x),(z,y),(z,t),(z,w),(t,x),(t,y)\}$ and $R=\{
(x,y),(x,w),(y,w),(t,w)\}$.
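\bigskip
As a sanity check, the following Python fragment (ours) verifies
the example above: it checks that $S$ and $R$ are transitive, that
any two distinct elements are comparable through exactly one of
$S$, $S^{-1}$, $R$, $R^{-1}$ (our reading of axioms (\textbf{tot})
and (\textbf{inters}), which are stated earlier in the paper), and
that axiom (\textbf{comp}), i.e. $S\circ R\subseteq R$, fails, so
that the pair is factorial but not Catalan.
\begin{verbatim}
from itertools import combinations

X = ["x", "y", "z", "t", "w"]
S = {("z","x"), ("z","y"), ("z","t"), ("z","w"), ("t","x"), ("t","y")}
R = {("x","y"), ("x","w"), ("y","w"), ("t","w")}

def compose(A, B):
    # x (A o B) y  iff  x A z and z B y for some z, as in the text
    return {(a, d) for (a, b) in A for (c, d) in B if b == c}

assert compose(S, S) <= S and compose(R, R) <= R   # transitivity
for a, b in combinations(X, 2):
    rels = [(a, b) in S, (b, a) in S, (a, b) in R, (b, a) in R]
    assert sum(rels) == 1                          # trichotomy
# (comp) of Catalan pairs fails: S o R is not contained in R
assert not compose(S, R) <= R
\end{verbatim}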
\bigskip
It is clear that, for any set $X$, ${\mathcal C}(X) \subseteq
{\cal F}(X)$. Moreover, using an obvious extension of the
bijection given in section \ref{matchings}, it turns out that
$|{\cal F}(n)|=n!$. More precisely, we have the following
proposition.
\begin{prop}\label{pe} Every factorial pair $(S,R)$ of size $n$ can
be uniquely represented as a permutation $\pi \in S_n$.
\end{prop}
\emph{Proof.}\quad Given $\pi \in S_n$, just define $S$ and $R$ as
in section \ref{matchings}.\hfill $\blacksquare$\bigskip
Given a factorial pair $(S,R)$, we call the permutation $\pi$
found in the above proposition its \emph{permutation
representation}. See again figure \ref{movi+} for an example.
\bigskip
Now we come to the main point of the present section, and show how
relaxing axiom (\textbf{comp}) naturally leads to a family of
interesting combinatorial structures which, in some sense,
interpolates between the analogous combinatorial interpretations
of Catalan pairs and factorial pairs.
Denote by $\mathcal{F}_{h,k}(X)$ the class of all pairs of
relations $(S,R)$ on the set $X$ satisfying axioms (\textbf{ord
S}), (\textbf{ord R}), (\textbf{tot}), (\textbf{inters}), and such
that (\textbf{comp}) is replaced by the weaker axiom:
$$ S^h \circ R^k \subseteq R \qquad {\bf (comp \, (h,k) \, )}.$$
The next proposition (whose easy proof is left to the reader)
illustrates how the sets $\mathcal{F}_{h,k}(X)$ are related to
Catalan and factorial pairs.
\begin{prop}
\begin{enumerate}
\item[(i)] $\mathcal{C}(X) = \mathcal{F}_{1,1}(X)$.
\item[(ii)] For all $h$ and $k$ we have that $\mathcal{F}_{h,k}(X)
\subseteq \mathcal{F}(X)$.
\item[(iii)] If $a \leq b$, then $\mathcal{F}_{a,k}(X) \subseteq
\mathcal{F}_{b,k}(X)$ and $\mathcal{F}_{h,a}(X) \subseteq
\mathcal{F}_{h,b}(X)$ .
\end{enumerate}
\end{prop}
Each element of the family $\{ \mathcal{F}_{h,k}(X) : h,k \geq
1\}$, where $X$ is finite, can be characterized in terms of
permutations avoiding a set of patterns. For example, consider the
two families $\mathcal{F}_{h,1}(X)$ and $\mathcal{F}_{1,k}(X)$.
The following two propositions completely characterize them in
terms of pattern avoiding permutations. The proofs of both
propositions easily follow from the bijection given in proposition
\ref{pe}. In both propositions (as well as in the subsequent
corollary) $X$ denotes a set having $n$ elements.
\begin{prop}\label{av1} The permutation representation of
$\mathcal{F}_{1,k}(X)$ is given by $S_n ((k+2)12\cdots k(k+1))$.
\end{prop}
\begin{prop}\label{av} The permutation representation of $\mathcal{F}_{h,1}(X)$ is
given by $S_n (\pi_2 ,\pi_3 ,\ldots ,\pi_{h+1})$, where $\pi_i \in
S_{h+2}$, for every $2\leq i\leq h+1$, and $\pi_i$ is obtained
from $(h+2)(h+1)\cdots 21$ by moving $i$ to the rightmost
position.
\end{prop}
\begin{cor} \label{omega2} The cardinality of $\mathcal{F}_{2,1}(X)$
is given by the $n$-th Schr\"oder number.
\end{cor}
{\em Proof.}\quad From the previous proposition we get that the
permutation representation of $\mathcal{F}_{2,1}(X)$ is given by
$S_n (4312,4213)$. In \cite{K} it is shown that the above set of
pattern avoiding permutations (or, more precisely, the one
obtained by reversing both patterns) is counted by Schr\"oder
numbers.\hfill $\blacksquare$\bigskip
\textbf{Open problem 4.}\quad The enumeration of the sets
$\mathcal{F}_{h,k}(X)$ is still almost completely open,
except for some specific cases. For instance, concerning
$\mathcal{F}_{3,1}(X)$, proposition \ref{av} states that its
permutation representation is given by $S_n (53214,54213,54312)$.
The first terms of its counting sequence are
$1,2,6,24,117,652,3988,\ldots$, which are not in \cite{Sl}.
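\bigskip
Both the corollary above and the quoted initial terms can be
reproduced by brute force; the following Python sketch (ours)
counts the relevant pattern avoiding permutations.
\begin{verbatim}
from itertools import combinations, permutations

def contains(perm, pattern):
    # True iff some subsequence of perm is order-isomorphic to pattern
    k = len(pattern)
    shape = sorted(range(k), key=lambda i: pattern[i])
    for pos in combinations(range(len(perm)), k):
        values = [perm[p] for p in pos]
        if sorted(range(k), key=lambda i: values[i]) == shape:
            return True
    return False

def avoiders(n, patterns):
    return sum(1 for p in permutations(range(1, n + 1))
               if not any(contains(p, q) for q in patterns))

F21 = [(4,3,1,2), (4,2,1,3)]                    # S_n(4312, 4213)
F31 = [(5,3,2,1,4), (5,4,2,1,3), (5,4,3,1,2)]   # S_n(53214, 54213, 54312)

for n in range(1, 7):
    print(n, avoiders(n, F21), avoiders(n, F31))
# first column: 1, 2, 6, 22, 90, 394 (Schroeder numbers)
# second column: 1, 2, 6, 24, 117, 652 (the sequence quoted above)
\end{verbatim}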
\section{Other kinds of generalizations}\label{gener2}
Among the possible combinatorial interpretations of Catalan pairs
we have mentioned Dyck paths. In this section we show how some
slight modifications of the axioms for Catalan pairs allow us to
define different pairs of binary relations, which are naturally
interpreted as some well-known families of lattice paths and then
determine well-known number sequences. We assume that the reader
is familiar with the most common families of lattice paths, such
as Schr\"{o}der and Grand-Dyck paths.
\bigskip
As usual, we deal with pairs of binary relations $(S,R)$, both
defined on a set $X$ of cardinality $n$ (this will still be
expressed by saying that $(S,R)$ is a pair \emph{of size $n$}).
The axioms that $S$ and $R$ are required to satisfy are the same as the
axioms for Catalan pairs, except for the fact that we do not
impose irreflexivity on $S$. It is immediate to see that all the
remaining axioms are consistent with our new assumption.
\subsection{Unrestricted reflexivity}
Let $\mathcal{U}(n)$ be the set of pairs of binary relations
$(S,R)$ of size $n$, satisfying axioms (\textbf{ord R}),
(\textbf{tot}), (\textbf{inters}), (\textbf{comp}), such that $S$
is a transitive relation and, in analogy with the case of Catalan
pairs, $S \cap S^{-1} \subseteq \mathcal{D}(X)$, where
$\mathcal{D}(X)=\{ (x,x)\; |\; x\in X\}$ denotes the diagonal of
$X$ (the condition $S \cap S^{-1} = \emptyset$ of the Catalan case
would forbid $xSx$ altogether). Of course, since we are not
imposing irreflexivity on $S$, given $x\in X$, we may have either
$xSx$ or $x\! \! \not{\! S}x$.
\bigskip
A possible combinatorial interpretation of the elements of
$\mathcal{U}(n)$ can be obtained by means of a slight modification
of the notion of a perfect noncrossing matching. Loosely speaking,
we can introduce two different kinds of arches, namely solid and
dotted arches, imposing that, when $xSx$, the arch corresponding
to $x$ is dotted. These objects will be called \emph{two-coloured
perfect noncrossing matchings} (briefly, \emph{two-coloured
matchings}).
\bigskip
It is evident that, for any Catalan pair $(S,R)\in
\mathcal{C}(n)$, we can define exactly $2^{n}$ different elements
$(S',R')\in \mathcal{U}(n)$ with the property that
\begin{displaymath}
R=R', \quad \mbox{and} \quad S=S'\setminus \mathcal{D}(X).
\end{displaymath}
\noindent Hence the number of elements of $\mathcal{U}(n)$ is $2^n \, C_n$
(sequence A052701 in \cite{Sl}).
\bigskip
We obtain some more interesting combinatorial situations by giving
specific axioms for the behavior of $S$ with respect to the
diagonal $\mathcal{D}(X)$.
\subsection{Grand-Dyck paths and central binomial coefficients}
Recall that a \emph{Grand-Dyck} path of semi-length $n$ is a
lattice path from $(0,0)$ to $(2n,0)$ using \emph{up} $(1,1)$ and
\emph{down} $(1,-1)$ steps. The number of Grand-Dyck paths of
semi-length $n$ is given by the central binomial coefficient
${2n\choose n}$ \cite{St1}. We can represent a Grand-Dyck path by
using a two-coloured matching, with the convention that for the
parts of the path lying above the $x$-axis we use solid arches,
whereas for the parts of the paths lying below the $x$-axis we use
dotted arches (see figure \ref{2eggs}).
\begin{figure}[htb]
\begin{center}
\epsfig{file=2eggs.eps, width=4.8in} \caption{A Grand-Dyck path
and its representation as a two-coloured matching.}\label{2eggs}
\end{center}
\end{figure}
Of course, not every two-coloured matching represents a Grand-Dyck
path. Indeed, we must add the following constraint: if an arch $x$
is contained in an arch $y$, then $x$ and $y$ are either both
solid or both dotted.
In order to give a correct axiomatization of what can be called a
\emph{Grand-Dyck pair}, just add to the axioms for
$\mathcal{U}(n)$ the following one:
\begin{itemize}
\item if $xSy$, then $xSx$ if and only if $ySy$. \hfill
(\textbf{choose})
\end{itemize}
Denote by $\mathcal{G}(n)$ the resulting set of pairs of binary
relations, called \emph{Grand-Dyck pairs of size $n$}. It is
evident that, interpreting the relations $S$ and $R$ as in the
case of (one-coloured) matchings, and adding the convention that,
if $xSx$, then $x$ is a dotted arch, we get precisely the set of
two-coloured matchings.
For instance, referring to the example in figure \ref{2eggs}, $R$
and $S$ are as follows:
\begin{displaymath}
\begin{array}{l}
R=\{ (x,y), (x,u), (x,v), (x,z), (y,z), (y,w), (u,v), (u,z),
(u,w),
(v,z), (v,w) \}, \\
S=\{ (u,y), (v,y), (w,z), (u,u), (v,v), (y,y) \}.
\end{array}
\end{displaymath}
Axiom (\textbf{choose}) can be reformulated in a more elegant way.
\begin{prop}\label{alt} Let $\mathcal{D}(S)=\{ (x,x)\in X^2 \; |\; xSx\}$. Then axiom
{\bf (choose)} is equivalent to
\begin{displaymath}
{\cal D}(S) \circ S = S \circ {\cal D}(S).
\end{displaymath}
\end{prop}
\emph{Proof.}\quad Using (\textbf{choose}), it is easy to see that
$x(\mathcal{D}(S)\circ S)y$ if and only if $x\mathcal{D}(S)xSy$ if
and only if $xSy\mathcal{D}(S)y$ if and only if $x(S\circ
\mathcal{D}(S))y$. Conversely, suppose that $xSy$. If $xSx$, then
$x({\cal D}(S)\circ S)y$. However, by hypothesis, this is
equivalent to $x(S\circ {\cal D}(S))y$, whence $ySy$.\hfill $\blacksquare$\bigskip
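\bigskip
On the example above, axiom (\textbf{choose}) and the
reformulation of proposition \ref{alt} can be checked
mechanically; the small Python fragment below (ours) does so.
\begin{verbatim}
X = ["x", "y", "z", "u", "v", "w"]
R = {("x","y"), ("x","u"), ("x","v"), ("x","z"), ("y","z"), ("y","w"),
     ("u","v"), ("u","z"), ("u","w"), ("v","z"), ("v","w")}
S = {("u","y"), ("v","y"), ("w","z"), ("u","u"), ("v","v"), ("y","y")}

def compose(A, B):
    return {(a, d) for (a, b) in A for (c, d) in B if b == c}

DS = {(a, b) for (a, b) in S if a == b}   # D(S), the reflexive part of S

# axiom (choose): if x S y, then x S x if and only if y S y
assert all(((a, a) in S) == ((b, b) in S) for (a, b) in S)
# proposition: (choose) is equivalent to D(S) o S = S o D(S)
assert compose(DS, S) == compose(S, DS)
\end{verbatim}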
\subsection{Schr\"oder paths and Schr\"oder numbers}
Recall that a \emph{Schr\"oder path} of semi-length $n$ is a path
from $(0,0)$ to $(2n,0)$ using \emph{up} steps $(1,1)$,
\emph{down} steps $(1,-1)$, and horizontal steps of length two
$(2,0)$, and remaining weakly above the $x$-axis.
\bigskip
We can represent Schr\"oder paths by using two-coloured matchings
as well. We can essentially adopt the same representation as for
Dyck paths, just using dotted arches to represent horizontal
steps. According to such a representation, dotted arches can be
contained into other arches, but they cannot contain any arch (see
figure \ref{seggs}).
\begin{figure}[htb]
\begin{center}
\epsfig{file=seggs.eps, width=4.8in} \caption{A Schr\"oder path
and its representation as a two-coloured matching.}\label{seggs}
\end{center}
\end{figure}
This condition precisely identifies those two-coloured matchings
representing Schr\"oder paths among all two-coloured matchings.
\bigskip
Let $\mathcal{S}(n)\subseteq \mathcal{U}(n)$ denote the set of
pairs of relations $(S,R)$ on $X$ of cardinality $n$ satisfying
the following axiom:
\begin{itemize}
\item if $xSx$, then $x$ is a minimal element for $S$.
\hfill (\textbf{min})
\end{itemize}
Since the combinatorial interpretations of this kind of pairs of
relations are counted by Schr\"oder numbers, we call them
\emph{Schr\"oder pairs of size $n$}.
Also for axiom (\textbf{min}) an equivalent formulation can be
given which is analogous to that of proposition \ref{alt}, and
whose proof is left to the reader:
\begin{displaymath}
S\circ \mathcal{D}(S) =\mathcal{D}(S).
\end{displaymath}
Notice that, in this case, $S$ does not commute with
$\mathcal{D}(S)$; more precisely, we only have
\begin{displaymath}
S\circ \mathcal{D}(S)\subseteq \mathcal{D}(S)\circ S.
\end{displaymath}
For example, referring to the matching representation of the
Schr\"oder path given in figure \ref{seggs}, we have
$y(\mathcal{D}(S)\circ S)x$, but $(y,x)\notin S\circ
\mathcal{D}(S)$.
\section{Introduction}
Classical novae form a subclass of cataclysmic variables, consisting of a white dwarf which interacts
with a late-type companion star. The companion loses its mass through Roche lobe overflow,
forming an accretion disk around the white dwarf. The mass transfer from the companion induces
a thermonuclear runaway (TNR) on the surface of the white dwarf, which leads to the nova eruption.
Novae are important in several aspects. First of all, they have the potential to serve as standard candles
for extragalactic distance determination. This is due to the relation between the maximum luminosity
of the light curve and the rate of decline. \cite{1929ApJ....69..103H} first noticed that brighter novae are
prone to steeper decline. The empirical `Maximum Magnitude versus Rate of Decline' (MMRD) relation
was further investigated by \cite{1936PASP...48..191Z} and studied quantitatively by \cite{1945PASP...57...69M}
and \cite{1956AJ.....61...15A}. The theoretical foundation for the MMRD relation was laid down
by \cite{1981ApJ...243..926S} and further revised by \cite{1992ApJ...393..516L}.
Novae could also shed light on the underlying stellar population of their environment. For example,
\cite{1995ApJ...452..704D} proposed that fast novae ($t_2 < 12$ days) are related to stars
belonging to Population I with relatively massive white dwarfs, while slow novae are associated with Population
II stars and have less massive white dwarfs.
In addition, novae play a role in the galactic abundances. Novae have been considered as major sources of
galactic $^{13}$C, $^{15}$N and $^{17}$O, and minor contributors to $^7$Li, $^{19}$F and $^{26}$Al. However,
novae hardly contribute to the overall galactic metallicity compared to supernovae or AGB stars,
because only 10$^{-4}$ to 10$^{-5}M_{\odot}$ are ejected per nova outburst \citep{2007JPhG...34..431J}.
Recurrent novae are also regarded as possible supernova progenitor candidates
\citep[see e.g.][and references therein]{2010ApJS..187..275S}.
The fundamental question is whether recurrent novae accumulate
enough mass onto the central white dwarf envelope and turn into supernovae progenitors even after
several novae explosions.
Last but not least, novae are main contributors to the class of super-soft X-ray sources (SSS).
\cite{2005A+A...442..879P} searched
for X-ray counterparts of the optical novae in M31, and found that novae are major sources of soft
X-ray emission. The SSS phase can provide us with information on the white dwarf mass, the ejected
and burned mass in the outburst \citep[e.g.][]{2010AN....331..187P}.
Due to the interstellar extinction in the Galactic disk, we can only observe
a small fraction of the Galactic novae that erupt each year \citep{1997ApJ...487..226S}.
Thus, we need to take
into account rather large (and likely uncertain) corrections for incompleteness when determining the
spatial distribution or estimating the Galactic nova rate. In this respect, M31 is an ideal target for
a nova survey, because novae are still bright enough to be observed ($m_R$ $<$ 20 mag) and it is possible
to cover the entire M31 galaxy within several pointings.
Nova monitoring campaigns towards M31 date back to the pioneering work done by Hubble in
the 1920s \citep{1929ApJ....69..103H}.
\begin{table*}
\centering
\label{tab.M31_campaign}
\begin{minipage}{180mm}
\caption{Principal M31 classical nova surveys}
\begin{tabular}{lllllll}
\hline
Author(s)$/$Project & Epoch & Filter(s) & Detector & Novae & Annual rate & Reference(s) \\
\hline
Hubble & 1909--1927 & B & Plates & 85 & $\sim 30$ & \citet{1929ApJ....69..103H} \\
Arp & 1953--1954 & B & Plates & 30 & $26 \pm 4$ & \citet{1956AJ.....61...15A} \\
Rosino {\it et al.} & 1955--1986 & B & Plates & 142 & - & \citet{1964AnAp...27..498R, 1973A+AS....9..347R,1989AJ.....97...83R} \\
Ciardullo {\it et al.} & 1982--1986 & B, H$\alpha$ & CCD & 40 & - & \citet{1987ApJ...318..520C, 1990ApJ...356..472C} \\
Sharov \& Alksins & 1969--1989 & B & Plates & 21 & - & \citet{1991Ap+SS.180..273S} \\
Tomaney \& Shafter & 1987--1989 & H$\alpha$ & CCD & 9 & - & \citet{1992ApJS...81..683T} \\
Shafter \& Irby & 1990--1997 & H$\alpha$ & CCD & 72 & $37_{-8}^{+12}$ & \citet{2001ApJ...563..749S} \\
Rector {\it et al.} & 1995--1999 & H$\alpha$ & CCD & 44 & - & \citet{1999AAS...195.3608R} \\
AGAPE & 1994--1996 & $R,I$ & CCD & 12 & - & \citet{2004A+A...421..509A} \\
POINT-AGAPE & 1999--2002 & $r'$, $i'$, $g'$ & CCD & 20 & $65_{-15}^{+16}$ & \citet{2006MNRAS.369..257D} \\
NMS & 2001--2002 & $R,I$ & CCD & 2 & - & \citet{2004A+A...415..471J} \\
WeCAPP & 1997--2008 & $R,I$ & CCD & 91 & - & This work \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
A list of all the campaigns with published novae in M31 that we are aware of is shown in Table~\ref{tab.M31_campaign},
with most of the data compiled by \cite{2001ApJ...563..749S} and \cite{2004MNRAS.353..571D}.
Despite the extensive search towards M31, most previous studies have only sparse time sampling,
which makes the analysis of nova light curves rather difficult. Our WeCAPP project is dedicated
to monitoring M31 with up to daily sampling. In Sect. 4 we categorize our nova candidates according to the
classification scheme of \cite{2010AJ....140...34S}. We apply the power-law decline proposed
by \cite{2006ApJS..167...59H} to fit the smooth class light curves in Sect. 4.1. Novae showing
cusp, oscillation or jitter features in their light curves are presented in Sect. 4.2 - 4.4.
We then correlate our nova candidates with the literature to search for recurrent novae in Sect. 5.
We show the rate of decline of our nova candidates and the distribution
of their speed classes in Sect. 6 and end the paper with the conclusions in Sect.
7. All the light curves in our catalogue are presented in the Appendix.
\section{Observations and data reduction}
The WeCAPP project \citep{2001A+A...379..362R} was a dedicated survey
to search for microlensing events towards our neighboring galaxy
M31. We continuously monitored the bulge of M31 (when it was visible,
when the weather was cooperative and when there was an observer)
between September 1997 and March 2008 using the 0.8 m telescope of the
Wendelstein Observatory located in the Bavarian Alps.
The data were taken, optimally on a daily basis, in both \textit{R} and
\textit{I} filters with a field of view of 8$\farcm$3 $\times$ 8$\farcm$3. From
June 1999 to February 2002 we further extended our observations with
the 1.23 m (17$\farcm$2 $\times$ 17$\farcm$2 FOV) telescope of the Calar Alto
Observatory in Spain. After 2002 we used the Wendelstein telescope
alone to mosaic the full Calar Alto field of view with four pointings.
The positions of these four pointings are indicated in Fig. \ref{fig.m31_farbig}.
The data volume and quality of the four pointings (F1, F2,
F3, F4) differ drastically over the 11 seasons.
A list of the number of nights observed in each season is shown in
Table \ref{tab.nights}.
A detailed overview of the observations can be found in
\cite{2006A+A...445..423F} and Riffeser et al. (in prep.).
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{m31_farbig_new_2}
\caption{M31 composite image ($V$-, $R$-, and $I$-band) taken
at the Calar Alto Observatory. The black lines mark the four pointings
(F1 to F4) from the Wendelstein Observatory to mosaic the full FOV of
the Calar Alto Observatory.}
\label{fig.m31_farbig}
\end{figure}
\begin{table}[ht]
\setlength{\tabcolsep}{1.5mm}
\begin{center}
\caption{Lists of the analyzed nights per season from the 11-year WeCAPP campaign}
\begin{tabular}{c|rrrr|rrrr}
\hline\hline
season & \multicolumn{4}{c|}{$R$-band} & \multicolumn{4}{c}{$I$-band} \\
\hline
& F1 & F2 & F3 & F4 & F1 & F2 & F3 & F4 \\
\hline\hline
1997 - 1998 & 36 & 7 & 1 & 4 & 33 & 7 & 0 & 3 \\
1998 - 1999 & 33 & 1 & 1 & 1 & 28 & 1 & 1 & 1 \\
1999 - 2000 & 154 & 0 & 0 & 0 & 145 & 0 & 0 & 0 \\
2000 - 2001 & 184 & 108 & 124 & 108 & 159 & 89 & 104 & 89 \\
2001 - 2002 & 240 & 136 & 159 & 136 & 212 & 119 & 140 & 119 \\
2002 - 2003 & 34 & 18 & 24 & 18 & 30 & 16 & 24 & 18 \\
2003 - 2004 & 35 & 24 & 29 & 31 & 33 & 21 & 26 & 29 \\
2004 - 2005 & 25 & 23 & 26 & 25 & 19 & 16 & 19 & 19 \\
2005 - 2006 & 30 & 26 & 28 & 29 & 26 & 20 & 22 & 23 \\
2006 - 2007 & 107 & 106 & 103 & 103 & 48 & 45 & 46 & 47 \\
2007 - 2008 & 62 & 56 & 52 & 58 & 36 & 35 & 35 & 38 \\
\hline
total & 940 & 505 & 547 & 513 & 769 & 369 & 417 & 386 \\
\hline\hline
\end{tabular}
\tablefoot{Each season
starts from the 1st of May until the 30th of April in the next year. The WeCAPP campaigns began
in 1997 focusing on F1 with the Wendelstein Observatory. From 1999 until 2002 we extended our
observations by including the Calar Alto Observatory, which boosted the number of images taken in
these seasons. From 2002 on,
we used the Wendelstein Observatory alone and mosaicked the full Calar Alto FOV with four pointings.}
\label{tab.nights}
\end{center}
\end{table}
To quantify a realistic time sampling of the survey, we define ``good quality
data points'' as data points with PSF flux errors below $0.4
\times 10^{-5}\mathrm{Jy}$. In Fig.~\ref{fig.lownoise_sampling} we show
for every night the fractional area of pixels with errors below this limit.
0\% indicates we have no observations during the night.
\begin{figure}[ht]
\centering
\includegraphics[width=0.43\textwidth]{lownoise_novae_sampling.eps}
\caption{Fraction of good quality data points in $t$ averaged over the survey area. The definition
of good quality is given in the text. The vertical grey zones indicate the times when M31 is not
observable from the location of the telescopes (May and June).
0\% indicates we have no observations during the night.}
\label{fig.lownoise_sampling}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{lownoise_novae.eps}
\caption{Fraction of good quality data points in $(x,y)$ averaged over time $t$.
The definition of good quality is given in the text.
The low fraction in the central part is caused by the high noise of M31 itself.}
\label{fig.lownoise}
\end{figure}
Fig.~\ref{fig.lownoise} shows the spatial variation of the fraction of all data with
flux errors below the flux error limit averaged over the 11 seasons. It demonstrates that we expect most of our novae in field F1
and fewer in the fields F2, F3, and F4. The field F1 was observed much more frequently than the
other ones because it is the subfield with the highest lensing probability.
The data was then reduced by our customized pipeline MUPIPE \citep[see ][]{2002A+A...381.1095G},
which performs CCD reduction, position alignment, photometric alignment, frame stacking and difference
imaging following the algorithm of \cite{1998ApJ...503..325A}.
After the difference imaging, we perform PSF photometry on each pixel as follows. First, we extract
the PSF from several isolated, bright and unsaturated reference stars. Then we fit this PSF to all
variable sources. Finally, we integrate the count rates over the area of the PSF to determine the flux of the source.
The results of the project are presented in
\cite{2003ApJ...599L..17R, 2008ApJ...684.1093R} and partially contributed to \cite{2010ApJ...717..987C}.
In addition to the original
microlensing targets, the intensive observations in two bands also yielded more than 20,000 variables
in the bulge of M31 \citep{2006A+A...445..423F} and the nova candidates presented in this paper.
\section{Nova detection}
To establish an automatic detection of nova candidates, we apply the following criteria for candidate selection
based on the measured $R$-band PSF flux (as mentioned in Sect. 2):\\
\begin{itemize}
\item The significance of the variability must be at least 10$\sigma$ relative to the baseline, and the measured flux excess of the variable source must be a local maximum among the neighbouring pixels at a given time step.
Note that $\sigma$ throughout this paper refers to the errors of the individual PSF flux excess measurements.
\item The variable source must have a measured flux excess larger than $4\times10^{-5}$ Jy in \textit{R}-band (corresponding to
$m_R$ = $-$2.5 log($\frac{4\times 10^{-5} Jy}{F_{\mathrm{Vega,}R}}$) $\sim$ 19.7 mag, with $F_{\mathrm{Vega,}R}$ = 3060 Jy being the flux
of Vega in the $R$-band) and the first measurement after the measured maximum flux excess must have a flux
excess $>~2\times10^{-5}$ Jy.
\item To exploit the eruptive nature of novae, we define the strength $s$ of the outburst:
\begin{equation}
s = \frac{\Delta F_{\max}/\sigma _{\max} ^2+\Delta F _{\max+1}/\sigma _{\max+1} ^2}{1/\sigma _{\max} ^2+1/\sigma _{\max+1} ^2}
\end{equation}
where $\Delta F_{\max}$ is the measured maximum flux excess relative to the reference image and $\Delta F_{\max+1} $ is the first measurement after
the measured maximum flux excess. The $\sigma _{\max} $ and $\sigma _{\max+1}$ are the errors in the measurements of the flux excess. We require
$s$ $>$ 4.6 $\times$ 10$^{-5}$ Jy for nova detection.
\begin{table}[!h]
\centering
\caption{Detection criteria for nova candidates}
\begin{tabular}{lr}
\hline
Criterion & Number \\
\hline
Full light curves & 4043256 \\
Local flux maximum with $s>$4.6$\times$10$^{-5}$ Jy and $a>$4.7 & 1005 \\
Masking of bright stars & 156 \\
Grouping & 105 \\
Inspection by eye & 91 \\
\hline
\label{tab.detect}
\end{tabular}
\end{table}
\item To avoid contamination from periodically varying sources, we define the asymmetry $a$ between positive and negative
outliers in the light curve relative to the baseline:
\begin{equation}
a=\frac{\mathrm{Number~of~data~points~with~}\Delta F>5\sigma}{\mathrm{Number~of~data~points~with~}\Delta F<-5\sigma} - 1.
\end{equation}
This quantity $a$ is useful in filtering out normal variable sources, which have $a$ $\sim$ 0, while the eruptive nature of
novae leads to $a~\gg$ 1. We empirically require $a > 4.7$ for nova detection (a schematic implementation of the selection quantities $s$ and $a$ is sketched after this list).
\item We then apply a special mask to filter out false detections around bright stars, especially spikes.\\
\item After the masking, we apply a grouping algorithm to merge multiple pixel detections belonging to the same nova candidate at different time steps.\\
\item In the last step, we inspect the difference images and light curves by eye to make sure that no image
artefact escapes our detection and is misinterpreted as a nova.
\end{itemize}
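The selection quantities $s$ and $a$ can be summarized in the following schematic Python fragment
(ours, not the actual WeCAPP pipeline code), where \texttt{flux} and \texttt{err} stand for the
PSF flux excess light curve of a single pixel and its errors:
\begin{verbatim}
import numpy as np

def outburst_strength(flux, err):
    # error-weighted mean of the maximum flux excess and the first
    # measurement after it (the quantity s of criterion 3); assumes
    # the maximum is not the last epoch of the light curve
    i = int(np.argmax(flux))
    w1, w2 = 1.0 / err[i]**2, 1.0 / err[i + 1]**2
    return (flux[i] * w1 + flux[i + 1] * w2) / (w1 + w2)

def asymmetry(flux, err):
    # ratio of significant positive to negative outliers minus one
    # (the quantity a of criterion 4)
    pos = np.sum(flux > 5.0 * err)
    neg = np.sum(flux < -5.0 * err)
    return pos / neg - 1.0 if neg > 0 else np.inf

def passes_selection(flux, err):
    # criteria 2-4: flux excess, strength and asymmetry cuts (in Jy)
    i = int(np.argmax(flux))
    return (flux[i] > 4.0e-5 and flux[i + 1] > 2.0e-5
            and outburst_strength(flux, err) > 4.6e-5
            and asymmetry(flux, err) > 4.7)
\end{verbatim}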
We combine criteria 1-4 into one single step. The number of candidates remaining after each step is shown in Table \ref{tab.detect}.
Among the nova candidates, 24 were discovered by the WeCAPP project for the first time,
while 5 of them are known but were not officially published and can be
found on the CBAT\footnote{M31 (Apparent) Novae Page, http://www.cfa.harvard.edu/iau/CBAT\_M31.html} or
Extragalactic Novae\footnote{www.rochesterastronomy.org/novae.html} webpages. The rest of the nova candidates
are published and can be found in the literature,
see e.g. \cite{2007A+A...465..375P, 2010AN....331..187P}\footnote{An up-to-date online-version of the
catalog can be found at http://www.mpe.mpg.de/$\sim$m31novae/opt/m31/index.php}. The positions and light curves of these 91 novae
are presented in Table \ref{tab.cat} and in the Appendix.
\section{Nova taxonomy}
Although all novae differ slightly from one another, it is possible to group them by their
light-curve or spectroscopic properties. One of the commonly used methods to characterize novae
is the `speed class' proposed by \cite{1964gano.book.....P}, who categorized novae according to
their light-curve evolution and described the decline time-scale by the time needed to drop by
2 magnitudes below the maximum ($t_2$).
\cite{1992AJ....104..725W}
did a thorough study of the spectroscopic properties of novae, and categorized them into
the Fe II (galactic thick-disk novae) or He/N (galactic disk novae) group according to the most prominent features in their
spectra.
Della Valle \& Livio (1998) further established the connection between the
speed class and the spectroscopic classification. They found
that fast novae are mainly related to the He/N novae, while the slow novae tend to show Fe II features in their
spectra. The proposed
explanation is that He/N novae come from the galactic disk and tend to harbor massive white dwarfs,
thus showing a fast and steep decline. On the other hand, the Fe II novae originate from the less
massive Population II stars in the galactic thick disk, and hence have a slow decline.
The speed class is not enough to fully account for the differences between novae.
\cite{2010AJ....140...34S} gathered 93 galactic novae from the
American Association of Variable Star Observers (AAVSO) and made a
thorough study using the complete coverage of their light curves.
They suggested classifying
novae according to distinct features during their decline, such as the plateau, the cusp
caused by a secondary brightening, and the dip caused by dust.
In this section we classify our nova candidates (if possible) following the taxonomy proposed
by \cite{2010AJ....140...34S}. Readers are referred to Table 3 and Figure 2
in \cite{2010AJ....140...34S} for the definition and exemplary light curves for different nova classes.
Note that the classification scheme of \cite{2010AJ....140...34S} is based on the $V$-band magnitude,
while we are using the $R$-band, which might be affected by the strong H$\alpha$ emission. We thus check
our $I$-band light curves, which are not affected by the strong H$\alpha$ emission, and identify
the apparent features of the nova classification scheme of \cite{2010AJ....140...34S} in both the $R$ and $I$-band.
\subsection{S Class and the universal decline law}
The S-class novae have smooth light curves following the universal power-law decline
($F \propto t^{-1.75}$) due to free-free emission from the expanding shell, as proposed by
\cite{2006ApJS..167...59H}. In principle, the classification scheme of
\cite{2010AJ....140...34S} is based on the fact that all the light curves originate from the S-class.
The S-class is indeed consistent with the vast majority of our nova candidates.
To verify the universal decline law, we thus fit our candidate
light curves with a 4-parameter formula:
\begin{equation}
\Delta F = f_b + f_0 \times (t-t_0)^{\alpha},
\label{eq.free_t0}
\end{equation}
where $f_b$ is the baseline level and will be different from zero in cases where the nova candidate flux is present in the
reference frame used in difference imaging or there is a variable close to it (see e.g. the light curve of WeCAPP-N10 in
the appendix). $f_0$ gives the proportionality factor between
flux and time, $t_0$ is the onset of the nova outburst and $\alpha$ is the index of the power-law decline.
After the first iteration, we found that some candidates have unreasonable values of $t_0$, long before the nova eruption.
For such events, we use a 5-parameter formula
\begin{equation}
\Delta F = f_b + f_0 \times (t-t_0)^{\alpha},\quad t_0 \equiv t_{_{-1}} + \delta^2
\label{eq.fixed_t0}
\end{equation}
with $t_{_{-1}}$ fixed at the last data point in the baseline just before the eruption to avoid unreasonable $t_0$.
The best-fit parameters for equations (\ref{eq.free_t0}) and (\ref{eq.fixed_t0}) are given in Table
\ref{tab.free_t0}.
For the S-class, we first tried to fit the power-law decline for all the nova candidates. A candidate is classified as an S-class nova only
when the fitting routine finds a solution for either equation \ref{eq.free_t0}
or equation \ref{eq.fixed_t0}. N01, N09, N17, N24, N41, N49, N54, N58 and N77 are not attributed to
the S-class because the fitting routine failed to find a solution.
Our mean best-fit values of $\alpha$ from Table \ref{tab.free_t0}, for the free-$t_0$ fits alone and combined
with the fixed-$t_0$ fits, are $-$1.51 and $-$1.32, respectively.
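A minimal version of the free-$t_0$ fit of equation (\ref{eq.free_t0}) can be written with standard
least-squares tools; the sketch below (ours, not the actual fitting code behind
Table \ref{tab.free_t0}) uses \texttt{scipy}.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def decline(t, f_b, f_0, t_0, alpha):
    # baseline plus power-law decline of the flux excess
    return f_b + f_0 * (t - t_0)**alpha

def fit_decline(t, flux, err):
    # t, flux, err: epochs (JD) and R-band flux excess (with errors)
    # from the maximum onwards; alpha starts at the universal -1.75,
    # and t_0 is kept before the first epoch so the base stays positive
    p0 = [0.0, np.median(flux), t[0] - 1.0, -1.75]
    bounds = ([-np.inf, 0.0, -np.inf, -5.0],
              [np.inf, np.inf, t[0] - 1e-3, -0.1])
    popt, pcov = curve_fit(decline, t, flux, p0=p0, sigma=err,
                           absolute_sigma=True, bounds=bounds)
    return popt, np.sqrt(np.diag(pcov))
\end{verbatim}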
\begin{figure*}
\centering
\includegraphics[scale=0.95]{3color_with_RNe.eps}
\caption{Distribution of the WeCAPP nova candidates.
The recurrent nova candidates (see Sect. 5) are marked in green.
The overlaying image is a three-color-combined image
using the observations obtained from Calar Alto observatories in $V$, $R$ and $I$-band. The image has
a size of 17$\farcm$2 $\times$ 17$\farcm$2.}
\label{fig.all}
\end{figure*}
\clearpage
\begin{table*}[!h]
\centering
\begin{sideways}
\begin{minipage}{275mm}
\caption{WeCAPP nova catalogue}
\begin{tabular}{|ccccrlllll|}
\hline
Name & RA(2000) & Dec(2000) & $t_{\max}$ & $\Delta t_{\max}$ & Class & CBAT & Discovery (and light curve) reference(s)& X-ray obs. & Spectroscopic obs. \\
& [h:m:s] & [d:m:s] & & [day] & & & & & \\
\hline
N01 & 00:43:05.37 & 41:14:59.2 & 745.52 & 29.01 & Unclassified & 1997-10e & 1997-14 in \cite{2001ApJ...563..749S} & & \\
N02 & 00:42:52.35 & 41:16:13.2 & 750.45 & 33.95 & Unclassified & 1997-10f & 1997-10 in \cite{2001ApJ...563..749S} & & \\
N03 & 00:42:42.13 & 41:15:10.4 & 753.55 & 37.05 & Unclassified & 1997-11a & 1997-07 in \cite{2001ApJ...563..749S} & \cite{2007A+A...465..375P} & \\
\multicolumn{8}{|c}{} & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N04 & 00:42:21.76 & 41:12:16.2 & 753.55 & 37.05 & Unclassified & 1997-10c & 1997-02 in \cite{2001ApJ...563..749S} & & \\
N05 & 00:42:46.64 & 41:14:49.2 & 1109.48 & 243.19 & Unclassified & 1998-09d & IAUC 7015, \cite{2000AstL...26..433S} & & Fe II, \cite{2011arXiv1104.0222S} \\
N06 & 00:42:49.65 & 41:16:06.5 & 1251.30 & 2.00 & Unclassified & 1999-02a & This work & & \\
N07 & 00:42:49.69 & 41:15:05.6 & 1359.55 & 0.93 & Cusp & 1999-06a & IAUC 7218, PAV-78668 in \cite{2004MNRAS.351.1071A} & & Fe II, \cite{2011arXiv1104.0222S} \\
N08 & 00:43:01.85 & 41:15:38.4 & 1372.62 & 1.01 & Smooth & 1999-06b & \cite{1999BAAS...31.1420R} & & \\
N09 & 00:42:30.11 & 41:15:27.3 & 1719.62 & 553.15 & Unclassified & 2000-06a & This work & & \\
N10 & 00:42:46.75 & 41:12:51.9 & 1726.63 & 1.00 & Cusp & 2000-08b & \cite{2007A+A...465..375P} & & \\
N11 & 00:42:43.97 & 41:17:55.5 & 1754.64 & 18.01 & Oscillation & 2000-07a & PACN-00-01 in \cite{2004MNRAS.353..571D} & \cite{2005A+A...442..879P,2007A+A...465..375P} & \\
\multicolumn{8}{|c}{} & \cite{2006ApJ...643..844O} & \\
N12 & 00:42:44.65 & 41:20:40.6 & 1755.65 & 1.02 & Cusp & 2000-07b & PACN-00-03 in \cite{2004MNRAS.353..571D} & & \\
N13 & 00:42:47.45 & 41:15:07.8 & 1763.66 & 1.02 & Cusp & 2000-08a & \cite{2007A+A...465..375P} & \cite{2005A+A...442..879P,2007A+A...465..375P} & \\
N14 & 00:42:37.70 & 41:17:37.8 & 1766.64 & 1.00 & Cusp & 2000-08d & PACN-00-04 in \cite{2004MNRAS.353..571D} & & \\
N15 & 00:42:21.49 & 41:07:47.3 & 1932.34 & 1.04 & Cusp & 2001-01a & This work & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N16 & 00:43:05.26 & 41:19:08.2 & 1948.34 & 4.02 & Unclassified & 2001-01b & This work & & \\
N17 & 00:42:42.82 & 41:15:55.2 & 1940.33 & 6.04 & Unclassified & 2001-01c & This work & & \\
N18 & 00:42:54.95 & 41:16:09.2 & 1948.34 & 4.02 & Unclassified & 2001-02a & This work & & \\
N19 & 00:42:57.75 & 41:08:12.3 & 2097.56 & 124.25 & Unclassified & 2001-07b & This work & & \\
N20 & 00:42:38.76 & 41:14:44.4 & 2097.56 & 124.25 & Unclassified & 2001-07c & This work & & \\
N21 & 00:42:30.79 & 41:14:36.1 & 2130.63 & 3.00 & Unclassified & 2001-07d & IAUC 7674, PACN-01-01 in \cite{2004MNRAS.353..571D} & & \\
N22 & 00:43:18.62 & 41:09:49.0 & 2094.56 & 121.25 & Smooth & 2001-07a & PAV-74935 in \cite{2004MNRAS.351.1071A} & \cite{2005A+A...442..879P,2007A+A...465..375P} & \\
N23 & 00:43:10.62 & 41:17:58.0 & 2163.65 & 28.02 & Unclassified & 2001-08b & PACN-01-03 in \cite{2004MNRAS.353..571D} & & \\
N24 & 00:42:40.60 & 41:07:59.9 & 2163.65 & 28.02 & Unclassified & 2001-08c & PACN-01-04 in \cite{2004MNRAS.353..571D} & & \\
N25 & 00:42:18.52 & 41:12:39.3 & 2163.65 & 28.02 & Unclassified & 2001-08a & IAUC 7684, PACN-01-02 in \cite{2004MNRAS.353..571D} & & \\
N26 & 00:42:34.62 & 41:18:13.0 & 2151.60 & 0.99 & Smooth & 2001-08d & IAUC 7709, PAC-26277 in \cite{2004MNRAS.351.1071A} & \cite{2005A+A...442..879P,2007A+A...465..375P} & \\
N27 & 00:43:03.31 & 41:12:11.5 & 2190.48 & 1.90 & Jitter & 2001-10a & IAUC 7729, PACN-01-06 in \cite{2004MNRAS.353..571D} & \cite{2007A+A...465..375P} & Fe II, \cite{2011arXiv1104.0222S} \\
\multicolumn{7}{|c}{} & NMS2 in \cite{2004A+A...415..471J} & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N28 & 00:42:47.21 & 41:16:18.7 & 2197.32 & 0.98 & Oscillation & 2001-10c & This work & & \\
N29 & 00:42:39.59 & 41:09:02.9 & 2299.32 & 2.99 & Unclassified & 2001-12b & This work & & \\
N30 & 00:42:41.44 & 41:16:24.6 & 2266.30 & 0.92 & Smooth & 2001-12a & IAUC 7794 & & Fe II, \cite{2011arXiv1104.0222S} \\
\hline
\end{tabular}
\label{tab.cat}
\end{minipage}
\end{sideways}
\end{table*}
\addtocounter{table}{-1}
\begin{table*}[!h]
\centering
\begin{sideways}
\begin{minipage}{275mm}
\caption{WeCAPP nova catalogue continued.}
\begin{tabular}{|ccccrlllll|}
\hline
Name & RA(2000) & Dec(2000) & $t_{\max}$ & $\Delta t_{\max}$ & Class & CBAT & Discovery (and light curve) reference(s)& X-ray obs. & Spectroscopic obs. \\
& [h:m:s] & [d:m:s] & & [day] & & & & & \\
\hline
N31 & 00:42:33.89 & 41:18:24.0 & 2282.31 & 6.02 & Smooth & 2002-01b & IAUC 7794, PAV-26285 in \cite{2004MNRAS.351.1071A} & \cite{2005A+A...442..879P,2007A+A...465..375P} & He/N, \cite{2011arXiv1104.0222S} \\
N32 & 00:42:52.89 & 41:15:10.4 & 2283.30 & 0.99 & Cusp & 2002-01a & IAUC 7794, PAV-79136 in \cite{2004MNRAS.351.1071A} & & \\
N33 & 00:42:30.74 & 41:19:05.9 & 2325.38 & 4.02 & Smooth & 2002-02a & This work & & \\
N34 & 00:43:01.08 & 41:16:19.9 & 2476.54 & 13.00 & Smooth & 2002-07a & IAUC 7937, IAUC 7938 & & \\
N35 & 00:42:39.74 & 41:17:03.3 & 2521.57 & 38.01 & Doubtful & 2002-07b & This work & & \\
N36 & 00:42:48.66 & 41:16:26.3 & 2573.63 & 26.07 & Doubtful & 2002-08b & This work & & \\
N37 & 00:42:48.90 & 41:16:05.3 & 2661.25 & 6.82 & Smooth & 2003-01b & This work & & \\
N38 & 00:42:52.24 & 41:13:54.5 & 2797.53 & 91.24 & Doubtful & 2003-01c & IAUC 8155 & & \\
N39 & 00:42:58.38 & 41:16:08.3 & 2797.53 & 91.24 & Smooth & 2003-06a & IAUC 8155 & & \\
N40 & 00:42:45.12 & 41:17:54.0 & 2820.50 & 21.94 & Smooth & 2003-06c & IAUC 8165 & & \\
N41 & 00:42:41.14 & 41:18:32.4 & 2832.56 & 12.05 & Unclassified & 2003-06d & IAUC 8165 & & \\
N42 & 00:42:15.85 & 41:11:59.9 & 2834.44 & 5.96 & Smooth & 2003-07b & IAUC 8165, N3 in \cite{2005ASPC..330..449S} & & \\
N43 & 00:42:49.64 & 41:18:02.0 & 2867.53 & 6.11 & Doubtful & 2003-08a & IAUC 8210 & & \\
N44 & 00:42:41.20 & 41:16:16.0 & 2925.46 & 16.95 & Unclassified & 2003-08c & IAUC 8226 & \cite{2010arXiv1010.1461H} & \\
N45 & 00:42:46.74 & 41:19:47.4 & 2931.30 & 22.96 & Smooth & 2003-09b & IAUC 8222, N5 in \cite{2005ASPC..330..449S} & & \\
N46 & 00:42:46.45 & 41:15:55.6 & 2925.42 & 26.88 & Unclassified & 2003-10a & This work & & \\
N47 & 00:42:53.78 & 41:18:46.2 & 2949.54 & 8.12 & Unclassified & 2003-11a & IAUC 8248 & \cite{2007A+A...465..375P} & \\
\multicolumn{8}{|c}{} & \cite{2010A+A...523A..89H} & \\
N48 & 00:43:00.76 & 41:11:26.9 & 2978.22 & 6.86 & Smooth & 2003-11b & IAUC 8253 & \cite{2007A+A...465..375P} & \\
\multicolumn{8}{|c}{} & \cite{2010A+A...523A..89H} & \\
N49 & 00:43:04.73 & 41:12:21.9 & 2992.32 & 7.02 & Unclassified & 2003-12a & IAUC 8262, N8 in \cite{2005ASPC..330..449S} & \cite{2007A+A...465..375P} & \\
N50 & 00:42:54.14 & 41:15:12.2 & 2994.32 & 2.00 & Smooth & 2003-12b & IAUC 8262 & \cite{2007A+A...465..375P} & \\
N51 & 00:42:41.18 & 41:15:45.0 & 3006.24 & 52.88 & Unclassified & 2004-01b & This work & \cite{2007A+A...465..375P} & \\
\multicolumn{8}{|c}{} & \cite{2010arXiv1010.1461H} & \\
N52 & 00:43:08.65 & 41:15:35.3 & 3039.29 & 38.95 & Smooth & 2004-01a & N9 in \cite{2005ASPC..330..449S} & \cite{2007A+A...465..375P} & \\
N53 & 00:42:47.28 & 41:16:21.4 & 3039.29 & 52.81 & Cusp & 2004-02a & This work & \cite{2007A+A...465..375P} & \\
N54 & 00:42:40.28 & 41:14:42.5 & 3254.44 & 193.14 & Unclassified & 2004-09a & IAUC 8404 & \cite{2007A+A...465..375P} & Fe II, \cite{2011arXiv1104.0222S} \\
N55 & 00:42:43.90 & 41:17:35.0 & 3319.60 & 9.30 & Doubtful & 2004-07a & \cite{2007A+A...465..375P} & \cite{2007A+A...465..375P} & \\
N56 & 00:42:47.24 & 41:15:54.5 & 3291.45 & 7.92 & Unclassified & 2004-10b & \cite{2007A+A...465..375P} & \cite{2007A+A...465..375P} & \\
N57 & 00:42:51.84 & 41:16:18.2 & 3291.37 & 7.84 & Unclassified & 2004-10a & ATEL 346 & \cite{2007A+A...465..375P} & \\
N58 & 00:43:07.46 & 41:18:04.6 & 3319.49 & 15.13 & Unclassified & 2004-11b & \cite{2007A+A...465..375P} & \cite{2007A+A...465..375P} & He/N, \cite{2011arXiv1104.0222S} \\
\multicolumn{8}{|c}{} & \cite{2010A+A...523A..89H} & \\
N59 & 00:42:47.17 & 41:16:19.8 & 3319.49 & 15.13 & Smooth & 2004-11f & CBAT & \cite{2007A+A...465..375P} & \\
N60 & 00:42:42.81 & 41:18:27.8 & 3319.60 & 9.30 & Unclassified & 2004-11a & \cite{2007A+A...465..375P} & \cite{2007A+A...465..375P} & Fe II, \cite{2011arXiv1104.0222S} \\
\hline
\end{tabular}
\label{tab.cat1}
\end{minipage}
\end{sideways}
\end{table*}
\addtocounter{table}{-1}
\begin{table*}[!h]
\centering
\begin{sideways}
\begin{minipage}{275mm}
\caption{WeCAPP nova catalogue continued.}
\begin{tabular}{|ccccrlllll|}
\hline
Name & RA(2000) & Dec(2000) & $t_{\max}$ & $\Delta t_{\max}$ & Class & CBAT & Discovery (and light curve) reference(s)& X-ray obs. & Spectroscopic obs. \\
& [h:m:s] & [d:m:s] & & [day] & & & & & \\
\hline
N61 & 00:42:45.47 & 41:16:33.2 & 3348.42 & 28.93 & Unclassified & 2004-11d & \cite{2007A+A...465..375P} & \cite{2007A+A...465..375P} & \\
N62 & 00:42:32.29 & 41:19:25.7 & 3346.36 & 25.92 & Cusp & 2004-11c & CBAT & \cite{2007A+A...465..375P} & \\
N63 & 00:42:28.39 & 41:16:36.1 & 3382.36 & 4.09 & Unclassified & 2005-01a & \cite{2007A+A...465..375P} & \cite{2007A+A...465..375P} & Fe II, \cite{2011arXiv1104.0222S} \\
N64 & 00:42:28.10 & 41:09:54.7 & 3381.36 & 21.03 & Unclassified & 2004-12a & ATEL 379 & \cite{2007A+A...465..375P} & \\
N65 & 00:42:52.79 & 41:14:28.8 & 3426.28 & 17.92 & Smooth & 2005-02a & ATEL 421 & \cite{2007A+A...465..375P} & \\
\multicolumn{8}{|c}{} & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N66 & 00:42:36.37 & 41:18:41.8 & 3427.38 & 1.03 & Doubtful & 2005-02b & This work & & \\
N67 & 00:42:50.80 & 41:20:39.8 & 3592.47 & 100.43 & Cusp & 2005-07a & \cite{2007A+A...465..375P} & \cite{2010A+A...523A..89H} & Fe II, \cite{2011arXiv1104.0222S} \\
N68 & 00:42:52.25 & 41:19:59.4 & 3635.37 & 15.94 & Smooth & 2005-09a & CBAT & \cite{2010A+A...523A..89H} & Fe II, ATEL 850 \\
N69 & 00:42:42.11 & 41:14:01.1 & 3635.59 & 9.30 & Unclassified & 2005-09d & This work & \cite{2010A+A...523A..89H} & \\
N70 & 00:42:42.12 & 41:18:00.3 & 3661.54 & 1.17 & Unclassified & 2005-10b & ATEL 651 & \cite{2010A+A...523A..89H} & \\
\\
N71 & 00:43:13.42 & 41:16:58.9 & 3863.57 & 50.27 & Smooth & 2006-04a & ATEL 805 & \cite{2010A+A...523A..89H,2010AN....331..193H} & \\
N72 & 00:43:13.93 & 41:20:05.5 & 3863.57 & 50.27 & Smooth & 2006-05a & This work & \cite{2010A+A...523A..89H} & \\
N73 & 00:43:11.81 & 41:13:44.7 & 3880.53 & 2.98 & Unclassified & 2006-06a & This work & \cite{2010A+A...523A..89H} & Fe II, ATEL 850 \\
N74 & 00:42:32.77 & 41:16:49.1 & 3867.57 & 54.27 & Unclassified & 2006-06b & ATEL 829 & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N75 & 00:42:33.17 & 41:10:06.8 & 3984.41 & 3.89 & Smooth & 2006-09a & \cite{2007A+A...469..115C} & \cite{2010A+A...523A..89H} & \\
N76 & 00:42:41.45 & 41:14:44.5 & 4000.42 & 8.92 & Unclassified & 2006-09b & ATEL 884 & \cite{2010A+A...523A..89H} & \\
N77 & 00:42:42.39 & 41:08:45.6 & 3999.40 & 4.98 & Unclassified & 2006-09c & ATEL 887, \cite{2011ApJ...727...50S}& \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & Fe II, \cite{2011arXiv1104.0222S} \\
N78 & 00:42:44.05 & 41:15:02.1 & 4096.47 & 1.09 & Unclassified & 2006-11b$^{\dagger}$ & This work & & \\
N79 & 00:42:21.08 & 41:13:45.4 & 4095.38 & 5.02 & Smooth & 2006-12a & This work & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & Fe II, \cite{2011arXiv1104.0222S} \\
N80 & 00:42:43.22 & 41:17:48.4 & 4095.52 & 5.08 & Unclassified & 2006-12c & ATEL 973 & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N81 & 00:42:51.15 & 41:14:33.5 & 4122.43 & 6.14 & Unclassified & 2007-01a & CBAT & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N82 & 00:42:53.61 & 41:12:09.9 & 4166.30 & 13.03 & Smooth & 2007-03a & CBAT & \cite{2010A+A...523A..89H,2010arXiv1010.1461H} & \\
N83 & 00:43:04.05 & 41:17:08.3 & 4296.48 & 24.96 & Smooth & 2007-07a & ATEL 1131 & \cite{2010arXiv1010.1461H} & \\
N84 & 00:42:45.91 & 41:18:04.4 & 4297.50 & 1.01 & Smooth & 2007-07b & ATEL 1139 & \cite{2010arXiv1010.1461H} & Fe II, ATEL 1186 \\
N85 & 00:43:03.29 & 41:14:53.0 & 4307.48 & 9.02 & Smooth & 2007-07c & ATEL 1146 & \cite{2010arXiv1010.1461H} & Hybrid or He/N, ATEL 1186 \\
N86 & 00:42:59.49 & 41:15:06.6 & 4337.45 & 2.95 & Unclassified & 2007-07d & ATEL 1162 & \cite{2010arXiv1010.1461H} & \\
N87 & 00:42:43.30 & 41:17:44.1 & 4314.55 & 15.01 & Smooth & 2007-07e & ATEL 1156 & \cite{2010arXiv1010.1461H} & Fe II, ATEL 1186 \\
N88 & 00:42:29.39 & 41:18:24.8 & 4356.49 & 16.92 & Smooth & 2007-08c & IAUC 7664, ATEL 1198 & \cite{2010arXiv1010.1461H} & \\
N89 & 00:43:04.18 & 41:15:54.1 & 4425.30 & 14.81 & Unclassified & 2007-11c & ATEL 1275 & \cite{2010arXiv1010.1461H} & Fe II, \cite{2011arXiv1104.0222S} \\
N90 & 00:43:19.98 & 41:13:46.3 & 4444.61 & 18.35 & Smooth & 2007-12b & This work & ATEL 1360, ATEL 1647 & He/N, \cite{2011arXiv1104.0222S} \\
\multicolumn{8}{|c}{} & \cite{2009ApJ...705.1056B} & \\
N91 & 00:42:30.37 & 41:09:53.6 & 4508.27 & 0.98 & Unclassified & 2008-02a & ATEL 1380 & \cite{2010arXiv1010.1461H} & \\
\hline
\end{tabular}
\label{tab.cat2}
\tablefoot{We show the position and the time of maximum flux (expressed in JD $-$ 2450000) of the nova candidates in columns 2, 3 and 4. The uncertainty of the position is smaller than 0$\farcs$1 (see Riffeser et al., in prep.). The uncertainty in the time of maximum flux $\Delta t_{\max}$ in column 5 is derived from the time difference between $t_{\max}$ and the last measurement before $t_{\max}$. The light curve classification is shown in column 6, with 'unclassified' indicating those novae we are not able to classify and 'doubtful' indicating candidates that resemble other variables more than novae. In column 7 we give the corresponding CBAT nomenclature. Column 8 lists the references for the discovery (and light curves) in the optical. The X-ray and spectroscopic observations are shown in columns 9 and 10. The 24 novae without discovery references are newly discovered by WeCAPP. $\dagger$ We also detected M31N-2006-12d at the same position, which is possibly a rebrightening of M31N-2006-11b given the short time difference.}
\end{minipage}
\end{sideways}
\end{table*}
\clearpage
\noindent
The power-law index for a free $t_0$ is close to the
value given by \cite{2006ApJS..167...59H}, while the value of $\alpha$ from the combination
of both free and fixed $t_0$ deviates from $-$1.75, which indicates that we might have missed the
true eruption date for some of the novae.
Note that we constrain the value of the power-law index $\alpha$ from the $R$-band images, which are contaminated by
the H$\alpha$ emission line; the index might therefore differ from the universal power-law index of \cite{2006ApJS..167...59H}.
\begin{figure}
\centering
\includegraphics[scale=0.57]{S_nova.eps}
\caption{S Class novae with free $t_0$. The single offsets are $-$15.04 for N08, $-$3.59 for N30,
$-$2.81 for N31, 2.84 for N37, $-$18.02 for N39, $-$0.88 for N42, $-$9.89 for N68, $-$8.06 for N72,
$-$11.81 for N79 and 1.02 for N82, 5.14 for N83, $-$13.66 for N84 and 5.37 for N90 respectively.
Note that for most of the data points the error bars are smaller than the symbol of the data points.
Here we only show the decline part of the light curve.
Full light curves can be found in the appendix.}
\label{fig.s-class_free_t0}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{S_nova_fixed.eps}
\caption{S Class novae with fixed $t_0$. The single offsets are $-$8.84 for N22, 15.17 for N26,
12.44 for N33, $-$6.44 for N34, 4.97 for N40, $-$10.40 for N45, 2.53 for N48,
10.92 for N50, $-$13.97 for N52, 8.05 for N59, $-$0.22 for N65, $-$1.84 for N71, 16.24 for N75, 6.70 for N85,
$-$11.06 for N87, and $-$5.14 for N88, respectively.
Here we only show the decline part of the light curve. Full light curves can be found in the appendix.}
\label{fig.s-class_fix_t0}
\end{figure}
\begin{table}
\centering
\caption{Power-law decline fits for S-class novae}
\begin{tabular}{lcc}
\multicolumn{3}{c}{}\\
\multicolumn{3}{c}{Free $t_0$}\\
\hline
Name & $t_0$(JD$-$2450000) & $\alpha$ \\
\hline
N08 & 1337.9 $\pm$ 0.5 & $-$2.07 $\pm$ 0.02 \\
N30 & 2257.3 $\pm$ 0.1 & $-$1.55 $\pm$ 0.01 \\
N31 & 2277.9 $\pm$ 0.2 & $-$1.29 $\pm$ 0.02 \\
N37 & 2647.4 $\pm$ 1.4 & $-$3.44 $\pm$ 0.28 \\
N39 & 2776.6 $\pm$ 1.1 & $-$1.31 $\pm$ 0.04 \\
N42 & 2831.0 $\pm$ 0.2 & $-$1.21 $\pm$ 0.03 \\
N68 & 3627.7 $\pm$ 0.6 & $-$1.08 $\pm$ 0.05 \\
N72 & 3845.5 $\pm$ 2.4 & $-$2.04 $\pm$ 0.16 \\
N79 & 4088.9 $\pm$ 0.4 & $-$0.89 $\pm$ 0.03 \\
N82 & 4155.1 $\pm$ 1.6 & $-$2.06 $\pm$ 0.23 \\
N83 & 4288.2 $\pm$ 2.1 & $-$2.55 $\pm$ 0.51 \\
N84 & 4289.8 $\pm$ 0.9 & $-$1.05 $\pm$ 0.08 \\
N90 & 4437.3 $\pm$ 1.5 & $-$3.20 $\pm$ 0.44 \\
\hline
\multicolumn{3}{c}{}\\
\multicolumn{3}{c}{Fixed $t_0$}\\
\hline
Name & $t_0$(JD$-$2450000) & $\alpha$ \\
\hline
N22 & 1964.3 & $-$4.35 $\pm$ 0.17 \\
N26 & 2145.5 & $-$1.92 $\pm$ 0.04 \\
N33 & 2321.4 & $-$1.22 $\pm$ 0.02 \\
N34 & 2447.5 & $-$1.70 $\pm$ 0.06 \\
N40 & 2798.6 & $-$1.67 $\pm$ 0.03 \\
N45 & 2908.3 & $-$0.97 $\pm$ 0.03 \\
N48 & 2971.4 & $-$0.92 $\pm$ 0.01 \\
N50 & 2985.3 & $-$1.37 $\pm$ 0.04 \\
N52 & 3003.3 & $-$1.43 $\pm$ 0.03 \\
N59 & 3304.4 & $-$1.63 $\pm$ 0.32 \\
N65 & 3408.4 & $-$1.44 $\pm$ 0.10 \\
N71 & 3813.3 & $-$2.92 $\pm$ 0.22 \\
N75 & 3980.5 & $-$2.28 $\pm$ 0.15 \\
N85 & 4298.5 & $-$1.16 $\pm$ 0.05 \\
N87 & 4299.5 & $-$1.22 $\pm$ 0.11 \\
N88 & 4339.6 & $-$1.48 $\pm$ 0.40 \\
\hline
\label{tab.free_t0}
\end{tabular}
\end{table}
\subsection{C Class}
The light curves of C-class novae have a cusp shape: they first follow a power-law decline,
then rise steeply to a second maximum, and finally drop sharply.
The characteristic C-class light curve has
a secondary maximum emerging between 1 and 8 months after the primary peak \citep{2010AJ....140...34S}.
\cite{2009arXiv0904.2228K} found that C-class light curves can be well fitted by an exponential
component superimposed on the smooth decline. They further proposed that the cusps can originate
from a secondary ejection and its break-out into the optically thick nova winds.
\cite{2009ApJ...694L.103H} also connect the formation of the cusp shape to the input of magnetic energy
from a rotating white dwarf. In addition, the sharp drop before the light curve returns to the power-law decline is
attributed to the sudden formation of dust, as proposed by \cite{2008AJ....136.1815L}.
We have found in total 10 candidates showing cusp features in our WeCAPP catalog and show their light curves in
Fig. \ref{fig.c-class}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{C_nova_short.eps}
\includegraphics[scale=0.6]{C_nova_long.eps}
\caption{C Class novae. The offsets applied to the magnitudes are $-$15.07 for N07, $-$11.88 for N10, $-$8.26 for N12,
$-$17.71 for N13, $-$11.58 for N14, $-$13.79 for N15, $-$9.16 for N32, $-$19.19 for N53, $-$19.15 for N62
and $-$13.04 for N67, respectively. Here we only show the decline part of the light curve.
Full light curves can be found in the appendix.}
\label{fig.c-class}
\end{figure}
\subsection{O Class}
The O-class light curve follows the S-class light curve, except that
during part of the decline it shows quasi-sinusoidal, self-similar oscillations.
It has been argued that the white dwarfs of O-class novae are both highly magnetic and massive
\citep{2010AJ....140...34S}. However, these cannot be the only factors leading to the oscillations, because
the nova V1500 Cyg in \cite{2010AJ....140...34S} fulfills these requirements but does not show
oscillations. There have been many proposals for the oscillation mechanism, but none of them has been
compared to and verified by observations \citep[see Sect. 4 in ][]{2010AJ....140...34S}.
The oscillations generally start around 3 mag below the peak, which indicates
that we might have missed the peak in our nova candidates N11 and N28, whose light curves are shown in
Fig. \ref{fig.o-class} and in the appendix.
In Fig. \ref{fig.o-class} we show the two O-class candidates discovered during our observation campaign.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.41]{O_nova.eps}
\caption{O Class novae. The offsets are $-$15.77 for N11 and $-$17.61 for N28, respectively.
Here we only show the decline part of the light curve. Full light curves can be found in the appendix.}
\label{fig.o-class}
\end{figure}
\subsection{J Class}
The characteristic features of J-class novae are jitters on top of a smooth decline.
These jitters are symmetric, sharp-topped flares superposed on the base of an S-class light curve;
this is the major difference from O-class novae, which instead show oscillations
around the smooth decline. The jitters usually have amplitudes larger than half
a magnitude. Jitters do not occur in the late tail of the light curve, and most of them occur within
3 mag below the peak. \cite{2010AJ....140...34S} further propose two subclasses according to the
emergence of the jitters: one subclass has jitters only near the peak, while the other has jitters
spread over the light curve roughly until the nova is 3 mag dimmer than the peak.
Among our candidates we found one evident J-class light curve, which belongs to the second subclass
of \cite{2010AJ....140...34S} and is shown in Fig. \ref{fig.j-class}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.41]{J_nova.eps}
\caption{J Class nova. The offset is $-$16.99 for N27.
Here we only show the decline part of the light curve. Full light curves can be found in the appendix.}
\label{fig.j-class}
\end{figure}
It has been reported that there is a gradual increase of the time intervals between two successive jitters
\citep{1992A&A...257..599B, 2005A&A...429..599C, 2009ApJ...701L.119P, 2010arXiv1010.5611T}, while
\cite{2010AJ....140...34S}, using the same data set as \cite{2009ApJ...701L.119P}, found no distinctive trend. We thus
searched for such a trend in our nova candidate N27 and performed a fit with the following equation:
\begin{equation}
\log (t_J - t_{J-1}) = a\log (t_J-t_{max}) + b,
\end{equation}
where $t_J$ is the time of the $J$-th jitter.
\begin{figure}
\centering
\includegraphics[scale=0.6]{J_peak_time.eps}
\caption{J-class peak intervals for nova N27, using $R$-band data.}
\label{fig.j-peak}
\end{figure}
The jitters used in the fitting are indicated by the vertical marks in Fig. \ref{fig.j-class}. The time intervals between the
successive jitters are shown in Fig. \ref{fig.j-peak}. Our best-fit
values are $a$ = 0.64$\pm$0.09 and $b$ = 0.11$\pm$0.16. The slope is smaller
than the values for DK Lac ($a$ = 0.88) and V4745 Sgr ($a$ = 0.79) derived by \cite{2009ApJ...701L.119P} and the $a$ = 0.79 for the
6 novae presented by \cite{2010arXiv1010.5611T}. With only one J-class nova candidate in our catalog, we cannot
tell whether this is a difference between M31 novae and Galactic novae, or simply a variation among individual
novae.
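A minimal sketch of this jitter-interval fit (the jitter times below are hypothetical; in practice the $t_J$ are read off the $R$-band light curve of N27):
\begin{verbatim}
import numpy as np

t_max = 0.0                                            # time of maximum (days)
t_J = np.array([12.0, 21.0, 34.0, 52.0, 78.0, 113.0])  # hypothetical jitters

x = np.log10(t_J[1:] - t_max)   # log elapsed time of the J-th jitter
y = np.log10(np.diff(t_J))      # log interval to the preceding jitter
a, b = np.polyfit(x, y, 1)
print(f"a = {a:.2f}, b = {b:.2f}")  # compare with a = 0.64 +/- 0.09 for N27
\end{verbatim}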
\subsection{Other classes}
Besides the above-mentioned classes, there remain three more classes in the taxonomy of \cite{2010AJ....140...34S}:
\begin{itemize}
\item Flat-topped (F) class, which has an extended interval of nearly constant brightness at the peak.
\item Dust-dip (D) class, where the decline is interrupted by another, much steeper decline, followed by a
recovery to just below the original decline.
\item Plateau (P) class, where the smooth decline is interrupted by a long-lasting and nearly flat interval,
followed by a return to the original decline.
\end{itemize}
Among our candidates, however, we do not find evident light curves belonging to these classes. This could be partially
attributed to the set-up of our observation campaign. For example, even the shallowest dust dips
in \cite{2010AJ....140...34S} occur more than 1 month after the peak, with the dip being about 6 mag dimmer than
the light-curve maximum. Such a magnitude variation can hardly be followed in M31, because the nova is then too faint to be
discerned. This implies that we might have misclassified D-class novae into other classes. The non-detection of P-class novae can be explained by the filter system we used.
\cite{2006ApJS..167...59H} pointed out that the true plateau from the continuum radiation is best observed in
the \textit{y}-band filter. Since we are using the \textit{R} and \textit{I} filters, it is possible that the
plateau phase does not show up in the $R$ and $I$ bands due to the influence of the emission lines
during the course of the decline.
To summarize, we have classified 42 nova candidates and find 69\% to be S-class, 24\% C-class, 5\% O-class, and 2\% J-class, while
\cite{2010AJ....140...34S} find 38\% S-class, 1\% C-class, 4\% O-class, and 16\% J-class in their sample.
\section{Recurrent Novae}
\footnotesize
\begin{table*}[t]
\centering
\caption{Recurrent Nova candidates}
\begin{minipage}{165mm}
\begin{tabular}{cccccccc}
\hline
WeCAPP ID & RA & DEC & NAME & RA & DEC & Separation & $\Delta_{M31}$ \\
\hline
N02 (M31N-1997-10f) & 00:42:52.35 & +41:16:13.2 & M31N-2008-08b & 00:42:52.38 & +41:16:12.9 & 0$\farcs$54 & 1$\farcm$50\\
N19 (M31N-2001-07b) & 00:42:57.75 & +41:08:12.3 & M31N-1963-09c & 00:42:57.73 & +41:08:12.4 & 0$\farcs$32 & 8$\farcm$29\\
N19 (M31N-2001-07b) & 00:42:57.75 & +41:08:12.3 & M31N-1968-09a & 00:42:57.71 & +41:08:11.9 & 0$\farcs$72 & 8$\farcm$29\\
N19 (M31N-2001-07b) & 00:42:57.75 & +41:08:12.3 & M31N-2010-10e & 00:42:57.76 & +41:08:12.3 & 0$\farcs$15 & 8$\farcm$29\\
N29 (M31N-2001-12b) & 00:42:39.59 & +41:09:02.9 & M31N-1997-11k & 00:42:39.59 & +41:09:02.9 & 0$\farcs$00 & 7$\farcm$12\\
N29 (M31N-2001-12b) & 00:42:39.59 & +41:09:02.9 & M31N-2009-11b & 00:42:39.61 & +41:09:03.2 & 0$\farcs$42 & 7$\farcm$12\\
N83 (M31N-2007-07a) & 00:43:04.05 & +41:17:08.3 & M31N-1990-10a & 00:43:04.05 & +41:17:07.5 & 0$\farcs$80 & 3$\farcm$82\\
\hline
\end{tabular}
\tablefoot{We give the WeCAPP name, the positions (also see
Fig. \ref{fig.all}), the corresponding novae
fulfilling the 1$\farcs$0 criterion, the separation and the distance from the center of M31 ($\Delta_{M31}$).}
\label{tab.recurrent-nova}
\end{minipage}
\end{table*}
\normalsize
Recurrent novae are potential supernova progenitors \citep{2010ApJS..187..275S}. We compare the positions of
our nova candidates
with the catalogs by \cite{2007A+A...465..375P, 2010AN....331..187P}.
We have found 4 recurrent nova candidates by selecting novae in the literature that are located within
1 arcsec of our nova candidates (see Table \ref{tab.recurrent-nova} and Fig. \ref{fig.RNe}).
Among the potential recurrent nova candidates, N29 has 3 outbursts in 12 years, which would be an unprecedentedly short recurrence period.
As pointed out by \cite{2009ATel.2286....1H}, the outburst appears earlier in the UV and H$\alpha$ than in the $R$-band, which does not
fit the nova scheme very well. They thus suggest an alternative interpretation, namely that this event could be a dwarf nova in the Milky Way.
N19 has 4 outbursts detected so far. Because of the short time separation between the first two outbursts, \cite{1989SvAL...15..382S} doubted
its nova nature and suggested it to be a U Gem type foreground Galactic dwarf nova. However, the spectroscopic observations of the 4th outburst
in 2010 \citep{2010ATel.3006....1S} confirmed it as a He/N spectroscopic class nova located in M31. In addition, \cite{2010ATel.3038....1P}
reported the SSS turn-on $\sim$ 15 days after the first optical detection in 2010.
To test how likely it is that an uncorrelated nova falls within the 1 arcsec matching radius, we perform a test using the upper-right quarter of our
pointing F1, which has the highest M31 light contribution from the bulge and contains 42 novae. The ratio of the area occupied by
the 1$\farcs$0 circles of these 42 novae to the total area of this quarter (300$\times$300 arcsec$^2$) implies that the chance of an uncorrelated
nova coinciding with an existing nova is low (1.5 : 1000).
As most of the recurrent novae are not found in this quadrant (see Fig. \ref{fig.all} and Table \ref{tab.recurrent-nova}),
the chance of coincidence is even smaller for the majority of the recurrent nova candidates.
Note that we use stricter selection criteria to search for recurrent novae; we thus have fewer candidates than
presented by \cite{2007A+A...465..375P, 2010AN....331..187P}.
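The chance-coincidence estimate quoted above amounts to the following short calculation (numbers as given in the text):
\begin{verbatim}
import math

n_novae = 42          # novae in the upper-right quarter of pointing F1
r = 1.0               # matching radius in arcsec
area = 300.0 * 300.0  # area of the quarter in arcsec^2

p = n_novae * math.pi * r ** 2 / area
print(f"chance coincidence ~ {p:.4f}")  # ~ 0.0015, i.e. about 1.5 : 1000
\end{verbatim}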
\begin{figure}[!h]
\centering
\includegraphics[scale=0.45]{RN.eps}
\caption{Positions of the recurrent nova candidates. The larger circle in the center indicates the 1$\farcs$0 radius of our selection criterion. The positions
of the potential recurrent nova candidates are marked by the smaller circles.}
\label{fig.RNe}
\end{figure}
\cite{2006ApJS..167...59H} suggested that recurrent novae all bear the plateau light curve. However,
we did not detect evident plateaus in our light curves. The main reason is that we do not have comprehensive coverage
of the light curves.
Even aside from the lack of highly sampled observations, it would be hard to find such plateaus because the
light curves
in \textit{R} or \textit{I} are contaminated by the bright emission lines. \cite{2008ASPC..401..206H} thus
advocate observations in the Str\"omgren $y$-band (centered at 547 nm), since it is designed to cut out the strong emission lines in the
wide $V$ bandpass filter and can follow the continuum flux more accurately. However, the Str\"omgren $y$-filter is narrow
and requires longer exposure times, so when WeCAPP was initiated we chose the $I$-filter instead of the Str\"omgren $y$-filter
for confirming microlensing events through their achromaticity.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.56]{R_nova.eps}
\caption{Light curves of the recurrent nova candidates. The offsets are $-$11.76 for N02, $-$16.76 for N19,
$-$18.86 for N29, and $-$9.86 for N83, respectively. Here we only show the decline part of the light curve.
Full light curves can be found in the appendix.}
\label{fig.rn_lc}
\end{figure}
\section{Rate of decline and the speed class}
\begin{table*}[!th]
\centering
\caption{Speed classes of novae according to \cite{1989clno.conf....1W}}
\begin{tabular}{llrccc}
\hline
Speed class & $t_2$ & d$V$/d$t$ & \multicolumn{2}{c}{M31 sample} & MW sample \\
& [day] & [mag/day] & This work & \cite{2004MNRAS.353..571D} & \cite{2010AJ....140...34S} \\
\hline
Very fast & $\le$ 10 & $\ge$ 0.2 & 8 & 1 & 35 \\
Fast & 11-25 & 0.18-0.08 & 18 & 3 & 27 \\
Moderately Fast & 26-80 & 0.07-0.025 & 46 & 11 & 23 \\
Slow & 81-150 & 0.024-0.013 & 11 & 2 & 7 \\
Very slow & 151-250 & 0.013-0.008 & 7 & 3 & 3 \\
 & & $\le$0.008 & 1 & 0 & 2 \\
\hline
\label{tab.speed}
\end{tabular}
\end{table*}
In this section we present the rates of decline for our nova sample. Due to the observing gaps before
some of the apparent maxima, it is hard to recover the true maximum fluxes of the nova eruptions. Nevertheless,
the apparent maxima can serve as lower limits to the true maxima. One can then derive upper limits for the
$t_2$ values (the time required for the nova to fade by two magnitudes) if the apparent maxima are taken as the true maxima.
We have retrieved the $t_2$ values for our sample as follows: we perform a linear fit to the decline part of
each light curve and determine the rate of decline, d$m$/d$t$ (in units of magnitudes per day). We then use d$m$/d$t$ to
derive the $t_2$ value relative to the observed maximum magnitude for all novae. The result is shown in Fig. \ref{fig.mmrd_all}.
Besides the linear fit, we also applied the universal decline law proposed by \cite{2006ApJS..167...59H} to
retrieve the $t_2$ value from the observed magnitude at the apparent maximum. This procedure could only be carried out for
30 S-class novae, because the fitting routine fails to find a solution for the other classes. The result is shown in
Fig. \ref{fig.mmrd_s-class}. The reader should keep in mind that Figs. \ref{fig.mmrd_all} and \ref{fig.mmrd_s-class}
do not give the exact MMRD relation, but serve as upper limits in $t_2$ and lower limits in magnitude.
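A minimal sketch of the linear-fit route to $t_2$ (the decline data below are hypothetical; the fits in this work use the WeCAPP $R$-band light curves):
\begin{verbatim}
import numpy as np

# hypothetical decline data: time since apparent maximum (days) and magnitude
t = np.array([0.0, 5.0, 10.0, 20.0, 35.0])
mag = np.array([16.2, 16.7, 17.1, 17.9, 19.0])

dmdt, m0 = np.polyfit(t, mag, 1)  # rate of decline in mag/day
t2 = 2.0 / dmdt                   # days to fade 2 mag below the apparent maximum
print(f"dm/dt = {dmdt:.3f} mag/day, t2 <= {t2:.1f} days")
\end{verbatim}
Since the apparent maximum is only a lower limit to the true maximum, the $t_2$ obtained in this way is an upper limit.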
\begin{figure}
\centering
\includegraphics[scale=0.5]{mag_t2_lin.eps}
\caption{Distribution of the observed apparent maximum brightness in the $R$-band and the
fitted $t_2$ for all nova candidates. The red and blue points refer to equation (\ref{eq.free_t0})
with $t_0$ as a free parameter and equation (\ref{eq.fixed_t0}) with
$t_0$ as a fixed parameter, respectively. The green points are novae belonging to other classes.
The $t_2$ value is derived from d$m$/d$t$ and
the observed apparent maximum in the light curves. See the main text for a detailed description.}
\label{fig.mmrd_all}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{mag_t2_pow.eps}
\caption{Distribution of the observed apparent maximum and the fitted $t_2$ for S-class novae. The $t_2$ value is derived from
the universal decline law \citep{2006ApJS..167...59H} and the observed apparent maximum in the light curves. See the main text
for a detailed description. The red and blue points refer to equation (\ref{eq.free_t0}) with $t_0$ as a free parameter and equation (\ref{eq.fixed_t0}) with
$t_0$ as a fixed parameter, respectively.}
\label{fig.mmrd_s-class}
\end{figure}
As pointed out by \cite{1989clno.conf....1W}, the value of d$m$/d$t$ is also an indicator of the speed class. We thus use the d$m$/d$t$ values
of our sample to derive the frequency of the different nova speed classes in M31. A comparison of the d$m$/d$t$ distribution
of our sample with the novae presented by \cite{2004MNRAS.353..571D} is shown in Fig. \ref{fig.n_fq}; the two M31 samples agree rather well
within the statistical errors (shown are only the $\sqrt{n}$ number-count errors). The Milky Way data set of \cite{2010AJ....140...34S}
differs significantly, showing a much higher fraction of very fast novae than the M31 data. For a fair comparison, one would
have to correct the \cite{2010AJ....140...34S} sample for its severe observational selection effects, as it is observed from inside the
(dusty) Milky Way disk. A more detailed comparison is beyond the scope of this paper, as it requires detailed modeling of the
stellar and dust populations of both galaxies.
\begin{figure}
\centering
\includegraphics[scale=0.5]{N_dRdt.eps}
\caption{The distribution of the speed class of novae in M31 (see Table \ref{tab.speed},
for definition see \cite{1989clno.conf....1W}). The red line is derived from our sample and the green line is from
the M31 novae presented by \cite{2004MNRAS.353..571D}. The blue line represents the Milky Way novae by \cite{2010AJ....140...34S}.}
\label{fig.n_fq}
\end{figure}
\section{Conclusion and outlook}
We have presented the positions, outburst times, and maximum brightnesses of the 91 nova candidates
discovered during the time span of the WeCAPP project. Light-curve classifications under the taxonomic
scheme of \cite{2010AJ....140...34S} have been presented, and the full \textit{R}- and \textit{I}-band
light curves of each individual nova during outburst are given in the Appendix.
In this work we successfully applied a scheme developed for a Milky Way nova
sample, which because of observational selection effects is certainly dominated by the Galactic disk,
to a nova sample of a different host galaxy, observed mostly towards the bulge of that host. The
differences between the Milky Way and our M31 WeCAPP sample in the member ratios of the subclasses
defined by \cite{2010AJ....140...34S} probably reflect to some extent different observational selection
effects, but bear the potential for further conclusions on the differences between the stellar populations
of M31 and the Milky Way, once the selection effects are properly accounted for.
We provide the full light-curve data of the novae on request, as well as
postage stamps of the reduced, stacked, or difference-imaging frames.
Part of this catalogue has been used to find X-ray counterparts by \cite{2005A+A...442..879P, 2007A+A...465..375P},
showing that the super-soft X-ray sources (SSS) in M31 are mostly constituted by novae during eruption. The turn-on
and turn-off of the SSS phase provide information on the mass ejected and accreted onto the surface
of the white dwarf.
Besides the X-ray monitoring campaign, there is also an infrared survey of M31 novae using the
\textit{Spitzer Space Telescope} \citep{2011ApJ...727...50S}, which indicates a correlation
between the dust-formation timescales and the nova
speed class. Such studies would not be possible without the speed classes determined from optical observations.
Ground-based optical surveys, such as PTF \citep{2009PASP..121.1395L,2009PASP..121.1334R},
PanSTARRS \citep{2002SPIE.4836..154K} and LSST \citep{2002SPIE.4836...10T}, will continue to play
an important role in the regime of multi-wavelength nova observations and help us gain insight into the
underlying physical mechanisms of novae.
\begin{acknowledgements}
We are grateful for the comments from the anonymous referee. We thank Sara B\"uhler and Silona Wilke for their contributions in observation.
This work was supported by the DFG cluster of excellence `Origin and Structure of the Universe' (www.universe-cluster.de).
\end{acknowledgements}
\bibliographystyle{aa}
|
1,108,101,566,369 | arxiv | \section{Introduction}
\label{sec:intro}
Physical entities typically lose their individual identities within a highly correlated whole that is not describable in terms of the mere knowledge of its physical constituents. Viewing a time series as a whole system with the series values as its constituents, the corresponding autocorrelation can be interpreted as the correlations among the system's constituents. Random-matrix theory (RMT) is the method of choice for identifying such correlations~\cite{Laloux, Plerou1, Plerou2, Plerou3}. However, a blind comparison of such correlation information with the statistical characteristics of the relevant RMT \emph{per se} reveals only the discrepancy from purely random matrix statistics, without relying on \emph{a priori} physical knowledge about the system. The fractional Gaussian noise (fGn)~\cite{Mandelbrot}, though, provides an appropriate measure to systematically trace back the fingerprints of such physical information left on the correlation profile of the system. Some recent considerations of such a figure of merit include invoking an fGn-based criterion for the visibility method by Lacasa \emph{et al}.~\cite{Lacasa} and an fGn analysis of the level crossing algorithm by Vahabi \emph{et al}.~\cite{Vahabi}.
FGns are well known, well studied, and describable by a single Hurst exponent. This suggests the following strategy for keeping track of the correlations: One first calculates the autocorrelation matrix of the empirical series and then, rather than comparing the outcome with the Hurst exponent of white noise (known to be $0.5$) as in RMT, makes a detailed comparison with the whole family of Hurst exponents of the fGns. In this way, the \emph{optimal} Hurst exponent yielding the most consistent interpretation of the empirical series can be extracted. In practice, a direct comparison of the ensemble average of the eigenvalues spectra of the time series associated with various values of $H$ with that of the original time series can provide such an optimal Hurst exponent. It is the purpose of the present paper to demonstrate these ideas.
This paper is structured as follows: Sec.~\ref{sec:methodology} sets the scene by providing the details of the main technique introduced for an autocorrelation analysis of a time series as an alternative to RMT. The statistics of fGn series based on such an autocorrelation analysis is then detailed in Sec.~\ref{sec:Statistics of fGn}. Sec.~\ref{sec:real_life} illustrates the application of the proposed method through two examples in the contexts of finance and turbulence. Finally, Sec.~\ref{sec:conclusions} contains our conclusions and an assessment of the applicability range of the proposed formalism in other relevant contexts.
\section{Methodology}
\label{sec:methodology}
\subsection{Random-matrix theory}
\label{subsec:RMT}
RMT is a widely used mathematical technique for studying the statistical characteristics of large complex systems where the nature of the underlying interactions and their association with the ensuing correlations are not known \emph{ab initio}. For instance, the experimental data from nuclear scattering exhibit a great deal of complexity associated with the stochastic behavior of the nuclear resonances and their spacings~\cite{Porter}. Eugene Wigner used RMT to describe the statistical distribution of such nuclear resonances~\cite{Wigner1} and found a striking resemblance between such a distribution and the one associated with the eigenvalues of a known class of random matrices called the Gaussian orthogonal ensemble (GOE). RMT has since found a wide range of applications in various and seemingly unconnected areas of mathematics and physics, including number theory~\cite{Schroeder}, finance~\cite{Laloux, Plerou1, Plerou2, Plerou3, Namaki1, Namaki2, Namaki3}, and quantum many-body systems~\cite{Guhr}. Such an extensive list of applications is believed to be rooted in the existence of \emph{universality classes}, which, in turn, is conceivably a consequence of the underlying symmetries and the law of large numbers~\cite{Ergun}. Here, we provide only a brief overview of the method at the minimal level on which the subsequent material relies and refer the interested reader to Refs.~\cite{Brody, Ergun, Guhr, Mehta} for further and deeper details.
Random matrices are matrices whose elements are drawn from some probability distribution subject to required symmetries. Dyson demonstrated that all random matrix ensembles fall into three universality classes, called the orthogonal ensemble (OE), the unitary ensemble (UE), and the symplectic ensemble (SE)~\cite{Dyson1,Dyson2,Dyson3}. The universality class associated with a given random matrix ensemble is determined by the transformation-invariance properties of its distribution function and by the type of physical quantity (e.g. the Hamiltonian, scattering, or transfer matrix) the random ensemble represents. As an example, consider an $M \times M$ Hamiltonian matrix $\mathbf{H}$ with \emph{independent} matrix elements $H_{ij}$ taken from some random distribution $P_{ij}$, and hence a total distribution function of the product form $P(\mathbf{H}) \equiv \prod_{ij} P_{ij}(H_{ij})$. The invariance of the distribution function $P(\mathbf{H})$ so defined under orthogonal, unitary, or symplectic transformations then specifies its functional form as a Gaussian distribution of the form
\begin{eqnarray}
\label{eq:P(H)}
P_{M\beta}(\mathbf{H}) = c_{M\beta} \exp\bigg(-\frac{M\beta}{4\sigma^2}\textrm{Tr}\{\mathbf{H}^2\}\bigg) \; ,
\end{eqnarray}
where $c_{M\beta}$ is the normalization constant, $\sigma$ denotes the standard deviation of the off-diagonal matrix elements, and the cases $\beta= 1, 2, 4$ correspond to a Gaussian orthogonal ensemble (GOE), a Gaussian unitary ensemble (GUE), and a Gaussian symplectic ensembles (GSE), respectively.
The joint probability distribution of the eigenvalues of the random matrix $\mathbf{H}$ (denoted by $\lambda_1,...,\lambda_M$) for all Gaussian ensembles can be calculated from \Eq{eq:P(H)} and is given by
\begin{multline}
\label{eq:P(l)}
P_{M\beta}(\lambda_1,...,\lambda_M) \\ = c_{M\beta} \prod_{i<j} |\lambda_i-\lambda_j|^\beta \exp\bigg(-\frac{M\beta}{4\sigma^2} \sum_{k=1}^M \lambda_k^2\bigg) \;,
\end{multline}
where the dependence $|\lambda_i-\lambda_j|^\beta$ indicates the repulsion of the adjacent eigenvalues. For the specific case of $2 \times 2$ matrices, the distribution of the spacings between the nearest-neighbor eigenvalues $s \equiv \Delta \lambda$ can be obtained from the latter equation and is known to be given by~\cite{Ergun}
\begin{eqnarray}
\label{eq:P(s)}
P_{\beta}(s) = c_{\beta} s^\beta \exp\big(-a_{\beta}s^2 \big) \; ,
\end{eqnarray}
where $c_{\beta}$ and $a_{\beta}$ are some $\beta$-dependent constants. This relation is often referred to as the Wigner surmise~\cite{Wigner3} and can be shown to remain valid to a good approximation even in the limit $M \rightarrow \infty$~\cite{Gaudin}.
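The surmise of \Eq{eq:P(s)} is easy to verify numerically; a minimal Python sketch for the GOE case ($\beta=1$), sampling $2\times2$ real symmetric matrices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(100_000, 2, 2))
h = (a + np.transpose(a, (0, 2, 1))) / 2.0  # 2x2 GOE matrices
lam = np.linalg.eigvalsh(h)                 # sorted eigenvalue pairs
s = lam[:, 1] - lam[:, 0]
s /= s.mean()                               # normalize to unit mean spacing
# Wigner surmise for the GOE: P(s) = (pi/2) s exp(-pi s^2 / 4)
hist, edges = np.histogram(s, bins=50, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
wigner = (np.pi / 2) * mid * np.exp(-np.pi * mid ** 2 / 4)
print(np.abs(hist - wigner).max())          # small for large samples
\end{verbatim}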
\subsection{Fractional Gaussian noise}
\label{subsec:fGn}
FGns arise naturally as a by-product of the idea of fractional Brownian motion (fBm) introduced originally by Kolmogorov~\cite{Kolmogorov} as a generalization of the ordinary Brownian motion (Bm). Further developments were made by Mandelbrot and van Ness~\cite{Mandelbrot} who proposed a stochastic integral representation of fBm as a continuous-time integrator $B_{H}$ of the form
\begin{multline}
\label{eq:fBm}
B_H(t)=\frac{1}{\Gamma(H+1/2)} \bigg(\int_{-\infty}^{0} \big[(t-\theta)^{H-1/2} \\ -(-\theta)^{H-1/2}\big]dB(\theta)+\int_0^t(t-\theta)^{H-1/2}dB(\theta)\bigg) \; ,
\end{multline}
where $\Gamma(H)$ is the usual Gamma function and $0<H<1$ denotes the Hurst exponent. Note that for $H=0.5$ the ordinary Bm is recovered. Just like Bm, fBm is a Gaussian process and belongs to the class of zero-mean stochastic processes~\cite{Mandelbrot}. The only difference between them lies in their autocovariance, obtainable from the latter equation and given by~\cite{Mandelbrot}
\begin{eqnarray}
\label{eq:autocovariance}
\langle B_H(t)\;B_H(t')\rangle_{\mathrm{ens.}}=\frac{c_H}{2} \big[{t'}^{2H}+t^{2H}-(t'-t)^{2H}\big] \; ,
\end{eqnarray}
where $\langle\cdots\rangle_{\mathrm{ens.}}$ denotes the ensemble average, $0 < t \le t'$, and the coefficient $c_H$ is defined through~\cite{Barton}
\begin{eqnarray}
\label{eq:c_H}
c_H \equiv \Gamma(1-2H)\cos(\pi H)/\pi H \; .
\end{eqnarray}
As is easily seen from the autocovariance expression (\ref{eq:autocovariance}), fBm is not stationary and its standard deviation varies with time, although it is endowed with stationary increments~\cite{Bradley}.
Moreover, it is known that the distribution functions associated with a Gaussian process are uniquely determined from the knowledge of the mean and autocovariance structures~\cite{Feller}. Therefore, given that $B_H (at)$ and $|a|^H B_H (t)$ (with $a$ as an arbitrary parameter) have equal values of mean and autocovariance, we can deduce that they feature the same distributions. This, in turn, implies the self-similarity of the fBm. Further details of the fGns can be found in Ref.~\cite{Bradley} and the references therein.
The derivative of fBm, on the other hand, yields the so-called fGn~\cite{Bradley}. Although such a derivative does not exist in a rigorous mathematical sense, an fGn can nevertheless be defined through the discrete increments of fBm as
\begin{eqnarray}
\label{eq:G_H}
G_H(k)\equiv B_H(k+1)-B_H(k) \quad \mathrm{for} \quad k\geq1 \; .
\end{eqnarray}
The quantity so defined has a normal distribution for every integer input parameter $k$, but exhibits long-range dependence except in the case $H=0.5$ associated with the ordinary Bm. Its autocovariance for integer values of $n$ takes the form
\begin{multline}
\label{eq:auto_cov_fBm}
\langle G_H(k)\;G_H(k+n)\rangle_{\mathrm{ens.}} = \frac{c_H}{2} \big[|n-1|^{2H}-2|n|^{2H} \\ +|n+1|^{2H}\big] \; .
\end{multline}
It follows then that fGn is a stationary Gaussian process with the type of the underlying correlation determined from the sign of the corresponding autocovariance. Depending on the value of the Hurst exponent, three regimes are identifiable:
(i) For $H= 0.5$ there is no correlation and the sample mimics the white noise.
(ii) For $H < 0.5$ the noise is \emph{negatively} correlated and the sample fluctuates faster than the white noise.
(iii) For $H > 0.5$ the noise is \emph{positively} correlated and the sample fluctuates slower than the white noise.
It is noteworthy that the autocovariance of fGn behaves asymptotically as $c_H H (2H-1)n^{2H-2}$, with long-range dependence for $0.5<H<1$, and tends to zero as $n \to \infty$, implying the ergodicity of fGns~\cite{Samorodnitsky,Sinai}. Furthermore, fGn features self-similarity just like fBm.
In this work, a wavelet-based simulation~\cite{Dieker} has been utilized as an approximation method for generating fGn series.
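As an alternative to the wavelet approximation, fGn can be simulated exactly (for moderate lengths) by a Cholesky factorization of the autocovariance matrix of \Eq{eq:auto_cov_fBm} with $c_H=1$; the following minimal Python sketch is given for illustration only and is not the method used in this work:
\begin{verbatim}
import numpy as np

def fgn_cholesky(T, H, rng=None):
    """Exact fGn of length T via Cholesky factorization of the
    autocovariance of Eq. (auto_cov_fBm) with c_H = 1; O(T^3) cost."""
    if rng is None:
        rng = np.random.default_rng()
    n = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    cov = 0.5 * (np.abs(n - 1) ** (2 * H) - 2 * n ** (2 * H)
                 + (n + 1) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(T)

x = fgn_cholesky(1000, H=0.7)   # zero mean, unit variance by construction
\end{verbatim}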
\subsection{Autocorrelation matrix of a time series}
\label{subsec:autocorr_fGn}
Quantification of the correlations among a system's constituents is an issue of central importance in all areas of physics and has been the subject of intensive scientific investigation. Basic insight into the nature of such correlations can be gained by diagonalizing the so-called \emph{correlation matrix}, whose elements quantify the way the system's constituents may affect each other. The statistics of the resulting eigenvalues and eigenvectors is compared with that of a random matrix; deviations from the latter characterize the desired correlation information~\cite{Plerou3}.
In this work, however, we propose a different approach in which one first treats an empirical time series as the \emph{whole} system and the series values as its constituents. As such, the corresponding autocorrelation can be interpreted as the correlations among the system's constituents. We propose, furthermore, fGns as the figure of merit to interpret the data from the target empirical series, by comparing the two in terms of the statistics of their autocorrelation matrices. The statistical comparison is performed by studying the dependence on the Hurst exponent $H$ of three major characteristics of the autocorrelation matrix of fGns, namely the distribution of the eigenvalues, the distribution of the nearest-neighbor spacings between the eigenvalues, and the number of significant participants in an eigenvector, all of which are detailed in Sec.~\ref{sec:Statistics of fGn}. We point out that our proposed fGn criterion is inspired by RMT in considering such statistical figures of merit, heavily exploited in the latter theory~\cite{Laloux, Plerou1, Plerou2, Plerou3}.
The symmetric \emph{autocorrelation matrix} $\mathbf{C}$ of a time series $X=\{X_t:t=1,\ldots,T\}$ of length $T$ is given through its matrix elements
\begin{eqnarray}
\label{eq:auto_corr_C}
C_{t,t+\bigtriangleup t} =\frac{\langle X_t X_{t+\Delta t}\rangle_{\mathrm{time}}-\langle X_t\rangle_{\mathrm{time}} \langle X_{t+\Delta t}\rangle_{\mathrm{time}}}{\sigma^2} \; ,
\end{eqnarray}
where $\sigma = \sqrt{\langle X^2\rangle_{\mathrm{time}} -\langle X\rangle_{\mathrm{time}}^2}$ is the standard deviation of $X_t$, and $\langle \ldots\rangle_{\mathrm{time}}$ denotes the time average over the period of the series. Note that the time lag $\Delta t$ ranges from $1-t$ to $N-t$ to ensure the construction of an $N \times N$ autocorrelation matrix for every $t \in [1,N]$.
In order to realize an unbiased and uniform comparison between fGns with different Hurst exponents, their mean and variance are set to 0 and 1, respectively. Note that the latter amounts to setting the coefficient $c_H$ in \Eq{eq:auto_cov_fBm} to unity. We can then establish the autocorrelation matrix and investigate its eigenvalues spectrum (ES), the distribution of nearest-neighbor eigenvalues spacings (NNES), and the inverse participation ratio (IPR). It must be noted, however, that such a recipe is prone to introduce two major size effects into the numerical calculations: the finite length $T$ of the time series $X$ and the finite size $N$ of the autocorrelation matrix $\mathbf{C}$. Among the two, the former, associated with the finite length of the time series, can nevertheless be removed by construction thanks to the ergodicity of fGns~\cite{Samorodnitsky,Sinai}, which makes the ensemble average in \Eq{eq:auto_cov_fBm} equal to the time average in \Eq{eq:auto_corr_C}. This, in turn, allows one to read the autocorrelation matrix $\mathbf{C}$ directly from the expression of \Eq{eq:auto_cov_fBm}. We call the autocorrelation matrix so obtained the \emph{length-free autocorrelation matrix} (LFAM) $\mathbf{\tilde{C}}$, whose matrix elements may be calculated from
\begin{multline}
\label{eq:length-free_auto_cov}
\tilde{C}_{t, t+\Delta t} = \frac{1}{2} \big[|\Delta t-1|^{2H}-2|\Delta t|^{2H} \\ +|\Delta t+ 1|^{2H}\big] \; ,
\end{multline}
and we provide in the subsequent section the results of the calculation of its ES, the distribution of NNES, and the IPR for an illustrative value of $N$, and finally the finite-size effect associated with the finite size $N$ of such an autocorrelation matrix $\mathbf{\tilde{C}}$.
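For reference, a minimal Python sketch of the LFAM of \Eq{eq:length-free_auto_cov} and its spectrum (the matrix has the same closed form as the covariance in the fGn sketch above; the size is illustrative):
\begin{verbatim}
import numpy as np

def lfam(N, H):
    # length-free autocorrelation matrix, Eq. (length-free_auto_cov)
    dt = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    return 0.5 * (np.abs(dt - 1) ** (2 * H) - 2 * dt ** (2 * H)
                  + (dt + 1) ** (2 * H))

eigs = np.linalg.eigvalsh(lfam(2000, H=0.6))  # ES entering Fig.~\ref{fig:ES}
\end{verbatim}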
\section{Statistics of the fractional Gaussian noise}
\label{sec:Statistics of fGn}
\subsection{Eigenvalues distribution of the length-free autocorrelation matrix}
\label{subsec:Eigenvalue_dist}
As an illustrative case, the ES of the LFAM with $N= 2000$ for various values of the Hurst exponents $H$ is shown in Fig.~\ref{fig:ES}. First of all, it is seen that the eigenvalues $\lambda_i$ ($i = 1 ,\cdots, N$) are positive for all values of $H$. Besides, three behavior regimes for the spectrum depending on the value of $H$ may be identified:
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{ES}
\caption[The eigenvalues spectrum]{(Color online) The ES of the LFAM $\mathbf{\tilde{C}}$ of the size $2000\times2000$ for various values of the Hurst exponent $H$. The probability distribution function $P(\lambda)$ gives the abundance of a particular eigenvalue $\lambda$ in the spectrum or equivalently the quantity $P(\lambda) \Delta\lambda$ can be interpreted as the probability that an eigenvalue of $\mathbf{\tilde{C}}$ arises in the interval $[\lambda, \lambda + \Delta\lambda]$. The vertical dashed line also corresponds to the LFAM with a Hurst exponent $H=0.5$ for which the distribution function exhibits a Dirac-delta like singularity indicating that all the eigenvalues are equal to unity.
}
\label{fig:ES}
\end{figure}
(i) For $H<0.5$ the eigenvalue with the maximal abundance $\lambda_{\mathrm{max}}$ is the largest one and bounded from above.
(ii) For $H= 0.5$ all eigenvalues equal unity giving rise to a Dirac delta function denoted by a dashed line in Fig.~\ref{fig:ES}.
(iii) For $H>0.5$ the eigenvalue with the maximal abundance $\lambda_{\mathrm{max}}$ is the smallest one and bounded from below. Besides, the limiting value of $\lambda_{\mathrm{max}}$ turns out to be unity upon reaching the value of $H=0.5$ both from above and below.
\subsection{Distribution of the nearest-neighbor eigenvalues spacings for the length-free autocorrelation matrix}
\label{subsec:Eigenvalue_spacing}
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{NNS}
\caption[Distribution of nearest-neighbor eigenvalues spacings]{(Color online) Distribution of the NNES of the LFAM of the size $2000 \times 2000$ for various values of the Hurst exponent $H$ while employing a Gaussian broadening recipe as the required unfolding transformation. The vertical axis represents the probability distribution function of NNES $P(s)$ as the abundance of a particular NNES $s$ in the spectrum or equivalently the quantity $P(s) \Delta s$ can be interpreted as the probability that an NNES of the value $s$ arises in the interval $[s, s + \Delta s]$ in the ES of the LFAM $\mathbf{\tilde{C}}$.}
\label{fig:NNES}
\end{figure}
We now aim at calculating the distribution of NNES of the LFAM for a given value of $N$. Since the level spacing varies throughout the eigenvalues spectrum, we need an \quot{unfolding} procedure to transform the original eigenvalues $\lambda_i$ into properly rescaled and dimensionless ones $\tilde{\lambda}_i$~\cite{Mehta,Brody,Guhr}. More precisely, the unfolding procedure provides a local rescaling of the eigenvalues spectrum with respect to the local average of the level spacing. As a result of such a rescaling, the local average of the level density remains constant and independent of $\lambda_i$, and is thereby comparable to results from RMT. As such, rather than the \quot{bare} distribution of the original eigenvalue spacings, i.e., $\lambda_{i+1}-\lambda_{i}$, that of the \emph{unfolded eigenvalues}, i.e., $s_i\equiv \tilde{\lambda}_{i+1}-\tilde{\lambda}_i$, is analyzed.
Figure~\ref{fig:NNES} illustrates the distribution of the unfolded NNES for the LFAM of size $2000\times2000$ and for various values of the Hurst exponent, adopting a Gaussian broadening method~\cite{Bruus,Haake} to realize the desired unfolding of the eigenvalues. It can be inferred from the figure that the levels on average move closer to each other (the NNES associated with the maximal value of $P(s)$ approaches zero) upon increasing $H$. They also end up closer to each other for $H> 0.5$ compared to the regime $H<0.5$.
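A minimal sketch of the Gaussian-broadening unfolding follows; the broadening width $\sigma$, here a free parameter of order a few local mean spacings, is our assumption, and the references give other common choices:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def unfold(eigs, sigma):
    """Map each eigenvalue to the Gaussian-smoothed cumulative level
    count; the unfolded spectrum then has unit mean spacing."""
    lam = np.sort(eigs)
    return np.array([norm.cdf((x - lam) / sigma).sum() for x in lam])

lam_u = unfold(eigs, sigma=0.05)  # eigs from the LFAM sketch above
s = np.diff(lam_u)                # unfolded nearest-neighbor spacings
\end{verbatim}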
Figure~\ref{fig:NNES_total} illustrates additionally the change in the distribution of NNES through a density plot for a finer spectrum of the values of $H$ compared to the previous figure.
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{NNS_total}
\caption[Density plot]{Density plot of the distribution of NNES $P(s)$ for an interval of $H \in [0.3 , 0.7]$ on a color scale ranging between 0 and 6.5. The white ribbon in the middle of the vertical axis reflects the fully degenerate ES associated with the situation in which $H=0.5$ [compare to Fig.~\ref{fig:ES}]. The highest abundance of NNES associated with the peaks in Fig.~\ref{fig:NNES}, on the other hand, follow an almost diagonal trend of relatively narrow dark regions in the plot.}
\label{fig:NNES_total}
\end{figure}
\subsection{Inverse participation ratio of the eigenvectors of the length-free autocorrelation matrix}
\label{subsec:IPR}
We exploit here the notion of the inverse participation ratio (IPR), which arises frequently in the context of localization theory~\cite{Guhr}, to determine the number of significant participants of each eigenvector, given by
\begin{eqnarray}
\label{IPR}
I^k=\sum_{n=1}^N (u_n^k)^4 \;,
\end{eqnarray}
where $k = 1,\cdots, N$ and $u_n^k$ is the $n$'th component of the $k$'th eigenvector $\textbf{u}^k$. The number of significant components of an eigenvector is \emph{inversely} proportional to the value of the IPR so defined.
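Numerically the IPR is a one-liner once the eigenvectors are at hand; a minimal sketch, reusing the \texttt{lfam} helper sketched in Sec.~\ref{subsec:autocorr_fGn}:
\begin{verbatim}
import numpy as np

lam, U = np.linalg.eigh(lfam(2000, 0.6))  # columns of U are eigenvectors u^k
ipr = (U ** 4).sum(axis=0)                # I^k = sum_n (u_n^k)^4
# small IPR <=> many significant components (eigenvectors have unit norm)
\end{verbatim}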
Figure~\ref{IPR_total} depicts the IPR of all eigenvectors of the LFAM with respect to the associated eigenvalues $\lambda$ for two different Hurst exponents $0.4$ and $0.6$. As can be learnt from the plot, the IPR is an ascending function of $\lambda$ for $H=0.4$ and a descending one for $H=0.6$. We have checked numerically that the same trend continues with other numerically accessible values in two distinguishable regimes of $H<0.5$ and $H>0.5$, respectively. For $H = 0.5$, the IPR evidently remains constant.
\begin{figure}[!b]
\centering
\includegraphics[width=1\linewidth]{IPR_total}
\caption[IPR]{(Color online) The IPR of all eigenvectors of an LFAM of the size $2000\times2000$ for two illustrative values of $H=0.4$ and $H=0.6$.}
\label{IPR_total}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{IPR}
\caption[IPR of top eigenvectors]{IPR of the eigenvectors associated with the largest eigenvalues of the LFAM, plotted as a function of the Hurst exponent $H$. The vertical dashed line indicates the critical Hurst exponent that separates the negatively correlated regime ($H<0.5$) from the positively correlated one ($H>0.5$).}
\label{IPR}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=1\linewidth]{NNS_size_effect}
\caption[Finite-size]{(Color online) Finite-size effect of the LFAM, due to the finiteness of its size $N$, on the distribution of NNES $P(s)$ for two values of the Hurst exponent $H$. To explore this effect, for each fixed value of the Hurst exponent $H$, the distribution $P(s)$ is plotted versus the NNES $s$ for three different values of the finite size $N$.}
\label{NNES_size_effect}
\end{figure}
The results for the IPR of the eigenvectors associated with the largest eigenvalues of a typical LFAM of size $2000\times2000$ for different values of $H$ are additionally plotted in Fig.~\ref{IPR}. The result implies that the number of important participants of the eigenvector associated with the largest eigenvalue remains almost constant throughout the negatively correlated regime $(H<0.5)$, whereas it turns out to be relatively larger in the positively correlated region $(H>0.5)$ and rises upon increasing the Hurst exponent $H$.
\subsection{Finite size effect of the length-free autocorrelation matrix}
\label{subsec:size_effect}
Figure~\ref{NNES_size_effect} illustrates the effect of the finite size $N$ of the LFAM on the distribution of the NNES by providing the data for various values of the finite size $N$. As is evident in the figure, the size effect influences the data associated with $H>0.5$ much more significantly than those of $H<0.5$.
\section{Application to real-life time-series}
\label{sec:real_life}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{ES_finance}
\caption[ES_finance]{(Color online) The ES of the normalized daily return of DJIA, SPX, SSE, and TSE (the bars) contrasted to the ES of the optimal Hurst exponent of the associated fGn family (the solid line). A remarkable agreement with fGns is noticed upon such a comparison.}
\label{ES_finance}
\end{figure}
Finally, we demonstrate our ideas, and in particular the relevance of the fGn criterion introduced in the previous sections, for the analysis of two paradigmatic real-life contexts, namely financial time series and the phenomenon of turbulence. Before embarking on the application of the method to these examples, we first provide the details of how to obtain the optimal Hurst exponent, associated with the member of the fGn family that gives the most accurate description of the underlying correlations in a time series compared to the other exponents: To this end, one first produces ensembles of fGn series of the same finite length as the real one for various values of $H$. One proceeds by calculating the ES and NNES distribution of the ensemble average of the fGn series associated with such values of $H$ and eventually compares the outcome with that of the real time series. The comparison for a particular value of $H$ is based on the calculation of the \emph{relative error} of the ensemble average of the ESs with respect to that of the original time series. More precisely, given a time series of length $T$ and an $N\times N$ autocorrelation matrix with eigenvalues $\mathbf{\Lambda}^0\equiv(\lambda_1^0, \lambda_2^0, \ldots, \lambda_N^0)$, we generate $M$ fGns of size $T$ for each value of $H$ and construct $M$ autocorrelation matrices with eigenvalues $\mathbf{\Lambda}^m\equiv(\lambda_1^m, \lambda_2^m, \ldots,\lambda_N^m)$, $m = 1,2,\ldots,M$. Finally, for each Hurst exponent $H$, the two-norm relative error~\cite{Anderson} of the ensemble average
$\overline{\mathbf{\Lambda}}=\frac{1}{M}\sum_{m=1}^{M}\mathbf{\Lambda}^m$ with respect to $\mathbf{\Lambda}^0$ is obtained. The optimal Hurst exponent then corresponds to the one leading to the smallest relative error.
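A minimal sketch of this selection procedure (sorting the spectra before taking the relative error is our assumption; \texttt{fgn\_cholesky} is the illustrative sketch at the end of Sec.~\ref{subsec:fGn}):
\begin{verbatim}
import numpy as np

def autocorr_matrix(x, N):
    """N x N autocorrelation matrix of Eq. (auto_corr_C): Toeplitz in
    the lag, with the autocorrelation estimated by time averaging."""
    x = (x - x.mean()) / x.std()
    T = len(x)
    acf = np.array([np.mean(x[:T - k] * x[k:]) for k in range(N)])
    return acf[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]

def optimal_hurst(x, N, H_grid, M=500, seed=0):
    rng = np.random.default_rng(seed)
    lam0 = np.sort(np.linalg.eigvalsh(autocorr_matrix(x, N)))
    errs = []
    for H in H_grid:
        lam = np.zeros(N)
        for _ in range(M):
            g = fgn_cholesky(len(x), H, rng)
            lam += np.sort(np.linalg.eigvalsh(autocorr_matrix(g, N)))
        # two-norm relative error of the ensemble-averaged spectrum
        errs.append(np.linalg.norm(lam / M - lam0) / np.linalg.norm(lam0))
    return H_grid[int(np.argmin(errs))]
\end{verbatim}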
\subsection{Financial time-series}
\label{sec:financial series}
\begin{figure}[!t]
\centering
\includegraphics[width=1\linewidth]{NNS_finance}
\caption[NNES_finance]{(Color online) The NNES distribution of the normalized daily return of DJIA, SPX, SSE, and TSE shows a good agreement with that of the optimal Hurst exponent of the associated fGn family.}
\label{NNES_finance}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{ES_and_NNS_of_turbulence}
\caption[ES_and_NNES_Velocity]{(Color online) (a) The ES and (b) the NNES distribution of the normalized turbulence velocity time series with $Re=210000$, compared to those of the fGn family with various values of the Hurst exponent. None of the Hurst exponents used provides a fair fit to the turbulence data, implying the existence of non-fGn correlations in the turbulence phenomenon. Note additionally that it is unnecessary to examine fGns with $H<0.5$ in the figure, since the velocity profile of turbulence is known to exhibit correlations of a positive nature, comparable with the fGn family only for $H>0.5$.}
\label{ES_and_NNES_Velocity}
\end{figure*}
For the financial time series we have used various actual databases covering securities from the Dow Jones Industrial Average (DJIA), the Standard \& Poor's 500 (SPX), the Shanghai Stock Exchange (SSE), and the Tehran Stock Exchange (TSE). We analyze the daily changes of the DJIA and SPX from January 1, 1980 until January 1, 2011. The starting points for SSE and TSE are January 1, 2001 and January 1, 1996, respectively. In order to quantify the correlations, we first calculate the daily return time series of each stock, given by
\begin{eqnarray}
\label{eq:return}
R_t=\ln(S_{t+1})-\ln(S_t) \;,
\end{eqnarray}
where $S_t$ denotes the price at time $t$ of each of the four stocks considered above. Since each stock has a different standard deviation, we define a normalized return of the form
\begin{eqnarray}
\label{eq:normelized return}
r_t \equiv \frac{R_t-\langle R\rangle_{\mathrm{time}}}{\sigma} \; ,
\end{eqnarray}
where $\sigma =\sqrt{\langle R^2\rangle_{\mathrm{time}}-\langle R\rangle_{\mathrm{time}}^2}$ is the standard deviation of $R_t$. We then construct the autocorrelation matrices (associated with the four stock series) of a paradigmatic size $1500 \times 1500$ for the normalized financial return series so obtained. Next, $M=500$ fGns with the same length as the stock series are generated for Hurst exponents in the positively correlated range $H=0.525,\ldots,0.8$ with increment $\delta H \equiv 0.025$. For each stock, the optimal Hurst exponent is found by searching for the smallest relative error. Following the recipe outlined at the beginning of the current section, the errors of approximating $\mathbf{\Lambda}^0$ by $\overline{\mathbf{\Lambda}}$ for DJIA, SPX, SSE, and TSE read 0.040, 0.036, 11.25, and 0.094, respectively.
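For concreteness, a toy version of this pipeline (synthetic prices in place of the actual index data; \texttt{autocorr\_matrix} as sketched above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
S = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(8000)))  # toy prices
R = np.diff(np.log(S))            # daily log-returns, Eq. (return)
r = (R - R.mean()) / R.std()      # normalized returns
C = autocorr_matrix(r, 1500)      # 1500 x 1500 matrix as for the stocks
\end{verbatim}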
Figures~\ref{ES_finance} and~\ref{NNES_finance} illustrate the outlined operational prescription for finding the desired optimal Hurst exponent associated with each stock market. The results reveal a remarkable agreement with fGns in spite of the non-Gaussianity of the stock-market returns. On the other hand, since the distance from $H=0.5$ (associated with the white noise limit) can be regarded as a measure of the efficiency of a market, the method yields, at the same time, an operational recipe for telling apart an efficient market from an emerging one.
\subsection{Turbulence}
\label{sec:Turbulence}
\begin{figure*}[tb]
\centering
\includegraphics[width=1\linewidth]{NNS_of_turbulence}
\caption[NNES_of_turbulence]{(Color online) Adjacent eigenvalues spacing $s_i=\tilde{\lambda}_{i+1}-\tilde{\lambda_i}$, $i=1,2,\ldots,N$ for (a) the velocity profile $v(t)$ and (b) the velocity increment profile $\delta v(t)=v(t+1)-v(t)$ for low and high Reynolds numbers. The two Reynolds numbers display noticeably different trends in their adjacent eigenvalues spacings, in contrast to their indistinguishable ES and NNES distributions.}
\label{NNES_of_turbulence}
\end{figure*}
Another important example belongs to the paradigm of the turbulence velocity profile $V(t)$ with two Reynolds numbers, $Re = 36000$ and 210000. These time series represent the local velocity measured at a fixed point in the turbulent region of a round free jet. In order to realize a fair comparison of such series with fGns, and also with each other, we invoke a normalization scheme of the form
\begin{eqnarray}
\label{eq:normalized_velocity}
v(t)=\frac{V(t)-\langle V \rangle_{\mathrm{time}} }{\sigma} \;,
\end{eqnarray}
where $\sigma=\sqrt{\langle V^2\rangle_{\mathrm{time}}-\langle V\rangle_{\mathrm{time}}^2}$ is the standard deviation of $V(t)$. We then proceed with obtaining the statistics of the normalized $v(t)$ series, similar to that of the financial time series. In this calculation, the size of the autocorrelation matrix is taken to be $N=2000$, and $M=500$ fGn series with the same length as the turbulence series are generated for each Hurst exponent in the range $H=0.5,\ldots,0.9$ with $\delta H = 0.1$.
Figure~\ref{ES_and_NNES_Velocity} contrasts the results of the calculation of the ES and NNES distribution associated with the high Reynolds number $Re = 210000$ with those of the fGns with various Hurst exponents. We point out that the choice of the high Reynolds number for this figure is fully arbitrary; one could equally consider the low number, which yields almost the same ES and NNES distributions and hence an invisible distinction of the associated data (the bars in the plot) from those of the high Reynolds number. Quite strikingly, none of the Hurst exponents provides a good fit to the turbulence data. Unlike the financial correlations, this implies a significant difference between the nature of the turbulent correlations and those of fGn. The search for an alternative family of time series capable of capturing the turbulent correlations thus remains open.
Finally, as a by-product of the autocorrelation analysis of fGn, we have found that, in spite of the indistinguishability of the low and high Reynolds numbers in a straight analysis of the \emph{distribution} of the eigenvalues as described above, a proper distinction between them becomes feasible upon analyzing instead their adjacent eigenvalues spacings $s_i=\tilde{\lambda}_{i+1}-\tilde{\lambda_i}$, $i=1,2,\ldots,N$, as illustrated on a semi-log scale in Fig.~\ref{NNES_of_turbulence}. The results for the adjacent eigenvalues spacings of (a) the velocity profile, $s_i^{[v]}$, as well as (b) the velocity increment profile, $s_i^{[\delta v]}$ with ${\delta v(t)} \equiv v(t+1)-v(t)$, reveal discernibly different trends for different Reynolds numbers.
\section{Concluding remarks}
\label{sec:conclusions}
In this work, we have analyzed the autocorrelation matrix of a time series using an RMT technique. For this purpose, it has been demonstrated that the fGn family provides an \emph{in situ} benchmark and figure of merit for accessing correlation information of empirical systems in a way that is unattainable for a brute-force RMT approach. In a nutshell, such information encompasses the following:
(i) The average of the eigenvalues of the fGn's autocorrelation matrix in the negatively correlated region is smaller than that in the positively correlated region.
(ii) The eigenvalues of the positively correlated series (associated with the higher values of the Hurst exponents) tend to attract each other whereas the negatively correlated ones (associated with the lower values of the Hurst exponents) rather show a tendency to repel each other.
(iii) The number of significant participants in the eigenvector associated with the largest eigenvalue proves larger in the positively correlated series compared to the one in the negatively correlated region.
(iv) In the first context of the financial time series, it appeared that although the return PDF of the stock market is known to be non-Gaussian, its correlation content exhibits good agreement with an fGn. This, in turn, promises to provide a powerful tool for distinguishing an efficient market from an emerging one.
(v) In the context of turbulence, our results suggest a significant discrepancy from fGns, in spite of the Gaussian velocity profile assumed in the description of the phenomenon. Nonetheless, our approach provides a systematic recipe for distinguishing various Reynolds numbers.
|
1,108,101,566,370 | arxiv | \section{Introduction}
Models for the spread of information among networked entities were
studied for decades in sociology and
economics~\cite{ThresholdModels:1978,EasleyKleinbergBook:2010,JacksonNetworks:Book2010}.
A diffusion process is initiated from a seed set of
nodes (entities) and progresses in steps: Initially, only the seed nodes are
activated. In each step additional nodes may become active
based on the current set of active nodes. The progression can be deterministic or stochastic.
The $t$-stepped influence of a seed set $S$ of nodes is then
defined as its expected reachability (total number of active nodes) in
$t$ steps.
{\em Influence maximization} (IM) is the problem of finding a set $S$
of nodes of specified cardinality $|S|=s$ and
maximum {\em influence}. The IM problem was formulated nearly two decades ago by Richardson and
Domingos \cite{RichardsonDomingos:KDD2001,RichardsonDomingos:KDD2002}
and inspired by the application of viral marketing.
In a seminal paper, Kempe, Kleinberg, and Tardos
\cite{KKT:KDD2003} studied stochastic diffusion models and introduced two elegant special cases, the {\em Independent Cascade
(IC)} and {\em Generalized Threshold (GT)} diffusion
models. Their work sparked extensive followup
research and large scale implementations
\cite{MosselR:STOC2007,CWY:KDD2009,JHC:ICDM2012,NguyenTD:TON2017}.
Currently IM is applied in multiple domains with
linked entities, for tasks as varied as diversity maximization
(finding the most representative subset of the population) and sensor
placement that maximizes coverage \cite{Leskovec:KDD2007,ChenLakshmananCastillo_book:2013,MirzasoleimanKSK:NIPS2013}.
\begin{wrapfigure}{R}{0.5\textwidth}
\vspace{-10pt}
\includegraphics[width=0.5\textwidth]{cascade.pdf}
\caption{\small A 2-step cascade from two seed nodes.
}\label{cascade:ex}
\vspace{-10pt}
\end{wrapfigure}
We consider
{\em stochastic diffusion models (SDM)} $\mathcal{G}(V)$ over $|V|=n$
nodes that are specified by a distribution $\boldsymbol{\phi} \sim \mathcal{G}$ over sets
$\boldsymbol{\phi} := \{\phi_v\}_{v\in V}$
of monotone non-decreasing boolean {\em
activation functions}
\[\phi_v:2^{V\setminus \{v\}}\rightarrow \{0,1\} .\]
A diffusion process starts with a seed set $S \subset V$ of nodes and $\boldsymbol{\phi} \sim \mathcal{G}$.
At step $0$ we activate the seed nodes $\CReach^0(\boldsymbol{\phi},S) := S$.
The diffusion then proceeds deterministically:
At step $t>0$ all active nodes remain active and we activate any
inactive node $v$ where
$\phi_v(\CReach^{t-1}(\boldsymbol{\phi},S))=1$:
\[
\CReach^{t}(\boldsymbol{\phi},S) := \CReach^{t-1}(\boldsymbol{\phi},S) \cup \{v\in V \mid \phi_v(\CReach^{t-1}(\boldsymbol{\phi},S))=1\} .\]
The {\em $\tau$-steps reachability set} of a seed set $S$ is
the random variable $\CReach^\tau(\boldsymbol{\phi},S)$ for $\boldsymbol{\phi}\sim
\mathcal{G}$ and
respectively the $\tau$-steps {\em reachability}, $\RReach^\tau(S)$, is the
random variable that is the number of active nodes $|\CReach^\tau(\boldsymbol{\phi},S)|$ for $\boldsymbol{\phi}\sim \mathcal{G}$.
Finally, the influence value of $S$ is defined to be the expectation
\[ \texttt{I}^\tau(S) := \E[\RReach^\tau(S)] = \E_{\boldsymbol{\phi}\sim\mathcal{G}}[|\CReach^\tau(\boldsymbol{\phi},S)|] .\] We
refer to the case where the diffusion is allowed to progress until there is no
growth as {\em unrestricted} diffusion and this corresponds to
$\tau=n-1$. The influence $\texttt{I}^\tau(S)$ is a monotone set function.
We say that an SDM is {\em submodular} when the influence function is
submodular and that it is {\em independent} if the activation functions $\phi_v$ of different nodes are independent random variables.
The IM problem for seed set size $s$ and $\tau$ steps is
to find \[\arg\max_{S: |S|\leq s}\texttt{I}^\tau(S). \]
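To make this definition concrete, the following is a minimal Python sketch (illustrative only: \texttt{phi} is a dictionary mapping each node to its activation function, and \texttt{sample\_phi} stands for a user-supplied sampler of $\boldsymbol{\phi}\sim\mathcal{G}$):

\begin{verbatim}
def diffuse(phi, seed, tau):
    """tau-step reachability R^tau(phi, S): phi maps each node v to a
    monotone boolean activation function of the currently active set."""
    active = set(seed)
    for _ in range(tau):
        new = {v for v in phi if v not in active and phi[v](active)}
        if not new:            # fixed point: no further activations
            break
        active |= new
    return active

def influence_estimate(sample_phi, seed, tau, ell):
    """Monte-Carlo estimate of I^tau(S) from ell i.i.d. draws."""
    return sum(len(diffuse(sample_phi(), seed, tau))
               for _ in range(ell)) / ell
\end{verbatim}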
The reader might be more familiar with well-studied special cases of
this general formulation.
{\em Live-edge} diffusion models
$\mathcal{G}(V,\mathcal{E})$ are specified by a graph $(V,\mathcal{E})$
with $|V|=n$ nodes and $|\mathcal{E}|=m$ directed edges and a
distribution $E\sim \mathcal{G}$ over subsets $E \subset \mathcal{E}$
of "live" edges. When expressed as an SDM, the activation functions that correspond to $E$ have
$\phi_v(T)=1$ if and only if there is an edge from a node in $T$ to $v$ in the graph $(V, E)$.
Live-edge models are always submodular: This is because
$|\CReach^\tau(E,S)|$, which
is the number of nodes reachable from $S$ in $(V,E)$ by
paths of length at most $\tau$, is a coverage function and hence
monotone and submodular. Therefore, so is the influence function $\texttt{I}^\tau(S)$,
which is an expectation of a distribution over coverage functions.
A live-edge model is independent if we only have dependencies between incoming edges to the same node.
The Independent Cascade (IC) model is the special case of an
independent live-edge model where all
edges $e\in \mathcal{E}$ are independent Bernoulli random variables selected with probabilities $p_e$ ($e\in
\mathcal{E}$).
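As an illustration, a minimal sketch of one IC simulation and its $\tau$-step reachability (assuming edges are given as (tail, head) pairs with marginal probabilities \texttt{p[e]}; names are illustrative):

\begin{verbatim}
import random
from collections import deque

def sample_live_edges(edges, p):
    """One IC simulation: edge e is live independently w.p. p[e]."""
    return [e for e in edges if random.random() < p[e]]

def reach(live, seed, tau):
    """Nodes reachable from seed by paths of length <= tau in (V, E)."""
    adj = {}
    for u, v in live:
        adj.setdefault(u, []).append(v)
    dist = {v: 0 for v in seed}
    queue = deque(seed)
    while queue:
        u = queue.popleft()
        if dist[u] == tau:
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)
\end{verbatim}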
Another well-studied class is that of {\em generalized threshold} (GT) models
\cite{KKT:KDD2003, MosselR:STOC2007}. A GT model
$\mathcal{G}(V,\boldsymbol{f})$ is specified by a set $\boldsymbol{f} := \{f_v\}_{v\in V}$
of monotone functions $f_v:2^V\rightarrow [0,1]$. The
randomization is specified by a set of threshold values
$\boldsymbol{\theta}\sim \mathcal{G}$ where
$\boldsymbol{\theta} :=\{\theta_v\}_{v\in V}$.
The corresponding activation functions to $\boldsymbol{\theta}$ are
\[ \phi_v(T) := \text{Indicator}( \theta_v \leq f_v(T)) .\]
A well-studied subclass is {\em Independent GT (IGT)}, where we
require that the functions $\boldsymbol{f}$ are
submodular and nodes $v\in V$ have independent threshold values
$\theta_v \sim U[0,1]$.
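The following sketch produces one IGT draw in the format used by the \texttt{diffuse} routine sketched earlier (\texttt{f} maps each node to its monotone function $f_v$; names are illustrative):

\begin{verbatim}
import random

def sample_igt_phi(f):
    """One IGT simulation: draw independent thresholds
    theta_v ~ U[0,1] and set phi_v(T) = [theta_v <= f_v(T)]."""
    theta = {v: random.random() for v in f}
    return {v: (lambda T, v=v: theta[v] <= f[v](T)) for v in f}
\end{verbatim}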
Mossel and Roch \cite{MosselR:STOC2007,0612046p29:online} proved that
IGT models are submodular, which is surprising because the functions
$|\CReach^\tau(\boldsymbol{\phi},S)|$ are generally not submodular.
Their proof was provided for unrestricted diffusion but extends to the case where we stop the process
after $\tau$ steps.
Finally, Linear threshold (LT) models \cite{ThresholdModels:1978,KKT:KDD2003} are a
special case of IGT where we
have an underlying directed graph and each edge $(u,v)$ is associated
with a fixed weight value $b_{u v}\geq 0$ so that for all $v\in V$
$\sum_u b_{uv} \leq 1$ and the functions are defined as the sums
$f_v(A) := \sum_{u\in A \cap N(v)} b_{u v}$. Kempe et al showed \cite{KKT:KDD2003} that each LT model is equivalent
to an independent live-edge model.
One of the challenges brought on by the IM formulation is
computational efficiency.
Kempe et al \cite{KKT:KDD2003} noted that the IM problem generalizes the classic Max Cover problem even with
$\tau=1$ and a live-edge model with a fixed set of live edges ($p_e =1$ for all $e\in \mathcal{E}$). Therefore, IM inherits Max Cover's hardness
of approximation for ratio
better than $1-(1-1/s)^s \geq 1-1/e$ \cite{feige98} for a cover with $s$ sets.
On the positive side,
with submodular models, an approximation ratio of $1-(1-1/s)^s$ can be
achieved by the first $s$ nodes of a greedy sequence generated by sequentially adding a node with maximum marginal value \cite{submodularGreedy:1978}.
A challenge of applying Greedy with stochastic models, however, is that even point-wise evaluation of the influence
function can be computationally intensive. Exact evaluation even for IC models is
\#P hard \cite{CWW:KDD2010}. As for approximation,
Kempe et al proposed to work with {\em averaging oracles}
\[\hat{{\sf A}}^\tau (T) :=
\frac{1}{\ell} \sum_{i=1}^\ell |\CReach^\tau(\boldsymbol{\phi}_i,T)|\]
that
average the reachability values
obtained from a set $\{\boldsymbol{\phi}_i\}_{i=1}^\ell$ of i.i.d.\
simulations. Recall that in the general SDM formulation, a simulation is
specified by a set $\boldsymbol{\phi}$ of node activation functions.
For live-edge models, a simulation is simply a set of
concurrently live edges $E$. In GT models, a simulation is specified
by a set of thresholds $\boldsymbol{\theta}$.
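In code, the averaging oracle is simply a closure over a fixed set of stored simulations; a sketch for live-edge models, reusing the \texttt{reach} helper from the IC sketch above:

\begin{verbatim}
def averaging_oracle(simulations, tau):
    """hat{A}^tau(T): average reachability of T over the simulations."""
    def oracle(T):
        return sum(len(reach(E, T, tau))
                   for E in simulations) / len(simulations)
    return oracle

# e.g.: sims = [sample_live_edges(edges, p) for _ in range(ell)]
#       A = averaging_oracle(sims, tau)
\end{verbatim}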
\begin{wrapfigure}{r}{0.20\textwidth}
\vspace{-10pt}
\begin{example}\label{polysimu:ex}
\includegraphics[width=0.20\textwidth]{example.pdf}
{\small Node $v$ has influence $\texttt{I}^{\tau=2}(v)=100$ but variance
$\approx 100n$.
}
\end{example}
\vspace{-10pt}
\end{wrapfigure}
The averaging oracle has some appealing properties: First, it is
robust compared to estimators tailored to models that satisfy specific assumptions (see
related work section) in that
for any diffusion model $\mathcal{G}$, also with complex and unknown dependencies
(between activation functions of different nodes or between edges in
live-edge models), for any set $S$, $\hat{{\sf A}}(S)$ is an unbiased
estimate of the exact influence value $\texttt{I}^\tau(S)$ and estimates are
accurate as long as the variance of $\RReach^\tau(S)$ is
"sufficiently" small. Second, in terms of practicality, the oracle is directly available from
simulations and does not require learning or inferring the underlying diffusion
model that generated the data
\cite{SaitoNK2008,GoyalBonchi:wsdm2010,GRLK:KDD2010}. Therefore, the
results are not sensitive to modeling assumptions and learning
accuracy~\cite{Chen:KDD2016,HeKempe:KDD2016}. Often, estimation of model
parameters requires a large number of simulations: Even for IC models,
Example~\ref{tinymatter:ex} shows that edges with tiny probabilities that require many
simulations to estimate can be critical for IM accuracy.
Third, in terms of computation,
when the reachability functions $\CReach^\tau(\boldsymbol{\phi},T)$ are monotone and submodular
(as is the case with live-edge models), so is their average $\hat{{\sf A}}$, and hence
the oracle optimum can be approximated by the greedy algorithm. Prior
work addressed the efficiency of working with averaging oracles by
improving the efficiency of greedy maximization
\cite{Leskovec:KDD2007,CELFpp:WWW2011} and applied
sketches \cite{ECohen6f} to
efficiently estimate $\hat{{\sf A}}(S)$ values
\cite{CWY:KDD2009,binaryinfluence:CIKM2014}.
The fundamental
question we study here is the {\em sample complexity}
of IM, that is, the number of i.i.d.\
simulations needed to recover an approximate maximizer of the influence function $\texttt{I}^\tau$. Formally, for parameters $(\epsilon,\delta)$, identify a seed set $T$ of size $s$ so that
$\Pr\left[\texttt{I}^\tau(T)\geq (1-\epsilon) \textsc{OPT}^\tau_s\right]\geq 1-\delta$,
where $\textsc{OPT}^\tau_s := \max_{S \mid |S|\leq s} \texttt{I}^\tau(S)$ is the exact
maximum. Note that the recovery itself is generally computationally hard
and the sample complexity only considers the information we can glean from a set of simulations.
Kempe et al provided an upper bound of
\begin{equation}\label{naive:eq}
O\left(\epsilon^{-2} s n \log \frac{n}{\delta}\right)\
\end{equation}
on the sample complexity of the harder {\em Uniform Relative-Error
Estimation (UREE)} problem where for a given $(\epsilon,\delta)$ we bound the number of simulations so that with
probability
$1-\delta$, for all subsets $S$ such that $|S|\leq s$, $\hat{{\sf A}}(S)$ approximates $\texttt{I}^\tau(S)$ within relative error of $\epsilon$. The sample complexity of UREE upper bounds that of IM because the oracle maximizer $\arg\max_{S \mid |S|\leq s} \hat{{\sf A}}(S)$ must be an approximate maximizer.
We provide the argument for \eqref{naive:eq} here because it is basic and broadly applies to all
SDMs: The reachability values $\CReach^\tau(\boldsymbol{\phi},S)$, and hence their expectation, $\texttt{I}^{\tau}(S)$
have values in $[1,n]$. Using the multiplicative Chernoff bound (with values divided by $n$)
we obtain that
$O(\epsilon^{-2} n \ln \delta^{-1})$
simulations guarantee a relative error of $\epsilon$ with probability at least
$(1-\delta)$ for the estimate of any particular set
$S$. Interestingly, this bound is tight for point-wise estimation even for IC models:
Example~\ref{polysimu:ex} shows a family of models where $\tau=2$ and
$\Omega(\epsilon^{-2} n)$ simulations are required for estimating the influence value of a
single node.
The UREE sample complexity bound \eqref{naive:eq} follows from applying a
union bound over all $\binom{n}{s}=O(n^s)$ subsets.
\begin{wrapfigure}{r}{0.2\textwidth}
\vspace{-18pt}
\begin{example}\label{tinymatter:ex}
\includegraphics[width=0.2\textwidth]{tinymatter.pdf}
{\scriptsize Star graph: The center node has influence value $101$ and all other
nodes have influence $1$.}
\end{example}
\vspace{-16pt}
\end{wrapfigure}
The generic upper bound has prohibitive linear dependence on the
number of nodes $n$ (that Example~\ref{polysimu:ex} shows is
unavoidable for UREE even for IC models).
A simple example shows that we can not hope for an umbrella improvement for IM:
Consider the star graph family of Example~\ref{tinymatter:ex} when
edges are dependent so that
either all edges are live or none is. Clearly $n/100$ simulations are
necessary to detect a 1-step approximate maximizer (which must be the actual
maximizer).
The remaining hope is that we can obtain stronger bounds on the IM sample
complexity for models with weaker or no dependencies such as the IC
and IGT models.
This question eluded researchers for nearly two
decades.
\ignore{
Influence maximization from averaging oracles $\hat{{\sf A}}$ has practical appeal
also because it does not require learning a model and hence is robust to modeling or inference errors.
The starting point in applications is typically raw activity data of interacting
entities. When performing the optimization on a model, it needs to first be learned or inferred~\cite{SaitoNK2008,GoyalBonchi:wsdm2010,GRLK:KDD2010}.
Simulations (sets of simultaneously live edges), on the other hands, can be gleaned directly as
activity snapshots of the network or as aggregated activity over time windows.
The model inference and optimization pipeline is sensitive to modeling assumptions and accuracy of estimating model parameters~\cite{Chen:KDD2016,HeKempe:KDD2016}. The phenomenon generating the data may have complex dependencies between edge random variables that are lost in a simplistic model (e.g.\ an IC model can not capture dependencies) and requires a massive amount of data to model properly. Even estimating marginal edge probabilities $p_e$ requires a large amount of data: Edges with tiny, polynomially small probabilities, can be critical
for the accuracy of influence maximization (see Example~\ref{tinymatter:ex})
but a polynomial number of
$\Omega(1/p_e)$ independent ``observations'' of the state of the edge is required in order to accurately estimate each $p_e$. The large amount of raw data required to produce a "sufficiently accurate" model
may not be available or can be costly to obtain.}
\subsection*{Contributions and overview}
We study the sample complexity of influence maximization from averaging oracles computed from i.i.d.\
simulations.
One of our main contributions is an upper bound of
\begin{equation}
O\left(\epsilon^{-2} s \tau \log \frac{n}{\delta}\right)
\end{equation}
on the IM sample complexity of independent strongly submodular SDMs.
Informally, strong submodularity means that the
influence function of any
``reduced'' model (a model derived from the original one by setting a
subset $T\subset V$ of nodes as active) is submodular.
The IC and IGT models are special cases of strongly submodular
independent SDMs.
Interestingly, we provide similar sample complexity bounds for
natural families of models that are not independent: mixtures of a small number of
strongly submodular SDMs
and what we call
{\em $b$-dependence} live-edge models, which allow for positive dependence of small groups
of edges with a shared tail node.
Our bound improves over prior work by replacing the prohibitive linear dependence in the number of nodes
$n$ in \eqref{naive:eq} with the typically much
smaller value $\tau$. While on worst-case instances unrestricted
diffusions may require $\Omega(n)$ steps, understanding
the sample complexity in terms of $\tau$ is important:
First, IM with explicit step limits
\cite{CLZ:AAAI2012,LiuCZ:ICDM2012,Gomez-RodriguezBS:ICML2011,DSGZ:nips2013,timedinfluence:2015},
is studied for applications where activation time matters. Moreover,
due to the
``small world'' phenomenon \cite{TraversMilgram:1969}, in ``natural''
settings we can expect most
activations (even with unrestricted diffusions) to occur within a small
number of steps. In the latter case, unrestricted influence values are
approximated well by corresponding step-limited influence with $\tau\ll n$.
Our improvement is surprising as generally a linear-in-$n$ number of
simulations is necessary for estimating influence values of some
nodes or to estimate essential model parameters (for example, the edge
probabilities in IC models), and this is the case even when $\tau$ is
very small. This shows that the maximization problem is in an
information sense inherently
easier and can circumvent these barriers.
We overview our results and implications -- complete proofs can be
found in the appendix. We review related work in
Section~\ref{related:sec} and place it in the context of our results.
In Section~\ref{prelim:sec} we formulate quality measures for influence
oracles and relate unrestricted and step-limited influence.
In particular, we observe that for IM it suffices that the oracle
provides good estimates (within a small relative error) of larger
influence values. This allows us to circumvent the
lower bound for point-wise relative-error estimates shown in Example~\ref{polysimu:ex}.
In Section~\ref{varbounds:sec} we state our main technical result that
upper bounds $\mathop{\sf Var}[\RReach^\tau(T)]$ by
$\tau \texttt{I}^\tau(T) \max_{v \in V
\setminus T} \texttt{I}^{\tau-1}( v)$
for independent strongly submodular SDMs. This variance upper bound facilitates
estimates with small
relative error for sets with larger influence values and
additive error for sets with small influence values.
We also provide a family of
IC models that shows that the linear dependence on $\tau$ in the
variance bound is necessary. We derive similar variance bounds to
mixtures of strongly submodular independent SDMs and $b$-dependence models. All our
subsequent sample complexity bounds apply generically to any SDM
(submodular/independent or not) that satisfies variance bounds of this form. In Section~\ref{averaging:sec} we review {\em averaging oracles} and bound the sample complexity using variance upper bounds.
In section~\ref{moa:sec} we present our
{\em median-of-averages} oracle
that amplifies the confidence guarantees of the averaging oracle and facilitates a tighter sample complexity bound.
\ignore{
obtaining them
using $O(\epsilon^{-2} \tau \log
\delta^{1})$ i.i.d. simulations. The construction uses the ``median
trick'' \cite{ams99} and organizes the simulations as
$O(\log \delta^{-1})$
pools of $O(\epsilon^{-2}\tau)$ simulations, returning the median of
the averages.
Our main result then follows by applying our amplified oracle with
$\delta $ adequate to provide a uniform error bound for all $\binom{n}{s}$ subsets of
cardinality at most $s$, which requires $O(\log({\delta^{-1}}) s \tau
\epsilon^{-2} \log n$ i.i.d. simulations.
}
In Section~\ref{adaptive:sec} we provide a data-adaptive framework that provides guarantees while avoiding the worst-case sample complexity upper bounds on models when a smaller number of simulations suffices.
\ignore{
Our simulation bounds are worst-case and in practice we can expect
that a much smaller number of simulations suffices.
Moreover, our oracles are such that ``validation'' of influence for a smaller
number of can be done
with a much smaller number of simulations. This allows us to
adaptively increase the number of simulations when the validation fails.
}
In Section~\ref{greedy:sec} we consider computational efficiency and present a greedy
maximization algorithm based on our median-of-averages oracles that
returns a $(1-(1-1/s)^s -\epsilon)$-approximate maximizer with
probability $1-\delta$. The
design generically applies to any SDM with a submodular
influence function that satisfies the variance bounds.
\ignore{
\\
***
Intuition for Theorem 4.1
\\
In section \ref{varbounds:sec} we show an optimal bound on the variance of the reachability of each set of nodes $T$ in IC model by:
\[ \mathop{\sf Var}[\RReach^\tau(T)] \leq \tau \texttt{I}^\tau(T) \max_{v \in V
\setminus T} \texttt{I}^{\tau-1}( v) .\]
The main idea of the proof is inductively simulate the diffusion from $T$. For every step we define a discrete random variable $A$ that says who are the nodes we activate in the next step. We use the \textit{total variance formula} to express the reachability of $T$ as:
\[
\mathop{\sf Var}[\tReachonearg{\tau}{T}] = {\mathop{\sf Var}}_{A}[\E[\tReachonearg{\tau}{T | A}]] + \E_{A }[\mathop{\sf Var}[\tReachonearg{\tau}{T | A}]]
.\]
We show that thanks to the monotonicity and submodularity of the influence we can bound the first term by $\texttt{I}^\tau(T) \max_{v \in V \setminus T}$. We use the second term to perform an induction step and construct a new IC model with one less step than before, the seed set in the new IC model is the value of $A$ that maximizes the variance of the reachability. Repeating this process $\tau$ times gives us the required bound.
***
}
\section{Related work} \label{related:sec}
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-22pt}
\begin{example}\label{dependence:ex}
\includegraphics[width=0.4\textwidth]{dependence.pdf}
{\scriptsize Model where with probability $\frac{1}{2}$ all red edges are active and otherwise all blue edges are active. The influence values $\texttt{I}^4(v)$ are shown in black. Simulation averages and RR samples with full simulations provide unbiased estimates of influence values $\E[\hat{\texttt{I}}^4(v)]=\texttt{I}^4(v)$. However, "efficient" RRS, which works with the marginal edge probabilities ($p_e=\frac{1}{2}$) or with decomposed simulations is biased, with $\E[\hat{\texttt{I}}^4(v)]$ shown in green. We can see that the bias induces large errors and also yields an erroneous maximizer.}
\end{example}
\vspace{-20pt}
\end{wrapfigure}
Our focus here is on influence estimates obtained from averages of
i.i.d.\ simulations of a model. We note that alternative approaches
can be more
effective for specific families of models.
In particular, for IC models, state of the art large-scale greedy
approximate maximization algorithms
\cite{TXS:sigmod2014,TSX:sigmod2015,Nguyen:SIGMOD2016,HuangWBXL:VLDB2017}
are not based on simulation averages. The estimates are still obtained by
building for each node a sample of its "influence set", but using a finer building block of i.i.d.\ {\em Reverse Reachability (RR)
searches}.
The random RR search method was proposed in \cite{ECohen6f} to
estimate the size of reachability sets in graphs,
and Borgs et al \cite{BBCL:SODA2014} adapted it to
IC models.
The method can be applied in principle for any live-edge model:
A basic RR search is conducted by selecting a node $v\in V$ uniformly at random and performing a BFS search on
reversed edges that is pruned at length $\tau$. The search "flips" edges
as their head node is reached, according to the conditional distribution of $\mathcal{G}$. The index number of the RR search is
then added to the sample set of each node that is "reached" by the search.
Influence of a subset $S$ can then be unbiasedly estimated from the cardinality of
the union of the samples of nodes $v\in S$ and the greedy algorithm
can be applied to the sets of samples for approximate maximization. To obtain an approximate influence maximizer we need to perform RR searches
until some node has a sample of size $O\left(\epsilon^{-2} s \log (n/\delta)\right)$. In the worst
case, this requires $O\left(\epsilon^{-2} s n\log (n/\delta)\right)$ RR searches.
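A minimal sketch of a single RR search for an IC model (illustrative names: \texttt{redges[w]} lists the tails $u$ of edges $(u,w)$; each incoming edge is flipped at most once, when its head is first reached):

\begin{verbatim}
import random
from collections import deque

def rr_search(nodes, redges, p, tau):
    """One reverse-reachability search of depth tau in an IC model."""
    target = random.choice(list(nodes))   # uniform target node
    dist, queue = {target: 0}, deque([target])
    while queue:
        w = queue.popleft()
        if dist[w] == tau:
            continue
        for u in redges.get(w, ()):       # in-edges (u, w) of w
            if u not in dist and random.random() < p[(u, w)]:
                dist[u] = dist[w] + 1
                queue.append(u)
    return set(dist)   # nodes whose sample records this search
\end{verbatim}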
For general live-edge models, an independent RR search can
always be obtained
from a simulation $E\sim \mathcal{G}$ by randomly drawing
a node and performing a reverse search from it using edges $E$. The
same simulation, however, can not generally be reused to generate
multiple independent RR searches. This way of obtaining RR searches
works for general live-edge models (with arbitrary dependencies) but requires $O(\epsilon^{-2} n s \log (n/\delta))$ simulations, which does not
improve over the generic upper bound \eqref{naive:eq}.
The appeal of the RR searches method is that it can be implemented
very efficiently for independent live-edge (including IC or LT) models.
The total work performed requires only
$O(\epsilon^{-2} m s (\log (n/\delta)))$ "edge flips" that can be
easily performed using specified edge probabilities $p_e$ for IC models.
Moreover, the basic building block of RR searches are local
simulations of sets of incoming edges of specified nodes and the full
computation requires at most $O(\epsilon^{-2} s \log (n/\delta))$
local simulations for each node.
When we have
full simulations generated by an independent live-edge model these ``local''
simulations are independent and the required number of "local
simulations" can be obtained by decomposing $O(\epsilon^{-2} s\log
(n/\delta))$ full simulations. But the caveat is that this approach breaks the coherence of
simulations, as we construct each RR search from components taken from
multiple simulations. These "efficient" implementations
(i.e. based on decomposed simulations or edge flips
according to marginal probabilities) may "catastrophically
fail" when dependencies exist: The influence estimates obtained are biased and cause
large errors even when the variance is
low. Example~\ref{dependence:ex} shows a simple mixture model (of two
degenerate IC models) where "efficient" RRS has large error due to
bias but averages of few simulations provide accurate estimates.
To summarize, with RRS, the implementation that works with full simulations is robust to dependencies but is inefficient and
the efficient implementation breaks ungracefully even with light dependencies.
Thus we believe that
both basic approaches to approximate IM, simulation averages and
RRS, offer distinct advantages: Simulation averages are robust in that
they remain unbiased and are accurate on any SDM, including dependent ones,
for which the variance is sufficiently small, whereas RRS offers more efficiency with purely independent live-edge models.
\section{Preliminaries} \label{prelim:sec}
We consider stochastic diffusion models $\mathcal{G}(V)$ as outlined in the
introduction.
We denote by $\CReach^\tau(\boldsymbol{\phi},T)$ the $\tau$-steps reachability set
of $T$ when we use a specific set $\boldsymbol{\phi}$ of activation functions. We
will use the notation $\CReach^\tau(T)$ (with the parameter $\boldsymbol{\phi}$
omitted) for the random
variable $\CReach^\tau(\boldsymbol{\phi},T)$ obtained when we draw $\boldsymbol{\phi} \sim \mathcal{G}$ according to
the model.
\paragraph{Utility functions}
For simplicity, the discussion in the introduction took the utility of
a reachable set to be the number of reachable nodes $\VReach^\tau(\boldsymbol{\phi},T) :=
|\CReach^\tau(\boldsymbol{\phi},T)|$. Generally, we can consider
{\em utility functions}
$H:2^V\rightarrow \Re_+$
that are nonnegative monotone non-decreasing with $H(\emptyset) =0$:
\begin{equation} \label{submodval:eq}
\VReach^\tau(\boldsymbol{\phi},T) := H(\CReach^\tau(\boldsymbol{\phi},T)) .
\end{equation}
Submodular utility is particularly natural and studied by
Mossel and Roch \cite{MosselR:STOC2007}.
Additive utility is the special case where nodes
have nonnegative weights $w:V \rightarrow \Re_{+}$ and
\begin{equation} \label{additiveval:eq}
\VReach^\tau(\boldsymbol{\phi},T) := \sum_{v\in \CReach^\tau(\boldsymbol{\phi},T)}w(v) .
\end{equation}
We consider a diffusion model $\mathcal{G}(V,H)$ together with a utility
function $H$. The random variable
$\RReach_{\mathcal{G}}^\tau(T)$ is the utility of the reachable set,
that is, $\VReach^\tau(\boldsymbol{\phi},T)$ when $\boldsymbol{\phi} \sim \mathcal{G}$. The influence
function is then
the expected utility of the reachable set
\[\texttt{I}^\tau(T) := \E[\RReach^\tau(T)] = \E_{\boldsymbol{\phi}\sim
\mathcal{G}}\VReach^\tau(\boldsymbol{\phi},T)\ .\]
We denote the maximum influence value of a subset of cardinality
$s$ by $\textsc{OPT}^\tau_s := \max_{S: |S|\leq s} \texttt{I}^\tau(S)$.
It follows from the definition that for any SDM $\mathcal{G}(V,H)$
with utility $H$, the influence $\texttt{I}^{\tau}(T)$
is monotone non-decreasing in $\tau$ and in the set $T$ and
the optimum values $\textsc{OPT}^\tau_s$ are non-decreasing in $\tau$ and $s$.
Generally, influence functions $\texttt{I}^\tau(T)$ of SDMs
may not be submodular even when utility is additive.
The influence function is submodular for
live-edge and for IGT models \cite{MosselR:STOC2007}
with submodular utility.
\paragraph{Reduced models}
We work with the following notion of model reduction.
Let $\mathcal{G}(V,H)$ be an independent SDM with
submodular utility. For a set of nodes $T\subset V$, we define the
{\em reduced model} $\mathcal{G}'(V',H')$ of
$\mathcal{G}$ with respect to $T$:
The reduced model contains the nodes $V'=V\setminus T$.
The activation functions
$\phi'_v \sim \mathcal{G}'$ for $v\in V\setminus T$ are obtained by drawing
$\phi_v \sim \mathcal{G}$ conditioned on $\phi_v(T)=0$ and taking
\[ \text{for all } S\subset V\setminus (T\cup\{v\}),\ \phi'_v(S) :=
\phi_v(S\cup T) \]
(note that since the SDM is
independent, we can consider the distribution of the
activation function of each node separately).
The utility in $\mathcal{G}'$ is the marginal utility in
$\mathcal{G}$ with respect to $T$:
\[ \text{for all $S\subset V\setminus T$,\ } H'(S) := H(S\cup T)-H(T)
. \]
The reduced model $\mathcal{G}'(V',H')$ is also an independent SDM with submodular utility:
Activation
functions $\boldsymbol{\phi}'\sim \mathcal{G}'$ are independent and monotone, and the
utility is monotone with $H'(\emptyset)=0$ and submodular.
\paragraph{Strongly submodular SDM}
We say that an independent SDM $\mathcal{G}(V,H)$ is {\em strongly submodular} if the
utility function $H$ is submodular and the influence
function $\texttt{I}^\tau_{\mathcal{G}'}$ is submodular with any reduced model
$\mathcal{G}'$ and step limit $\tau\geq 0$.
IC and IGT models are strongly submodular SDMs (see Theorem~\ref{ICIGTstrong:thm}).
The variance and thus sample complexity upper bounds that we
present in the sequel apply to any strongly submodular SDM.
We will also provide bounds for some dependent families of
models. One family is a slight generalization of IC models that we refer to as {\em $b$-dependence}. Here
edges are partitioned into disjoint groups, where
each group contains at most $b$ edges emanating from the same node. The edges in a group must either all be live or all be non-live (they are positively dependent).
\subsection{Relating step-limited and unrestricted Influence}
When unrestricted diffusion from a seed set $S$ is such that most activations occur within $\tau$ steps, the unrestricted influence $\texttt{I}(S)$ is approximated well by $\tau$-step influence $\texttt{I}^\tau(S)$.
We can also relate unrestricted influence with small expected steps-to-activation to step-limited influence:
For a seed set $S$, node $v$, and length $d$, we denote by
$p(S,v,d)$ the probability that node $v$ is activated in a diffusion
from $S$ in step $d$.
For additive utility functions \eqref{additiveval:eq} by definition, $\texttt{I}^\tau(S) = \sum_{v\in V} w(v) \sum_{d\leq \tau} p(S,v,d)$.
The {\em expected length of an activation path} from $S$ (in
unrestricted diffusion) is:
\begin{equation}
\overline{D}(S) := \frac{\sum_{v\in V} w(v) \sum_{d\leq n} d\cdot
p(S,v,d)}{\texttt{I}(S)}\ .
\end{equation}
The following lemma is an immediate consequence of Markov's
inequality and shows that
$\tau$-stepped influence with $\tau=O(\overline{D}(S))$
approximates well the unrestricted influence:
\begin{lemma} \label{unrestricted:lemma}
For all $S$ and $\epsilon>0$,
$\texttt{I}^{\overline{D}(S) \epsilon^{-1}}(S) \geq (1-\epsilon) \texttt{I}(S)$.
\end{lemma}
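For completeness, the calculation for additive utility: truncating the tail of the path-length distribution and applying Markov's inequality gives
\[
\texttt{I}(S)-\texttt{I}^{\tau}(S)
 = \sum_{v\in V} w(v) \sum_{d > \tau} p(S,v,d)
 \leq \frac{1}{\tau}\sum_{v\in V} w(v) \sum_{d\leq n} d\cdot p(S,v,d)
 = \frac{\overline{D}(S)}{\tau}\,\texttt{I}(S)\ ,
\]
so that $\tau = \overline{D}(S)\epsilon^{-1}$ yields $\texttt{I}^{\tau}(S) \geq (1-\epsilon)\texttt{I}(S)$.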
\subsection{Influence Oracles} \label{sec:influence-oracle}
We say that a set function $\hat{F}$ is an {\em $\epsilon$-approximation} of another set function $F$ \textit{at a point} $T$ if
$\left| \hat{F} (T) - F(T)
\right| \leq \epsilon \max\{F(T),\textsc{OPT}_1(F) \}$, where $\textsc{OPT}_s(F) := \max_{S\mid |S|\leq s} F(S)$. That is, the
estimate $\hat{F}$ has a small relative error for sets $T$ with $F(T) \geq \textsc{OPT}_1(F)$ and a small absolute
error of $\epsilon \textsc{OPT}_1(F)$ for sets $T$ with $F(T) \leq \textsc{OPT}_1(F)$. We say that $\hat{F}$ provides a {\em uniform} $\epsilon$-approximation for all subsets $T$ in a collection $C$ if $\hat{F}$ is an $\epsilon$-approximation for all $T\in C$.
An {\em influence oracle},
$\hat{\texttt{I}}^\tau$, is a randomized data structure that is
constructed from a set of i.i.d.\ simulations of a model.
The influence oracle, $\hat{\texttt{I}}^\tau$, defines a set function (we use the same name $\hat{\texttt{I}}^\tau$ for the set function) that for any input query set $T\subset
V$, returns a value $\hat{\texttt{I}}^\tau(T)$.
For $\epsilon < 1$ and $\delta<1$ we say that an oracle provides
$(\epsilon,\delta)$ {\em approximation guarantees
with respect to $\texttt{I}^{\tau}$} if for any set $T$ it is an $\epsilon$-approximation with probability at least $1-\delta$. That is
\begin{align}
\forall T \text{ such that } \texttt{I}^{\tau}(T) \geq \textsc{OPT}^\tau_1, &\quad
 \Pr\left[ \frac{\left| \hat{\texttt{I}}^\tau (T) - \texttt{I}^\tau(T)
 \right|}{\texttt{I}^\tau(T) } \geq
 \epsilon\right] \leq \delta\ . \label{highpart}\\
\forall T \text{ such that } \texttt{I}^\tau (T)
 \leq \textsc{OPT}^\tau_1, &\quad
 \Pr\left[ \left| \hat{\texttt{I}}^\tau (T) - \texttt{I}^\tau(T) \right| \geq
 \epsilon \textsc{OPT}^\tau_1\right] \leq \delta\ .\label{lowpart}
\end{align}
where $\textsc{OPT}^\tau_1 :=\textsc{OPT}_1({I}^\tau ) $.
Example~\ref{polysimu:ex} shows that this type of requirement is what we can
hope for with an oracle that is constructed from a small number of simulations.
The $(\epsilon,\delta)$ requirements are {\em for each}
particular set $T$. If we are interested in stronger guarantees that with probability
$(1-\delta)$ the approximation uniformly holds {\em for all} sets in a collection $\mathcal{C}$,
we can use an oracle that provides $(\epsilon, \delta_A=\delta/|\mathcal{C}|)$
guarantees. The $\epsilon$-approximation guarantee for all sets in $\mathcal{C}$ then follow using a union
bound argument:
The probability that some set in $\mathcal{C}$ is not approximated correctly is at
most $|\mathcal{C}| \delta_A \leq \delta$.
\section{Variance Bounds} \label{varbounds:sec}
We consider upper
bounds
on
the variance
$\mathop{\sf Var}\left[\RReach^\tau(T)\right]$ of the reachability of a set of
nodes $T$ that have the following particular form
\begin{equation} \label{factorc:eq}
\mathop{\sf Var}[\tReachonearg{\tau}{T}] \leq c \texttt{I}^\tau(T) \max\{ \texttt{I}^\tau(T), \max_{v \in V} \texttt{I}^\tau( v)\}\end{equation} for some $c\geq 1$.
The sample complexity bounds we present in the sequel apply to any SDM that satisfies these bounds.
In the remaining part of this section we state
our variance upper bound for strongly submodular SDMs
and extensions and a tight worst-case lower bound for IC models.
\subsection{Variance upper bound}
The following key theorem facilitates our main results. We show that
any strongly submodular SDM satisfies the bound \eqref{factorc:eq} with $c=\tau$.
\notinproc{The proof is technical and provided in Appendix~\ref{varUBproof:sec}.}
\begin{theorem}[Variance Upper Bound
Lemma]\label{var_upper_bound:thm}
Let $\mathcal{G}(V,H)$ be a strongly submodular SDM.
Then for any step limit
$\tau \geq 0$, and a set $T \subset V$
of nodes we have
\[ \mathop{\sf Var}[\RReach^\tau(T)] \leq \tau \texttt{I}^\tau(T) \max_{v \in V
\setminus T} \texttt{I}^{\tau-1}( v) .\]
\end{theorem}
Some natural dependent SDMs have a variance bound of the form \eqref{factorc:eq}:
\notinproc{(See Appendix~\ref{bdependence:sec} for proofs.)}
\begin{corollary} \label{extendIC:coro}
$\tau$-step IC models with $b$-dependence satisfy the bound \eqref{factorc:eq} with $c=2b\tau$.
Any mixture of $\tau$-step strongly submodular SDMs in which each model has
probability at least $p$ satisfies the bound \eqref{factorc:eq} with $c=(\tau+1)/p$.
\end{corollary}
\subsection{Variance lower bound} \label{sec:var-lower-bound}
We provide a family of IC models for which this variance upper bound is asymptotically tight. This shows that the
dependence of the variance bound on $\tau$ is necessary.
\begin{theorem} [Variance Lower Bound]
For any $\tau>0$ there is an IC model
$\mathcal{G}^\tau=(V,\mathcal{E})$ with a node $v \in V$ of maximum
influence such that
$\mathop{\sf Var}[\RReach^\tau(v)] \geq \frac{1}{12} \tau \texttt{I}^\tau( v)^2$.
\end{theorem}
Our family of models $\mathcal{G}^\tau=(V,\mathcal{E})$ are such that
$(V,\mathcal{E})$ is a complete directed binary tree of depth
$\tau \geq 1$ rooted at $v\in V$ with all edges
directed away from the root and $p_e =
1/2$ for all $e\in \mathcal{E}$.
We show \notinproc{(details in Appendix~\ref{varLB:sec})} that:
\begin{align*}
\texttt{I}^\tau(v) &= \tau \\
\mathop{\sf Var}[\RReach^\tau(v)] &= \frac{1}{12} \tau
(\tau-1)(2\tau-1) .
\end{align*}
\section{The Averaging Oracle} \label{averaging:sec}
The {\em averaging oracle} uses i.i.d.\ simulations
$\{\boldsymbol{\phi}_i\}_{i=1}^\ell$. For a query $T$ it
returns the average utility of the reachability set of $T$:
$\hat{{\sf A}}^\tau (T) = \mathop{{\texttt{{\sf Ave}}}}_{i\in [\ell]} \VReach^\tau(\boldsymbol{\phi}_i,T) :=
\frac{1}{\ell} \sum_{i=1}^\ell \VReach^\tau(\boldsymbol{\phi}_i,T)\ .$
We quantify the approximation guarantees of an averaging oracle in terms of a variance bound of the form
\eqref{factorc:eq}.
\begin{lemma} \label{ave:lemma}
Consider an SDM that for some $c \geq 1$ satisfies a variance bound of the form \eqref{factorc:eq}. Then
for any $\epsilon, \delta<1$, an averaging oracle constructed from
$\ell \geq \epsilon^{-2} \delta^{-1} c $ i.i.d.\ simulations
provides $(\epsilon,\delta)$ guarantees.
In particular for strongly submodular SDMs, we use the variance bound in Theorem
\ref{var_upper_bound:thm} and obtain
these approximation
guarantees using $\ell \geq \epsilon^{-2} \delta^{-1} \tau$ i.i.d.\ simulations.
\end{lemma}
\begin{proof}
Using variance properties of the average of i.i.d.\ random variables, we get that for any query $T$
\[\mathop{\sf Var}[\hat{{\sf A}}^\tau (T)] = \frac{1}{\ell} \mathop{\sf Var}[\RReach^\tau(T)] \leq \frac{1}{\ell} c \texttt{I}^\tau(T)
\max\{\texttt{I}^\tau(T), \textsc{OPT}^{\tau}_1\} \ .\]
The claims follow using Chebyshev's inequality that states that for
any random variable $X$ and $M$,
$\Pr[|X-\E[X]| \geq \epsilon M] \leq
\epsilon^{-2}\mathop{\sf Var}[X]/M^2$. We apply it to the random variable
$\hat{{\sf A}}^\tau (T)$, which has expectation $\texttt{I}^\tau(T)$, and plug
in the variance bound.
To establish \eqref{highpart} we use $M= \texttt{I}^\tau(T)$
and to establish \eqref{lowpart} we use $M=\textsc{OPT}^\tau_1$.
\end{proof}
\subsection{Sketched averaging oracle} \label{sketchedave:sec}
For live-edge models with additive utility
\eqref{additiveval:eq}, the query efficiency of the averaging oracle can be improved with off-the-shelf use of
$\tau$-step combined
reachability sketches \cite{ECohen6f,
binaryinfluence:CIKM2014,timedinfluence:2015,ECohenADS:TKDE2015}.
The sketching is according to a sketch-size parameter $k$ that also determines the sketches computation time and accuracy of the estimates that sketches provide. A sketch of size $O(k)$ is computed for each node $v$ so that for any set of nodes $S$, $\sum_{i=1}^r \VReach^\tau(E_i,S)$ can be efficiently estimated from the sketches of the nodes $v\in S$.
The computation of the sketches from an arbitrary set of simulations $\{ E_i \}$ uses at most
$\sum_i |E_i| + k\sum_{v} \max_i d_v(E_i)$ edge traversals, where $d_v(E_i)$ is the maximum in-degree of node $v$ over simulations $\{E_i\}$. In the case of an IC model, the expected number of traversals is $(k+\ell)\sum_{e} p_e$. Sketching with general node weights can be handled as in~\cite{ECohenADS:TKDE2015}.
The estimates obtained from the sketches are unbiased with coefficient of variation $1/\sqrt{k-2}$ and are concentrated: Sketches of size
$k=O(\epsilon^{-2}\log(\delta^{-1}))$ provide estimates with relative error $\epsilon$ with probability $1-\delta$.
\ignore{
A sketch of size $k$ is computed for each $v\in V$ and the average
reachability of a set $T$ can be estimated from the sketches of $v\in
T$.
The NRMSE of the estimate is $1/\sqrt{k-2}$ and it is also well
concentrated. From multiplicative Chernoff bounds, the probability of the estimate exceeding
$(1+\epsilon)$ times value is at most $e^{-\epsilon^2 k/2}$ and the estimate being below $(1-\epsilon)$ times
value is at most $e^{-\epsilon^2 k/(2+\epsilon)}$
(for
$\epsilon\leq 1$).
We can also say that we have confidence $1-\delta$ with sketch size
$\epsilon^{-2} \log \delta^{-1}$.
In particular, the additional variance introduced by sketching the
average estimate is $\hat{{\sf A}}^\tau(T)^2/(k-2)$ and a choice of
$k=O(\epsilon^{-2})$ provides similar guarantees to the unsketched
oracle with much more efficient computation.
Another useful property of sketches is that
with a slight multiplicative overhead of $\log \tau$ on preprocessing time
and sketch size, the estimate can support
queries for $t$-stepped influence for any $t\leq \tau$.
}
\section{Confidence Amplification: The median-of-averages oracle} \label{moa:sec}
The statistical guarantees we provide for our averaging oracle are
derived from variance bounds. The limitation is that the
number of simulations we need
to provide $(\epsilon,\delta)$ guarantees is linear in
$\delta^{-1}$ and therefore the number of simulations we need to provide uniform guarantees (via a union bound argument) grows linearly with the number of subsets.
\ignore{
$c$ subsets we need to construct it with confidence parameter
$\delta' = \delta/c$ and apply a union bound. The linear dependence
on $\delta^{-1}$ implies therefore a linear dependence on $c$ with is prohibitive for large values of $c$.
}
In order to find an approximate optimizer, we would like to have a uniform $\epsilon$-approximation for all
the ${n \choose s}$
subsets of size at most $s$ but doing so with an averaging oracle would require too many simulations.
We adapt to our setting a classic confidence amplification technique~\cite{ams99} to
construct an oracle where the number of simulations grows logarithmically in the
confidence parameter $\delta^{-1}$.
A {\em median-of-averages} oracle is specified by a number $r$ of {\em
pools} with $\ell$ simulations in each pool. The
oracle is therefore constructed from
$r \ell$ i.i.d.\ simulations $\boldsymbol{\phi}_{ij}$
for $i\in [r]$ and $j\in [\ell]$.
The simulations of each pool are used in an averaging oracle that
for the $i$th pool ($i\in [r]$) returns the estimates
$\hat{{\sf A}}^\tau_i (T)$. The median-of-averages oracle returns
the median value of
these $r$ estimates
\begin{equation} \label{MoAests:eq}
\widehat{{\sf mA}}^\tau(T) := \mathop{\texttt{{\sf Median}}}_{i\in [r]} \hat{{\sf A}}^\tau_i (T) =
\mathop{\texttt{{\sf Median}}}_{i\in [r]} \mathop{{\texttt{{\sf Ave}}}}_{j\in [\ell]} \VReach^\tau(\boldsymbol{\phi}_{ij},T)\
.
\end{equation}
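A direct sketch of \eqref{MoAests:eq} for live-edge models, reusing the \texttt{reach} helper from the IC sketch (the simulations are split into $r$ pools):

\begin{verbatim}
from statistics import median

def median_of_averages_oracle(simulations, tau, r):
    """hat{mA}^tau(T): median over r pools of within-pool averages."""
    ell = len(simulations) // r
    pools = [simulations[i * ell:(i + 1) * ell] for i in range(r)]
    def oracle(T):
        return median(sum(len(reach(E, T, tau)) for E in pool) / ell
                      for pool in pools)
    return oracle
\end{verbatim}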
We establish that when the i.i.d.\ simulations are from a model that
has variance bound \eqref{factorc:eq} for some $c\geq 1$, the
median-of-averages oracle provides $(\epsilon,\delta)$
approximation guarantees using
$112 \epsilon^{-2} c \ln \delta^{-1}$ i.i.d.\ simulations.
\begin{lemma} ~\label{MEoracle:lemma}
Consider an SDM that for some $c\geq 1$
satisfies the variance bound \eqref{factorc:eq}. Then for every $\epsilon$ and $\delta$,
a median-of-averages oracle $\widehat{{\sf mA}}$
organized with $r = 28 \ln \delta^{-1}$ pools of $\ell = 4\epsilon^{-2} c$ simulations each
provides $(\epsilon,\delta)$ approximation guarantees.
\end{lemma}
\begin{proof}
An averaging oracle with $\ell = 4\epsilon^{-2} c$ simulations provides $(\epsilon,\delta_A)$ approximation guarantees with $\delta_A = 1/4$ (Lemma~\ref{ave:lemma}). Therefore, for any subset, the probability of a correct estimate is at least $3/4$.
We now consider the estimates $\hat{{\sf A}}_i$ obtained from the $r$ pools
when sorted in increasing order. The estimates that are not correct (too low or too high) lie at the prefix and suffix of the sorted order.
The expected number of correct estimates is $\mu \geq
\frac{3}{4} r$. The probability that the median estimate is not
correct is bounded by the probability that the number of correct estimates
is at most $r/2 \leq \frac{2}{3}\mu$.
From multiplicative Chernoff bounds, the probability of a sum of
Bernoulli random variables being below
$(1-\epsilon')\mu$ is at most $e^{-\epsilon'^2 \mu /(2+\epsilon')}$.
Using $\epsilon'=1/3$
we have
$\epsilon'^2 \mu /(2+\epsilon') \geq \frac{1}{9} \cdot \frac{3}{4} \cdot \frac{3}{7} \cdot
28 \ln \delta^{-1} = \ln \delta^{-1}$.
\end{proof}
As a corollary, we obtain a sample complexity bound for influence maximization from variance bounds:
\begin{theorem} \label{simupper:thm}
Consider an SDM that satisfies the variance bound~\eqref{factorc:eq} for some $c\geq 1$. Then for any $\epsilon<1$ and $\delta<1$, using
$112 \epsilon^{-2} c s \ln \frac{n}{\delta}$ i.i.d.\ simulations we can return $T$ such that
\[\Pr\left[ \texttt{I}^\tau(T) \geq (1-2\epsilon) \textsc{OPT}^\tau_s \right] \geq 1-\delta\ .\]
\end{theorem}
\begin{proof}
We construct a median-of-averages oracle with
$\ell = 4\epsilon^{-2} c$
and
$r = 28 \ln \delta_{MA}^{-1}$ where
$\delta_{MA} = \delta /{n \choose s}$. From Lemma~\ref{MEoracle:lemma}
using a union bound over the ${n \choose s}$ sets we obtain that with probability $1-\delta$ the oracle provides a uniform $\epsilon$-approximation for all subsets of size at most $s$. Let $S$ be a set with maximum influence $\texttt{I}(S) = \textsc{OPT}^\tau_s$ and
let $T$ be the oracle optimum \[T := \arg\max_{S \mid |S|\leq s}\widehat{{\sf mA}}(S) .\]
We have
\[
\texttt{I}(T) \geq (1 - \epsilon)\widehat{{\sf mA}}(T) \geq (1 - \epsilon)\widehat{{\sf mA}}(S) \geq (1-\epsilon)^2 \texttt{I}(S) \geq (1-2\epsilon)\textsc{OPT}^\tau_s\ .
\]
We comment that the $(1 - 2\epsilon)$ ratio is not tight and we can obtain a bound closer to $(1-\epsilon)$. This is because the particular set $S$ can be approximated more tightly by the oracle (which uses enough simulations to support a union bound).
\end{proof}
\section{Optimization with Adaptive sample size} \label{adaptive:sec}
The bound on the number of simulations we derived in
Theorem~\ref{simupper:thm} (through a median-of-averages oracle) and
also the naive bound~\eqref{naive:eq} (for the averaging oracle) are
worst-case. This is obtained by using enough simulations to have the
oracle provide a uniform $\epsilon$-approximation with probability at least $1 - \delta$ on any problem instance.
To obtain the uniform approximation we applied
a union bound over ${n \choose s}$ subsets that
resulted in an increase in the number of required
simulations
by an $s \log n$ factor
over the base $(\epsilon,\delta)$ approximation guarantees.
On real data sets a much smaller number of
simulations than this worst-case often suffices.
We
are interested in algorithms that adapt to such data and return a seed set of approximate maximum influence using a respectively smaller number of simulations and while providing
statistical guarantees on the quality of the end result.
To do so, we apply an
adaptive optimization framework \cite{multiobjective:2015} (some example applications are~\cite{binaryinfluence:CIKM2014,Nguyen:SIGMOD2016,topk:conext06,CCKcluster18}).
This framework consists of a ``wrapper'' that takes as input an oracle construction from simulations and a base algorithm that performs an optimization over an oracle.
The wrapper invokes the algorithm on oracles
constructed using an increasing number of simulations until a
validation condition on the quality of the result is met.
The details are provided in\onlyinproc{ the supplementary material}\notinproc{ Appendix~\ref{adaptivemore:sec}}.
We denote by $r(\epsilon,\delta)$ the number of simulations that provides $(\epsilon, \delta)$ guarantees and we obtain the following results:
\begin{theorem} \label{optAadaptive:thm}
Suppose that on our data the averaging (respectively, median-of-averages) oracle
$\hat{I}$ has the
property that with $r$ simulations, with probability at least $1-\delta$, the oracle optimum
$T := \arg\max_{S \mid |S|\leq s}\hat{I}(S)$
satisfies
\[
\texttt{I}^\tau(T) \geq (1-\epsilon)\textsc{OPT}_s^\tau\ .
\]
Then with probability at least
$1-5\delta$, when using
$2\max\{r,r(\epsilon,\delta)\} + O\left(\epsilon^{-2}c \left(\ln{\frac{1}{\delta}} + \ln \left(\ln\ln \frac{n}{\delta}+ \ln s\right)\right)\right)$ simulations with the
median-of-averages oracle and
$2\max\{r,r(\epsilon,\delta)\} + O\left(\epsilon^{-2}c\left(\ln{\frac{1}{\delta}} + \ln \left( \ln\ln \frac{n}{\delta}+ \ln n \right) \right)\right)$ simulations with the averaging
oracle, the wrapper outputs a
set $T$ such that $\texttt{I}^\tau(T) \geq (1-5\epsilon)\textsc{OPT}_s^\tau$.
\end{theorem}
The wrapper can also be used with a base algorithm that is an
approximation algorithm. For live-edge models, our averaging oracle is monotone and
submodular and hence we can apply greedy to
efficiently compute a set with approximation ratio at least
$1-1/e$ (with respect to the oracle). If we use greedy as our
base algorithm we obtain the following:
\begin{theorem} \label{greedyadaptive:thm}
If the averaging oracle $\hat{{\sf A}}$ is submodular and has the
property that with $\geq r$ simulations, with probability at least $1-\delta$, it provides a uniform $\epsilon$-approximation for all subsets of size at most $s$, then with
$2\max\{r,r(\epsilon,\delta)\} + O\left(\epsilon^{-2}c\left(\ln{\frac{1}{\delta}} + \ln \left( \ln\ln \frac{n}{\delta}+ \ln n \right) \right)\right)$ simulations we can find in
polynomial time a
$(1-(1-1/s)^s)(1 - 5\epsilon)$ approximate solution with confidence $1-5\delta$.
\end{theorem}
\section{Approximate Greedy Maximization} \label{greedy:sec}
In this section we consider the computational efficiency of maximization over our oracle $\hat{\texttt{I}}$ that approximates a monotone submodular influence function $\texttt{I}^\tau$.
The maximization problem is computationally hard: The brute force method evaluates $\hat{\texttt{I}}(S)$ on all $\binom{n}{s}$ subsets $S$ of size $s$ in order to find the oracle maximizer.
An efficient algorithm for approximate maximization of a monotone submodular function $\hat{F}$ is greedy, which sequentially builds a seed set $S$ by adding a node $u$ with maximum marginal
contribution $\arg\max_{u\in V} (\hat{F}(S\cup\{u\})-\hat{F}(S))$ at each step. To implement greedy we only need to evaluate at each step the function on a linear number of subsets $\hat{F}(S\cup\{u\})$ for $u\in V$, and thus overall we perform $sn$ evaluations of $\hat{F}$.
With a monotone and submodular $\hat{F}$, for any $s\geq 1$ the subset $T$ that consists of the first $s$ nodes in a greedy sequence satisfies \cite{submodularGreedy:1978}:
\[\hat{F}(T) \geq ( 1-(1-1/s)^s) \max_{S \mid |S|\leq s} \hat{F}(S)\ge (1-1/e) \textsc{OPT}_s(\hat{F})\ .
\]
If our function $\hat{F}$ provides a uniform $\epsilon$-approximation of another function $F$ for all subsets of size at most $s$, then $F(T) \geq (1-(1-1/s)^s)(1-2\epsilon)\textsc{OPT}_s(F)$ (see the proof of Theorem~\ref{simupper:thm}).
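A minimal sketch of greedy over an arbitrary oracle set function (illustrative; \texttt{oracle} can be, e.g., the averaging or median-of-averages oracle sketched earlier):

\begin{verbatim}
def greedy(oracle, nodes, s):
    """First s nodes of the greedy sequence: at each step add the node
    with maximum marginal oracle value (s*n oracle evaluations)."""
    S = set()
    for _ in range(s):
        best = max((v for v in nodes if v not in S),
                   key=lambda v: oracle(S | {v}))
        S.add(best)
    return S
\end{verbatim}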
The averaging oracle is monotone and submodular~\cite{KKT:KDD2003}
when reachability functions are as in live-edge models.
Unfortunately, our median-of-averages oracle, which facilitates tighter
bounds on the number of simulations, is monotone but may not be submodular,
even for models where the averaging oracle is submodular. When this is the case, greedy may generally fail (as highlighted in recent work by
Balkanski et al~\cite{BalkanskiRS:STOC2017}).
Fortunately, greedy is effective on a function $\hat{F}$ that is monotone but not necessarily submodular as long as $\hat{F}$ "closely approximates" a monotone submodular $F$ in that
marginal contributions of the form \[F(u \mid S) := F(S\cup\{u\})-F(S)\] are approximated well by
$\hat{F}(u \mid S)$ \cite{binaryinfluence:CIKM2014}. We apply this to establish the following lemma:
\begin{lemma} \label{almostsubmodular:thm}
The greedy algorithm applied to a function $\hat{F}$ that is monotone and provides a uniform $\epsilon_A$-approximation of a monotone submodular function $F$
where
$\epsilon_A = \frac{\epsilon(1-\epsilon)}{14s}$
returns a set $T$ such that
$F(T)\geq (1-(1-1/s)^s)(1- \epsilon)\textsc{OPT}_s(F)$.
\end{lemma}
Our proof of Lemma~\ref{almostsubmodular:thm}
generally applies to an approximate oracle $\hat{F}$ of any monotone submodular function $F$\notinproc{ and is presented in Appendix~\ref{greedyproof:sec}}.
For approximate IM we obtain the following as a corollary:
\begin{theorem}
Consider a submodular SDM ${\mathcal G}(V,H)$ that for some $c\geq 1$ satisfies
the variance bound \eqref{factorc:eq}.
Consider a median-of-averages oracle constructed with
$O(\epsilon^{-2} s^3 c \ln \frac{n}{\delta})$ simulations of
$\mathcal{G}$ arranged as $r=O(s\ln \frac{n}{\delta})$ pools with $\ell= O(\epsilon^{-2}s^2 c)$ simulations each. Then with probability $1-\delta$, the set $T$ that contains the first $s$ nodes returned by greedy on the oracle satisfies $\texttt{I}^\tau(T) \geq (1-(1-1/s)^s)(1-\epsilon)\textsc{OPT}^\tau_s$.
\end{theorem}
\begin{proof}
From
Lemma~\ref{MEoracle:lemma}, with appropriate constants, this configuration provides us with $(\epsilon/(14 s),\delta)$ approximation guarantees. From Lemma~\ref{almostsubmodular:thm} greedy provides the stated approximation ratio.
\end{proof}
Greedy on the median-of-averages oracle can be implemented generically
for any SDM $\mathcal{G}$
by explicitly maintaining the reachability sets $\CReach(\boldsymbol{\phi}_{ij},\{v\}\cup
S)$ for all nodes $v\in V$ in each simulation $\boldsymbol{\phi}_{ij}$ as the greedy
selects nodes into the seed set $S$.
For each step, we compute the oracle value (see \eqref{MoAests:eq}) and
select $v$ for which the value for $\{v\}\cup S$ is maximized:
\[ \arg\max_{v\in V\setminus S} \widehat{{\sf mA}}^\tau(\{v\}\cup S) .\]
We obtain approximation guarantees, however, only when the conditions
of monotone submodular influence function and variance bounds are satisfied.
For
specific families of models, we can consider tailored efficient
implementations that
incrementally maintain reachability sets and values.
For live-edge models with additive
utility~\eqref{additiveval:eq} we consider an implementation of greedy
on a median-of-averages oracle. This can be done by explicit
maintenance of reachability sets or by using
sketches~\cite{ECohen6f,binaryinfluence:CIKM2014,timedinfluence:2015,ECohenADS:TKDE2015}
(see Section~\ref{sketchedave:sec}). We obtain the following bounds
(proof is deferred to Appendix Section~\ref{greedymoa:sec})
\begin{theorem} \label{greedyalg:thm}
Let $\mathcal{G}$ be a live-edge model with an additive
utility function \eqref{additiveval:eq} that satisfies the
variance bound \eqref{factorc:eq}. Then
greedy on median-of-averages oracle can be implemented with explicit reachability sets in time
\begin{equation}
O(\epsilon^{-2} s^3 c \ln
\left(\frac{n}{\delta}\right) \overline{m}n)\ ,
\end{equation}
where $\overline{m}$ is the average number of edges per simulation (For an IC model, $c=\tau$ and $\E[\overline{m}]=\sum_{e \in \mathcal{E}} p_e$).
When using sketches, the time bound is
\begin{equation}
O(\epsilon^{-2}s^3\ln\frac{n}{\delta}(c \overline{m} +s(m^*+ns)\ln n)),
\end{equation}
where $m^* = \sum_v \max_{ij} d_v(E_{ij})$. For an IC model, $c=\tau$ and $m^*=\sum_e p_e$ in expectation.
\end{theorem}
\section*{Conclusion}
We explore the "sample complexity" of IM on stochastic diffusion
models and show that an approximate
maximizer (within a small relative error) can be recovered from a
small number of simulations as long as the variance is appropriately
bounded. We establish the variance bound for the large class of
strongly submodular stochastic diffusion models. This includes IC models (where edges
are drawn independently) and IGT models (where node thresholds are
drawn independently) and natural extensions that allow for some dependencies.
Our sample complexity bound significantly improves over the previous bounds by replacing the linear dependence on the number of nodes with a logarithmic one, at the cost of a linear dependence on the length of the activation paths (which is usually very small).
An interesting question for future work is to address the gap between
the sample complexity and the larger number of simulations currently needed for
greedy maximization.
\subsection*{Acknowledgements}
This research is partially supported by the Israel Science Foundation (Grant No. 1841/14).
\small
\bibliographystyle{plain}
\section{Introduction}
Quadratic number fields were proposed as a setting for public-key
cryptosystems in the late 1980s by Buchmann and Williams
\cite{BWKeyEx,BWKeyExReal}. Those cryptosystems were generalized to number fields of arbitrary degree about a decade later \cite{arb_dim_1,arb_dim_2,arb_dim_3}. Their security relies on the hardness of the discrete logarithm problem and of the principality testing problem. The complexity of the algorithms for solving these problems on a number field $\K$ of discriminant $\Delta$ is bounded by $L(1/2,O(1))$, where the subexponential function is defined as
$$L(\alpha,\beta ) = e^{\beta\left(\log_2|\Delta|\right)^{\alpha}\left(\log_2\log_2|\Delta|\right)^{1-\alpha}}.$$
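As a quick numerical illustration of how much smaller an $L(1/3,\cdot)$ complexity is, the following Python lines compare the natural logarithms of $L(1/2,1)$ and $L(1/3,1)$ for a hypothetical discriminant with $\log_2|\Delta| = 2^{20}$:
\begin{verbatim}
import math

def log_L(alpha, beta, log2_disc):
    # Natural log of the subexponential function defined above.
    ll = math.log2(log2_disc)
    return beta * log2_disc**alpha * ll**(1 - alpha)

log2_disc = 2.0**20
print(log_L(1/2, 1, log2_disc))  # about 4580
print(log_L(1/3, 1, log2_disc))  # about 749: far smaller exponent
\end{verbatim}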
This complexity is asymptotically slower than the one for factoring, which reduces to the problem of computing the class number; and although the discrete logarithm problem in the Jacobian of elliptic curves remains exponential, there is no known reduction between this problem and the discrete logarithm problems in number fields either. Therefore, studying the hardness of the discrete logarithm problem and of the principality testing problem in number fields is of cryptographic interest, since they provide alternative cryptosystems whose security is unrelated to that of the cryptosystems currently in use.\\
In this paper, we exhibit the first infinite class of number fields for which these problems can be solved in expected time bounded by $L(1/3 , O(1))$. We follow the approach of Biasse \cite{biasseL13}, who described a class of number fields on which class group and regulator computation can be done in expected time $L(1/3,O(1))$, and the one of Enge, Gaudry and Thom\'{e} \cite{Enge,Enge2}, who described an algorithm for solving the discrete logarithm problem in complexity $L(1/3 , O(1))$ in certain algebraic curves.\\
\section{Number fields}
Let $\K$ be a number field of degree $n$, $\theta\in\K$, and $T(X) = \sum_{i\leq n} t_iX^i\in\Z[X]$ such that
$$\K = \Q[X]/T(X) = \Q(\theta).$$
We denote by $\mathcal{O}_{\mathbb{K}}$ its maximal order and by $Cl(\mathcal{O}_{\mathbb{K}})$ the ideal class group of $\mathcal{O}_{\mathbb{K}}$. The ideal class group of an order is a finite group whose cardinality, denoted by $h(\mathcal{O}_{\mathbb{K}})$, is unknown to both parties in number field cryptosystems. Solving the discrete logarithm problem with respect to $\mathfrak{a}$ and $\mathfrak{b}\in Cl(\mathcal{O}_{\mathbb{K}})$ consists of finding $x\in\Z$ such that
$$\mathfrak{b} = \mathfrak{a}^x.$$
The principality testing problem with respect to an ideal $I$ of $\mathcal{O}_{\mathbb{K}}$ consists of deciding if there exists $\alpha\in\mathcal{O}_{\mathbb{K}}$ such that
$$I = (\alpha),$$
and if so, computing $\alpha$. Direct computation of $\alpha$ in subexponential time is impossible because of the size of its coefficients, thus obliging us to give a compact representation of this value, that is to say a vector $\overrightarrow{v}=(v_1,\hdots,v_k)$ and $\gamma_1,\hdots,\gamma_k\in\K$ satisfying
$$\alpha = \gamma_1^{v_1}\hdots\gamma_k^{v_k}.$$
In number fields of fixed degree (typically when the dimension is 2), these problems can be solved in subexponential time. The strategy described in \cite{Bsub} consists of defining a factor base $\mathcal{B}$ containing the primes of norm bounded by an integer $B$ and reducing random power-products $\p_1^{e_1}\hdots\p_g^{e_g}$ of elements $\p_i\in\mathcal{B}$ until an equivalent $\mathcal{B}$-smooth ideal is found. Whenever this occurs, we can derive a row of the so-called relation matrix, which after a suitable linear transformation yields the structure of $Cl(\mathcal{O}_{\mathbb{K}})$ and enables us to solve instances of the discrete logarithm problem and of the principal ideal problem. If the degree is no longer assumed to be fixed, then every reduction step is exponential in the degree of $\K$ since it uses the LLL algorithm \cite{LLL}.
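To make this index-calculus strategy concrete, the toy Python sketch below collects multiplicative relations over the rational integers modulo an integer, the simplest analogue of the ideal-theoretic setting; all names and bounds are illustrative only:
\begin{verbatim}
import random
from sympy import primerange, factorint

def collect_relations(modulus, B, count):
    # Factor base: rational primes up to the smoothness bound B.
    base = list(primerange(2, B + 1))
    relations = []
    while len(relations) < count:
        e = [random.randrange(0, 5) for _ in base]
        x = 1
        for p, ei in zip(base, e):
            x = x * pow(p, ei, modulus) % modulus
        if x <= 1:
            continue
        f = factorint(x)        # test the reduced element for smoothness
        if all(p <= B for p in f):
            # prod p^(e_i - f_i) = 1 (mod modulus): one relation row
            relations.append([ei - f.get(p, 0)
                              for p, ei in zip(base, e)])
    return relations
\end{verbatim}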
\section{Main idea}
Let $d := \max_i \left\lbrace \log_2|t_i|\right\rbrace$; we require that
\begin{align}\label{cond_n}
&n= n_0\log_2\left( |\Delta|\right)^{\alpha}(1+o(1))\\
& d= d_0\log_2\left( |\Delta|\right)^{1-\alpha}(1+o(1)),\label{cond_d}
\end{align}
for some $\alpha\in\left[ \frac{1}{3},\frac{2}{3}\right[$, and some constants $n_0$ and $d_0$. We define $\kappa:=n_0d_0$. We also denote by $s$ the number of real places, by $t$ the number of complex places and we define $r:=t+s-1$. We also require that $\Z[\theta] = \mathcal{O}_{\mathbb{K}}$.
\subsubsection*{Example}
Let $\Delta\in\Z$, and $\K_{n,K}$ be an extension of $\Q$ defined by a polynomial of the form:
$$T(X) = X^n - K,$$
with
\begin{align*}
&\log_2 K = \left\lfloor \log_2\left( |\Delta|\right)^{1-\alpha}\right\rfloor\\
& n= \left\lfloor\log_2\left( |\Delta|\right)^{\alpha}\right\rfloor,
\end{align*}
for some $\alpha\in\left[ \frac{1}{3},\frac{2}{3}\right[$. Then, $\mathcal{O}_{\K_{n,K}}$ has discriminant satisfying:
$$\log_2(\text{Disc}(\mathcal{O}_{\K_{n,K}}))=\log_2( n^{n}K^{n-1}) = \log_2(|\Delta|) (1+o(1)).$$
If in addition we require that $n$ and $K$ be the largest prime numbers below their respective bounds such that:
$$n^2 \nmid K^{n-1}-1,$$
then we meet the last restriction $\Z[\theta] = \mathcal{O}_{\K_{n,K}}$ (for a proof, see \cite{cohen}, Chapter 6 \textsection 1).\\
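As an illustration, a possible search for such a pair $(n, K)$ can be scripted with \texttt{sympy}; the following sketch (with a placeholder target size \texttt{log2\_disc}) only decrements $K$ when the divisibility condition fails, which is enough for illustration purposes:
\begin{verbatim}
from sympy import prevprime

def example_field(log2_disc, alpha):
    # Largest primes below the prescribed bounds such that
    # n^2 does not divide K^(n-1) - 1, so that Z[theta] = O_K.
    n = prevprime(int(log2_disc**alpha) + 1)
    K = prevprime(int(2**(log2_disc**(1 - alpha))) + 1)
    while pow(K, n - 1, n * n) == 1:   # i.e. n^2 | K^(n-1) - 1
        K = prevprime(K)
    return n, K

print(example_field(64.0, 1/3))
\end{verbatim}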
In \cite{biasseL13}, it is shown that the computation of the group structure of $Cl(\mathcal{O}_{\mathbb{K}})$ and of the regulator of $\mathcal{O}_{\mathbb{K}}$ with a number of bits of precision in $L(1/3,O(1))$ can be achieved in expected time $L(1/3,O(1))$ under some assumptions that we will specify in the following. The main idea is to use sieving-based techniques to create relations of the form
$$(\phi) = \p_1^{e_1}\hdots\p_n^{e_n},$$
where $\phi\in\mathcal{O}_{\mathbb{K}}$ and the $\p_i$ are non inert prime ideals of norm bounded by a certain integer $B$; we denote this set by $\mathcal{B}$. Every time such a relation is found, the vector
$$(e_1,\hdots,e_n,\log|\phi|_1,\hdots,\log|\phi|_r)$$
is added as a row of the relation matrix $M$, which has the following shape
\[M=
\left(
\begin{BMAT}[2pt,3cm,1cm]{c.c}{c}
M_{\Z} & M_{\R}
\end{BMAT}
\right). \]
Then, provided the rows of $M$ generate the whole lattice of relations, the Smith normal form of $M_{\Z}$ yields the group structure of $Cl(\mathcal{O}_{\mathbb{K}})$ whereas its kernel yields $R$.\\
Now, given two ideals $\mathfrak{a}$ and $\mathfrak{b}$ such that $\exists x\in\Z\ \bg = \ag^x$, computing their discrete logarithm can be done by decomposing them over $\mathcal{B}$,
\begin{align*}
\mathfrak{a} &= \p_1^{e_1}\hdots \p_n^{e_n} \\
\mathfrak{b} &= \p_1^{f_1}\hdots \p_n^{f_n},
\end{align*}
and performing a linear algebra phase consisting of solving one linear system. Likewise, if we need to test the principality of an ideal $I$ and compute $\alpha$ such that $I = (\alpha)$, then it suffices to find $b:=[e_1,\hdots,e_n]$ such that
$$I = \p_1^{e_1}\hdots\p_n^{e_n}.$$
$I$ is principal if and only if $b$ belongs to the lattice of relations. We thus solve $XM_{\Z} = b$ and derive $\alpha$ from the coefficients of $X$ and the generators $\phi_i$ of the relations of $M$. We thus see here that solving the discrete logarithm problem and testing the principality rely on our ability to decompose an arbitrary ideal into a power product of elements of $\mathcal{B}$.\\
To do this, we follow the approach of Enge, Gaudry and Thom\'{e} for algebraic curves \cite{Enge,Enge2} involving a $Q$-descent strategy. Given an ideal $I$, it consists of decomposing it as a power product of prime ideals (not necessarily in $\mathcal{B}$), and then decomposing those primes as power products of primes of lower norm until we only have prime ideals of norm bounded by $B$.
\section{Relation matrix}
Let $\rho$ be a constant to be determined later, and $B$ a smoothness bound satisfying:
$$B = \lceil L(1/3,\rho)\rceil.$$
We define the factor base $\mathcal{B}$ as the set of all non inert prime ideals of norm bounded by $B$. This factor base has cardinality:
$$N:=|\mathcal{B}| = L(1/3,\rho+o(1)).$$
The sieving phase consists of enumerating $\phi\in\mathcal{O}_{\mathbb{K}}$ of the form
$$\phi = A(\theta),$$
with $A(X)\in\Z[X]$ of degree $k$, whose coefficients $a_i$ have their logarithms bounded by an integer $a$; there exist two constants $\delta$ and $\nu$, to be determined later, satisfying:
\begin{align}
a &\leq \left\lceil \delta \frac{\kappa\log_2|\Delta|/n}{(\log_2|\Delta|/\mathcal{M})^{1/3}} \right\rceil \label{bound_a}\\
k &\leq \left\lceil \nu \frac{n}{(\log_2|\Delta|/\mathcal{M})^{1/3}} \right\rceil, \label{bound_k}
\end{align}
with $\mathcal{M}:=\log_2\log_2|\Delta|$. Landau-Mignotte's theorem \cite{mignotte} states that if $D\mid T$ with $\deg D=m$, then the coefficients $d_j$ of $D$ satisfy $|d_j|\leq 2^{m-1}(|T| + t_n)$, where $|T|$ is the Euclidean norm of the vector of the coefficients of $T$. Applying this to $D = X-\sigma_i(\theta)$ and $m=1$ allows us to obtain:
$$\log(|\theta|_i)\leq \log(|T|+t_n) \in O(\log\left( |\Delta|\right) ^{1-\alpha}),$$
for $i\leq r$. From $\phi = A(\theta)$, and $a$ and $k$ respectively bounded by (\ref{bound_a}) and (\ref{bound_k}), we have
$$\log|\phi|_i \leq O(\log\left( |\Delta|\right) ^{2/3}\M^{1/3}).$$
We can thus derive a bound on the maximum absolute value $|M_\Z|$ of a coefficient of $M_\Z$.
\begin{proposition}\label{bound_B}
$ |M_{\Z}|$ satisfies:
$$ |M_{\Z}| = O(\left( \log_2|\Delta|\right) ^{2/3}\left( \log_2\log_2|\Delta|\right) ^{1/3}).$$
\end{proposition}
During the relation collection phase, we collect $N+Kr$ relations, where $K$ is a constant. We rely on the following heuristic to make sure that we generate the full lattice of relations.
\begin{heuristic}\label{heuristic:lattice}
The $N+Kr$ relations collected this way generate the full lattice of relations.
\end{heuristic}
\section{Smoothness}\label{sec:smooth}
We need to evaluate the probability that ideals are smooth with respect to $\mathcal{B}$. Let $\psi_{\mathcal{I}}(\iota,\mu)$ be the number of ideals $I$ with $\log\mathcal{N}(I)\leq \iota$ which are smooth with respect to the set of primes $\p$ satisfying $\log\mathcal{N}(\p)\leq \mu$, and let $\psi(x,y)$ be the number of integers of logarithm bounded by $x$ that are smooth with respect to primes of logarithm bounded by $y$. The function $\psi$ was first studied by Canfield, Erd\"{o}s and Pomerance \cite{Canfield}. We need to make the following assumption on the smoothness of ideals.
\begin{heuristic}\label{heuristic:norm}
We assume that
\begin{equation}\label{eq:smooth_ideal}
\frac{\psi(\iota,\mu)_{\mathcal{I}}}{e^\iota}\geq\exp\left( -u \left( \log_2 u + \log_2\log_2 u -1 + O\left( \frac{\log_2\log_2 u}{\log_2 u}\right) \right) \right),
\end{equation}
for $u = \iota / \mu$. In addition, assume that $\mathcal{N}(\phi)$ behaves like a random number whose logarithm satisfies
$$\log_2(\mathcal{N}(\phi))\leq \iota:=\kappa\log_2\left( |\Delta|\right) ^{2/3}\mathcal{M}^{1/3}(\delta + \nu + o(1) ),$$
and whose distribution is given by
\begin{equation}\label{eq:smooth_principal}
\frac{\psi(\iota,\mu)}{e^\iota}\geq\exp\left( -u \left( \log_2 u + \log_2\log_2 u -1 + O\left( \frac{\log_2\log_2 u}{\log_2 u}\right) \right) \right).
\end{equation}
\end{heuristic}
The assertion concerning $\psi_{\mathcal{I}}$ can be proved in the quadratic case \cite{seysen} but remains conjectural for arbitrary $n$ \cite{Bsub}. In the context of curves, Enge, Gaudry and Thom\'{e} used a theorem due to Hess to derive the equivalent of \eqref{eq:smooth_ideal} for divisors in the Jacobian of a curve, but had to use a similar heuristic for \eqref{eq:smooth_principal}.
Using \cite{Canfield}, and carrying out the same computation as in the proof of theorem 1 of \cite{Enge,Enge2}, one readily shows the following result on the probability of finding a relation:
\begin{proposition}\label{smoothness}
Let :
\begin{align*}
\iota&= \lfloor\log_2 L(\phi , c)\rfloor = \lfloor c\log_2\left( |\Delta|\right) ^{\phi}\M^{1-\phi}\rfloor \\
\mu&= \lceil\log_2 L(\beta,d)\rceil= \lceil d\log_2\left( |\Delta|\right) ^{\beta}\M^{1-\beta}\rceil,
\end{align*}
then we have
\begin{align*}
\frac{\psi_{\mathcal{I}}(\iota,\mu)}{e^{\iota}}& \geq L\left( \phi-\beta,\frac{-c}{d}(\phi-\beta)+o(1)\right)\\
\frac{\psi(\iota,\mu)}{e^{\iota}} & \geq L\left( \phi-\beta,\frac{-c}{d}(\phi-\beta)+o(1)\right)
\end{align*}
\end{proposition}
Proposition \ref{smoothness} allows us to bound the expected time for finding a $\mathcal{B}$-smooth ideal. In \textsection \ref{sec:decomp}, we show how to decompose prime ideals of the form $p\mathcal{O}_{\mathbb{K}} + (\theta - v_p)\mathcal{O}_{\mathbb{K}}$ over a set of prime ideals of the same form with a smaller norm. In the general case, prime ideals can have a residue degree $f\geq 2$ and thus be of the form $p\mathcal{O}_{\mathbb{K}} + T_p(\theta)\mathcal{O}_{\mathbb{K}}$ where $\deg(T_p) = f$. However, it can be shown that the primes of degree $f\geq 2$ have Dirichlet density 0, allowing us to consider that $\mathcal{B}$-smooth decompositions involving only degree-one primes occur with the same probability as in Proposition \ref{smoothness}. A proof of this result can be found in Chapter IV, Proposition 4.5 of \cite{janusz} for example.
Proposition \ref{smoothness} with parameters $\beta = \frac{1}{3}$, $ d = \rho$, $\phi = \frac{2}{3}$ and $c = \kappa(\delta+\nu+o(1))$
shows that the expected number of trials to obtain a relation is at most $L\left( 1/3,\frac{\kappa(\nu+\delta)}{3\rho}+o(1)\right)$. Since the factor base has size $N\in O\left( L(1/3,\rho)\right) $, the complexity of the relation collection phase with respect to the parameters $\rho,\nu,\kappa,\delta$ is in
$$L\left( 1/3,\frac{\kappa(\nu+\delta)}{3\rho}+\rho+o(1)\right).$$
These parameters are chosen to ensure that the overall time is optimal. The linear algebra phase is polynomial in the dimension of $M$, which is given by $L(1/3,\rho + o(1))$. We need to compute the regulator, which can be done in expected time $L(1/3,3\rho + o(1))$ provided the bit precision is also bounded by $L(1/3,3\rho + o(1))$ (see \cite{biasseL13}). It is shown in \cite{JaSto} that linear systems of the form $X M_\Z = b$ can be solved in time
$$O\left( N^3(\log_2 n + \log_2|M_\Z|)^2\right),$$
where $|M_\Z|$ is the largest absolute value of a coefficient of $M_\Z$. The computation of a discrete logarithm in $Cl(\mathcal{O}_{\mathbb{K}})$ with Vollmer's method \cite{Vdl} is done by solving a system of the form $XM_\Z' = \overrightarrow{v}$, where $M_\Z'$ is $M_\Z$ augmented with two extra rows whose coefficients are proved to be bounded by $e^{o(\log_2|\Delta|^{1/3}\mathcal{M}^{2/3})}$ in \textsection \ref{sec:DLP}. The linear algebra phase thus has a complexity bounded by $L(1/3,3\rho + o(1))$. We emphasize here that we do not need to compute the group structure of $Cl(\mathcal{O}_{\mathbb{K}})$, thus avoiding the computation of the Hermite normal form of $M_\Z$. We can prove that the optimal strategy is to spend the same amount of time on the relation collection and on the linear algebra. Therefore, the parameters must satisfy
\begin{equation}\label{constr1}
\kappa\nu\delta = 3\rho.
\end{equation}
In addition, the number of $\phi$ in the search space is in $O\left( L(1/3 , \nu\delta\kappa)\right)$. We thus have the additional constraint on the parameters
\begin{equation}\label{constr2}
\nu\delta\kappa = \frac{\kappa(\nu+\delta)}{3\rho}+\rho,
\end{equation}
ensuring that the search space is large enough to yield the $N+Kr$ relations. From \eqref{constr1} and \eqref{constr2}, we obtain
\begin{align*}
\nu\delta &= \frac{3\rho}{\kappa} \\
\nu + \delta &= \frac{6\rho^2}{\kappa}.
\end{align*}
Thus, $\delta$ and $\nu$ are roots of the polynomial
$$X^2 - \frac{6\rho^2}{\kappa}X + \frac{3\rho}{\kappa}.$$
These roots exist provided we have
$$\rho\geq \sqrt[3]{\frac{\kappa}{3}}.$$
The optimal choice is to minimize $\rho$, thus fixing the parameters $\delta$ and $\nu$:
$$\delta = \nu = \sqrt{\frac{3\rho}{\kappa}}.$$
The total running time becomes $L(1/3 ,c + o(1))$, with
$$c = 3\rho= \sqrt[3]{9\kappa}.$$
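For a concrete feel of these constants, the following short Python lines evaluate the optimal parameters for the illustrative choice $\kappa = 1$ (the value of $\kappa$ is a placeholder):
\begin{verbatim}
kappa = 1.0
rho = (kappa / 3.0) ** (1.0 / 3.0)        # minimal admissible rho
delta = nu = (3.0 * rho / kappa) ** 0.5   # double root of the quadratic
c = 3.0 * rho                             # = (9*kappa)^(1/3)
print(rho, delta, c)                      # 0.693..., 1.442..., 2.080...
\end{verbatim}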
\section{Decomposition over $\mathcal{B}$}\label{sec:decomp}
Assuming Heuristics \ref{heuristic:lattice} and \ref{heuristic:norm}, we can study the complexity of the $Q$-descent. In what follows, we show how to decompose an ideal as a power product of elements of $\mathcal{B}$ starting with a lemma allowing us to find integers $\alpha_1,\hdots,\alpha_{k+1}$ minimizing $\sum_i\alpha_i v_i$ for some $v_i$.
\begin{lemma}\label{lemma:smooth}
Let $v_1,\hdots,v_{k+1}$ be integers satisfying $\log|v_i|\leq D$ for some integers $D$ and $k$ defined by
$$k := \left\lfloor \sigma \frac{n}{(\log_2|\Delta|/\mathcal{M})^{1/3-\tau/2}}\right\rfloor\ \ D := \log_2\left( L(1/3+\tau,c)\right) ,$$
where $\sigma ,\tau, c > 0$. Then for any integer $z$, there exist at least $2^{kz}$ $(k+1)$-tuples $(\alpha_1,\hdots,\alpha_{k+1})$ satisfying
\begin{align*}
\log_2|\alpha_i|&\leq D/k + z \\
\log_2\left|\sum_i \alpha_i v_i \right| &\leq D/k + z.
\end{align*}
\end{lemma}
\begin{proof}
Let us define the $k+1$ dimensional lattice $\Lambda$ generated by the rows of
\[ A:=\left( \begin{array}{ccccc}
1 & 0 & \hdots & 0 & v_1 \\
0 & 1& \ddots &\vdots & \vdots \\
\vdots & \ddots & \ddots & 0 & \vdots \\
0 & \hdots & 0& 1 & v_{k+1}\\
\end{array} \right).\]
For any element $x\in\Lambda$, there exist $(\alpha_1,\hdots,\alpha_{k+1})\in\Z^{k+1}$ such that
$$x = (\alpha_1,\hdots,\alpha_{k+1},\sum_i\alpha_i v_i).$$
The determinant $d(\Lambda)$ of $\Lambda$ satisfies
$$d(\Lambda)= \sqrt{\det\left( AA^T\right)}= \sqrt{1+\sum_{i\leq k+1}v_i^2}\leq\left( \sqrt{2k+1}\right) 2^D.$$
Let $X\subset\R^{k+2}$ be the symmetric and convex set of points defined by
$$X = \left\lbrace (x_1,\hdots,x_{k+2})\mid \forall i\ |x_i| \leq 2^{D/k + z}\right\rbrace .$$
The volume $V(X)$ equals $2^{k+2}\cdot 2^{(k+2)(D/k+z)}$, and from Theorem II of III.2.2 in \cite{cassel} we know that if
$$V(X) > m 2^{k+2}d(\Lambda),$$
then $X$ intersects $\Lambda$ in at least $m$ pairs of points $\pm x\in\R^{k+2}$. It thus suffices to prove that
$$2^{kz} < \frac{2^{(k+2)(\frac{D}{k}+z)}}{\sqrt{2k+1}\,2^D} = 2^{kz}\cdot\frac{2^{2\frac{D}{k} + 2z}}{\sqrt{2k+1}},$$
which is satisfied since
$$\frac{D}{k} = \frac{c}{\sigma}\left(\log_2|\Delta|\right)^{2/3 - \alpha +\tau/2}\left(\log_2\log_2|\Delta|\right)^{1/3-\tau/2}\gg \log_2 (2k+1).$$
\end{proof}
Using Lemma \ref{lemma:smooth}, we can state the analogue of Theorem 8 in \cite{Enge2}. Note that the proof we give follows \cite{Enge2} almost verbatim, the main difference being the use of Lemma \ref{lemma:smooth}.
\begin{theorem}\label{theo:smooth}
Assuming Heuristic \ref{heuristic:norm}, we can decompose any ideal $I$ of $\mathcal{O}_{\mathbb{K}}$ into a power product of elements of $\mathcal{B}$ in time
$$L(1/3 , b + \varepsilon),$$
with $b = \sqrt[3]{\frac{24\kappa}{9}}$ and any $\varepsilon > 0$.
\end{theorem}
\begin{proof}
Let $I$ be an ideal of norm bounded by $|\Delta|$. We can assume this without loss of generality since any class of $Cl(\mathcal{O}_{\mathbb{K}})$ contains an ideal of norm bounded by $(2/\pi)^s\sqrt{|\Delta|}$. Let $I = u\mathcal{O}_{\mathbb{K}} + (\theta - v)\mathcal{O}_{\mathbb{K}}$ be an ideal of norm bounded by $ L(1/3+\tau,c)$ for some $c>0$ and $0\leq \tau \leq 2/3$; the ideal we start with has $\tau = 2/3$ and $c=1$. We search for an $L(1/3 + \tau/2,c')$-smooth $\phi\in I$ for a $c'$ depending on $c$. Such a $\phi$ satisfies $I\mid (\phi)$ and thus $I$ can be decomposed as a power product of the prime ideals involved in the decomposition of $(\phi)$. We repeat this process until we obtain a decomposition only involving elements of $\mathcal{B}$. At each stage, we consider $\phi$ belonging to the lattice of polynomials of degree bounded by
$$k := \left\lfloor \sigma \frac{n}{(\log_2|\Delta|/\mathcal{M})^{1/3-\tau/2}}\right\rfloor,$$
where $\sigma > 0$ is a constant to be determined later. These $\phi$ form a $\Z$-lattice generated by
$$(v_0,\theta - v_1 , \hdots , \theta^k - v_k),$$
with $v_0 = u$ and $v_i = v^i\mod u$ for $i\geq 1$. We want to spend the same time $L(1/3,e+o(1))$ at each smoothing step, for $e>0$ to be optimized later. The sieving space has to be of the same size. We thus look for $L(1/3,e+o(1))$ distinct $(k+1)$-tuples $(\alpha_1,\hdots,\alpha_{k+1})\in\Z^{k+1}$. Using Lemma \ref{lemma:smooth}, we prove that for every integer $z$, we can find $2^{kz}$ such tuples satisfying $\log_2|\alpha_i|\leq D/k + z$ for $i\leq k+1$ and $\log_2\left|\sum_i \alpha_i v_i \right| \leq D/k + z$. We adjust the value of $z$ to make sure that all the $L(1/3,e+o(1))$ tuples obtained during the sieving phase satisfy this property by solving $2^{kz} = L(1/3, e + o(1))$. This yields
$$z = \frac{1}{n}\log_2 L(2/3 - \tau/2,e/\sigma + o(1)).$$
Carrying out the same computation as in \cite{Enge,Enge2}, we can prove that the norm of the $\phi$ we create during the sieving phase satisfies
$$\mathcal{N}(\phi)\leq L(2/3 + \tau/2 , (c+e)/\sigma + \sigma\kappa + o(1)).$$
From Heuristic \ref{heuristic:norm} and Proposition \ref{smoothness} we expect to find at least one $ L(1/3+\tau/2 ,c')$-smooth $\phi$ for
$$c' = \frac{1}{3e}((c+e)/\sigma + \sigma\kappa).$$
This quantity is minimised by $\sigma = \sqrt{(c+e)/\kappa}$ which yields
$$c' = \frac{2\sqrt{\kappa}}{3e}\sqrt{c+e}.$$
Starting with $\tau_0 = 2/3$ and $c_0 = 1$, we obtain a power-product of primes of norm bounded by $L(1/3+\tau_1 , c_1)$ with $\tau_1 = 1/3$ and $c_1 = 2\sqrt{\kappa(c_0+e)}/3e$. After $i$ steps we obtain an $L(1/3 + 1/(3\cdot 2^{i-1}),c_i)=L(1/3,c_i\mathcal{M}^{\frac{1}{3\cdot 2^{i-1}}})$-smooth decomposition, where
$$\tau_i = \frac{1}{3\cdot 2^{i-1}},\ \ c_i = \frac{2\sqrt{\kappa}}{3e}\sqrt{c_{i-1}+e}.$$
The sequence $c_i$ converges to a finite limit $c_\infty$ given by
$$c_\infty = \frac{\chi}{2}\left( \chi + \sqrt{\chi^2 + 4e}\right) ,$$
where $\chi = 2\sqrt{\kappa}/3e$. Let $\xi>0$ be an arbitrary constant. After a number of steps only depending on $e$, $\kappa$ and $\xi$, we have $c_i < c_\infty(1+\xi)$, and after $O(\log_2\log_2|\Delta|)$ steps $ \mathcal{M}^{\frac{1}{3\cdot 2^{i-1}}} < (1+\xi)$. We can thus decompose $I$ as a power-product of prime ideals of norm bounded by
$$L(1/3 , c_\infty (1+\xi)).$$
As each node of the descent tree has arity bounded by $\log_2|\Delta|$, the number of nodes in the tree is in $L(1/3,o(1))$ and the complexity of the algorithm is in $L(1/3,e+o(1))$. As we want to decompose $I$ as a power product of primes of norm bounded by $L(1/3,\rho)$, we compute the effort needed to reach $c_\infty = \rho$. As in \cite{Enge,Enge2}, we write $9e^{3} = E\kappa$ for a constant $E$ to be determined later. The equation $\rho = c_\infty$ simplifies as
$$\left( \frac{3}{E}\right)^{1/3} = \frac{2}{E}(1+ \sqrt{1+E}).$$
The least non-negative solution is $E_0 = 24$ (one checks that $(3/24)^{1/3} = 1/2 = (2/24)(1+\sqrt{25})$), which yields
$$e = \sqrt[3]{\frac{24\kappa}{9}}=:b.$$
\end{proof}
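The constant $E_0$ appearing in the proof can be checked by direct evaluation; the following Python lines verify that $E = 24$ satisfies the defining equation exactly:
\begin{verbatim}
E = 24.0
lhs = (3.0 / E) ** (1.0 / 3.0)                 # (1/8)^(1/3) = 0.5
rhs = (2.0 / E) * (1.0 + (1.0 + E) ** 0.5)     # (1/12)*(1+5) = 0.5
print(lhs, rhs)                                # both 0.5: E_0 = 24
\end{verbatim}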
The time taken to decompose an ideal over $\mathcal{B}$ is subexponential with a constant $b + \varepsilon$ strictly lower than the one minimizing the time taken by the relation collection and the linear algebra (see \textsection \ref{sec:smooth}). Therefore, there is no need for a more elaborate optimization of the parameters encapsulating the time for decomposing an ideal over $\mathcal{B}$.
\section{Discrete Logarithm algorithm and principality testing}\label{sec:DLP}
We follow the approach of Vollmer in quadratic fields \cite{Vdl} to compute discrete logarithms without computing the group structure of $Cl(\mathcal{O}_{\mathbb{K}})$. Given two ideals $\ag$ and $\bg$ such that there exists an integer $x$ satisfying $\bg = \ag^x$, we wish to compute $x$. We enlarge the factor base with $\ag$ and $\bg$ and let $\mathcal{B}' = \mathcal{B}\cup\left\lbrace \ag,\bg\right\rbrace $. Then we use the methods of \textsection \ref{sec:decomp} to decompose $\ag$ and $\bg$ over $\mathcal{B}$, thus creating two extra relations over $\mathcal{B}'$
\begin{equation}\label{eq:decomp_ab}
\p_1^{e_1}\hdots\p_N^{e_N}\ag=\alpha_\ag,\ \ \ \p_1^{f_1}\hdots\p_N^{f_N}\bg=\alpha_\bg.
\end{equation}
Then we construct the extended relation matrix
\[ M_\Z' := \left(
\begin{BMAT}(@)[2pt,1.5cm,1.5cm]{c.c}{c.c}
\begin{BMAT}(2pt,1cm,1cm){c}{c}
M_\Z
\end{BMAT} &
\begin{BMAT}(2pt,0.5cm,1cm){c}{c}
(0)
\end{BMAT} \\
\begin{BMAT}(2pt,1cm,0.5cm){c}{cc}
\overrightarrow{v_{\mathfrak{b}}} \\
\overrightarrow{v_{\mathfrak{a}}}
\end{BMAT} &
\begin{BMAT}(2pt,0.5cm,0.5cm){cc}{cc}
1 & 0 \\
0 & 1
\end{BMAT}
\end{BMAT}
\right),
\]
where $\overrightarrow{v_{\mathfrak{a}}} = (e_1,\hdots,e_N)$ and $\overrightarrow{v_{\mathfrak{b}}} = (f_1,\hdots,f_N)$. The relation $\bg = \ag^x$ corresponds to the row vector $\overrightarrow{v_{x}}:=(0,\hdots,1,-x)$, which is a combination of the rows of $M_\Z'$ under Heuristic \ref{heuristic:lattice}. Therefore, there exists $X=(x_1,\hdots,x_{N + Kr + 2})$ such that $XM_\Z' = \overrightarrow{v_{x}}$. In particular $x_{N+Kr+2} = -x$. We can thus obtain $x$ by solving $XA = \overrightarrow{v}$ where
\[ A := \left(
\begin{BMAT}(@)[1pt,1cm,1cm]{c.c}{c.c}
\begin{BMAT}(2pt,1cm,1cm){c}{c}
M_\Z
\end{BMAT} &
\begin{BMAT}(1pt,0.25cm,0.75cm){c}{c}
(0)
\end{BMAT} \\
\begin{BMAT}(1pt,0.75cm,0.25cm){c}{cc}
\overrightarrow{v_{\mathfrak{b}}} \\
\overrightarrow{v_{\mathfrak{a}}}
\end{BMAT} &
\begin{BMAT}(1pt,0.75cm,0.25cm){c}{cc}
1 \\
0
\end{BMAT}
\end{BMAT}
\right)\ \ \text{and}\ \ \overrightarrow{v} = (0,\hdots,0,1).
\]
It is shown in \cite{JaSto} that the complexity of this step is in
$$O\left( N^3(\log_2 n + \log_2|A|)^2\right),$$
where $|A| = \max|a_{ij}|$. We already know a bound on the norm of the coefficients of $M_\Z$, but we still have to bound those of $\overrightarrow{v_{\mathfrak{a}}}$ and $\overrightarrow{v_{\mathfrak{b}}}$.
\begin{lemma}\label{lemma:size_coeff}
The coefficients of the decompositions \eqref{eq:decomp_ab} of $\ag$ and $\bg$ over $\mathcal{B}$ are bounded by $O\left(\log_2|\Delta|^{\log_2\log_2|\Delta|}\right)$.
\end{lemma}
\begin{proof}
At each smoothing step, an ideal $I$ whose norm satisfies $\log_2\mathcal{N}(I)\in O(\log_2|\Delta|)$ is smoothed. The largest possible exponent $e$ in this decomposition occurs if $I = \p_1^e$ with $\mathcal{N}(\p_1)=2$, thus yielding
$$e\leq \log_2\mathcal{N}(I)\in O(\log_2|\Delta|).$$
The depth of the tree is bounded by $O\left( \log_2\log_2|\Delta|\right)$, so the size of the maximal coefficient occurring in the decomposition of $\ag$ and $\bg$ is bounded by $O\left(\log_2|\Delta|^{\log_2\log_2|\Delta|}\right)$.
\end{proof}
We know from Proposition \ref{bound_B} that $ |M_{\Z}| = O(\left( \log_2|\Delta|\right) ^{2/3}\left( \log_2\log_2|\Delta|\right) ^{1/3})$, allowing us to conclude that the overall expected time of the discrete logarithm algorithm is bounded by $L(1/3,3\rho + o(1))$ where $\rho = \sqrt[3]{\frac{\kappa}{3}}.$
\begin{proposition}
Let $\ag$ and $\bg$ be ideals such that there exists $x\in\Z$ satisfying $\bg = \ag^x$. Under Heuristics \ref{heuristic:lattice} and \ref{heuristic:norm}, the expected time to compute $x$ is in
$$L\left( 1/3 , 3\rho +o(1)\right),$$
where $\rho = \sqrt[3]{\frac{\kappa}{3}}$.
\end{proposition}
Now let us study how we can decide whether a given arbitrary ideal $I$ is principal, and if so compute
$\alpha$ such that $I = (\alpha)$. To this end, we first decompose $I$ over $\mathcal{B}$ using Theorem \ref{theo:smooth}. We thus obtain a vector $b\in\Z^N$ representing the decomposition of $I$ over $\mathcal{B}$. As we assume Heuristic \ref{heuristic:lattice}, $b$ belongs to the lattice spanned by the rows of $M_\Z$ if and only if $I$ is principal. Therefore, solving $XM_\Z = b$ allows us to decide whether $I$ is principal.
Using the same strategy as for the analysis of the discrete logarithm problem algorithm, we can prove that this step has complexity $L(1/3,3\rho + o(1))$.
\begin{proposition}
Under Heuristics \ref{heuristic:lattice} and \ref{heuristic:norm}, the expected time to decide if $I$ is principal and to compute a compact representation of $\alpha$ such that $I = (\alpha)$ is bounded by
$$L\left( 1/3 , 3\rho +o(1)\right),$$
where $\rho = \sqrt[3]{\frac{\kappa}{3}}$.
\end{proposition}
\label{sec:int}
Giant extrasolar planets that orbit their host stars at distances shorter than $\approx$ 1 AU but farther away than the hot-Jupiter pile-up at $\approx$ 0.1 AU are termed ``warm'' giants. They have been efficiently discovered by radial velocity (RV) surveys \citep[e.g.,][]{hebrard16,jenkins17}, and have a wide distribution of eccentricities, with a median of $\approx0.25$. The origin of these eccentricities is a topic of active research because
the migration of planets through interactions with the protoplanetary disc predicts circular orbits \citep{dunhill:2013}, while planet-planet scattering after disc dispersal at typical warm giant orbital distances should usually generate planet collisions rather than high-eccentricity excitation \citep{petrovich:2014}.
Transiting giants are key for constraining theories of orbital evolution of exoplanets. Besides providing the true mass of the planet, follow-up observations can be carried out to constrain the sky-projected spin-orbit angle (obliquity) of the system, which is a tracer of the migration history of the planet \citep[e.g.,][]{zhou:2015, esposito:2017, mancini:2018}. While the obliquity for hot giant ($P < 10$ d) systems can be affected by strong tidal interactions \citep{triaud:2013,dawson:2014}, the periastra of warm giants are large enough that significant changes in the spin of the outer layers of the star are avoided, and thus the primordial obliquity produced by the migration mechanism should be conserved.
Unfortunately, the number of known transiting warm giants around nearby stars is still very low. In addition to the scaling of the transit probability as $a^{-1}$, the photometric detection of planets with $P > 10$ days requires
a high duty cycle, which puts strong limitations on the ability of ground-based wide-angle photometric surveys \citep[e.g.,][]{bakos:2004,pollacco:2006,bakos:2013} to discover warm giants. From the total of $\approx 250$ transiting giant planets detected from the ground, only 5 have orbital periods longer than 10 d \citep{kovacs:2010,hatp17,wasp117,brahm:2016:hs17,wasp130}. On the other hand, the \textit{Kepler} and CoRoT space missions found dozens of warm giants \citep[e.g. ][]{corot9,corot10,dawson:2012,borsato:2014}, but orbiting mostly faint stars, for which detailed follow-up observations are very challenging.
Due to their relatively low equilibrium temperatures ($\ensuremath{T_{\rm eq}} < 1000$ K), transiting warm giants are important objects for characterizing the internal
structure of extrasolar giant planets since their atmospheres are not subject to the yet unknown mechanisms that inflate the radii of typical hot Jupiters \citep[for a review see][]{fortney:2010}.
For warm giants, standard models of planetary structure can be used to infer their internal composition from mass and radii measurements \citep[e.g.,][]{thorngren:2016}.
In this work we present the discovery of an eccentric warm giant planet orbiting a bright star, having physical parameters similar to those of Saturn. This discovery was made in the context of the K2CL collaboration, which has discovered a number of planetary systems using K2 data \citep{brahm:2016:k2,espinoza:2017:k2,jones:2017,giles:2018,soto:2018,k2-232,k2-261}.
\section{Observations} \label{sec:obs}
\subsection{K2}
Observations of campaign 15 (field centered at RA=15:34:28 and DEC=-20:04:44) of the K2 mission \citep{howell:2014} took place between August 23 and November 20 of 2017. The data of K2 campaign 15 were released in March 2018.
We followed the steps described in previous K2CL discoveries to process the light curves and identify transiting planet candidates. Briefly, the K2 light curves for
Campaign 15 were detrended using our implementation of the EVEREST algorithm \citep{luger:2016}, and a Box-Least-Squares \citep[BLS;][]{BLS} algorithm was used to find candidate
box-shaped signals. The candidates that showed power above the noise
level were then visually inspected to reject evident eclipsing binary systems
and/or variable stars. We identified 23 candidates in this field. Among those candidates, K2-287\ (EPIC 249451861) stood out as a high priority candidate for follow-up due to its relatively long period, deep flat-bottomed transits, and bright host star ($V=11.4$ mag).
The detrended light curves of the six transits observed for K2-287\ by K2 are displayed in Figure~\ref{fig:lc}.
\begin{figure*}
\plotone{CL001-15_phot.pdf}
\caption{De-trended K2 photometry of K2-287. Black points are individual 30-min cadence K2 data. The transits of K2-287b\ are clearly seen. \label{fig:lc}}
\end{figure*}
\subsection{Spectroscopy}
We obtained 52 R=48000 spectra between March and July of 2018 using the FEROS spectrograph \citep{kaufer:99} mounted on the MPG 2.2\,m telescope at La Silla Observatory. Each spectrum achieved a signal-to-noise ratio of $\approx90$ per spectral resolution element. The instrumental drift was determined via comparison with a simultaneous fiber illuminated with a ThAr+Ne lamp. We additionally obtained 25 R=115000 spectra between March and August of 2018 using the HARPS spectrograph \citep{mayor:2003}. Typical signal-to-noise ratios for these spectra ranged between 30 and 50 per spectral resolution element.
Both FEROS and HARPS data were processed with the \texttt{CERES}\ suite of echelle pipelines \citep{brahm:2017:ceres}, which produce radial velocities and bisector spans in addition to reduced spectra.
Radial velocities and bisector spans are presented in Table~\ref{tab:rvs} with their corresponding uncertainties, and the radial velocities are displayed as a function
of time in Figure~\ref{fig:rvstime}. No large amplitude variations were identified which could be associated with eclipsing binary scenarios for the K2-287\ system, and no additional stellar components were evident in the spectra. The radial velocities present a time-correlated variation in phase with the photometric ephemeris, with an amplitude consistent with the one expected to be produced by a giant planet. We find no correlation between the radial velocities and the bisector spans (95\% confidence intervals for the Pearson coefficient are $[-0.19,0.21]$, see Figure~\ref{fig:bis}).
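For reference, a confidence interval of this kind can be obtained by bootstrapping the Pearson coefficient. The short Python sketch below (with hypothetical arrays \texttt{rv} and \texttt{bis} holding the measurements) illustrates one standard way to do this; it is not necessarily the exact procedure used for the quoted interval.
\begin{verbatim}
import numpy as np

def pearson_ci(rv, bis, n_boot=10000, level=0.95, seed=42):
    # Bootstrap confidence interval for the Pearson correlation
    # between radial velocities and bisector spans.
    rng = np.random.default_rng(seed)
    n = len(rv)
    r = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        r[i] = np.corrcoef(rv[idx], bis[idx])[0, 1]
    lo, hi = np.percentile(r, [50 * (1 - level), 50 * (1 + level)])
    return lo, hi
\end{verbatim}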
\begin{figure*}
\plotone{cl001-15_rvs-time.pdf}
\caption{Radial velocity (RV) curve for K2-287\ obtained with FEROS (red) and HARPS (black). The black line corresponds to the Keplerian model with the posterior parameters found in Section \ref{sec:glob}.\label{fig:rvstime}}
\end{figure*}
\begin{figure}
\plotone{rv-bs.pdf}
\caption{Radial velocity (RV) versus bisector span (BIS) scatter plot using data from our spectroscopic observations of K2-287. We find that the data is consistent with no correlation. \label{fig:bis}}
\end{figure}
\subsection{Ground-based photometry}
\label{ssec:ground}
On July 14 of 2018 we observed the primary transit of K2-287\ with the Chilean-Hungarian Automated Telescope (CHAT), installed at Las Campanas Observatory, Chile.
CHAT is a newly commissioned 0.7m telescope, built by members of the HATSouth \citep{bakos:2013} team, and dedicated to the follow-up of transiting exoplanets.
A more detailed account of the CHAT facility will be published at a future date (Jord\'an et al 2018, in prep\footnote{\url{https://www.exoplanetscience2.org/sites/default/files/submission-attachments/poster_aj.pdf}}).
Observations were obtained in the Sloan i' band and the adopted exposure time was 53 s per image, resulting in a peak pixel
flux for K2-287\ of $\approx$ 45000 ADU during the whole sequence. The observations covered
a fraction of the bottom part of the transit and the egress (see Figure~\ref{fig:pht}). The same event was also monitored by one telescope of the Las Cumbres Observatory 1m network \citep{brown:2013:lcogt} at Cerro Tololo Inter-American Observatory, Chile. Observations were obtained with the Sinistro camera with 2 mm of defocus in the Sloan i' band. The adopted exposure time for the 88 observations was 60 s, and reduced images were obtained with the standard Las Cumbres Observatory pipeline (BANZAI).
The light curves for CHAT and the Las Cumbres 1m telescope were produced from the reduced images using a dedicated pipeline (Espinoza et al 2018, in prep).
The light curves were detrended by describing the systematic trends as a Gaussian Process with an exponential squared kernel depending on time and on the centroid position, and whose parameters are estimated simultaneously with those of the transit. A photometric jitter term is also included; this parameter is passed on as a fixed parameter in the final global analysis that determines the planetary parameters (\S~\ref{sec:glob}). In more detail, the magnitude time series is modeled as
\begin{equation}
m_i = Z + x_1c_{1,i} + x_2 c_{2,i} + \delta_i + \epsilon_i
\end{equation}
\noindent where $Z$ is a zeropoint, $c_1$ and $c_2$ are comparison light curves, $x_1$ and $x_2$ are parameters weighting the light curves, $\delta$ is the transit model and $\epsilon$ is a Gaussian Process to model the noise. The subscript $i$ denotes evaluation at the time $t=t_i$ of the time series. For the Gaussian process, we assume a kernel given by
\begin{equation}
k_{ij} = A \exp\left[- \sum_m \alpha_m(x_{m,i} - x_{m,j})^2\right] + \sigma^2\delta_{ij}.
\end{equation}
The variables $x_{m}$ are normalized time ($m=0$), flux centroid in $x$ ($m = 1$) and flux centroid in $y$ ($m=2$); $\delta_{ij}$ is the Kronecker delta. The normalization is carried out by setting the mean to 0 and the variance to 1. The priors on the kernel hyperparameters were taken to be the same as the ones defined in \citet{gibson:2014}, and the priors for the photometric jitter term $\sigma$ and for $A$ were taken to be uniform in the logarithm between $0.01$ and $100$, with $\sigma$ and $A$ expressed in mmag.
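For illustration, a minimal Python sketch of this kernel and of the resulting Gaussian-process log-likelihood is given below; variable names are placeholders, and \texttt{X} is assumed to be a list of the three normalized regressor arrays. A production fit would rather rely on a dedicated GP package.
\begin{verbatim}
import numpy as np

def kernel(X, A, alphas, sigma):
    # Squared-exponential kernel over the normalized regressors
    # (time, x centroid, y centroid), plus a white-noise jitter term.
    d2 = sum(a * (x[:, None] - x[None, :]) ** 2
             for a, x in zip(alphas, X))
    return A * np.exp(-d2) + sigma ** 2 * np.eye(len(X[0]))

def gp_loglike(resid, X, A, alphas, sigma):
    # GP log-likelihood of the residuals m_i - model_i.
    K = kernel(X, A, alphas, sigma)
    L = np.linalg.cholesky(K)
    z = np.linalg.solve(L, resid)
    return (-0.5 * z @ z - np.log(np.diag(L)).sum()
            - 0.5 * len(resid) * np.log(2 * np.pi))
\end{verbatim}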
In Figure~\ref{fig:chat-lcogt} we show the CHAT and LCOGT light curves with the weighted comparison stars subtracted along with the Gaussian process posterior model for the systematics.
\begin{figure*}
\plottwo{phot_CHAT.pdf}{phot_LCOGT.pdf}
\caption{Ground-based light curves for the July 14 2018 transit of K2-287b\ obtained with CHAT (left panel) and a LCOGT 1m telescope at CTIO (right panel). The red lines represent the posterior Gaussian process models for remaining systematics after subtracting the transit and weighted comparison stars and obtained as described in \S\ref{ssec:ground} \label{fig:chat-lcogt}}
\end{figure*}
\subsection{GAIA DR2}
Observations of K2-287\ by GAIA were reported in DR2 \citep{gaia, gaia:dr2}. From GAIA DR2, K2-287\ has a parallax of $6.29 \pm 0.05$ mas,
an effective temperature of $\ensuremath{T_{\rm eff}} = 4994 \pm 80$ K and a radius
of $\ensuremath{{\rm R}_{\star}} = 1.18 \pm 0.04 \,\, \ensuremath{{\rm R}_{\odot}}$. We used the observed
parallax for K2-287\ measured by GAIA for estimating a more precise value of \ensuremath{{\rm R}_{\star}}\ by combining it with the atmospheric parameters obtained from the spectra as described in \S~\ref{sec:ana}. We corrected the GAIA DR2 parallax for the systematic offset of -82 $\mu$as reported in \citet{stassun:2018}.
Two additional sources near K2-287\ are identified by GAIA inside the adopted K2 aperture ($\approx 12\arcsec$). However, both stars are too faint ($\Delta G > 7.8$ mag) to produce any significant effect on the planetary and stellar parameters found in \S~\ref{sec:ana}. The radial velocity variations in phase with the transit signal, which arise from K2-287\ itself, confirm that the transit is not caused by a blended stellar eclipsing binary on one of these companions.
\section{Analysis} \label{sec:ana}
\subsection{Stellar parameters}
As in previous K2CL discoveries we estimated the atmospheric parameters of the host star by comparing the co-added high resolution spectrum to a grid of synthetic models through the \texttt{ZASPE}\ code \citep{brahm:2016:zaspe}. In particular, for K2-287\ we used the co-added FEROS spectra, because they provide the higher signal-to-noise ratio spectra, and because the synthetic grid of models used by \texttt{ZASPE}\ was empirically calibrated using FEROS spectra of standard stars. Briefly, \texttt{ZASPE}\ performs an iterative search of the optimal model through $\chi^2$ minimization on the spectral zones that are most sensitive to changes in the atmospheric parameters. The models with specific values of atmospheric parameters are generated via tri-linear interpolation of a precomputed grid generated using the ATLAS9 models \citep{atlas9}. The interpolated model is then degraded to match the spectrograph resolution by convolving it with a Gaussian kernel that includes the instrumental resolution of the observed spectrum and an assumed macroturbulence value given by the relation presented in \citet{valenti:2005}. The spectrum is also convolved with a rotational kernel that depends on \ensuremath{v \sin{i}}, which is considered as a free parameter. The uncertainties in the estimated parameters are obtained from Monte Carlo simulations that consider that the principal source of error comes from the systematic mismatch between the optimal model and the data, which in turn arises from poorly constrained parameters of the atomic transitions and possible deviations from solar abundances.
We obtained the following stellar atmospheric parameters for K2-287: \ensuremath{T_{\rm eff}}=5695 $\pm$ 58 K, \ensuremath{\log{g}}=4.4 $\pm$ 0.15 dex, \ensuremath{{\rm [Fe/H]}}=0.20 $\pm$ 0.04 dex, and \ensuremath{v \sin{i}}=3.2 $\pm$ 0.2 km s$^{-1}$. The \ensuremath{T_{\rm eff}}\ value obtained with \texttt{ZASPE}\ is significantly different from that reported by GAIA DR2, but is consistent with that of the K2 input catalog \citep{huber:2016}.
The stellar radius is computed from the GAIA parallax measurement, the available photometry, and the atmospheric parameters. As in \citet{k2-261}, we used a \texttt{BT-Settl-CIFIST} spectral energy distribution model \citep{baraffe:2015} with the atmospheric parameters derived with \texttt{ZASPE}\ to generate a set of synthetic magnitudes at the distance computed from the GAIA parallax. These magnitudes are compared to those presented in Table \ref{tab:stprops} for a given value of \ensuremath{{\rm R}_{\star}}. We also consider an extinction coefficient A$_V$ in our modeling, which affects the synthetic magnitudes following the prescription of \citet{cardelli:89}. We explore the parameter space for \ensuremath{{\rm R}_{\star}}\ and A$_V$ using the \texttt{emcee} package \citep{emcee:2013}, using uniform priors in both parameters. We found that K2-287\ has a radius of $\ensuremath{{\rm R}_{\star}}=1.07 \pm 0.01$ \ensuremath{{\rm R}_{\odot}}\ and a reddening of A$_V=0.56 \pm 0.03$ mag, which is consistent with what is reported by GAIA DR2.
Finally, the stellar mass and evolutionary stage for K2-287\ are obtained by comparing the estimation of \ensuremath{{\rm R}_{\star}}\ and the spectroscopic \ensuremath{T_{\rm eff}}\ with the predictions of the Yonsei-Yale evolutionary models \citep{yi:2001}. We use the interpolator provided with the isochrones to generate a model
with specific values of \ensuremath{{\rm M}_{\star}}, age, and \ensuremath{{\rm [Fe/H]}}, where \ensuremath{{\rm [Fe/H]}}\ is fixed to the value found in the spectroscopic analysis. We explore the parameter space for \ensuremath{{\rm M}_{\star}}\ and stellar age using the
\texttt{emcee} package, using uniform priors in both parameters. We find that the mass and age of K2-287\ are $\ensuremath{{\rm M}_{\star}} = 1.036 \pm 0.033$ $\ensuremath{{\rm M}_{\odot}}$ and 5.6 $\pm$ 1.6 Gyr (see Figure \ref{fig:iso}), similar to those of the Sun. The stellar parameters we adopted for K2-287\ are summarized in Table~\ref{tab:stprops}.
\begin{figure}
\plotone{CL001_15_txt_iso.pdf}
\caption{Yonsei-Yale isochrones for the metallicity of K2-287\ in the \ensuremath{T_{\rm eff}}--\ensuremath{{\rm R}_{\star}}\ plane. From left to right the isochrones correspond to 1, 3, 5, 7 and 9 Gyr. The position of K2-287\ is at the center of the blue shaded region, which marks the 3$\sigma$ confidence region for \ensuremath{T_{\rm eff}}\ and \ensuremath{{\rm R}_{\star}}.\label{fig:iso}}
\end{figure}
\begin{deluxetable*}{lrc}[b!]
\tablecaption{Stellar properties of K2-287\ \label{tab:stprops}}
\tablecolumns{3}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter} &
\colhead{Value} &
\colhead{Reference} \\
}
\startdata
Names \dotfill & K2-287 & EPIC \\
& 2MASS J15321784-2221297 & 2MASS \\
& TYC 6196-185-1 & TYCHO \\
& WISE J153217.84-222129.9 & WISE \\
RA \dotfill (J2000) & 15h32m17.84s & EPIC\\
DEC \dotfill (J2000) & -22d21m29.74s & EPIC\\
pm$^{\rm RA}$ \hfill (mas yr$^{-1}$) & -4.59 $\pm$ 0.11& GAIA\\
pm$^{\rm DEC}$ \dotfill (mas yr$^{-1}$) & -17.899 $\pm$ 0.074 & GAIA\\
$\pi$ \dotfill (mas)& 6.288 $\pm$ 0.051 & GAIA \\
\hline
K$_p$ \dotfill (mag) & 11.058 & EPIC\\
B \dotfill (mag) & 12.009 $\pm$ 0.169 & APASS\\
g' \dotfill (mag) & 11.727 $\pm$ 0.010 & APASS\\
V \dotfill (mag) &11.410 $\pm$ 0.129 & APASS\\
r' \dotfill (mag) & 11.029 $\pm$ 0.010 & APASS\\
i' \dotfill (mag) & 10.772 $\pm$ 0.020 & APASS\\
J \dotfill (mag) & 9.677 $\pm$ 0.023 & 2MASS\\
H \dotfill (mag) & 9.283 $\pm$ 0.025 & 2MASS\\
K$_s$ \dotfill (mag) & 9.188 $\pm$ 0.021 & 2MASS\\
WISE1 \dotfill (mag) & 9.114 $\pm$ 0.022 & WISE\\
WISE2 \dotfill (mag) & 9.148 $\pm$ 0.019 & WISE\\
WISE3 \dotfill (mag) & 9.089 $\pm$ 0.034 & WISE\\
\hline
\ensuremath{T_{\rm eff}} \dotfill (K) & 5695 $\pm$ 58& \texttt{zaspe}\\
\ensuremath{\log{g}} \dotfill (dex) & 4.398 $\pm$ 0.015 & \texttt{zaspe}\\
\ensuremath{{\rm [Fe/H]}} \dotfill (dex) & +0.20 $\pm$ 0.04 & \texttt{zaspe}\\
\ensuremath{v \sin{i}} \dotfill (km s$^{-1}$) & 3.2 $\pm$ 0.2 & \texttt{zaspe}\\
\ensuremath{{\rm M}_{\star}} \dotfill (\ensuremath{{\rm M}_{\odot}}) & 1.056 $\pm$ 0.022 & YY + GAIA\\
\ensuremath{{\rm R}_{\star}} \dotfill (\ensuremath{{\rm R}_{\odot}}) & 1.07 $\pm$ 0.01 & GAIA + this work\\
Age \dotfill (Gyr) & 4.5 $\pm$ 1 & YY + GAIA\\
$\rho_\star$ \dotfill (g cm$^{-3}$) & 1.217 $\pm$ 0.045& YY + GAIA\\
\enddata
\end{deluxetable*}
\subsection{Global modeling}
\label{sec:glob}
\begin{figure*}
\plotone{CL001-15_transit.pdf}
\caption{The top panels show from left to right: the phase folded Kepler $K2$ photometry ($K_p$ band), the CHAT follow up photometry ($i$ band), and the LCO follow-up photometry ($i$ band) for K2-287. For the three cases, the model generated with the derived parameters of \texttt{EXONAILER} is plotted with a blue line. The bottom panels show the corresponding residuals. \label{fig:pht}}
\end{figure*}
\begin{figure*}
\plotone{CL001-15_rv.pdf}
\caption{The top panel presents the radial velocities for K2-287\ (filled circles) obtained with FEROS and HARPS as a function of the orbital phase. The RV model with the derived orbital parameters for K2-287b\ corresponds to the blue solid line. The bottom panel shows the residuals obtained
for these radial velocity measurements.
\label{fig:phr}}
\end{figure*}
In order to determine the orbital and transit parameters of the K2-287b\
system we performed a joint analysis of the detrended K2 photometry, the
follow-up photometry, and the radial velocities.
As in previous planet discoveries of the K2CL collaboration, we
used the \texttt{exonailer} code which is described in
detail in \citet{espinoza:2016:exo}. Briefly, we model the transit light
curves using the \texttt{batman} package \citep{kreidberg:2015} by taking
into account the effect on the transit shape produced by the long integration time of the
long-cadence K2 data \citep{kipping:2010}.
To avoid systematic biases in the determination of the transit parameters
we considered the limb-darkening coefficients as additional free parameters in the transit modeling \citep{EJ:2015}, with the complexity of the limb-darkening law chosen following the criteria presented in \citet{espinoza:2016:lds}. In our case, we selected the quadratic limb-darkening law, whose coefficients were fit using the
uninformative sampling technique of \citet{Kipping:LDs}.
We also include a
photometric jitter parameter for the K2 data, which allows us to estimate the level of stellar noise in the light curve. The radial velocities are modeled with
the \texttt{radvel} package \citep{fulton:2018}, where we considered systemic velocity and jitter factors for the data of each spectrograph.
We use the stellar density estimated in our stellar modeling as an extra ``data point'' in our global fit as described in \citet{k2-232}. Briefly, there is a term in the likelihood of the form
\begin{eqnarray*}
p(\vec{y}_{\rho_*}|\theta ) = \frac{1}{\sqrt{2\pi \sigma^2_{\rho_*}}} \exp\left[-\frac{(\rho_* - \rho_*^m)^2}
{2\sigma_{\rho_*}^2}\right] ,
\end{eqnarray*}
where
\begin{eqnarray*}
\rho^m_* = \frac{3\pi}{GP^2}\left(\frac{a}{R_*}\right)^3
\end{eqnarray*}
by Newton's version of Kepler's law, and
$\rho_*$ and $\sigma_{\rho_*}$ are the mean stellar density and its standard-deviation,
respectively, derived from our stellar analysis. In essence, because the period $P$ is
tightly constrained by the observed periodic transits, this extra term puts a strong
constraint on $a/R_*$, which in turn helps to extract information about the eccentricity $e$ and argument of periastron $\omega$ from the duration of the transit. Resulting planet parameters are set out in Table~\ref{tab:plprops}, the best-fit orbit solution in Figures~\ref{fig:rvstime} and \ref{fig:phr} and the best-fit light curves in Figure~\ref{fig:pht}.
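As a sanity check of this constraint, the model density can be evaluated directly from the fitted period and scaled semi-major axis; a short Python computation using the values quoted in the tables:
\begin{verbatim}
import numpy as np

G = 6.674e-8                      # cgs units
P = 14.893291 * 86400.0           # orbital period in seconds
a_over_rstar = 23.87
rho_model = (3.0 * np.pi / (G * P**2)) * a_over_rstar**3
print(rho_model)   # ~1.16 g/cm^3, close to the stellar value in Table 1
\end{verbatim}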
\begin{deluxetable*}{lrc}[b!]
\tablecaption{Planetary properties of the K2-287\ system. For the priors, $N(\mu,\sigma)$ stands for a normal distribution with mean $\mu$ and standard deviation $\sigma$, $U(a,b)$ stands for a uniform distribution between $a$ and $b$, and $J(a,b)$ stands for a Jeffrey's prior defined between $a$ and $b$.\label{tab:plprops}}
\tablecolumns{3}
\tablenum{2}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter} &
\colhead{Prior} &
\colhead{Value} \\
}
\startdata
P (days) & $N(14.893,0.01)$ & 14.893291 $\pm$ 0.000025\\
T$_0$ (BJD)& $N(2458001.722,0.01)$& 2458001.72138 $\pm$ 0.00016 \\
$a$/R$_\star$ & $U(1,300)$ & 23.87$_{-0.31}^{+0.30}$ \\
\ensuremath{{\rm R_P}}/\ensuremath{{\rm R}_{\star}} & $U(0.001,0.5)$ & 0.08014$_{-0.00098}^{+0.00086}$ \\
$\sigma_w^{\rm K2}$ (ppm) & $J(10,50000)$ & 47.7$^{+0.54}_{-0.54}$\\
q$_1^{\rm K2}$ & $U(0,1)$ & 0.32$^{+0.06}_{-0.05}$ \\
q$_2^{\rm K2}$ & $U(0,1)$& 0.57$^{+0.13}_{-0.11}$ \\
q$_1^{\rm CHAT}$ &$U(0,1)$ & 0.83$^{+0.12}_{-0.17}$ \\
q$_2^{\rm CHAT}$ &$U(0,1)$ & 0.15$^{+0.16}_{-0.11}$ \\
q$_1^{\rm LCO}$ & $U(0,1)$& 0.62$^{+0.20}_{-0.19}$ \\
q$_2^{\rm LCO}$ & $U(0,1)$& 0.08$^{+0.11}_{-0.06}$ \\
K (m s$^{-1}$) & $N(0,100)$& 28.8$^{+2.3}_{-2.2}$ \\
$e$ & $U(0,1)$ & 0.478$^{+0.025}_{-0.026}$ \\
$i$ (deg) & $U(0,90)$ & 88.13$^{+0.1}_{-0.08}$\\
$\omega$ (deg) & $U(0,360)$ & 10.1$^{+4.6}_{-4.2}$ \\
$\gamma_{\rm FEROS}$ (m s$^{-1}$)& $N(32963.2,0.1)$& 32963.19$^{+0.10}_{-0.10}$ \\
$\gamma_{\rm HARPS}$ (m s$^{-1}$)& $N(32930.4,0.1)$ & 32930.41$^{+0.10}_{-0.10}$ \\
$\sigma_{\rm FEROS}$ (m s$^{-1}$)& $J(0.1,100)$ & 16.0$^{+2.1}_{-1.8}$ \\
$\sigma_{\rm HARPS}$ (m s$^{-1}$) & $J(0.1,100)$ & 4.8$^{+1.8}_{-1.6}$ \\
\hline
\ensuremath{{\rm M_P}}\ (\ensuremath{{\rm M_{J}}})& & 0.315 $\pm$ 0.027\\
\ensuremath{{\rm R_P}}\ (\ensuremath{{\rm R_J}})& & 0.847 $\pm$ 0.013\\
$a$ (AU) & & 0.1206$_{-0.0008}^{+0.0008}$\\
\ensuremath{T_{\rm eq}}\tablenotemark{a} (K) & & 804$_{-7}^{+8}$\\
\enddata
\tablenotetext{a}{Time-averaged equilibrium temperature computed according to equation~16 of \citet{mendez:2017}}
\end{deluxetable*}
\section{Discussion} \label{sec:dis}
By combining data from the Kepler K2 mission and ground based photometry and spectroscopy,
we have confirmed the planetary nature of a $P=14.9$ d candidate around the $V=11.4$ mag G-type
star K2-287. We found that the physical parameters of K2-287b\ (\ensuremath{{\rm M_P}} = \ensuremath{0.317 \pm 0.026 }\ \ensuremath{{\rm M_{J}}}, \ensuremath{{\rm R_P}} = \ensuremath{0.833 \pm 0.013 }\ \ensuremath{{\rm R_J}}) are consistent with those of Saturn. The non-inflated structure of K2-287b\ is
expected given its relatively low time-averaged equilibrium temperature of \ensuremath{T_{\rm eq}} = 808 $\pm$ 8 K.
In Figure \ref{fig:mr} the mass and radius of K2-287b\ are compared to those for the full population of transiting planets with parameters measured to a precision of 20\% or better.
Two other transiting planets, orbiting fainter stars, that share similar structural properties to K2-287b\ are HAT-P-38b \citep{sato:2012} and HATS-20b \citep{bhatti:2016}, which have equilibrium temperatures that are higher but relatively close to the $\ensuremath{T_{\rm eq}} \approx 1000$ K limit below which the inflation mechanism of hot Jupiters does not play a significant role \citep{kovacs:2010,demory:2011}.
By using the simple planet structural models of \citet{fortney:2007} we find that the observed properties of K2-287b\ are consistent with having a solid core of $M_c = 31 \pm 4 M_{\oplus}$. However, models that consider the presence of solid material in the envelope of the planet are required to
obtain a more reliable estimate for the heavy element content of K2-287b\ \citep[e.g.,][]{thorngren:2016}.
\begin{figure*}
\plotone{m-r.pdf}
\caption{Mass-Radius diagram for the full population of transiting planets with both parameters measured to at least 20\% precision. The points are color-coded by equilibrium temperature. K2-287b\ is the object in the plot that has error bars and is indicated by the arrow. The dashed
gray lines correspond to iso-density curves of 0.3, 3, and 30 g cm$^{-3}$, while the solid
line represents the prediction of the \citet{fortney:2007} structural model with a central core
mass of 10 M$_{\oplus}$.
Due to its relatively low equilibrium temperature, K2-287b\ lies in a sparsely populated region of the parameter space of moderately compact giant planets. \label{fig:mr}}
\end{figure*}
\begin{figure*}
\plotone{p-j.pdf}
\caption{Population of well characterized giant planets having \ensuremath{{\rm R_P}} $>$ 0.4 \ensuremath{{\rm R_J}}\ in the orbital period -- J magnitude plane. K2-287b\ is inside a black square. The size of the points represent the eccentricity of the orbit, while the color indicates the discovery method/mission (blue: ground-based photometry, yellow: RV planets, orange: CoRoT, red: \textit{Kepler}, green: \textit{Kepler} K2). The \textit{Kepler} K2 mission has been the most effective source for discovering transiting bright (J $<$ 11) warm (P $>$ 10 d) giant planets. \label{fig:pj}}
\end{figure*}
The numerous radial velocity measurements obtained for the K2-287\ system allow us to constrain the eccentricity of the planet to be $e=0.478 \pm 0.025$. Even though K2-287b\ is among the most eccentric extrasolar planets to have a period shorter than 50 days, its periastron distance is not
small enough to cause a significant migration by tidal interactions throughout the main sequence lifetime of the host star. Specifically, by using the equations of \citet{jackson:2009}, we find that in the absence of external sources of gravitational interaction, K2-287b\ should have possessed an eccentricity of $e\approx0.65$ and a semi-major
axis of $a\approx0.15$ AU when the system was 0.1 Gyr old. Under the same assumptions, we expect that K2-287b\ would be engulfed by its host star at an age of $\approx$12 Gyr before being able to reach full circularization at a distance of $a\approx0.1$ AU.
These orbital properties for K2-287b\ and those of the majority of eccentric warm giants are not easy to explain. If K2-287b\ was formed \textit{in situ} \citep{huang:2016} at 0.15 AU or migrated to this position via interactions with the protoplanetary disc \citep{lin:1997}, its eccentricity could have been excited by the influence of another massive object in the system after disc dispersal. However, planet-planet scattering \citep{ford:2008} at these close-in orbits generally produces planet collisions rather than eccentricity excitation \citep{petrovich:2014}.
An alternative proposition for the existence of these eccentric systems is that they
are being subject to secular gravitational interactions produced by another distant
planet or star in the system \citep{rasio:1996}, with the planet experiencing long term
cyclic variations in its eccentricity and spin orbit angle. In this scenario, the planet
migrates by tidal interactions only during the high eccentricity stages, but it is usually
found with moderate eccentricities. Further observations of the K2-287\ system could help
support this mechanism as the one responsible for its relatively high eccentricity, particularly given that \citet{petrovich:2016} concludes that high-eccentricity migration excited by an outer planetary companion can account for most of the warm giants with $e>0.4$. Specifically, long term radial velocity monitoring and the search for transit timing variations could be used to detect the relatively close companions to migrating warm Jupiters proposed by \citet{dong:2014}. Future astrometric searches with GAIA could also be used to find companions and infer the mutual inclination between both orbits, which is predicted to be high \citep{anderson:2017}.
Finally, it is worth noting that an important fraction of the transiting warm giants amenable to detailed characterization ($J<11$ mag) have been discovered in the last couple of years thanks to the K2 mission (see Figure~\ref{fig:pj}). The combination of relatively long observing campaigns per field and the increased number of fields monitored has allowed the discovery and dynamical characterization of several warm giant planets with data from the K2 mission \citep[see Figure~\ref{fig:pj}, ][]{k2-24,k2-99,barragan:2017,shporer:2017,k2-232,k2-234,k2-261,k2-261b}.
While not particularly designed to discover warm giants, the TESS mission \citep{tess} is expected to discover $\approx$ 120 additional warm giants with $\ensuremath{{\rm R_P}} > 4R_\oplus$ and an incident flux $F < 150 F_\oplus$, where $F_\oplus$ is the incident flux at Earth, around $J\lesssim 11$ mag stars \citep{barclay:2018}. With such a population at hand, it will be possible to compare the distributions of eccentricities and obliquities to predictions from different migration mechanisms \citep[e.g. ][]{petrovich:2016} in order to establish a clearer picture of how eccentric warm giant planets originate.
\acknowledgements
A.J.\ acknowledges support from FONDECYT project 1171208, CONICYT project BASAL AFB-170002, and by the Ministry for the Economy, Development, and Tourism's Programa Iniciativa Cient\'{i}fica Milenio through grant IC\,120009, awarded to the Millennium Institute of Astrophysics (MAS). R.B.\ acknowledges support from FONDECYT Post-doctoral Fellowship Project 3180246, and from the Millennium Institute of Astrophysics (MAS).
M.R.D.\ acknowledges support by CONICYT-PFCHA/Doctorado Nacional 21140646, Chile. A.Z.\ acknowledges support by CONICYT-PFCHA/Doctorado
Nacional 21170536, Chile.
J.S.J.\ acknowledges support by FONDECYT project 1161218 and
partial support by CONICYT project BASAL AFB-170002.
This paper includes data collected by the K2 mission. Funding for the K2 mission is provided by the NASA Science Mission directorate.
This work has made use of data from the European Space Agency (ESA) mission Gaia (\url{https://www.cosmos.esa.int/gaia}), processed by the Gaia Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Based on observations collected at the European
Organisation for Astronomical Research in the Southern
Hemisphere under ESO programmes 0101.C-0497, 0101.C-0407, 0101.C-0510.
\vspace{5mm}
\facilities{CHAT~0.7m, LCOGT~1m, MPG~2.2m, ESO~3.6m, \textit{Kepler}, GAIA, APASS, 2MASS, WISE}
\software{EXO-NAILER \citep{espinoza:2016:exo},
CERES \citep{brahm:2017:ceres,jordan:2014},
ZASPE \citep{brahm:2016:zaspe,brahm:2015},
radvel \citep{fulton:2018}
}
\section{Introduction}
Probability theory in linear spaces has long been studied and extended to more general, nonlinear models, such as hyperspaces of linear spaces or metric spaces in general. Basic objects such as the expectation and conditional expectation of a random element taking values in a metric space have also attracted the attention of many researchers. Probably the first
author to introduce a concept of mathematical expectation of a random element with values in a metric space was Doss \cite{Do} in 1949. After this paper, other authors gave many different definitions of expectation and conditional expectation in various kinds of metric spaces and in various ways. We can mention the works of \'{E}mery and Mokobodzki \cite{EM}, Herer \cite{He1, He2, He3}, Raynaud de Fitte \cite{Fi}, Sturm \cite{CS, St}, or the monograph of Molchanov \cite{Mo}.
In 2006, Ter\'{a}n and Molchanov \cite{TM} introduced the concept of convex combination space; the class of these spaces contains not only the class of Banach spaces but also the class of hyperspaces of compact subsets, as well as the class of upper semicontinuous functions (also called fuzzy sets) with compact support in a Banach space \cite{TM}. The authors also provided many interesting illustrative examples of this concept, e.g., the space of all cumulative distribution functions or the space of upper semicontinuous functions with $t$-norm. A convex combination space is a metric space endowed with a convex combination operation, and the extension from
linear space to convex combination space is not trivial. Some very basic sets, such as singletons and balls, may fail to be convex in a convex combination space. This may not match usual intuition, but it occurs in many practical models. For example, consider the hyperspace of all compact subsets of a Banach space with the convex combinations generated by the Minkowski addition and scalar multiplication. Then $\lambda A +(1-\lambda)A$ does not equal $A$ unless $A$ is convex, which means that $A$ is a non-convex singleton in such a space. Another example is the space of integrable probability distributions, where the convex combinations are generated by the convolution operation (see \cite{TM, Te}). The expectation of a random element taking values in a convex combination space was constructed by Ter\'{a}n and Molchanov. This notion of expectation extends the corresponding ones in both Banach spaces and hyperspaces of compact subsets. Furthermore, the authors also established the Etemadi strong law of large numbers (SLLN) for normalized sums of pairwise independent, identically distributed (i.i.d.) random elements in this kind of space (\cite{TM}, Theorem 5.1); other applications can be found in \cite{QT, Te, TQN}.
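To make the non-convex-singleton phenomenon concrete, the following toy sketch (an illustration under the assumption that finite subsets of $\mathbb R$ with Minkowski-type combinations stand in for the hyperspace just mentioned) shows that $[1/2, A ; 1/2, A]\neq A$ for a non-convex $A$, and that the iterates $[n^{-1}, A]_{i=1}^n$ fill in the convex hull, in line with the convexification operator $K$ recalled in Section 2:
\begin{verbatim}
# Toy illustration: finite subsets of R with Minkowski-type combinations
# (an assumption standing in for the hyperspace k(R) discussed above).
import itertools

def combine(weights, sets):
    """[w_1, A_1; ...; w_n, A_n] = { sum_i w_i a_i : a_i in A_i }."""
    return {round(sum(w*a for w, a in zip(weights, pick)), 10)
            for pick in itertools.product(*sets)}

A = {0.0, 1.0}                        # a non-convex "singleton" of k(R)
print(combine([0.5, 0.5], [A, A]))    # {0.0, 0.5, 1.0} != A

# [1/n, A]_{i=1}^n fills in conv(A) = [0, 1] as n grows:
for n in (2, 4, 8):
    print(n, sorted(combine([1.0/n]*n, [A]*n)))
\end{verbatim}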
Although a convex combination space may have many singletons that are not convex, it always contains a subspace (which we will call the convexifiable domain) in which all singletons and balls are convex; moreover, the authors in \cite{TM} showed that this subspace has some properties resembling linearity. Therefore, it is natural to ask whether this convexifiable domain can be embedded isometrically into some normed linear space in such a way that the structure of convex combination is preserved. It is worth noting that the expectation of every integrable random element taking values in a convex combination space always belongs to this convexifiable domain. Therefore, if such an embedding is established, we will have more tools to explore this type of expectation as well as the properties of convex combination spaces.
In this paper, we will answer the question mentioned above. Namely, we will show that the convexifiable domain of a complete convex combination space can be embedded into a Banach space in such a way that the embedding is isometric and the structure of convex combination is preserved; this will be presented in Section 3.
The main applications of the approach via the embedding theorem will be presented in Section 4. On the one hand, some nice properties of expectation, including both the representation of the expected value through continuous affine mappings and Jensen's inequality (first proved by Ter\'an \cite{Te} and proved again in this work in another way), will be given. On the other hand, the notion of conditional expectation of an integrable random element taking values in a convex combination space will also be introduced and discussed. Thanks to the embedding theorem, we establish some basic properties of conditional expectation, Jensen's inequality, convergences of martingales and an ergodic theorem.
Finally, some miscellaneous applications and remarks will be discussed in Section 5.
\section{Preliminaries}
For the reader's convenience, we now present a short introduction to the approach
given by Ter\'an and Molchanov in \cite{TM}. Let
$(\mathfrak X,d)$ be a metric space; for $u, x\in \mathfrak X$, we denote $\|x\|_u:=d(u, x)$. On
$\mathfrak X$, we introduce a \emph{convex combination operation} which,
for all $n\geqslant 2$, numbers $\lambda_1,\ldots,\lambda_n>0$ that satisfy
$\sum_{i=1}^n\lambda_i=1$, and all $u_1,\ldots, u_n\in\mathfrak X$,
produces an element of $\mathfrak X$, denoted by $[\lambda_1,u_1;\ldots;\lambda_n,u_n]$
or $[\lambda_i,u_i]_{i=1}^n$. Note that $[\lambda_1,u_1;\ldots;\lambda_n,u_n]$ and the shorthand $[\lambda_i,u_i]_{i=1}^n$
have the same intuitive meaning as the more familiar $\lambda_1u_1+\cdots+\lambda_nu_n$
and $\sum_{i=1}^n \lambda_iu_i$, but $\mathfrak X$ is not assumed to have any addition or multiplication. Suppose that $[1,u]=u$ for every $u\in\mathfrak X$ and that the following
properties are satisfied:\\
(CC.i) (Commutativity) $[\lambda_i,u_i]_{i=1}^n=[\lambda_{\sigma(i)},u_{\sigma(i)}]_{i=1}^n$
for every permutation $\sigma$ of $\{1,\ldots,n\}$;\\
(CC.ii) (Associativity) $[\lambda_i,u_i]_{i=1}^{n+2}=\big[\lambda_1,u_1;\ldots;\lambda_n,u_n;\lambda_{n+1}+\lambda_{n+2},
\big[\frac{\lambda_{n+j}}{\lambda_{n+1}+\lambda_{n+2}},u_{n+j}\big]_{j=1}^2\big];$\\
(CC.iii) (Continuity) if $u,v\in\mathfrak X$ and
$\lambda^{(k)}\rightarrow\lambda\in (0;1)$ as $k\rightarrow\infty$,
then
$[\lambda^{(k)},u;1-\lambda^{(k)},v]\rightarrow
[\lambda,u;1-\lambda,v]$;\\
(CC.iv) (Negative curvature) if $u_1,u_2,v_1,v_2\in\mathfrak X$ and
$\lambda\in (0,1)$, then
$$d([\lambda,u_1;1-\lambda,u_2],[\lambda,v_1;1-\lambda,v_2])\leqslant\lambda
d(u_1,v_1)+(1-\lambda) d(u_2,v_2);$$
By induction and (CC.ii), this axiom can be extended to convex combinations of $n$ elements, as follows: if $u_i, v_i \in \mathfrak X$ and $\lambda_i \in (0;1)$ with $\sum_{i=1}^n \lambda_i =1$, then $d([\lambda_i, u_i]_{i=1}^n, [\lambda_i, v_i]_{i=1}^n)\leqslant \sum_{i=1}^n \lambda_i d(u_i, v_i).$\\
(CC.v) (Convexification) for each $u\in\mathfrak X$, there exists
$\lim_{n\rightarrow\infty}[n^{-1},u]_{i=1}^n$, which will be
denoted by $K_\mathfrak X u$ (or $Ku$ when no confusion can
arise), and
$K$ is called the \emph{convexification operator}.\\
Then, the metric space $(\mathfrak X, d)$ endowed with a convex
combination operation is referred to as a \emph{convex combination
space} (CC space for short), denoted by $(\mathfrak X, d, [.,.])$ or simply $\mathfrak X$. We can see from axiom (CC.v) that $[n^{-1}, u]_{i=1}^n$ is different from $u$ in general, so $Ku$ and $u$ may not be identical. If $Ku=u$, then $u$ will be called a \emph{convex point} of $\mathfrak X$, and the subspace $K(\mathfrak X)$ will be called the \textit{convexifiable domain}. If $K(\mathfrak X)=\mathfrak X$, then $\mathfrak X$ is said to be \emph{convexifiable} and $[.,.]$ will be called an \textit{unbiased} convex combination operation. Conditions (CC.i)--(CC.v) above imply the following properties:
(2.1) (\cite{TM}, Lemma 2.1) For every $u_{11},\ldots,u_{mn}\in\mathfrak X$ and
$\alpha_1,\ldots,\alpha_m,\beta_1,\ldots,\beta_n>0$ with
$\sum_{i=1}^m\alpha_i=\sum_{j=1}^n\beta_j=1$, we have
$[\alpha_i,[\beta_j,u_{ij}]_{j=1}^n]_{i=1}^m=[\alpha_i\beta_j,u_{ij}]_{i=1,j=1}^{i=m,j=n}.$
(2.2) (\cite{TM}, Lemma 2.2) The convex combination operation is jointly
continuous in its $2n$ arguments.
(2.3) (\cite{TM}, Proposition 3.1) The convexification operator $K$ is linear, that
is $K([\lambda_j,u_j]_{j=1}^n)=[\lambda_j,Ku_j]_{j=1}^n$.
(2.4) (\cite{TM}, Corollary 3.3) If $u\in\mathfrak X$ and
$\lambda_1,\ldots,\lambda_n>0$ with $\sum_{j=1}^n\lambda_j=1$, then
$K([\lambda_j,u]_{j=1}^n)=Ku=[\lambda_j,Ku]_{j=1}^n$. Hence, $K$ is
an idempotent operator in $\mathfrak X$.
(2.5) (\cite{TM}, Proposition 3.5) For $\lambda_1,\lambda_2,\lambda_3>0$ with
$\lambda_1+\lambda_2+\lambda_3=1$ and $u,v\in\mathfrak X$,
$$[\lambda_1,u;\lambda_2,Kv;\lambda_3,Kv]=[\lambda_1,u;(\lambda_2+\lambda_3),Kv].$$
(2.6) (\cite{TM}, Proposition 3.6) The mapping $K$ is non-expansive with respect to
metric $d$, i.e., $d(Ku,Kv)\leqslant d(u,v)$.
\noindent\textbf{Remark 1.} Let $\{\lambda_k\} \subset (0;1)$ with $\lambda_k\to 0$, and let $u, v\in \mathfrak X$. By (CC.iv) and property (2.4), we have
\begin{align*}
d([\lambda_k, Ku ; 1-\lambda_k, Kv], Kv)=d([\lambda_k, Ku ; 1-\lambda_k, Kv], [\lambda_k, Kv ; 1-\lambda_k, Kv])\leqslant \lambda_kd(Ku, Kv)\to 0
\end{align*}
as $k \to \infty$. It follows that $[\lambda_k, Ku ; 1-\lambda_k, Kv] \to Kv$, and this remark allows us to extend the weights $\lambda_i$ from $(0;1)$ to $[0;1]$ for elements of $K(\mathfrak X)$; that is, we can define $[\lambda_i, x_i]_{i\in I}=[\lambda_i, x_i]_{i\in J}$, where $x_i\in K(\mathfrak X)$, $\sum_{i\in I} \lambda_i=\sum_{i\in J} \lambda_i =1$, $J=\{i\in I : \lambda_i>0\}$.
\begin{prop}
If $(\mathfrak X, d)$ is a separable and complete CC space, then so is $(K(\mathfrak X), d)$.
\end{prop}
\begin{proof}
The separability of $K(\mathfrak X)$ is obvious. It follows from Proposition 3.7 in \cite{TM} that $K(\mathfrak X)$ is a closed subset of the complete metric space $\mathfrak X$; hence $K(\mathfrak X)$ is complete.
\end{proof}
\section{Embedding theorem}
First, we need to recall the embedding for convex structures given by \'{S}wirszck \cite{Sw}. In his work, \'{S}wirszck introduced the notion of a semiconvex set as follows: A \emph{semiconvex set} is a set $\mathbb S$ together with a family of binary operations $\{P_\lambda : \mathbb S \times\mathbb S \to \mathbb S, \lambda \in (0;1)\}$ satisfying the following axioms: For $x, y, z \in \mathbb S$ and $\lambda, \mu \in (0;1)$, (S.i) (Reflexivity) $P_\lambda(x, x)=x$; (S.ii) (Symmetry) $P_\lambda(x, y)=P_{(1-\lambda)}(y, x)$; (S.iii) (Associativity) $P_r(P_\lambda(x, y), z)=P_{r\lambda}(x, P_\mu(y,z))$ for $r=\mu/(1-\lambda+\lambda \mu)$. Sometimes, for completeness, we also include the binary identity functions $P_1$ and $P_0$ defined as $P_1(x, y)=x$ and $P_0(x, y)=y$. Then (S.i) and (S.ii) hold for $\lambda \in [0;1]$, and (S.iii) holds with $\lambda(1-\mu)\neq 1$. In \cite{Sw}, the author also showed that a semiconvex set $\mathbb S$ may be embedded as a convex subset of a vector space if and only if it satisfies the cancellation law, that is, $P_r(x, y)=P_r(x, z)$ for any $x, y, z \in \mathbb S$, $r\in (0;1)$ implies that $y=z$. Therefore, if the cancellation law in $\mathbb S$ holds, then there exist a vector space $(\mathbb V, +, .)$ and a one-to-one correspondence $\rho: \mathbb S \to \rho(\mathbb S)=\mathbb U\subset \mathbb V$ such that $\rho(P_\lambda(x, y))=\lambda\rho(x) + (1-\lambda) \rho(y)$ for all $x, y \in \mathbb S$, $\lambda \in [0;1]$. For more details, the reader can refer to \cite{Fl, Sw}.
\begin{prop}
If $(\mathfrak X, d, [.,.])$ is a CC space, then $K(\mathfrak X)$ is a semiconvex set with $P_\lambda(x, y)=[\lambda,x\,;1-\lambda,y]$, $x, y\in K(\mathfrak X)$, $\lambda \in [0;1]$.
\end{prop}
\begin{proof}
It is easy to see that the axioms (S.i), (S.ii) and (S.iii) are implied by property (2.4), (CC.i) and (CC.ii) respectively.
\end{proof}
The following proposition establishes a metric cancellation law in $K(\mathfrak X)$ and it plays the key role in obtaining the embedding theorem.
\begin{prop} \emph{(Metric cancellation law)} Let
$\mathfrak X$ be a CC space and let $x, y, z \in K(\mathfrak X)$, $\lambda\in [0; 1]$. Then,
\begin{align*}
d([\lambda, x ; 1-\lambda , y], [\lambda, x ; 1-\lambda , z])=(1-\lambda)d(y, z).
\end{align*}
In particular, the algebraic cancellation law holds, i.e., if $[\lambda, x\,;1-\lambda , y] = [\lambda, x \,; 1-\lambda , z]$ for some $\lambda \in [0; 1)$, then $y=z$.
\end{prop}
\begin{proof} If $\lambda=0$ or $\lambda =1$, then the conclusion is trivial. We now consider $\lambda \in (0;1)$.
\textit{Step 1.} - The first auxiliary result: If $\{\lambda_k\} \subset (0;1)$ and $\lambda_k\to 0$, then $[\lambda_k, u ; 1-\lambda_k, Kv]\to Kv$ as $k \to \infty$ for $u, v\in \mathfrak X$. This is easy to see because
\begin{align*}
d([\lambda_k, u ; 1-\lambda_k, Kv], Kv)=d([\lambda_k, u ; 1-\lambda_k, Kv], [\lambda_k, Kv ; 1-\lambda_k, Kv])\leqslant \lambda_kd(u, Kv)\to 0
\end{align*}
as $k \to \infty$.\\
- The second auxiliary result: If $u, v \in K(\mathfrak X)$ then $d([\lambda, u ; 1-\lambda, v], u)=(1-\lambda)d(u, v)$ and $d([\lambda, u ; 1-\lambda, v], v)=\lambda d(u, v)$. Indeed, by (CC.iv) and (2.5)
\begin{align*}
&d([\lambda, u ; 1-\lambda, v], u)=d([\lambda, u ; 1-\lambda, v], [\lambda, u ; 1-\lambda, u])\leqslant (1-\lambda)d(u, v)\\
& d([\lambda, u ; 1-\lambda, v], v)=d([\lambda, u ; 1-\lambda, v], [\lambda, v ; 1-\lambda, v])\leqslant\lambda d(u, v)
\end{align*}
and by the triangle inequality,
$$d(u, v)\leqslant d([\lambda, u ; 1-\lambda, v], u) + d([\lambda, u ; 1-\lambda, v], v)\leqslant (1-\lambda)d(u, v) + \lambda d(u, v) =d(u, v).$$
Thus, $d([\lambda, u ; 1-\lambda, v], u)=(1-\lambda)d(u, v)$ and $d([\lambda, u ; 1-\lambda, v], v)=\lambda d(u, v)$.
\textit{Step 2.} We denote by $m(x, y)=[1/2, x ; 1/2, y]$ the midpoint of $x, y$ and it is easy to see that $m(x, y)$ also belongs to $K(\mathfrak X)$. By (CC.iv) we have
\begin{align*}
d(m(x, y), m(x, z))=d([1/2, x ; 1/2, y], [1/2, x ; 1/2, z])\leqslant 2^{-1}d(y, z).
\end{align*}
A set of four ordered points $(x, y, z, t)$ is called a \textit{parallelogram} (according to this order) if $m(x,z)=m(y,t)$. In this step, we will prove that if $(x, y, z, t)$ is a parallelogram, then $d(x, y)=d(t, z)$. Without loss of generality, assume that $d(x, y)\geqslant d(t, z)$; it is then sufficient to prove that $d(x, y)\leqslant d(t, z)$. Putting $m(x, z)=m(y, t)=m_1$, we have
\begin{align*}
&d(m_1, m(y, z))=d(m(y, t), m(y, z))\leqslant 2^{-1}d(t, z),\\
&d(m_1, m(y, z))=d(m(z, x), m(z, y))\leqslant 2^{-1}d(x, y) \tag{3.1}.
\end{align*}
Moreover, \begin{align*}
m(m(x, t), m(y, z))&=[1/2, [1/2, x ; 1/2, t]; 1/2, [1/2, y; 1/2, z]]=[1/4,x ; 1/4, y; 1/4, z ; 1/4, t]\\
&=[1/2, [1/2, x ; 1/2, z] ; 1/2, [1/2, y ; 1/2, t]]=[1/2, m_1 ; 1/2, m_1]=m_1,
\end{align*}
it means that $m_1$ is also the midpoint of $m(x, t)$ and $m(y, z)$. Thus $d(m_1, m(y, z))=2^{-1}d(m(x, t), m(y, z))$ by Step 1. Combining with (3.1) we obtain
\begin{align*}
d(m(x, t), m(y, z))\leqslant d(t, z)\;\mbox{ and }\;d(m(x, t), m(y, z))\leqslant d(x, y).\tag{3.2}
\end{align*}
On the other hand,
\begin{align*}
m(x, m(y, z))&=[1/2, x; 1/2, [1/2, y ; 1/2, z]]=[1/2, x ; 1/4, y ; 1/4, z]=[1/4, x ; 1/4, y ; 1/2, m_1]\\
&=[1/4, x ; 1/4, y ; 1/2,[1/2, y ; 1/2, t]]=[1/4, x ; 1/2, y ; 1/4, t]=m(y, m(x, t))
\end{align*}
and it implies that $(x, y, m(y, z), m(x, t))$ is a parallelogram. Applying (3.2), we obtain
$$d\big(m^{(2)}(x, t), m^{(2)}(y, z)\big)\leqslant d(m(x, t), m(y, z))\leqslant d(t,z)\;\mbox{ and }\;d\big(m^{(2)}(x, t), m^{(2)}(y, z)\big)\leqslant d(x, y),$$
where $m^{(2)}(x, t)=m(x, m(x, t))=[3/4, x ; 1/4, t]$, $m^{(2)}(y, z)=m(y, m(y, z))=[3/4, y ; 1/4, z]$. Continuing this process, we derive
\begin{align*}
d\big(m^{(k)}(x, t), m^{(k)}(y, z)\big)\leqslant d(t,z)\;\mbox{ and }\;d\big(m^{(k)}(x, t), m^{(k)}(y, z)\big)\leqslant d(x, y)\;\mbox{ for all } k\in \mathbb N, k\geqslant 3\tag{3.3}
\end{align*}
with $m^{(k)}(x, t)=m\big(x, m^{(k-1)}(x, t)\big)=\big[(2^k-1)/2^k, x ; 1/2^k, t\big]$, $m^{(k)}(y, z)=\big[(2^k-1)/2^k, y; 1/2^k, z\big]$. Taking $k\to \infty$ in (3.3), applying Step 1 and the continuity of metric $d$, we obtain $d(x, y)\leqslant d(t, z)$. This completes Step 2.
\textit{Step 3.} The proposition will be completed in this step. Putting $u=[\lambda, x ; 1-\lambda , y]$, $v=[\lambda, x ; 1-\lambda , z]$ and $w=[\lambda, y ; 1-\lambda, z]$, we get
\begin{align*}
&m(u, w)=[1/2, [\lambda, x ; 1-\lambda , y] ; 1/2, [\lambda, y ; 1-\lambda, z]]=[\lambda/2, x ; 1/2, y ; (1-\lambda)/2, z]\\
&m(v, y)=[1/2, [\lambda, x ; 1-\lambda , z] ; 1/2, y]=[\lambda/2, x ; 1/2, y ; (1-\lambda)/2, z].
\end{align*}
Thus, $(u, v, w, y)$ is a parallelogram and it follows from Step 2 that $d(u, v)=d(y, w)$. On the other hand, $d(y, w)=d(y, [\lambda, y ; 1-\lambda, z])=(1-\lambda)d(y, z)$ by Step 1, so $d(u, v)=(1-\lambda)d(y, z)$. The proposition is proved.
\end{proof}
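As a sanity check, Proposition 3.2 can be verified numerically in a concrete convexifiable CC space. The sketch below rests on our assumption that the compact intervals of $\mathbb R$ with Minkowski combinations and the Hausdorff metric (in which every interval is a convex point) model the setting of the proposition:
\begin{verbatim}
# Numerical check of Proposition 3.2 in the CC space of compact intervals
# of R (all convex points) with Minkowski combinations and the Hausdorff
# metric; an interval [a, b] is stored as a pair (a, b).
import random

def comb(lmb, x, y):     # [lmb, x ; 1-lmb, y]
    return (lmb*x[0] + (1-lmb)*y[0], lmb*x[1] + (1-lmb)*y[1])

def dH(x, y):            # Hausdorff distance between intervals
    return max(abs(x[0]-y[0]), abs(x[1]-y[1]))

random.seed(0)
for _ in range(5):
    p = sorted(random.uniform(-5, 5) for _ in range(6))
    x, y, z = (p[0], p[3]), (p[1], p[4]), (p[2], p[5])
    lmb = random.random()
    lhs = dH(comb(lmb, x, y), comb(lmb, x, z))
    print(abs(lhs - (1 - lmb)*dH(y, z)) < 1e-12)   # True
\end{verbatim}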
\begin{theo} Let $(\mathfrak X, d, [.,.])$ be a complete and convexifiable CC space. Then, there exist a Banach space $(\mathbb E, \|.\|)$ and a map $j: \mathfrak X \to \mathbb E$, where $j(\mathfrak X)=\mathbb F$ is a subset of $\mathbb E$ such that
(i) $\mathbb F$ is closed and convex;
(ii) $j([\lambda, x\,;1-\lambda, y])=\lambda j(x)+(1-\lambda)j(y)$ for every $x, y \in \mathfrak X$, $\lambda\in [0;1];$
(iii) $d(x, y)=\|j(x)-j(y)\|$ for all $x, y \in \mathfrak X$.
Furthermore, if $\mathfrak X$ is separable then $\mathbb E$ is also separable.
\begin{proof}
Since $\mathfrak X$ is convexifiable, it follows from Proposition 3.1, Proposition 3.2 and the result of \'{S}wirszck \cite{Sw} mentioned above that there exist a vector space $(\mathbb V, +, .)$ and a one-to-one correspondence $\rho: \mathfrak X \to \rho(\mathfrak X)=\mathbb U\subset \mathbb V$ such that $\mathbb U$ is a convex subset of $\mathbb V$ and $\rho([\lambda, x ; 1-\lambda, y])=\lambda \rho (x) + (1-\lambda) \rho(y)$ for all $x, y \in \mathfrak X$, $\lambda \in [0;1]$. Thanks to translation, we can assume without loss of generality that $0:=0_{\mathbb V}\in \mathbb U$ and denote $\rho^{-1}(0)=\theta \in \mathfrak X$. This ensures that if $u$ belongs to $\mathbb U$ then $\lambda u$ also belongs to $\mathbb U$ whenever $\lambda \in [0;1]$; moreover $\lambda u =\rho ([\lambda, x ; 1-\lambda, \theta])$, where $\rho(x)=u$. The metric structure on $\mathbb U$ is induced naturally from the corresponding one on $\mathfrak X$, and we also use the symbol $d$ to denote the metric on $\mathbb U$. Namely, if $u=\rho(x)$, $v=\rho(y)\in \mathbb U$, then $d(u, v)=d(\rho(x), \rho(y))=d(x, y)$. Thus, if $(\mathfrak X, d)$ is complete (resp. separable) then $(\mathbb U, d)$ is also complete (resp. separable). From Proposition 3.2, we have
\begin{align*}
d(\lambda u, \lambda v)=d([\lambda, x ; 1-\lambda, \theta], [\lambda, y ; 1-\lambda, \theta])=\lambda d(x, y)=\lambda d(u, v),\,\mbox{ for } \lambda \in [0;1] \mbox{ and } u, v \in \mathbb U.\tag{3.4}
\end{align*}
Let us denote by $\mathbb K = \{\lambda u : u\in \mathbb U, \lambda \geqslant 0\}$ the subset of $\mathbb V$ containing $\mathbb U$. For $x, y \in \mathbb K$, they will have form $x=\alpha u$, $y=\beta v$ with $\alpha, \beta \geqslant 0$, $u, v\in \mathbb U$, then $x+y=\alpha u + \beta v=(\alpha+\beta)\big(\frac{\alpha}{\alpha+\beta}u+\frac{\beta}{\alpha+\beta}v\big)$. It implies from the convexity of $\mathbb U$ that $\frac{\alpha}{\alpha+\beta}u+\frac{\beta}{\alpha+\beta}v\in \mathbb U$. Hence, $x+y\in \mathbb K$ and $\mathbb K$ is a convex cone of $\mathbb V$. We define the mapping $d_* : \mathbb K \times \mathbb K \to [0, \infty)$ as follows:
\begin{align*}
d_*(0,0)&=d(0,0)=0;\\
d_*(x, y)&=
d_*(\alpha u, \beta v)=(\alpha+\beta).d\Big(\frac{\alpha}{\alpha+\beta}u, \frac{\beta}{\alpha+\beta}v\Big),\mbox{ for } x=\alpha u, y=\beta v, \alpha, \beta \geqslant 0, \alpha+\beta >0, u, v \in \mathbb U.
\end{align*}
The mapping $d_*$ is well-defined, i.e., independent of the choice of the representations $x=\alpha u$ and $y=\beta v$. To see this, let $x=\alpha' u', y=\beta' v'$, $\alpha', \beta' \geqslant 0$, $u', v' \in \mathbb U$; then $\alpha u=\alpha' u'$, $\beta'v'=\beta v$, and using (3.4) (note that in the degenerate cases $\alpha+\beta=0$ or $\alpha'+\beta'=0$, the proof is trivial),
\begin{align*}
d_*(\alpha u , \beta v)&=(\alpha+\beta).d\Big(\frac{\alpha}{\alpha+\beta}u, \frac{\beta}{\alpha+\beta}v\Big)=(\alpha+\beta+\alpha'+\beta').d\Big(\frac{\alpha}{\alpha+\beta+\alpha'+\beta'}u, \frac{\beta}{\alpha+\beta+\alpha'+\beta'}v\Big)\\
&=(\alpha+\beta+\alpha'+\beta').d\Big(\frac{\alpha'}{\alpha+\beta+\alpha'+\beta'}u', \frac{\beta'}{\alpha+\beta+\alpha'+\beta'}v'\Big)\\
&=(\alpha'+\beta').d\Big(\frac{\alpha'}{\alpha'+\beta'}u', \frac{\beta'}{\alpha'+\beta'}v'\Big)=d_*(\alpha' u' , \beta' v').
\end{align*}
It is clear that if $(x, y)\in \mathbb U \times \mathbb U$ then $d_*(x, y)=d(x, y)$, and (3.4) can be extended for $(x, y, \lambda)$ from $\mathbb U\times \mathbb U\times [0;1]$ to $\mathbb K \times \mathbb K \times [0,\infty)$ by
\begin{align*}
d_*(\lambda x, \lambda y)=d_*(\lambda \alpha u, \lambda \beta v)=\lambda(\alpha+\beta).d\Big(\frac{\alpha}{\alpha+\beta}u, \frac{\beta}{\alpha+\beta}v\Big)=\lambda d_*(\alpha u, \beta v)=\lambda d_*(x, y), \tag{3.5}
\end{align*}
for $\lambda \geqslant 0$ and $x, y \in \mathbb K$. We now show that $d_*$ is a metric on $\mathbb K$. Indeed, the symmetry and non-negativity of $d_*$ are clear. If $d_*(x, y)=0$ then $d\big(\frac{\alpha}{\alpha+\beta}u, \frac{\beta}{\alpha+\beta}v\big)=0$ and we obtain $\frac{\alpha}{\alpha+\beta}u = \frac{\beta}{\alpha+\beta}v$. It follows that $\alpha u = \beta v$ and $x=y$. Now for $x=\alpha u, y=\beta v, z=\gamma w \in \mathbb K$, $u, v, w \in \mathbb U, \alpha, \beta, \gamma \geqslant 0$, applying (3.5)
\begin{align*}
d_*(x, y)&=(\alpha+\beta+\gamma).d_*\Big(\frac{\alpha}{\alpha+\beta+\gamma} u , \frac{\beta}{\alpha+\beta+\gamma} v\Big)=(\alpha+\beta+\gamma).d\Big(\frac{\alpha}{\alpha+\beta+\gamma} u , \frac{\beta}{\alpha+\beta+\gamma} v\Big)\\
&\leqslant (\alpha+\beta+\gamma).d\Big(\frac{\alpha}{\alpha+\beta+\gamma} u , \frac{\gamma}{\alpha+\beta+\gamma} w\Big) + (\alpha+\beta+\gamma).d\Big(\frac{\gamma}{\alpha+\beta+\gamma} w , \frac{\beta}{\alpha+\beta+\gamma} v\Big)\\
&=d_*(\alpha u, \gamma w)+d_*(\gamma w, \beta v)=d_*(x,z)+d_*(z, y),
\end{align*}
we obtain the triangular inequality. On the other hand,
\begin{align*}
d_*(x+z, y+z)&=d_*(\alpha u +\gamma w, \beta v + \gamma w)\\
&=2(\alpha+\beta+\gamma).d\Big(\frac{\alpha}{2(\alpha+\beta+\gamma)} u +\frac{\gamma}{2(\alpha+\beta+\gamma)} w\,, \frac{\beta}{2(\alpha+\beta+\gamma)} v +\frac{\gamma}{2(\alpha+\beta+\gamma)} w\Big)\\
&=2(\alpha+\beta+\gamma).d\Big(\frac{\alpha}{2(\alpha+\beta+\gamma)} u\,, \frac{\beta}{2(\alpha+\beta+\gamma)} v\Big)=d_*(\alpha u, \beta v)=d_*(x, y),
\end{align*}
it means that the metric $d_*$ satisfies the cancellation law in $\mathbb K$. Recall that in the degenerate cases, the proofs of the triangle inequality and the cancellation law are easy and we omit them. Applying R{\aa}dstr\"{o}m's embedding theorem (\cite{Ra}, Theorem 1), there exist a real normed linear space $(\mathbb B, \|.\|)$ and a map $\widetilde{j}: \mathbb K \to \widetilde{j}(\mathbb K)=\mathbb W\subset \mathbb B$ such that: (a) $\widetilde{j}(\lambda x + \mu y)=\lambda \widetilde{j}(x)+\mu \widetilde{j}(y)$ for $x, y \in \mathbb K$ and $\lambda, \mu \geqslant 0$; (b) $d_*(x, y)=\|\widetilde{j}(x)-\widetilde{j}(y)\|$ for all $x, y\in \mathbb K$; (c) $\mathbb W$ is a convex cone of $\mathbb B$. Moreover, we can choose the normed linear space to be complete, i.e., $\mathbb B$ is a Banach space (if necessary, we denote by $\overline{\mathbb B}$ the completion of $\mathbb B$ and embed $\mathbb K$ into $\overline{\mathbb B}$). It is not hard to check that $\widetilde{j}(\mathbb U)$ is a convex subset contained in $\mathbb B$, complete under the metric induced by the
norm of $\mathbb B$. Putting $j=\widetilde{j}_\circ \rho: \mathfrak X \to \mathbb B$ and $\mathbb F = j(\mathfrak X)$, we find that $\mathbb F$ is a closed, convex subset of $\mathbb B$ with $j(\theta)=0$. Define $\mathbb E$ to be the closed linear subspace of $\mathbb B$ generated by $\mathbb F$. It is easy to check that the subspace $\mathbb E$ is a Banach space and that the conclusions (i), (ii), (iii) of the theorem hold. Finally, if $\mathfrak X$ is separable then so is $\mathbb F$, which implies the separability of $\mathbb E$. This completes the proof.
\end{proof}
\end{theo}
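A classical concrete instance of such an embedding (and of R{\aa}dstr\"{o}m's theorem used in the proof) maps a convex compact set $K\subset \mathbb R^2$ to its support function $h_K(u)=\sup_{x\in K}\langle x, u\rangle$ on the unit circle: Minkowski combinations become pointwise affine combinations and the Hausdorff metric becomes the sup norm. The sketch below checks this on a finite angular grid (the particular sets and grid size are our assumptions):
\begin{verbatim}
# Support-function picture of the embedding: a convex compact K in R^2
# maps to h_K on S^1; combinations become pointwise affine combinations
# and the Hausdorff metric becomes the sup norm (up to grid resolution).
import numpy as np

th = np.linspace(0.0, 2.0*np.pi, 720, endpoint=False)
U = np.stack([np.cos(th), np.sin(th)], axis=1)    # directions on S^1

def support(points):          # h of the convex hull of the points
    return (np.asarray(points, float) @ U.T).max(axis=0)

A = [(0, 0), (2, 0), (0, 1)]                   # triangle
B = [(-1, -1), (1, -1), (1, 1), (-1, 1)]       # square
lm = 0.3
mix = [(lm*a[0] + (1-lm)*b[0], lm*a[1] + (1-lm)*b[1])
       for a in A for b in B]                  # [lm, A ; 1-lm, B]

# j([lm, A ; 1-lm, B]) = lm j(A) + (1-lm) j(B) (numerically ~ 0):
print(np.max(np.abs(support(mix) - (lm*support(A) + (1-lm)*support(B)))))
# D_H(A, B) = sup |h_A - h_B|:
print(np.max(np.abs(support(A) - support(B))))
\end{verbatim}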
In 2011, Brown \cite{Br} introduced the notion of convex-like structure in metric space and it was suitably restated in \cite{CF} as follows. Let $(\mathfrak X, d)$ be a complete metric space. Take $\mathfrak X^{(n)}=\mathfrak X\times\cdots\times\mathfrak X$ to be the $n$-fold Cartesian product and Prob$_n$ the set of probability measures on the $n$-element set $\{1, 2, \ldots, n\}$ endowed with the $\ell_1$-metric $\|\mu-\nu\|=\sum_{i=1}^n|\mu(i)-\nu(i)|$. We say that $(\mathfrak X, d)$ has a \textit{convex-like structure} if for every $n\in \mathbb N$ and $\mu\in$ Prob$_n$ there is given a continuous map $\gamma_\mu: \mathfrak X^{(n)}\to \mathfrak X$ such that
($\gamma.1$)\; $\gamma_\mu(x_1,\ldots,x_n)=\gamma_{\mu \circ \sigma}(x_{\sigma(1)},\ldots,x_{\sigma(n)})$ for every permutation $\sigma$ of $\{1,\ldots,n\}$;
($\gamma.2$)\; if $x_1=x_2$, then $\gamma_\mu(x_1,x_2,\ldots,x_n)=\gamma_\nu(x_1, x_3,\ldots, x_n)$, where $\nu\in$ Prob$_{n-1}$ is given by $\nu(1)=\mu(1)+\mu(2)$ and $\nu(j)=\mu(j+1)$, $2\leqslant j \leqslant n-1$;
($\gamma.3$)\; if $\mu(i)=1$, then $\gamma_\mu(x_1,\ldots,x_n)=x_i$;
($\gamma.4$)\; $d(\gamma_\mu(x_1,\ldots,x_n), \gamma_\mu(y_1,\ldots,y_n))\leqslant \sum_{i=1}^n\mu(i)d(x_i, y_i)$ for all $y_1, \ldots, y_n \in \mathfrak X$;
($\gamma.5$)\; for all $\mu_1\in$ Prob$_n$, $\mu_2\in$ Prob$_m$, $\nu\in$ Prob$_2$, then $\gamma_\nu(\gamma_{\mu_1}(x_1,\ldots, x_n), \gamma_{\mu_2}(y_1,\ldots,y_m))=\gamma_\eta(x_1,\ldots,x_n,y_1,\ldots,y_m)$, where $\eta \in$ Prob$_{n+m}$ is given by $\eta(i)=\nu(1)\mu_1(i), 1\leqslant i\leqslant n$ and $\eta(j+n)=\nu(2)\mu_2(j), 1\leqslant j\leqslant m$.
\begin{prop}
Let $(\mathfrak X, d)$ be a complete metric space. Then, $\mathfrak X$ is a convexifiable CC space if and only if $\mathfrak X$ has a convex-like structure. In other words, convexifiable CC spaces and complete metric spaces with a convex-like structure are the same objects.
\begin{proof}
On $\mathfrak X$, a convex-like structure and a convex combination operation determine each other by the identity
$$\gamma_\mu(x_1,\ldots,x_n)=[\mu(1), x_1 ; \ldots; \mu(n), x_n]\;\mbox{ for } \mu \in \mbox{Prob}_n,$$
and then the axioms ($\gamma.1$) and ($\gamma.4$) are equivalent to the axioms (CC.i) and (CC.iv), respectively.
- Suppose that $\mathfrak X$ is a convexifiable CC space. Then the axioms ($\gamma.2$), ($\gamma.3$), ($\gamma.5$) follow from (2.5), Remark 1, (2.1) respectively. Hence $\mathfrak X$ has convex-like structure.
- Suppose that $\mathfrak X$ has a convex-like structure. Then, the axiom (CC.ii) follows from ($\gamma.5$); axiom (CC.v) is satisfied thanks to ($\gamma.2$) and in this case, the operation $[.,.]$ is unbiased. In order that $\mathfrak X$ becomes a convexifiable CC space, it remains to check the axiom (CC.iii). Namely, for $u,v\in\mathfrak X$ and
$\lambda_k \rightarrow\lambda\in (0;1)$, we need to prove that $\gamma_{\lambda_k, 1-\lambda_k}(u,v) \to \gamma_{\lambda, 1-\lambda}(u,v)$ as $k \to \infty$, where $\gamma_{\lambda, 1-\lambda}$ is a convenient notation of $\gamma_\mu$ for $\mu\in$ Prob$_2$, $\mu(1)=\lambda, \mu(2)=1-\lambda$. For $0<\alpha\leqslant \beta<1$,
\begin{align*}
d(\gamma_{\alpha, 1-\alpha}(u,v), \gamma_{\beta, 1-\beta}(u,v))&=d(\gamma_\eta(u,v,v), \gamma_\eta(u,u,v))\;\;(\mbox{by }(\gamma.2) \mbox{ with } \eta(1)=\alpha, \eta(2)=\beta-\alpha, \eta(3)=1-\beta)\\
&\leqslant (\beta-\alpha)d(u,v)\;\;(\mbox{by } (\gamma.4)).
\end{align*}
Changing the role of $\alpha, \beta$, we obtain $d(\gamma_{\alpha, 1-\alpha}(u,v), \gamma_{\beta, 1-\beta}(u,v))\leqslant |\beta-\alpha|d(u,v)$ for $\alpha, \beta \in (0;1)$. Applying this inequality, we have (CC.iii).
\end{proof}
\end{prop}
\noindent \textbf{Remark 2.} After all proofs in this paper were completed, we learned of the notion of convex-like structure from Tobias Fritz and became aware that a result similar to Theorem 3.3 had been established earlier by Capraro and Fritz in \cite{CF}. In their work, they proved that a convex-like structure is affinely and isometrically isomorphic to a closed convex subset of a Banach space (\cite{CF}, Theorem 9). Combining this result with Proposition 3.4 above, a convexifiable CC space can also be embedded into a Banach space. However, the scheme for embedding in our proof is slightly different from theirs; our final step for the embedding is to apply R{\aa}dstr\"{o}m's result. To be more specific, in \cite{CF}, Theorem 9: convex-like structure on $\mathfrak X$ $\to$ establish the algebraic cancellation law $\to$ embed $\mathfrak X$ into a vector space (by Stone's embedding) $\to$ prove the translation-invariance of the metric $\to$ extend the metric to the affine hull and to the whole vector space, which becomes a Banach space; while in Theorem 3.3: convexifiable CC space $\mathfrak X$ $\to$ establish the metric cancellation law and, as its corollary, obtain the algebraic cancellation law $\to$ embed $\mathfrak X$ into a vector space (by \'{S}wirszck's embedding) $\to$ construct a convex cone containing $\mathfrak X$ and a metric on this cone $\to$ embed into a Banach space (by R{\aa}dstr\"{o}m's embedding). Therefore, we still present Theorem 3.3 as an independent rediscovery of Theorem 9 in \cite{CF}.
\section{Applications}
Throughout Section 4 and Section 5, $(\Omega,\mathcal F,P)$ is a
complete probability space without atoms; for $A\in\mathcal F$, the notation $I(A)$ (or $I_A$) stands for the indicator function of $A$.
Suppose that $(\mathfrak X,d)$ is a metric space and $\mathcal G$ is a sub-$\sigma$-algebra of $\mathcal F$. A mapping $X:\Omega\rightarrow\mathfrak X$ is said to be $\mathcal G$\textit{-measurable}
if $X^{-1}(B)\in\mathcal G$ for all
$B\in\mathcal{B}(\mathfrak X)$, where $\mathcal{B}(\mathfrak X)$
is the Borel $\sigma$-algebra on $\mathfrak X$. An $\mathcal F$-measurable mapping will be called \textit{random element} and when a
random element $X$ takes finite values in $\mathfrak X$,
it is called a \emph{simple random element}. A random element $X:\Omega\rightarrow\mathfrak X$
is said to be $p$-\emph{order integrable} ($p>0$) if $d^p(u, X)$ is an integrable
real-valued random variable for some $u\in \mathfrak X$; when $p=1$, $X$ is simply said to be \textit{integrable}. Note that this definition does not
depend on the choice of the element $u$. The space (of equivalence classes) of all $\mathcal G$-measurable, $p$-order integrable random elements in $\mathfrak X$ will be denoted by
$L_\mathfrak X^p(\mathcal G)$. We also use $L_\mathfrak X^p$ to denote $L_\mathfrak X^p(\mathcal F)$ and the metric on
$L_\mathfrak X^p(\mathcal G)$ is defined by $\Delta_p(X,Y)=(Ed^p(X,Y))^{1/p}$, $p\geqslant 1$.
The \emph{distribution} $P_X$ of an $\mathfrak X$-valued random element $X$ is defined by $P_X(B)=P(X^{-1}(B)),\forall B\in\mathcal{B}(\mathfrak X),$ and two $\mathfrak X$-valued random elements $X,Y$ are said to be \emph{identically distributed} if $P_X=P_Y$. The collection of $\mathfrak X$-valued random elements $\{X_i, i\in I\}$ is said to be \emph{independent} (resp. \textit{pairwise independent}) if the collection of $\sigma$-algebras $\{\sigma(X_i), i\in I\}$ is independent (resp. pairwise independent), where $\sigma (X)=\{X^{-1}(B), B\in\mathcal{B}(\mathfrak X)\}$.
Next, we recall some notions introduced by Ter\'an and Molchanov \cite{TM}. Assume that $(\mathfrak X, d)$ is a separable and complete CC space. For a simple random element $X=[I_{\Omega_i}, x_i]_{i=1}^n$, the \emph{expectation} of
$X$ is defined by $EX=[P(\Omega_i),Kx_i]_{i=1}^n.$
It is easy to prove that if $X, Y$ are simple random elements, then $d(EX, EY)\leqslant Ed(X, Y).$
We fix $u_0\in K(\mathfrak X)$ (by (CC.v),
$K(\mathfrak X)\neq\emptyset)$ and $u_0$ will be considered as the
special element of $\mathfrak X$. Since the metric space $\mathfrak X$
is separable, there exists a countable dense subset $\{u_j, j\geqslant1\}$
of $\mathfrak X$. For each $n\geqslant 1$, we define the mapping
$\psi_n:\mathfrak X\rightarrow\mathfrak X$ such that
$\psi_n(x)=u_{m_n(x)}$, where
$m_n(x)$ is the smallest $i\in\{0,\ldots,n\}$ such that $d(u_i,x)=\min_{0\leqslant j\leqslant
n}d(u_j,x)$. Then, $d(u_0, \psi_n(x)) \leqslant 2d(u_0, x)$ for all $n$ and all $x\in \mathfrak X$.
Since $\mathfrak X$ is separable and complete, an integrable
$\mathfrak X$-valued random element can be approximated by a
sequence of simple random elements. Namely, for $X\in L_\mathfrak X^1$ we have $X=\lim_{n\to \infty}\psi_n(X)$, and the \emph{expectation} of $X$
is defined by $EX=\lim_{n\rightarrow\infty}E\psi_n(X).$ By the approximation method, we also prove that if $X,Y\in L_\mathfrak X^1$, then $d(EX, EY)\leqslant
Ed(X,Y)$.
A set $A \subset \mathfrak X$ is called \emph{convex} if $[\lambda_i, u_i]_{i=1}^n \in A$ for all $u_i \in A$ and positive numbers $\lambda_i$ that sum to 1. For $A\subset \mathfrak X$, we denote by $coA$ the \emph{convex hull} of $A$, which is the smallest convex subset containing $A$, and by $\overline{co}A$ the closure of $coA$ in $\mathfrak X$. Let $k(\mathfrak X)$ (resp. $ck(\mathfrak X)$) be the set of nonempty compact (resp. convex compact) subsets of $\mathfrak X$ and denote by $D_\mathfrak X$ the Hausdorff metric on $k(\mathfrak X)$, that is, $D_\mathfrak X (A, B)=\max\{\sup_{a\in A}\inf_{b\in B} d(a, b), \sup_{b\in B}\inf_{a\in A}d(b, a)\}$ for $A, B \in k(\mathfrak X)$. It follows from Theorem 6.2 in \cite{TM} that if $\mathfrak X$ is a separable complete CC space, then the space $k(\mathfrak X)$ with the convex combination
$$[\lambda_i, A_i]_{i=1}^n=\{[\lambda_i, u_i]_{i=1}^n : u_i \in A_i, \;\text{for all}\; i\}$$
and Hausdorff metric $D_\mathfrak X$ is a separable complete CC space, where the convexification operator $K_{k(\mathfrak X)}$ is given by
$$K_{k(\mathfrak X)}A=\overline{co}K_{\mathfrak X}(A)=\overline{co}\{K_{\mathfrak X}u : u\in A\}.$$
This is a nice feature of CC spaces. Based on this property, if a result holds for elements of a CC space, then it can be lifted to the space of nonempty compact subsets. In addition, $K_{k(\mathfrak X)}(k(\mathfrak X))=ck(K_\mathfrak X(\mathfrak X))$ by Proposition 5.1 in the next section. Further details can be found in \cite{TM}.
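As a small worked example (a toy computation under the Minkowski-based combination just described, not taken from \cite{TM}): in $k(\mathbb R)$, let $X$ be the simple random element taking the values $A_1=\{0,2\}$ and $A_2=[3,4]$, each with probability $1/2$. Then $K_{k(\mathbb R)}A_1=\overline{co}A_1=[0,2]$ and $K_{k(\mathbb R)}A_2=A_2$, so
$$EX=[1/2, [0,2]\,;\, 1/2, [3,4]]=\tfrac12\,[0,2]+\tfrac12\,[3,4]=[3/2,\, 3],$$
which is a convex set, in accordance with the fact that expectations always lie in the convexifiable domain.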
From now until the end of the paper, we always assume that $(\mathfrak X, d)$ is a separable and complete CC space. Proposition 2.1 implies that $(K(\mathfrak X), d)$ is also a separable, complete and convexifiable CC space. Therefore, it follows from Theorem 3.3 that $K(\mathfrak X)$ can be embedded isometrically as a closed, convex subset of a separable Banach space $\mathbb E$ via the mapping $j$. Moreover, if $X$ is an integrable $\mathfrak X$-valued random element, then $KX$ is an integrable $K(\mathfrak X)$-valued random element.
\subsection{On some properties of expectation}
\begin{theo}
Let $X$ be an integrable $\mathfrak X$-valued random element. Then, $j(E X)=j(E(KX))= E j(KX)$ where $j: K(\mathfrak X) \to \mathbb E$ is the mapping mentioned in Theorem 3.3 and $Ej(KX)$ is the Bochner integral of $j(KX)$. In particular, if $X$ is an integrable $K(\mathfrak X)$-valued random element, then $j(EX)=Ej(X)$.
\end{theo}
\begin{proof} First, observe that $j(KX)$ is a Borel-measurable random element in the separable Banach space $\mathbb E$ and $E\|j(KX)\|=Ed(\theta, KX)\leqslant Ed(\theta, X)<\infty$, where the element $\theta$ was mentioned in the proof of Theorem 3.3. This remark ensures the existence of the Bochner integral of $j(KX)$.
Next, Lemma 3.3 in \cite{Te} implies that $EX=E(KX)$; hence it is sufficient to prove that $j(E(KX))=Ej(KX)$. This will be done using the technique of approximation by simple random elements. If $X$ is simple, i.e., $X=[I_{\Omega_i}, x_i]_{i=1}^n$, then
\begin{align*}
j(E(KX))=j([P(\Omega_i), Kx_i]_{i=1}^n)=\sum_{i=1}^n P(\Omega_i) j(Kx_i)=Ej(KX).
\end{align*}
In the general case $X\in L_\mathfrak X^1$, there exists a sequence of simple random elements $\{X_n=\psi_n(X)\}_{n\geqslant 1}$ such that $Ed(X_n, X)\to 0$ and $EX_n\to EX$ as $n\to \infty$. Since the convexification operator $K$ is non-expansive with respect to
the metric $d$, we have $d(E(KX_n),E(KX))\leqslant Ed(KX_n, KX)\leqslant Ed(X_n, X)\to 0$. On the other hand, the continuity of the mappings $j$ and $K$ implies that $j(KX_n)\to j(KX)$; moreover $$\|j(KX_n)\|=d(KX_n, \theta)\leqslant d(X_n, \theta)\leqslant d(X_n, u_0)+d(u_0, \theta)\leqslant 2d(X, u_0)+d(u_0, \theta)\in L_\mathbb R^1.$$
Applying the Lebesgue dominated convergence theorem in $\mathbb R$ and combining with the case above, we obtain
\begin{align*}
j(E(KX))=j\big(\lim_{n\to \infty} E(KX_n)\big)=\lim_{n\to \infty} j(E(KX_n))=\lim_{n\to \infty} Ej(KX_n)=Ej(KX).
\end{align*}
The proof is completed.
\end{proof}
By Theorem 4.1, we immediately derive the following corollary.
\begin{coro}
1) For $X_i\in L^1_{\mathfrak X}$, we have $E([\lambda_1, X_1 ; \lambda_2, X_2])=[\lambda_1, EX_1 ; \lambda_2, EX_2]$.\\
2) Suppose that $X\in L^1_{\mathfrak X}$ and let $\xi$ be a real-valued random variable such that $0< \xi < 1 $ a.s. If $\xi$ and $X$ are independent, then $E([\xi, X; 1-\xi, u])=[E\xi, EX; 1-E\xi, Ku]$, $u\in \mathfrak X$.\\
3) Let $\xi$ be a real-valued random variable such that $0< \xi < 1 $ a.s. Then
$E([\xi, u; 1-\xi, v])=[E\xi, Ku; 1-E\xi, Kv]$
for all $u, v \in \mathfrak X$.
\end{coro}
\begin{proof} Applying Theorem 4.1 and property (2.3), we have
\begin{align*}
j(E([\lambda_1, X_1 ; \lambda_2, X_2]))&=Ej([\lambda_1, KX_1 ; \lambda_2, KX_2])=E(\lambda_1 j(KX_1)+\lambda_2j(KX_2))\\
&=\lambda_1 j(EX_1)+\lambda_2 j(EX_2)=j([\lambda_1, EX_1 ; \lambda_2, EX_2]).\\
j(E([\xi, X; 1-\xi, u]))&=Ej([\xi, KX ; 1-\xi, Ku])=E(\xi. j(KX))+(1-E\xi)j(Ku)\\
&=E\xi.Ej(KX)+(1-E\xi)j(Ku)=j([E\xi, EX; 1-E\xi, Ku]).\\
j(E([\xi, u; 1-\xi, v]))&=E(\xi. j(Ku)+(1-\xi)j(Kv))=j([E\xi, Ku; 1-E\xi, Kv]).
\end{align*}
The proof is completed by the injectivity of $j$. Note that the conclusions in this corollary can also be proved directly by using the technique of approximation by simple random elements.
\end{proof}
Consider a mapping $\varphi: \mathfrak X \to \mathbb R$. It will be called \emph{convex} if $\varphi([\lambda_i, x_i]_{i=1}^n)\leqslant\sum_{i=1}^n\lambda_i \varphi(x_i)$
for all $x_1,\ldots, x_n \in \mathfrak X$ and $\lambda_1,\ldots, \lambda_n \in (0;1)$ with $\sum_{i=1}^n\lambda_i=1$; it will be called \emph{midpoint convex} if $\varphi([1/2, x ; 1/2, y])\leqslant (\varphi(x)+\varphi(y))/2$ for every $x, y \in \mathfrak X$; it will be called \emph{lower semicontinuous} if $\varphi(x)\leqslant \liminf_n \varphi(x_n)$ whenever $x_n\to x$; and it will be called \emph{affine} if both $\varphi$ and $-\varphi$ are convex. If $\mathfrak X$ is convexifiable, then the notions of convexity and affinity extend to weights $\lambda_1,\ldots, \lambda_n \in [0;1]$. It is easy to see that if $f$ is affine, then so is $f+c$ for every $c\in \mathbb R$. Denote by $\mathfrak X'$ the set of all continuous affine mappings $f: \mathfrak X \to \mathbb R$.
\begin{lemm}
If $\mathfrak X$ is convexifiable and $\mathfrak X$ has more than one element, then $\mathfrak X'$ separates points of $\mathfrak X$. In other words, if $f(x)=f(y)$ for all $f\in \mathfrak X'$, then $x=y$.
\end{lemm}
\begin{proof}
Assume that there exist two elements $x, y\in \mathfrak X$ with $x\neq y$ such that $f(x)=f(y)$ for all $f\in \mathfrak X'$. Let $(\mathbb E, \|.\|)$ be the Banach space with dual $\mathbb E^*$ and let $j: \mathfrak X\to \mathbb E\supset \mathbb F=j(\mathfrak X)$ be the mapping from Theorem 3.3. Since $f$ is affine on $\mathfrak X$, $\widetilde{f}=f_\circ j^{-1}$ is also affine on $\mathbb F$, where $j^{-1}: \mathbb F \to \mathfrak X$ is the inverse mapping of $j$. We denote $\widetilde{\mathfrak X}'=\{\widetilde{f}=f_\circ j^{-1}: \mathbb F \to \mathbb R, f\in \mathfrak X'\}$ and $\mathbb F^*=\{g|_{\mathbb F}: \mathbb F \to \mathbb R, g|_{\mathbb F} \mbox{ is the restriction of } g\in \mathbb E^* \mbox{ on } \mathbb F\}$. It is easy to see that $\mathbb F^* \subset \widetilde{\mathfrak X}'$ and $\mathfrak X'\stackrel{\kappa}{=}\widetilde{\mathfrak X}'$ (the notation $A\stackrel{\kappa}{=}B$ means that there exists a one-to-one correspondence $\kappa: A\to B$). It follows from $x\neq y$ that $j(x)\neq j(y)$, and by the Hahn-Banach separation theorem, there exists $h\in \mathbb E^*$ such that $h(j(x))\neq h(j(y))$. Moreover, since $j(x), j(y) \in \mathbb F$, we have $h|_{\mathbb F}(j(x))\neq h|_{\mathbb F}(j(y))$. Choosing $\overline{f}=(h|_{\mathbb F})_\circ j$, we obtain $\overline{f}\in \mathfrak X'$ and $\overline{f}(x)\neq \overline{f}(y)$, which is a contradiction. This implies $x=y$, so $\mathfrak X'$ separates points of $\mathfrak X$.
\end{proof}
\noindent \textbf{Remark 3.} If $\mathfrak X$ is not convexifiable, then $\mathfrak X'$ does not separate points of $\mathfrak X$ in general. Indeed, let $(\mathfrak X,\|.\|)$ be a separable Banach space and denote by $d$ the metric associated with the norm $\|.\|$. For $r>1$, we consider the operation $^r[.,.]$ on $\mathfrak X$ as follows: $^r[\lambda_i, x_i]_{i=1}^n=\sum_{i=1}^n \lambda_i^r x_i$. As shown in Example 5 in \cite{TM}, $^r[.,.]$ is a convex combination operation ($r$-th power combination) on $(\mathfrak X,d)$ and the corresponding convexification operator satisfies $K_rx=0$ for all $x \in \mathfrak X$. This implies that $K_r(\mathfrak X)=\{0\}$ and $\mathfrak X$ is not convexifiable. For arbitrary $x\in \mathfrak X$ and $f\in \mathfrak X'$, $f\big(\,^r[n^{-1}, x]_{i=1}^n\big)=\sum_{i=1}^nn^{-1}f(x)=f(x)$ for all $n$. Taking $n\to \infty$ and using the continuity of $f$, we have $f(x)=f(K_rx)=f(0)$. This means that $f$ is a constant function on $\mathfrak X$, so $\mathfrak X'$ contains only constant functions (moreover $\mathfrak X'\stackrel{\kappa}{=}\mathbb R$). Hence, $\mathfrak X'$ does not separate points of $\mathfrak X$.
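The convergence $K_rx=0$ in this example can also be seen by direct computation, since $^r[n^{-1}, x]_{i=1}^n=n\cdot n^{-r}x=n^{1-r}x\to 0$ for $r>1$; a minimal numerical sketch (with the assumed values $r=2$, $x=5$ in $\mathfrak X=\mathbb R$):
\begin{verbatim}
# Remark 3, numerically: with the r-th power combination on R,
# ^r[1/n, x]_{i=1}^n = n**(1 - r) * x -> 0 = K_r x  (r > 1).
def rcomb(r, weighted_points):
    return sum(w**r * x for w, x in weighted_points)

r, x = 2.0, 5.0
for n in (1, 10, 100, 1000):
    print(n, rcomb(r, [(1.0/n, x)]*n))   # 5.0, 0.5, 0.05, 0.005
\end{verbatim}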
\begin{theo} Let $\mathfrak X$ be a convexifiable CC space and $X$ be an integrable $\mathfrak X$-valued random element. Then,
(i) $f(X)\in L_\mathbb R^1$ for all $f\in \mathfrak X'$;
(ii) An element $m\in \mathfrak X$ is the expectation of $X$ if and only if $f(m)=Ef(X)$ for all $f\in \mathfrak X'$.
\end{theo}
\begin{proof} Throughout this proof, we use the notations as in Theorem 3.3 and Lemma 4.3.
(i) We will prove that for each $f\in \mathfrak X'$, there exists a constant $C$ such that $|f(x)|\leqslant C(d(\theta, x)+1)$ for all $x\in \mathfrak X$. To do this, it is sufficient to prove that for each $\widetilde{f}\in \widetilde{\mathfrak X}'$, $|\widetilde{f}(x)|\leqslant C(\|x\|+1)$ for all $x\in \mathbb F$. Assume to the contrary that the conclusion does not hold, then there exists a sequence $\{x_n\}_{n\geqslant 1} \subset \mathbb F$ such that $|\widetilde{f}(x_n)|>n(\|x_n\|+1)$ for all $n$. Since $0<((1+\|x_n\|)n)^{-1}\leqslant 1$ for all $n\geqslant 1$ and $0\in \mathbb F$, the convexity of $\mathbb F$ implies $\frac{x_n}{(1+\|x_n\|)n}\in \mathbb F$. We have
\begin{align*}
\widetilde{f}\Big(\frac{x_n}{(1+\|x_n\|)n}\Big)=\widetilde{f}\Big(\frac{1}{(1+\|x_n\|)n}.x_n+\Big(1-\frac{1}{(1+\|x_n\|)n}\Big).0\Big)=\frac{1}{(1+\|x_n\|)n}\widetilde{f}(x_n)+\Big(1-\frac{1}{(1+\|x_n\|)n}\Big)\widetilde{f}(0).
\end{align*}
It follows
\begin{align*}
\Big|\widetilde{f}\Big(\frac{x_n}{(1+\|x_n\|)n}\Big) - \Big(1-\frac{1}{(1+\|x_n\|)n}\Big)\widetilde{f}(0)\Big|=\frac{|\widetilde{f}(x_n)|}{(1+\|x_n\|)n}>1\;\mbox{ for all } n.\tag{4.1}
\end{align*}
Taking $n\to \infty$, the continuity of $\widetilde{f}$ implies that the LHS of (4.1) tends to 0, which is a contradiction. Therefore, $|f(X)|\leqslant C(d(\theta, X)+1)$ and this inequality implies $f(X)\in L_\mathbb R^1$.
(ii) Since $X\in L_\mathfrak X^1$, conclusion (i) ensures the existence of $Ef(X)$ for all $f\in \mathfrak X'$. The necessity part of (ii) is easy; it can be proved using the technique of approximation by simple random elements, so we omit the proof. We now prove the sufficiency part. Assume that $f(m)=Ef(X)$ for all $f\in \mathfrak X'$; by the necessity part, $f(EX)=Ef(X)$, and hence $f(m)=f(EX)$ for all $f\in \mathfrak X'$. If $\mathfrak X$ has one element, then $EX=m$ obviously. If $\mathfrak X$ has more than one element, then applying Lemma 4.3, we obtain $m=EX$.
\end{proof}
Note that for $f\in \mathfrak X'$, $$f(Kx)=f\big(\lim_{n\to\infty}[n^{-1}, x]_{i=1}^n\big)=\lim_{n\to \infty} n^{-1}\sum_{i=1}^n f(x)=f(x)$$ for all $x\in \mathfrak X$. Hence, the following corollary is obtained immediately from Theorem 4.4.
\begin{coro}
Let $\mathfrak X$ be a CC space and $X$ be an integrable $\mathfrak X$-valued random element. Then, $f(X)=f(KX)\in L_\mathbb R^1$ for all $f\in \mathfrak X'\subset (K(\mathfrak X))'$ and an element $m\in K(\mathfrak X)$ is the expectation of $X$ if and only if $f(m)=Ef(KX)$ for all $f\in (K(\mathfrak X))'.$
\end{coro}
\begin{prop} \emph{(\cite{Te}, Theorem 3.1)} Let $\varphi: \mathfrak X \to \mathbb R$ be midpoint convex and lower semicontinuous, and let $X$ be an integrable $\mathfrak X$-valued random element. Then $\varphi(EX)\leqslant E\varphi(X)$ whenever $\varphi(X)$ is integrable.
\end{prop}
\begin{proof}
This proposition establishes Jensen's inequality in CC space and is a main result of Ter\'an \cite{Te}. It was proved nicely in \cite{Te} by using the SLLN. Besides Ter\'an's approach, we will present in this proof another method, combining the embedding theorem with a corresponding version of Jensen's inequality in Banach space.
First, we will prove that if $\varphi: \mathfrak X\to\mathbb R$ is midpoint convex and lower semicontinuous, then $\varphi(Kx)\leqslant \varphi(x)$ for $x\in \mathfrak X$. Indeed, since $[n^{-1}, x]_{i=1}^n \to Kx$, the subsequence $\big\{[2^{-m}, x]_{i=1}^{2^m}\big\}_{m\geqslant 1}$ also tends to $Kx$ as $m\to \infty$. Applying the first part of the proof of Proposition 5.3 (to be given in the next section), we have
$$\varphi(Kx)=\varphi\big(\lim_{m\to \infty} [2^{-m}, x]_{i=1}^{2^m}\big)\leqslant \liminf_{m\to \infty} \varphi\big( [2^{-m}, x]_{i=1}^{2^m}\big)\leqslant \liminf_{m\to \infty}\,2^{-m}\sum_{i=1}^{2^m} \varphi(x)=\varphi(x).$$
It follows that $\varphi(KX)\leqslant \varphi(X)$, and in particular $\varphi^+(KX)\leqslant \varphi^+(X)$, where $\varphi^+=\max\{0, \varphi\}$. We now consider two cases as follows:
\textit{Case 1.} $\varphi(KX)$ is integrable. This implies that $E\varphi(KX)$ is finite and $E\varphi(KX) \leqslant E\varphi(X)$. With $j^{-1}: \mathbb F \to K(\mathfrak X)$, putting $\widetilde{\varphi}=\varphi_\circ j^{-1} : \mathbb F \to \mathbb R$, we derive
$$\widetilde{\varphi}(x/2 + y/2)=\varphi\big([1/2, j^{-1}(x)\,; 1/2, j^{-1}(y)]\big)\leqslant \big(\varphi_\circ j^{-1}(x)+\varphi_\circ j^{-1}(y)\big)/2=\big(\widetilde{\varphi}(x) + \widetilde{\varphi}(y)\big)/2$$ for all $x, y \in \mathbb F$, which means that $\widetilde{\varphi}$ is midpoint convex on $\mathbb F$. Since $\varphi$ is lower semicontinuous on $\mathfrak X$ and $j^{-1}$ is isometric, $\widetilde{\varphi}$ is lower semicontinuous on $\mathbb F$. Being both midpoint convex and lower semicontinuous on $\mathbb F$, $\widetilde{\varphi}$ is convex on $\mathbb F$. Applying Jensen's inequality (\cite{Pe}, Theorem 3.10(ii)), we get $\widetilde{\varphi}(E(j(KX)))\leqslant E \widetilde{\varphi}(j(KX))$. On the other hand, Theorem 4.1 implies that $\widetilde{\varphi}(j(E(KX)))= \widetilde{\varphi}(E (j(KX)))$, which is equivalent to $\varphi(E(KX))= \widetilde{\varphi}(E (j(KX)))$. Combining the arguments above, we obtain $\varphi(EX)=\varphi(E(KX))\leqslant E\varphi(KX)\leqslant E\varphi(X)$.
\textit{Case 2.} $\varphi(KX)$ is not integrable. Putting $\varphi_n=\max\{-n, \varphi\}$, $n=1, 2, \ldots$, we have $\varphi_n \searrow \varphi$ and $\varphi_n(KX)$ is integrable for each $n$ thanks to $\varphi^+ \geqslant \varphi_n\geqslant -n$. It is not hard to check that $\{\varphi^+, \varphi_n, n\geqslant 1\}$ is also a collection of lower semicontinuous and midpoint convex functions on $\mathfrak X$. According to Case 1, we obtain $\varphi_n(EX)\leqslant E\varphi_n(X)$ for all $n$. Taking $n\to \infty$ and using the monotone convergence theorem, we derive $\varphi(EX)\leqslant E\varphi(X)$. This completes the proof.
\end{proof}
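A quick Monte Carlo illustration of Proposition 4.6 (under our assumed interval CC space, with $\varphi=\|\cdot\|_a=d(a,\cdot)$ for a convex point $a$, which is midpoint convex and continuous):
\begin{verbatim}
# Monte Carlo illustration of Jensen's inequality (Proposition 4.6)
# in the interval CC space, with phi = d(a, .) for a convex point a.
import random

def dH(x, y):
    return max(abs(x[0]-y[0]), abs(x[1]-y[1]))

random.seed(1)
vals = [(random.uniform(-3, 0), random.uniform(0, 3)) for _ in range(10)]
p = 1.0/len(vals)
a = (0.0, 0.0)                      # degenerate interval {0}

EX = (sum(p*v[0] for v in vals), sum(p*v[1] for v in vals))
print(dH(a, EX), "<=", sum(p*dH(a, v) for v in vals))  # phi(EX) <= E phi(X)
\end{verbatim}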
\subsection{On the notion of conditional expectation}
The notion of the conditional expectation of a random element taking values in concrete metric spaces was introduced by several authors in various ways. For example, Herer \cite{He3} constructed this notion in finitely compact metric spaces with nonnegative curvature. Sturm \cite{St} dealt with the problem in global NPC spaces, where the conditional expectation was defined as a minimizer of the ``variance''. Other definitions can be found in \cite{CS, He2, Fi}. In this part, we will discuss the notion of conditional expectation in a CC space $\mathfrak X$ and stress that all results presented below extend the corresponding ones in Banach space. The construction of this notion proceeds, as is traditional, through the approximation method.
Let $X\in L_\mathfrak X^1$. If $X=[I_{(X=x_i)}, x_i]_{i=1}^n$ is simple, then the \emph{conditional expectation} of $X$ relative to a $\sigma$-algebra $\mathcal G\subset \mathcal F$ is defined by $E(X|\mathcal G)=[E(I_{(X=x_i)}|\mathcal G), Kx_i]_{i=1}^n$ (A). With this definition, the reader may naturally wonder why we do not use another form of conditional expectation, such as $E(X|\mathcal G)=[E(I_{(X=x_i)}|\mathcal G), x_i]_{i=1}^n$ (B). The reason is that definition (B) would not extend the notion of expectation when $\mathcal G=\{\emptyset, \Omega\}$; a more profound reason is that (B) would depend on the representation of $X$ while (A) does not (see property (2.5)). Hence, definition (A) is more suitable than (B).
From the definition (A) above, we can prove with some simple calculations that if $X$ and $Y$ are simple random elements, then $d(E(X|\mathcal G), E(Y|\mathcal G))\leqslant E(d(X, Y)|\mathcal G)$ a.s.,
where $\mathcal G$ is some sub-$\sigma$-algebra of $\mathcal F$. We now consider the general case: let $X$ be an integrable random element, i.e., $X\in L^1_\mathfrak X$; the conditional expectation of $X$ is defined (up to a null set) by $E(X|\mathcal G)=\lim_{n\to \infty} E(\psi_n(X)|\mathcal G)$ a.s., where the mapping $\psi_n$ was introduced in the first part of Section 4. Note that the limit on the RHS exists due to the completeness of $L^1_\mathfrak X(\mathcal G)$. It is easy to see from the above definition that if $X\in L^1_\mathfrak X$ then $E(X|\mathcal G)\in L^1_{K(\mathfrak X)}(\mathcal G)$. Moreover, by applying the approximation method and the Lebesgue dominated convergence theorem for conditional expectation in $\mathbb R$, we also find that $d(E(X|\mathcal G), E(Y|\mathcal G))\leqslant E(d(X, Y)|\mathcal G)$ for $X, Y \in L^1_\mathfrak X$ and, in particular, $\|E(X|\mathcal G)\|_a\leqslant E(\|X\|_a|\mathcal G)$, $a\in K(\mathfrak X)$.
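For a simple random element and a finitely generated $\mathcal G$, definition (A) can be computed directly; the following sketch (with assumed interval values and a four-point sample space) evaluates $E(X|\mathcal G)$ atom by atom:
\begin{verbatim}
# Definition (A) for a simple interval-valued X and G generated by a
# finite partition: on each atom B, E(X|G) = [P(X=x_i | B), K x_i]_i.
def cond_exp(values, probs, atoms):
    """values[k]: (already convex) interval on outcome k; probs[k]: P({k});
    atoms: partition of the outcome indices generating G."""
    out = {}
    for B in atoms:
        pB = sum(probs[k] for k in B)
        out[tuple(B)] = (sum(probs[k]/pB*values[k][0] for k in B),
                         sum(probs[k]/pB*values[k][1] for k in B))
    return out

values = [(0, 2), (3, 4), (1, 1), (-2, 0)]   # X on outcomes 0..3
probs  = [0.25]*4
atoms  = [[0, 1], [2, 3]]                    # G = sigma({0,1}, {2,3})
print(cond_exp(values, probs, atoms))
# {(0, 1): (1.5, 3.0), (2, 3): (-0.5, 0.5)}
\end{verbatim}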
First, we will establish the Lebesgue dominated convergence theorem for conditional expectation in CC space.
\begin{prop}
Let $X_n, X$ be integrable $\mathfrak X$-valued random elements. Assume that the following hold:
(i) $d(X_n, X) \to 0$ a.s. as $n\to \infty$,
(ii) there exist a function $f \in L^1_{\mathbb R}$ and some $a\in \mathfrak X$ such that $\|X_n\|_{a} \leqslant f$ a.s. for all $n$.\\
Then $d(E(X_n|\mathcal G), E(X|\mathcal G))\to 0$ a.s. as $n\to \infty$.
\end{prop}
\begin{proof}
By the triangle inequality, $d(X_n, X)\leqslant \|X_n\|_{a}+\|X\|_{a}\leqslant f +\|X\|_{a}$ a.s. Since $\|X\|_{a} +f \in L^1_\mathbb R$, it follows from the Lebesgue dominated convergence theorem for conditional expectation in $\mathbb R$ that
$$\lim_{n\to \infty} d(E(X_n|\mathcal G), E(X|\mathcal G)) \leqslant \lim_{n\to \infty} E(d(X_n, X)|\mathcal G)=E(\lim_{n\to \infty}d(X_n, X)|\mathcal G)=0\;\mbox{ a.s.}$$
The proof is completed.
\end{proof}
\begin{theo}
Let $X$ be an integrable $\mathfrak X$-valued random element. Then, $j(E(X|\mathcal G))=j(E(KX|\mathcal G))= E(j(KX)|\mathcal G)$ a.s., where $j: K(\mathfrak X) \to \mathbb E$ is the mapping presented in Theorem 3.3.
\end{theo}
\begin{proof}
As mentioned in Theorem 4.1, $j(KX)$ is a random element in $\mathbb E$ and $j(KX)\in L_{\mathbb E}^1$. Hence, there exists the conditional expectation $E(j(KX)|\mathcal G)$, moreover $E(j(KX)|\mathcal G)\in j(K(\mathfrak X))$ a.s. First, if $X=[I_{(X=x_i)}, x_i]_{i=1}^n$ is simple, then by the definition of conditional expectation and the idempotence of $K$
\begin{align*}
E(KX|\mathcal G)&=E([I_{(X=x_i)}, Kx_i]_{i=1}^n|\mathcal G)=[E(I_{(X=x_i)}|\mathcal G), KKx_i]_{i=1}^n=E(X|\mathcal G)\;\mbox{ a.s.}\\
j(E(KX|\mathcal G))&=j([E(I_{(X=x_i)}|\mathcal G), Kx_i]_{i=1}^n)=\sum_{i=1}^n E(I_{(X=x_i)}|\mathcal G) j(Kx_i)=E(j(KX)|\mathcal G)\;\mbox{ a.s.}
\end{align*}
Next, if $X\in L_\mathfrak X^1$ then there exists a sequence $\{X_n, n\geqslant 1\}$ of simple random elements such that $X_n\to X$, $\|X_n\|_{u_0}\leqslant 2\|X\|_{u_0}$, $E(X_n|\mathcal G)\to E(X|\mathcal G)$ a.s. Applying Proposition 4.6, we obtain $E(KX_n|\mathcal G)\to E(KX|\mathcal G)$ a.s. Moreover, it follows from the previous case that $E(KX_n|\mathcal G)=E(X_n|\mathcal G)$ for all $n$, and the uniqueness of limit implies $E(KX|\mathcal G)=E(X|\mathcal G)$ a.s. Since $j$ is continuous,
$$j(E(KX|\mathcal G))=j\big(\lim_{n\to \infty} E(KX_n|\mathcal G)\big)=\lim_{n\to \infty} j(E(KX_n|\mathcal G))=\lim_{n\to \infty} E(j(KX_n)|\mathcal G)=E(j(KX)|\mathcal G)\;\mbox{ a.s.},$$
where the last limit holds due to Lebesgue's dominated convergence theorem for conditional expectation in Banach space. The proof is completed.
\end{proof}
It is well known that the definition of the conditional expectation $E(X|\mathcal G)$ via the approximation method in a separable Banach space $\mathcal E$ is equivalent to the following result: ``\textit{For $X\in L_{\mathcal E}^1$, $Y=E(X|\mathcal G)$ if and only if $Y\in L_{\mathcal E}^1(\mathcal G)$ and $EXI_A=EYI_A$ for all $A\in \mathcal G$}''. The same equivalence in CC space will be established in the following result, whose proof is based on the embedding theorem.
\begin{theo}
Let $X\in L^1_\mathfrak X$ and $a\in K(\mathfrak X)$. Then $Y=E(X|\mathcal G)$ if and only if $Y\in L_{K(\mathfrak X)}^1(\mathcal G)$ and
$E([I_A, X ; I_{\overline{A}}, a])=E([I_A, Y; I_{\overline{A}}, a])$ for all $A\in \mathcal G$.
\end{theo}
\begin{proof}
\textit{Necessity:} If $Y=E(X|\mathcal G)$ then $Y\in L_{K(\mathfrak X)}^1(\mathcal G)$ obviously. For $A\in \mathcal G$, by Theorem 4.1 and Theorem 4.8,
\begin{align*}
j(E([I_A, Y; I_{\overline{A}}, a]))&=E(I_A j(Y)+I_{\overline{A}}j(a))=E(I_A j(E(X|\mathcal G))+I_{\overline{A}}j(a))=E(I_A E(j(KX)|\mathcal G)+I_{\overline{A}}j(a))\\
&=E(E(I_A j(KX)|\mathcal G)+I_{\overline{A}}j(a))=E(I_A j(KX)+I_{\overline{A}}j(a))=j(E([I_A, X ; I_{\overline{A}}, a])).
\end{align*}
The injectivity of $j$ implies $E([I_A, X ; I_{\overline{A}}, a])=E([I_A, Y; I_{\overline{A}}, a])$.
\emph{Sufficiency:} Assume that there exists $Y\in L_{K(\mathfrak X)}^1(\mathcal G)$ such that $E([I_A, X ; I_{\overline{A}}, a])=E([I_A, Y; I_{\overline{A}}, a])$ for all $A\in \mathcal G$. We now need to prove that $Y=E(X|\mathcal G)$. Observe that the conditional expectation $E(X|\mathcal G)$ exists due to $X\in L_\mathfrak X^1$. By the hypothesis, we have $j(E([I_A, X ; I_{\overline{A}}, a]))=j(E([I_A, Y; I_{\overline{A}}, a]))$ for all $A\in \mathcal G$, this is equivalent to $E(I_Aj(KX))=E(I_A j(Y))$ for all $A\in \mathcal G$. It is obvious that $j(Y)$ is $\mathcal G$-measurable and integrable, so $j(Y)=E(j(KX)|\mathcal G)$. On the other hand, $E(j(KX)|\mathcal G)=j(E(X|\mathcal G))$ by Theorem 4.8. Thus $j(Y)=j(E(X|\mathcal G))$ and it follows that $Y=E(X|\mathcal G)$.
\end{proof}
The proposition below gives some basic properties of conditional expectation. The proof is straightforward thanks to Theorem 4.1 and Theorem 4.8.
\begin{prop} Let $X, Y \in L^1_\mathfrak X$. Then the following hold a.s.:\\
1) $E(E(X|\mathcal G))=EX$.\\
2) If $\sigma(X)$ and $\mathcal G$ are independent then $E(X|\mathcal G)=EX$.\\
3) If $X$ is $\mathcal G$-measurable then $E(X|\mathcal G)=KX$.\\
4) If $\xi$ is a real-valued random variable with $0< \xi< 1$ and $\xi$ is $\mathcal G$-measurable, then
$$E([\xi, X ; 1-\xi, Y]|\mathcal G)=[\xi, E(X|\mathcal G) ; 1-\xi , E(Y|\mathcal G)].$$
In particular, $E([\lambda, X ; 1-\lambda, Y] | \mathcal G)=[\lambda, E(X|\mathcal G) ; 1-\lambda, E(Y|\mathcal G)]$ for $\lambda \in (0;1)$.\\
5) If $\mathcal G_1, \mathcal G_2$ are two $\sigma$-algebras and $\mathcal G_1\subset \mathcal G_2$ then $E(E(X|\mathcal G_1)|\mathcal G_2)=E(E(X|\mathcal G_2)|\mathcal G_1)=E(X|\mathcal G_1)$.
\end{prop}
Jensen's inequality for conditional expectation in a CC space is given in the following proposition. Note that this result does not fully extend Proposition 4.6.
\begin{prop}
Let $\varphi : \mathfrak X\to \mathbb R$ be a midpoint convex and continuous function, sub-$\sigma$-algebra $\mathcal G\subset \mathcal F$ and let $X \in L^1_\mathfrak X$ such that $\varphi(X)\in L^1_\mathbb R$. Then $\varphi(E(X|\mathcal G))\leqslant E(\varphi(X)|\mathcal G)$ a.s.
\end{prop}
\begin{proof}
Combining Jensen's inequality for Banach space-valued conditional expectation (see, e.g., the theorem in \cite{TW}) with the embedding Theorem 3.3, and using the same scheme as in the proof of Proposition 4.6, we obtain the conclusion.
\end{proof}
According to Theorem 4.4(i) and Proposition 4.11, we immediately derive the following corollary.
\begin{coro}
1) If
$X\in L^p_\mathfrak X$ then $\|E(X|\mathcal G)\|_a^p\leqslant E(\|X\|_a^p|\mathcal G)$ a.s., for arbitrary $a\in K(\mathfrak X)$ and $p\geqslant 1$.\\
2) If $X\in L^1_\mathfrak X$ then $f(E(X|\mathcal G))=E(f(X)|\mathcal G)=E(f(KX)|\mathcal G)$ a.s. for all $f\in \mathfrak X'$.
\end{coro}
As in a Banach space, the notion of martingale in a CC space can be defined as follows: let $\{X_n, n\geqslant 1\}\subset L_\mathfrak X^1$ and let $\{\mathcal F_n, n\geqslant 1\}$ be an increasing sequence of sub-$\sigma$-algebras of $\mathcal F$. The collection $\{X_n, \mathcal F_n, n\geqslant 1\}$ is said to be a \textit{martingale} if $X_n$ is $\mathcal F_n$-measurable and $E(X_{n+1}|\mathcal F_n)=X_n$ a.s. for all $n\geqslant 1$. Thanks to Corollary 4.12(1), it is easy to verify that if $\{X_n, \mathcal F_n, n\geqslant 1\}$ is a martingale then $\{\|X_n\|^p_a, \mathcal F_n, n\geqslant 1\}$ is a real-valued submartingale for arbitrary $a\in \mathfrak X$ and $p\geqslant 1$. The convergence of martingales is established in the proposition below.
\begin{prop}
(i) Let $\{\mathcal F_{n}, n\geqslant 1\}$ be an increasing sequence of sub-$\sigma$-algebras of $\mathcal F$ and let $\mathcal F_{\infty}=\sigma(\cup_{n\geqslant 1}\mathcal F_{n})$. If $X\in L_\mathfrak X^p$ with some $p\geqslant 1$, then $E(X|\mathcal F_{n})\to E(X|\mathcal F_{\infty})$ a.s. and in $L_\mathfrak X^p$ as $n\to \infty$.
(ii) Let $\{\mathcal F_{-n}, n\geqslant 1\}$ be a decreasing sequence of sub-$\sigma$-algebras of $\mathcal F$ and let $\mathcal F_{-\infty}=\cap_{n\geqslant 1}\mathcal F_{-n}$. If $X\in L_\mathfrak X^p$ with some $p\geqslant 1$, then $E(X|\mathcal F_{-n})\to E(X|\mathcal F_{-\infty})$ a.s. and in $L_\mathfrak X^p$ as $n\to \infty$.
\end{prop}
\begin{proof} Under the hypotheses of (i) and (ii), $\{E(X|\mathcal F_n), \mathcal F_n, n\geqslant 1\}$ is a martingale and $\{E(X|\mathcal F_{-n}), \mathcal F_{-n}, n\geqslant 1\}$ is a reverse martingale, respectively. Combining the convergence theorems for Banach space-valued martingales (e.g., see Pisier \cite{Pi}, Theorem 1.5 and Theorem 1.14 for conclusion (i); \cite{Pi}, Ch.I, Section 1.5 for conclusion (ii)) with the embedding Theorem 3.3, we immediately obtain the proof.
\end{proof}
As the last result in this section, we establish a version of Birkhoff's ergodic theorem in a CC space. Let $\tau: \Omega \to \Omega$ be an $\mathcal F$-measurable transformation. The transformation $\tau$ is \textit{measure-preserving},
or equivalently $P$ is a $\tau$-\emph{invariant measure}, if $P(\tau^{-1}(A))=P(A)$
for all $A\in \mathcal F$. A
set $A\in \mathcal F$ satisfying $\tau^{-1}(A)=A$ is called a $\tau$-\emph{invariant set}, and the family of all $\tau$-invariant sets constitutes a sub-$\sigma$-algebra $\mathcal I_\tau$ of $\mathcal F$. We say that
$\tau$ is \emph{ergodic} if $\mathcal I_\tau$ is trivial, i.e., $P(A)=0$ or $P(A)=1$ whenever $A\in \mathcal I_\tau$.
\begin{theo}
Let $\tau$
be a measure-preserving
transformation of the probability space $(\Omega, \mathcal F, P)$ and $\mathcal I_\tau$ be the $\sigma$-algebra of invariant events with respect to $\tau$. If $X\in L_\mathfrak X^1$, then $[n^{-1}, X\circ\tau^i]_{i=0}^{n-1} \to E(X|\mathcal I_\tau)$ a.s. as $n \to \infty$.
\end{theo}
\begin{proof}
Recall that in Th\'eor\`eme 3.1 of \cite{Fi}, Raynaud de Fitte proved a version of the ergodic theorem in a metric space by using the technique of approximation by random elements with discrete range. To prove our result, we present another technique, based on the embedding theorem.
Since $X$ is integrable, Theorem 3.2 in \cite{Pa} implies that for each natural number $m$, there exists a compact subset $\mathcal K_{u,m}=\mathcal K_m$ of $\mathfrak X$ such that $E(d(X, u)I(X\notin \mathcal K_m))<1/m$, and without loss of generality we can assume that $\mathcal K_m\subset \mathcal K_{m+1}$ for all $m$. For each $n, m\geqslant 1$, defining $Y_{m,n-1}=X\circ \tau^{n-1}$ if $X\circ \tau^{n-1} \in \mathcal K_m$ and $Y_{m,n-1}=u$ if $X\circ \tau^{n-1} \notin \mathcal K_m$, we have
\begin{align*}
d([n^{-1}, X\circ\tau^i]_{i=0}^{n-1}, E(X|\mathcal I_\tau))\leqslant & d([n^{-1}, X\circ\tau^i]_{i=0}^{n-1}, [n^{-1}, Y_{m,i}]_{i=0}^{n-1})+ d([n^{-1}, Y_{m,i}]_{i=0}^{n-1}, [n^{-1}, KY_{m,i}]_{i=0}^{n-1})\\
&+d([n^{-1}, KY_{m,i}]_{i=0}^{n-1}, [n^{-1}, KX\circ\tau^i]_{i=0}^{n-1})+d([n^{-1}, KX\circ\tau^i]_{i=0}^{n-1}, E(X|\mathcal I_\tau)) \tag{4.2}.
\end{align*}
We estimate the four terms on the RHS of inequality (4.2) as follows. First, since $\mathcal K_m \cup \{u\}$ is compact and $Y_{m,n} \in \mathcal K_m \cup \{u\}$ for each $m$, Proposition 5.5 (to be given in the next section) yields $d([n^{-1}, Y_{m,i}]_{i=0}^{n-1}, [n^{-1}, KY_{m,i}]_{i=0}^{n-1}) \to 0$ as $n\to \infty$. Second, according to properties (2.3), (2.6) and the definition of $Y_{m,n}$, we obtain
\begin{align*}
&d([n^{-1}, KY_{m,i}]_{i=0}^{n-1}, [n^{-1}, KX\circ\tau^i]_{i=0}^{n-1})\leqslant d([n^{-1}, Y_{m,i}]_{i=0}^{n-1}, [n^{-1}, X\circ\tau^i]_{i=0}^{n-1})\\
&\leqslant n^{-1}\sum_{i=0}^{n-1} d(Y_{m,i}, X\circ\tau^i)= n^{-1}\sum_{i=0}^{n-1} d(X\circ\tau^i, u)I(X\circ\tau^i \notin \mathcal K_m)= n^{-1}\sum_{i=0}^{n-1} (d(X, u)I(X\notin \mathcal K_m))\circ\tau^i.
\end{align*}
For each $m$, applying the classical Birkhoff ergodic theorem to the real-valued random variable $d(X, u)I(X\notin \mathcal K_m)$, we derive
$$n^{-1}\sum_{i=0}^{n-1} (d(X, u)I(X\notin \mathcal K_m))\circ\tau^i \to E(d(X, u)I(X \notin \mathcal K_m)|\mathcal I_\tau)\; \mbox{ a.s. as }n \to \infty.$$
Next, applying Theorem 3.3,
$$d([n^{-1}, KX\circ\tau^i]_{i=0}^{n-1}, E(X|\mathcal I_\tau))=\Big\|n^{-1}
\sum_{i=0}^{n-1} j(KX\circ\tau^i) -j(E(X|\mathcal I_\tau))\Big\|=\Big\|n^{-1}
\sum_{i=0}^{n-1} j(KX)\circ\tau^i -E(j(KX)|\mathcal I_\tau)\Big\|\to 0$$ a.s.
as $n\to \infty$, where the convergence comes from Birkhoff's ergodic theorem for the Banach space-valued random element $j(KX)$ (\cite{Pa}, Ch.VI, Theorem 9.4). Combining the above arguments, we obtain
\begin{align*}
\limsup_{n\to \infty} d([n^{-1}, X\circ\tau^i]_{i=0}^{n-1}, E(X|\mathcal I_\tau)) \leqslant 2E(d(X, u)I(X \notin \mathcal K_m)|\mathcal I_\tau) \;\mbox{ a.s. for all } m.
\end{align*}
Finally, to obtain the conclusion of the theorem, it suffices to prove that $E(d(X, u)I(X \notin \mathcal K_m)|\mathcal I_\tau) \to 0$ a.s. as $m \to \infty$. Observe that $\{E(d(X, u)I(X \notin \mathcal K_m)|\mathcal I_\tau), m\geqslant 1\}$ is a non-increasing sequence, so almost sure convergence is equivalent to convergence in probability. For arbitrary $\varepsilon >0$,
\begin{align*}
P(|E(d(X, u)I(X \notin \mathcal K_m)|\mathcal I_\tau)| >\varepsilon)\leqslant \varepsilon^{-1} E(d(X, u)I(X \notin \mathcal K_m))\leqslant \varepsilon^{-1} m^{-1} \to 0\;\mbox{ as } m \to \infty,
\end{align*}
and this completes the proof of the theorem.
\end{proof}
\section{Miscellaneous applications and remarks}
\begin{prop}
If $\mathfrak X$ is a complete CC space, then $K_{k(\mathfrak X)}(k(\mathfrak X))=ck(K_\mathfrak X(\mathfrak X))$. Hence the CC space $(ck(K_\mathfrak X(\mathfrak X)), D_\mathfrak X)$ can be embedded isometrically into a Banach space in such a way that the convex combination structure is preserved.
\end{prop}
\begin{proof}
For $A\in K_{k(\mathfrak X)}(k(\mathfrak X))$, there exists $B\in k(\mathfrak X)$ such that $A=K_{k(\mathfrak X)}B=\overline{co}K_\mathfrak X(B)$. It follows from the continuity of $K_\mathfrak X$ that $K_\mathfrak X(B)\in k(K_\mathfrak X(\mathfrak X))$, so $\overline{co}K_\mathfrak X(B)$ is a compact and convex subset of $K_\mathfrak X(\mathfrak X)$. This means that $A=\overline{co}K_\mathfrak X(B) \in ck(K_\mathfrak X(\mathfrak X))$, and thus $K_{k(\mathfrak X)}(k(\mathfrak X))\subset ck(K_\mathfrak X(\mathfrak X))$. The reverse inclusion is easy to obtain thanks to the observation that $A=\overline{co}K_\mathfrak X(A)=K_{k(\mathfrak X)}A$ for $A\in ck(K_\mathfrak X(\mathfrak X))$.
\end{proof}
Lemma 3.3 in \cite{QT} established an inequality in CC spaces, and it is a useful tool for obtaining many limit theorems (see \cite{QT, TQN}). Now, by applying Theorem 3.3, this lemma can be proved more easily as follows:
\begin{prop}
\emph{(\cite{QT}, Lemma 3.3)} Let $\{a_i, b_i, 1\leqslant i\leqslant n\} \subset [0, 1]$ be a collection of nonnegative constants with $\sum_{i=1}^n a_i=\sum_{i=1}^n b_i=1$. Then $d([a_i, Kx_i]_{i=1}^n, [b_i, Kx_i]_{i=1}^n)\leqslant \sum_{i=1}^n|a_i-b_i| d(x_i, u),$
where $x_1,\ldots, x_n, u \in \mathfrak X$ are arbitrary.
\end{prop}
\begin{proof} With the notations as in Theorem 3.3, we have
\begin{align*}
d([a_i, Kx_i]_{i=1}^n, [b_i, Kx_i]_{i=1}^n)&=\Big\|\sum_{i=1}^n a_i j(Kx_i)-\sum_{i=1}^n b_i j(Kx_i)\Big\|=\Big\|\sum_{i=1}^n(a_i-b_i)j(Kx_i)\Big\|\\
&=\Big\|\sum_{i=1}^n(a_i-b_i)(j(Kx_i)-j(Ku))\Big\|\leqslant \sum_{i=1}^n|a_i-b_i|\|j(Kx_i)-j(Ku)\|\\
&=\sum_{i=1}^n|a_i-b_i|d(Kx_i, Ku)\leqslant \sum_{i=1}^n|a_i-b_i|d(x_i, u),
\end{align*}
where the last estimation follows from property (2.6).
\end{proof}
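\noindent As a quick numerical sanity check (an illustration of ours, not part of the original development), the inequality can be tested in the simplest CC space, namely a Banach space equipped with the usual convex combination operation, where the convexification $K$ is the identity map:
\begin{verbatim}
import numpy as np

# Sanity check of the inequality in R^3 with the usual convex
# combination, where the convexification K is the identity map.
rng = np.random.default_rng(0)
for _ in range(1000):
    n = 5
    x = rng.normal(size=(n, 3))          # points x_1, ..., x_n
    u = rng.normal(size=3)               # arbitrary reference point u
    a = rng.dirichlet(np.ones(n))        # weights a_i summing to 1
    b = rng.dirichlet(np.ones(n))        # weights b_i summing to 1
    lhs = np.linalg.norm(a @ x - b @ x)  # d([a_i, Kx_i], [b_i, Kx_i])
    rhs = np.sum(np.abs(a - b) * np.linalg.norm(x - u, axis=1))
    assert lhs <= rhs + 1e-12
print("inequality verified on 1000 random instances")
\end{verbatim}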
\noindent\textbf{Remark 4.} The inequality
\begin{align*}
d([a_i, x_i]_{i=1}^n, [b_i, x_i]_{i=1}^n)\leqslant \sum_{i=1}^n|a_i-b_i| d(x_i, u)\tag{5.1}
\end{align*}
does not hold in general for $x_1,\ldots, x_n \in \mathfrak X$. This is shown by the following example:
\noindent \textbf{Example 1.} Let $(\mathfrak X, \|.\|)$ be a Banach space and we consider the operator $^2[\lambda_i, x_i]_{i=1}^n =\sum_{i=1}^n \lambda_i^2 x_i$. As shown in Example 5 of \cite{TM}, $(\mathfrak X, \|.\|,\,^2[.,.])$ is a CC space. For $0 \neq x, y \in \mathfrak X$, we have
\begin{align*}
d\big(\,^2[4/5, x ; 1/5, y], \, ^2[2/5, x ; 3/5, y]\big)=\|(16x/25+y/25)-(4x/25 + 9y/25)\|=\|12x/25-8y/25\|.
\end{align*}
Choosing $y=-x/2$, we get $\|12x/25-8y/25\|=16\|x\|/25$. On the other hand, $|4/5-2/5|\cdot\|x\|+|1/5-3/5|\cdot\|y\|=3\|x\|/5<16\|x\|/25$, so (5.1) fails with $u=0$.
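\noindent The arithmetic of this example can be replayed in a few lines of Python (purely illustrative):
\begin{verbatim}
from math import isclose

# Example 1 with x = 1, y = -x/2 in the CC space (R, |.|, ^2[.,.]).
x, y = 1.0, -0.5
lhs = abs((16/25)*x + (1/25)*y - ((4/25)*x + (9/25)*y))   # = 16/25
rhs = abs(4/5 - 2/5)*abs(x) + abs(1/5 - 3/5)*abs(y)       # = 3/5
print(lhs, rhs)               # 0.64 > 0.6, so (5.1) fails with u = 0
assert isclose(lhs, 16/25) and isclose(rhs, 3/5) and lhs > rhs
\end{verbatim}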
The result below is Etemadi's SLLN in a CC space; it was proved in \cite{TM} via the method of approximation by simple random elements. However, a different proof can be obtained by combining Etemadi's SLLN in Banach spaces (\cite{Et}, Remark 2) with the embedding Theorem 3.3 and using the same scheme as in the proof of Theorem 4.14.
\begin{prop} \emph{(\cite{TM}, Theorem 5.1)} Let $\{X, X_n, n\geqslant 1\}$ be a sequence of pairwise i.i.d. $\mathfrak X$-valued random elements. Then, $[n^{-1}, X_i]_{i=1}^n \to EX$ a.s. as $n\to \infty$.
\end{prop}
The following proposition presents a special form of Jensen's inequality, which plays an important role in establishing the general case. This inequality can be proved easily by combining Theorem 3.3 with a corresponding version in Banach spaces; moreover, it was proved directly by Ter\'{a}n (\cite{Te}, Lemma 3.2). However, in the proof below we give another direct argument which seems simpler than that of Ter\'{a}n \cite{Te}.
\begin{prop}
Let $\varphi: \mathfrak X\to \mathbb R$ be a midpoint convex function and $\{x_i\}_{i=1}^n \subset K(\mathfrak X)$ be a sequence of convex points of $\mathfrak X$. If $\{q_i\}_{i=1}^n$ is a sequence of positive rational numbers with $\sum_{i=1}^n q_i =1$, then
$\varphi([q_i, x_i]_{i=1}^n)\leqslant \sum_{i=1}^n q_i \varphi(x_i).$
Furthermore, if $\varphi$ is lower semicontinuous then $
\varphi([r_i, x_i]_{i=1}^n)\leqslant \sum_{i=1}^n r_i \varphi(x_i)$, where $r_i> 0$, $\sum_{i=1}^n r_i =1$.
\end{prop}
\begin{proof}
In the first case, we prove the inequality above when $q_i=1/n$, $i=1, \ldots , n$. Namely, we prove that
\begin{align*}
\varphi([n^{-1}, x_i]_{i=1}^n)\leqslant n^{-1}\sum_{i=1}^n \varphi(x_i).\tag{5.2}
\end{align*}
The proof of (5.2) is by induction on $n$. If $n=2$, (5.2) holds clearly by the definition of a midpoint convex function. Suppose that (5.2) holds for $n=2^k$ $(k\in \mathbb N)$; we prove that (5.2) also holds for $n=2^{k+1}$. Indeed, for $\{x_1, x_2, \ldots , x_{2^{k+1}}\}\subset K(\mathfrak X)$, we obtain
\begin{align*}
\varphi\Big(\big[2^{-(k+1)}, x_i\big]_{i=1}^{2^{k+1}}\Big)&=\varphi\Big(\Big[1/2, \big[2^{-k}, x_i\big]_{i=1}^{2^k}\,; 1/2, \big[2^{-k}, x_i\big]_{i=2^k+1}^{2^{k+1}}\Big]\Big)\leqslant \frac{1}{2}\varphi\Big(\big[2^{-k}, x_i\big]_{i=1}^{2^k}\Big)
+ \frac{1}{2}\varphi\Big(\big[2^{-k}, x_i\big]_{i=2^k+1}^{2^{k+1}}\Big)\\
&\leqslant \frac{1}{2^{k+1}}\sum_{i=1}^{2^k}\varphi(x_i)+ \frac{1}{2^{k+1}}\sum_{i=2^k+1}^{2^{k+1}}\varphi(x_i)=\frac{1}{2^{k+1}}\sum_{i=1}^{2^{k+1}}\varphi(x_i).
\end{align*}
Therefore, inequality (5.2) holds for all $n=2^k$ $(k\in \mathbb N)$. Moreover, when $n$ is of the form $2^k$, (5.2) holds not only for $\{x_i\} \subset K(\mathfrak X)$ but also for $\{x_i\}\subset \mathfrak X$. In the next step, we prove that if (5.2) is satisfied for $n>2$ then it is also satisfied for $n-1$. Now let $\{x_1, x_2, \ldots, x_{n-1}\}\subset K(\mathfrak X)$ and set $x_n=[(n-1)^{-1}, x_i]_{i=1}^{n-1}\in K(\mathfrak X)$; it follows from properties (CC.i), (CC.ii), (2.5) and the induction hypothesis that
\begin{align*}
\varphi([(n-1)^{-1}, x_i]_{i=1}^{n-1})&=\varphi\left(\left[n^{-1}, x_1 ; n^{-1}, x_2 ; \ldots ; n^{-1}, x_{n-1} ; n^{-1}, \left[(n-1)^{-1}, x_i\right]_{i=1}^{n-1}\right]\right)\\
&=\varphi([n^{-1}, x_i]_{i=1}^n) \leqslant \frac{1}{n}\sum_{i=1}^n\varphi(x_i)= \frac{1}{n}\sum_{i=1}^{n-1}\varphi(x_i)+\frac{1}{n}\varphi(x_n)\\
&=\frac{1}{n}\sum_{i=1}^{n-1}\varphi(x_i)+\frac{1}{n}\varphi([(n-1)^{-1}, x_i]_{i=1}^{n-1}).
\end{align*}
This implies that
$$\varphi([(n-1)^{-1}, x_i]_{i=1}^{n-1})\leqslant \frac{1}{n-1}\sum_{i=1}^{n-1}\varphi(x_i).$$
In the second case, each rational $q_i$ can be expressed as $q_i=k_i/m$, where $m$ and the $k_i$ are natural numbers for all $i=1, \ldots, n$. Then we have
\begin{align*}
\varphi([q_i, x_i]_{i=1}^n)&=\varphi([k_i/m, x_i]_{i=1}^n)\\
&=\varphi\big([\underbrace{m^{-1}, x_1 ; \ldots ; m^{-1}, x_1}_{k_1 \mbox{ times}} ; \ldots ; \underbrace{m^{-1}, x_n ; \ldots ; m^{-1}, x_n}_{k_n\mbox{ times}}]\big)\;\mbox{ (by (2.5))}\\
&\leqslant \frac{k_1}{m}\varphi(x_1)+\cdots+ \frac{k_n}{m}\varphi(x_n)=\sum_{i=1}^n q_i \varphi(x_i)\;\mbox{ (by (5.2))}.
\end{align*}
For the remaining conclusion, suppose that $\varphi$ is lower semicontinuous and $r_i>0$. Each positive real number $r_i$ is the limit of some increasing sequence of positive rational numbers $\{q_{ij}\}_{j=1}^\infty$. Thus, by the continuity of the convex combination operation, we obtain
\begin{align*}
\varphi([r_i, x_i]_{i=1}^n)&=\varphi\big(\lim_{j\to \infty}[q_{1j}, x_1 ; \ldots; q_{nj}, x_n ; 1-(q_{1j}+\cdots+q_{nj}), a]\big)\;\mbox{ (for some }a\in K(\mathfrak X))\\
&\leqslant \liminf_{j\to \infty} \varphi([q_{1j}, x_1 ; \ldots; q_{nj}, x_n ; 1-(q_{1j}+\cdots+q_{nj}), a])\\
&\leqslant \liminf_{j\to \infty}\left(q_{1j}\varphi(x_1)+\cdots+q_{nj}\varphi(x_n)+(1-(q_{1j}+\cdots+q_{nj}))\varphi(a)\right)\;\mbox{ (by the second case)}\\
&=\lim_{j\to \infty}\left(q_{1j}\varphi(x_1)+\cdots+q_{nj}\varphi(x_n)+(1-(q_{1j}+\cdots+q_{nj}))\varphi(a)\right)\\
&=r_1\varphi(x_1)+\cdots+r_n\varphi(x_n).
\end{align*}
Combining the above arguments, the proposition is proved.
\end{proof}
Since the embedding theorem is only available on the convexifiable domain $K(\mathfrak X)$, while initial conditions are usually imposed on the CC space $\mathfrak X$ itself, it is necessary to estimate quantities in $\mathfrak X$ by their images in $K(\mathfrak X)$ under the convexification operation. The following proposition is such a result.
\begin{prop}
Let $\mathcal K$ be a compact subset of $\mathfrak X$ and $\{x_n, n\geqslant 1\} \subset \mathcal K$. Then, $d\big([n^{-1}, x_i]_{i=1}^n, [n^{-1}, Kx_i]_{i=1}^n\big)\to 0$ as $n \to \infty$.
\end{prop}
\begin{proof}
For arbitrary $\varepsilon >0$, there exists a finite collection $\{t_1,\ldots,t_m\}$ of elements of $\mathcal K$ such that $\mathcal K \subset \cup_{i=1}^m B(t_i, \varepsilon)$, where $B(u, r)=\{x\in \mathfrak X : d(u, x)<r\}$. Denote $A_1=\mathcal K \cap B(t_1, \varepsilon)$ and $A_l =\mathcal K \cap B(t_l,\varepsilon)\cap\big\{\cup_{k=1}^{l-1}B(t_k,\varepsilon)\big\}^c$ for $l=2,\ldots, m$. For each $n$, define $y_n=t_l$ if $x_n\in A_l$, so that $d(x_n, y_n)<\varepsilon$ for all $n$. By the triangle inequality and (2.6),
\begin{align*}
d\big([n^{-1}, x_i]_{i=1}^n, [n^{-1}, Kx_i]_{i=1}^n\big) &\leqslant d\big([n^{-1}, x_i]_{i=1}^n, [n^{-1}, y_i]_{i=1}^n\big)+ d\big([n^{-1}, y_i]_{i=1}^n, [n^{-1}, Ky_i]_{i=1}^n\big)\\
&\;\;\;\;\;+ d\big([n^{-1}, Ky_i]_{i=1}^n, [n^{-1}, Kx_i]_{i=1}^n\big)\\
&\leqslant 2 n^{-1} \sum_{i=1}^n d(x_i, y_i)+ d\big([n^{-1}, y_i]_{i=1}^n, [n^{-1}, Ky_i]_{i=1}^n\big)\leqslant 2\varepsilon + (I_n).
\end{align*}
We now show that $(I_n)=d\big([n^{-1}, y_i]_{i=1}^n, [n^{-1}, Ky_i]_{i=1}^n\big) \to 0$ as $n\to \infty$. For each $l=1, \ldots, m$, put
\begin{align*}
z_{l,n}=\mbox{card}\{1\leqslant i \leqslant n : y_i =t_l\},\; \mbox{ and } \mathcal T_n=\{l : 1\leqslant l \leqslant m, z_{l, n}>0\},\;n\geqslant 1.
\end{align*}
Then $\{z_{l,n}, n\geqslant 1\}$ is a non-decreasing sequence for each $l$.
By (CC.i) and property (2.1), we obtain
\begin{align*}
[n^{-1}, y_i]_{i=1}^n=\big[n^{-1}z_{l, n},\big[z_{l, n}^{-1}, t_l\big]_{i=1}^{z_{l, n}}\big]_{l\in \mathcal T_n} \;\mbox{ and }\;[n^{-1}, Ky_i]_{i=1}^n=\big[n^{-1}z_{l, n},\big[z_{l, n}^{-1}, Kt_l\big]_{i=1}^{z_{l, n}}\big]_{l\in \mathcal T_n}=\big[n^{-1}z_{l, n}, Kt_l\big]_{l\in \mathcal T_n}.
\end{align*}
For each $l=1, \ldots, m$, we have
$\lim_{n\to \infty}d([n^{-1}, t_l]_{i=1}^n, Kt_l)= 0$ by the definition of $K$. Thus, there exists $n_{\varepsilon, m}\in \mathbb N$ such that for all $n\geqslant n_{\varepsilon, m}$ and for all $l=1,\ldots, m,$
\begin{align*}
d([n^{-1}, t_l]_{i=1}^n, Kt_l)<\frac{\varepsilon}{m}.\tag{5.3}
\end{align*}
We put
$$N_{l,\varepsilon, m}=\max_{1\leqslant k < n_{\varepsilon, m}}d\big([k^{-1}, t_l]_{i=1}^k, Kt_l\big),\;N_{\varepsilon, m}=\max_{1\leqslant l \leqslant m}N_{l,\varepsilon, m}$$
and choose the smallest integer $n'_{\varepsilon, m}$ such that $n'_{\varepsilon, m}\geqslant \varepsilon^{-1} m\cdot N_{\varepsilon, m}\cdot n_{\varepsilon,m}$. Now, for $n\geqslant n'_{\varepsilon, m}$:\\
- If $z_{l, n} \geqslant n_{\varepsilon, m}$, then it follows from (5.3) and $n^{-1}z_{l, n}\leqslant 1$ that
$$\frac{z_{l, n}}{n}d\Big(\big[z_{l, n}^{-1},t_l\big]_{i=1}^{z_{l, n}}, Kt_l\Big)<\frac{\varepsilon}{m}.$$
- If $0<z_{l, n}< n_{\varepsilon, m}$, then
\begin{align*}
\frac{z_{l, n}}{n}d\Big(\big[z_{l, n}^{-1}, t_l\big]_{i=1}^{z_{l, n}}, Kt_l\Big)< \frac{n_{\varepsilon, m}}{n'_{\varepsilon, m}}.N_{\varepsilon, m}\leqslant \frac{\varepsilon}{m}.
\end{align*}
Hence, for all $n\geqslant n'_{\varepsilon, m}$
$$\frac{z_{l, n}}{n}d\Big(\big[z_{l, n}^{-1}, t_l\big]_{i=1}^{z_{l, n}},Kt_l\Big)\leqslant \frac{\varepsilon}{m}.$$
This implies that
\begin{align*}
(I_n)\leqslant \sum_{l\in \mathcal T_n}\frac{z_{l, n}}{n}d\Big(\big[z_{l, n}^{-1}, t_l\big]_{i=1}^{z_{l, n}},Kt_l\Big)\leqslant \varepsilon
\end{align*}
for all $n\geqslant n'_{\varepsilon, m}$, so $(I_n)\to 0$ as $n\to \infty$. The proof is completed.
\end{proof}
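\noindent \textbf{Numerical illustration.} In the CC space of Example 1, one can check that $Kx=\lim_{n\to\infty}\,^2[n^{-1}, x]_{i=1}^n=\lim_{n\to\infty} x/n=0$ for every $x$, so $[n^{-1}, Kx_i]_{i=1}^n=0$ while $^2[n^{-1}, x_i]_{i=1}^n=n^{-2}\sum_{i=1}^n x_i$. For a bounded sequence in $\mathbb R$ (hence contained in a compact set), the distance in Proposition 5.5 is at most $\max_i |x_i|/n$, in line with the proposition. The following Python sketch (our own illustration) confirms the decay:
\begin{verbatim}
import numpy as np

# In the CC space of Example 1, Kx = 0 for every x, so the distance in
# Proposition 5.5 is |n^{-2} * sum(x_i)| <= max|x_i| / n  ->  0.
rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, size=10_000)  # bounded, hence relatively compact
for n in (10, 100, 1_000, 10_000):
    print(n, abs(xs[:n].sum()) / n**2)    # decays like O(1/n)
\end{verbatim}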
\begin{center}
\textbf{Acknowledgements}
\end{center}
The author wishes to thank Assoc. Prof. Pedro Ter\'{a}n (Escuela Polit\'ecnica de Ingenier\'ia, Departamento de Estad\'istica e I.O. y D.M., Universidad de Oviedo, E-33071 Gij\'on, Spain) for helpful discussions.
The author also would like to thank Dr. Tobias Fritz (Perimeter Institute for Theoretical Physics, Waterloo ON, Canada) for letting us know the reference \cite{CF}.
Streaming services are witnessing rapid growth in mobile networks. According to Allot Communications \cite{allot}, HTTP streaming services made up 37 percent of mobile broadband traffic during the second half of 2010. This presents new challenges for operators that are accustomed to classifying services into real-time (voice-like) and elastic (data-like) services. Indeed, the classical QoS metrics in mobile networks are blocking rates for real-time traffic and average user throughput for elastic traffic, and operators dimension their networks to satisfy targets on those metrics \cite{Hegde}. However, the particular nature of streaming applications, halfway between real-time and elastic services, raises the following difficult questions in wireless environments. First, which QoS metrics best represent the QoE perceived by users? Second, how can these QoE metrics be predicted for a given traffic intensity, and how should the network be dimensioned accordingly?
The first step towards defining and predicting QoE is to understand how streaming is played out. In general, media players at the devices are equipped with a playout buffer that stores arriving packets. As long as there are packets in the buffer, the video is played smoothly. Once the buffer empties, the spacing between packets no longer follows the original one. These {\em starvations} cause large {\em jitters} and are particularly annoying for end users, who see frozen images. One feasible way to avoid starvations is to introduce a start-up (also called prefetching) delay before playing the stream, and a rebuffering delay after each starvation event: the media player starts to work only after a number of media frames have accumulated in the buffer. This leads to two important sets of QoE metrics: starvation properties (probability, frequency, etc.) and start-up/re-buffering delays.
Once the behavior of the media streaming service is understood, the particularities of offering it over wireless networks can be considered. Indeed, the wireless channel is subject to large variability due to fading, mobility, etc. On top of this, it is a shared channel where multiple users are served simultaneously and the cell capacity is divided among them. This introduces two variability time scales: the flow level (tens of seconds), driven by the departures/arrivals of calls, and the wireless channel variability time scale (milliseconds), driven by fast fading. In addition, variable bit-rate (VBR) streaming leads to a variable service rate at the time scale of tens of milliseconds.
\subsection{Related Literature}
Starting from the mid-nineties, many works focused on performance analysis for real time video delivery over wireless networks. A large attention was given to enhance video coding in order to combat errors introduced by the wireless channel variability. \cite{Stuhlmuller} derived a theoretical framework for the picture quality after video transmission over lossy channels, based on a 2-state Markov model describing burst errors on the symbol level. Authors in \cite{Zhang} and \cite{JSAC02:He} proposed methods for estimating the channel distortion and its impact on performance. These works mainly focused on ensuring robustness of video delivery over a variable wireless channel but did not consider the impact of flow level dynamics. A more recent set of works considered flow level performance in cellular networks delivering real time video. Authors in \cite{Hegde} proposed a queuing theory model for deriving QoS when integrating elastic and video traffic in cellular networks; video QoS was expressed by a blocking rate, while average throughputs and delays represent QoS for elastic traffic. Authors in \cite{SalahTVT} derived the Erlang-like capacity region for a traffic mix including real time video, the aim being to dimension the network for ensuring a target QoS. \cite{karrayTWireless} derived the stability region of the network and showed how it is impacted by real-time video traffic.
With the increased popularity of streaming services over wireless systems, more attention has been dedicated to deriving QoE performance metrics for this new streaming service, knowing the initial buffering period and its relationship with starvation. QoE issue has been addressed in the important works \cite{TMM10:Luan,TMM08:Liang,JSAC11:ParandehGheibi,Infocom12:Xu}. These works adopt different methodologies and assumptions for deriving QoE metrics. \cite{TMM10:Luan} considered a general G/G/1 queue where the arrival and service rates are characterized by their first two moments, while \cite{TMM08:Liang} considered a particular wireless channel model where the channel oscillates between \emph{good} and \emph{bad} states following the extended Gilbert model \cite{Sanneck}. Authors in \cite{JSAC11:ParandehGheibi} considered a particular P2P video streaming based on random linear network coding; this simplifies the packet requests at the network layer and allows to model the receiver buffer as an M/D/1 queue. Finally, an M/M/1 queue model has been adopted in \cite{Infocom12:Xu}, allowing to derive explicit formula for QoE metrics.
As for the tools used in the literature for deriving QoE metrics, they differ in the adopted system models. \cite{TMM10:Luan} adopted a diffusion approximation where the discrete buffer size is replaced with a Brownian motion whose drift and diffusion coefficients are calculated based on the first two moments of the arrival and service rates. \cite{TMM08:Liang} presented a probabilistic analysis based on an a priori knowledge of the playback and arrival curves. \cite{JSAC11:ParandehGheibi} calculated bounds on the playback interruption probability based on the adopted M/D/1 buffer model. An explicit formula for the exact distribution of the number of starvations was obtained in \cite{Infocom12:Xu} based on a Ballot theorem approach \cite{Takacs}. Authors in \cite{Infocom12:Xu} also proposed an alternative approach for computing QoE metrics based on a recursive algorithm that performs better than the Ballot theorem in terms of complexity. They further studied the
QoE metrics of a persistent video streaming in cellular networks in \cite{Networking12:Xu}.
The above-described works on QoE estimation are very useful for catching the impact of variability of the wireless channel due to fast fading or even user's mobility. However, the underlying models fail to capture the large variations due to flow dynamics. For instance, the diffusion approximation in \cite{TMM10:Luan} supposes that the drift and diffusion coefficients are constant over time, which is not true when the number of concurrent flows changes during playback in wireless environments. The assumption of Poisson packet arrivals in \cite{JSAC11:ParandehGheibi,Infocom12:Xu} also fails to take into account these flow dynamics. Note that the analysis of \cite{JSAC11:ParandehGheibi} has been generalized to a two-state Markovian arrival process, but this corresponds more to a bursty traffic due to a Gilbert channel model than to flow dynamics.
\subsection{Main Contributions and Organization}
To the best of our knowledge, this paper is the first attempt to assess the impact of flow dynamics on the
QoE of streaming. We model the system as two queues in tandem. The first queue, representing the scheduler
of the base station, is modeled as a processor sharing queue, while the second represents the playout buffer
whose arrival rates are governed by the output process of the base station queue. We first consider a static
channel (no fast fading) with Constant Bit Rate (CBR) streaming, and derive the prefetching delay distribution
and the starvation probability generation function using Partial Differential Equations (PDEs) as well as
Ordinary Differential Equations (ODEs) constructed over the Markov process describing the flow dynamics.
We then extend the model to the Variable Bit Rate (VBR) streaming using diffusion approximation. We next
extend the model to include a fast fading channel and show that the impact of flow dynamics is preponderant
over the variability of the channel due to fast fading. Extensive simulations show that our models are
accurate enough to be used in QoE prediction. Our analysis also sheds light on novel QoE enhancement strategies.
The results presented here can be used by the base station to ``recommend'' the prefetching
parameters to the media player, and to guide the admission control and the scheduling algorithms.
The main contributions of this work are summarized as follows:
\begin{enumerate}
\item Developing an analytical framework for assessing the impact of flow dynamics in wireless data networks on streaming QoE.
\item Evaluating the performance of both CBR and VBR streaming.
\item Showing that the variability of the throughput due to flow dynamics is preponderant over the impact of fast channel variability due to fast fading.
\end{enumerate}
The remainder of this paper is organized as follows. Section \ref{sec:motivation} describes the
system model and the QoE metrics. Section \ref{sec:cbr} presents the analytical framework
for analyzing QoE taking into account flow dynamics. VBR streaming is analyzed in Section
\ref{sec:vbr}. The analytical model is verified through simulations in Section \ref{sec:simu}
and a perfect match is demonstrated. Section \ref{sec:extension} extends the QoE analysis
framework to include the impact of fast fading. It also shows how to analyze QoE in a general
case where streaming services coexist with classical data services. Section \ref{sec:conclusion} eventually concludes the paper.
\section{Problem Description and Model}
\label{sec:motivation}
In this section, we first describe our motivation and the network settings.
We then define the metrics of quality of experience
for media streaming service, and present a queueing model for the playout buffer at a user.
\subsection{Motivation and Network Description}
We consider a wireless data network that supports a number of flows.
When a new flow ``joins'' the network, it requests the streaming service from a media server.
After the connection has been built, the streaming packets are transmitted through the base station (BS).
The streaming flows have \emph{finite} sizes,
which means that a flow ``leaves'' the network once its transmission completes.
Note that an active user cannot watch more than one stream at the mobile device
simultaneously. Hence, we use the terms ``flow'' and ``user'' interchangeably.
In wireless data networks, a streaming flow may traverse both wired and wireless links,
whereas the BS is the
bottleneck for the sake of limited channel capacity
In other words, the queue of an \emph{active} flow is always backlogged at the BS.
This assumption holds because most of Internet streaming servers
use TCP/HTTP protocols to deliver streaming packets. The
TCP protocol in the transport layer exploits the available
bandwidth by pumping as many packets as possible to the BS.
The BS can easily perform per-flow congestion control to limit TCP
sending rate to avoid buffer overflow (a small number of concurrent flows in total).
The adaptive coding and modulation in the physical layer, and ARQ scheme at the MAC layer
can effectively avoid TCP packet loss. Due to these reasons, we do not consider TCP packet losses in our system.
Streaming flows may experience fast fading, and normalized signal-to-noise ratio (NSNR) scheduling is usually adopted to achieve multiuser diversity while
taking fairness into consideration \cite{TVT07:Choi,TOC06:Song}.
The scheduling duration is commonly around 2ms \cite{Bonald1}.
NSNR selects the user that has the largest ratio of SNR compared with its mean SNR.
It is similar to the well-known proportional fair (PF) scheduler
in that they both attempt to achieve channel access-time fairness.
We consider NSNR instead of
PF for two reasons. First, the moments of the PF throughput
do not have explicit expressions, even asymptotic ones (see \cite{TOC06:Song} and references therein),
when the channel capacity is computed according to the Shannon theorem.
Second, NSNR only needs knowledge of the average SNR, which can be obtained from history information.
When a flow joins the network, its throughput process is stationary as long as the number
of active flows does not change. However, the throughput of the PF scheduler is not
stationary, but is a dynamic function of time $t$ (see \cite{TWC04:Whiting} for the ODE throughput model
with two users). It relies on the configuration of the average throughput at time 0.
The initial average throughput may influence the start-up delay and render the whole system
intractable. We emphasize that our analytical framework applies
to any wireless scheduling algorithm for which the first two moments of the per-slot
throughput can be derived.
At the user side, incoming bits are reassembled into video \emph{frames}
step by step. These video frames are played with a deterministic rate, e.g. 25 frames per second (fps)
in the TV and movie-making businesses.
The size of a frame is determined by the video codec, i.e. a high definition video streaming
or a complex video scenario require more bits to render each frame.
We consider two modes of streaming services: constant bit-rate (CBR) and variable bit-rate (VBR).
In CBR, the rate at which a codec's output data should be consumed is constant (i.e. the
same size of frames). The VBR streaming has a variable frame size so as to deliver a more efficiently encoded
and consistent watching experience. The frame size roughly follows Erlang/Gamma distributions \cite{MASI08}.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.0in, height=0.7in]{threescales.eps}
\caption{Illustration of three different time scales}
\label{fig:time_scales}
\end{figure}
We highlight the properties of the streaming system briefly to facilitate the mathematical modeling.
In our system, there exist three time scales, shown in Fig.\ref{fig:time_scales}: i) the scheduling duration (e.g. 2ms); ii)
the playback interval (e.g. 40ms for a video frame rate of 25fps); and iii) the duration of flow dynamics (lasting tens of seconds).
The scheduler and the media player do not work at the same granularity of time scale and job size.
\subsection{QoE Metrics}
There exist five industry-standard video quality metrics. Authors in \cite{Sigcomm2011} summarize them
into five terms: \emph{join time}, \emph{buffering ratio}, \emph{rate of buffering events},
\emph{average bitrate} and \emph{rendering quality}. The first three metrics reflect
the fundamental tradeoff in designing the prefetching process. The last two metrics
are concerned with source coding. For
analytical convenience, we redefine the QoE metrics regarding ``prefetching'' process.
\noindent
- \textbf{Start-up delay:} The start-up delay denotes the duration (measured in seconds)
between the time that a user initiates a session and the time that the media player starts
playing video frames. In the initial prefetching phase, the player starts only once
the duration of received video reaches the \emph{start-up threshold}, measured in seconds of video segment.
The start-up delay reflects the user's impatience while waiting for the
video playback. Once a starvation event happens, the player pauses and resumes only once
the rebuffered video duration reaches the \emph{rebuffering threshold}. We use the term
\emph{rebuffering delay} to differentiate the rebuffering time from the initial start-up delay.
\noindent
- \textbf{Starvation probabilities:} When
the playout buffer of a user becomes empty
before the video has been completely played, we call this event a \emph{starvation}.
Starvation is very annoying to users. We adopt the starvation probability to evaluate
the influence of the start-up threshold. In addition, when the rebuffering process is taken
into account, we analyze the probabilities of having a given number of starvations.
Note that the start-up delay and the starvation probabilities can be used to compute the
QoE metrics in \cite{Sigcomm2011}. The expected number of starvations
is the sum, over all possible numbers of starvations, of that number times its probability. The expected
buffering time equals the product of the delay incurred in each (re)buffering and
the mean number of buffering events, i.e., the mean number of starvations plus one for
the initial prefetching.
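As a toy numerical illustration of these two formulas (the starvation-number distribution and the 4-second delay below are assumed values of ours, not measurements):
\begin{verbatim}
# Hypothetical starvation-number distribution and per-buffering delay.
p_starv = {0: 0.70, 1: 0.20, 2: 0.08, 3: 0.02}  # P(k starvations), assumed
delay = 4.0                                     # delay per (re)buffering, in s
mean_starv = sum(k * p for k, p in p_starv.items())
mean_buffering_time = delay * (mean_starv + 1)  # "+1": initial prefetching
print(mean_starv, mean_buffering_time)          # 0.42 and 5.68
\end{verbatim}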
\subsection{Basic Queueing Model of Playout Buffer}
We consider a wireless cellular network that supports up to $K$ simultaneous flows.
The purpose of admission control is to avoid the overloading of the cell.
We make the following assumptions:
\noindent
- \textbf{Single user type and static channel:} We begin with the case where streaming users coexist in a static channel, as this provides an easier route to understand
the developed QoE evaluation model. The impact of fast fading is added in section \ref{sec:extension}. We also consider that all the flows have the same SNR, and hence, in a static channel case, identical throughput.
The extension to multiple user classes is presented in Section \ref{sec:conclusion}.
\noindent
- \textbf{Exponentially distributed video duration:} The video duration, measured in
seconds, is exponentially distributed with mean $1/\theta$.
Though the exponential distribution is not the most realistic way to describe video duration,
it reveals the essential features of the system and is a first
step towards more general distributions.
Later on (in Section \ref{sec:conclusion}), we allow the video length to follow the
hyper-exponential distribution that is commonly adopted in wireless networks \cite{TON08:Salah}.
\noindent
- \textbf{Processor sharing at the BS:} The scheduling slot is
very small (e.g. $\leq$2ms in 3G LTE) compared with the service interval between
two video frames (e.g. 40ms at 25fps) in the playout buffer.
This property enables us to treat the BS as an egalitarian processor sharing queue where
all the flows are served simultaneously. Hence, the per-flow throughput, depicted in continuous time,
is a deterministic step-wise function of the number of active users in the static channel (e.g. \cite{Borst}).
\noindent
- \textbf{Continuous time playback:} The service of video contents is regarded as a continuous
process, instead of a discrete rendering of adjacent video frames spaced by a fixed interval.
This assumption is commonly used (see \cite{wangbing}) and is validated by simulations in this work.
We denote by $\lambda$ the arrival rate of new video streams.
Let $Bitrate$ be the playback speed of video streams in bits per-second, and $C$ (in bps) be the capacity of the static wireless channel.
Given the exponential distribution of the video duration, the file size $F$ (measured in bits) is also exponentially
distributed, with mean $1/\theta_F=Bitrate/\theta$. Therefore, the dynamics of coexisting flows in the cell
can be depicted as a continuous time Markov chain with a finite state space.
We concentrate on one ``tagged'' flow in order to gain insight into the dynamics of the playout buffer.
At any time $t$,
the tagged flow sees $i$ other flows in a finite space $S:=\{0,1,\cdots,K-1\}$. We denote by $\{I(t);t\geq 0\}$
the external environment process that influences the throughput of the tagged flow.
The environmental change refers to the join of a new flow, or the departure of an existing flow.
From the assumption
of Poisson flow arrival and exponentially distributed flow size, we can see that $\{I(t);t\geq 0\}$
is a homogeneous, irreducible and recurrent Markov process.
Let $\{\pi_i;i\in S\}$ be the stationary
distribution of environmental states that will be computed in the following sections.
The throughput of the tagged user at state $i$ is $b_i:=\frac{C}{Bitrate\cdot(i+1)}$,
measured in seconds of video content per second. Let $N_e(t)$ be the number of changes in
the environment by time $t$. Denote by $A_l$ the time that the $l^{th}$ environmental change takes place with $A_0 = 0$
and by $I_{l}:=I(A_l)$ the state to which the environment changes after time $A_l$.
When the tagged flow joins the network, we begin to study the dynamics of its playout buffer length.
The entry time of the tagged flow is set to $t=0$.
We denote by $Q(t)$ the length of playout buffer \emph{measured in seconds} of video contents at time $t$. In the prefetching phase,
$Q(t)$ is expressed as
\begin{eqnarray}
Q_a(t) = \sum_{l=1}^{N_e(t)} b_{I_l}(A_l - A_{l-1}) + b_{I_{N_e(t)}}(t-A_{N_e(t)}).
\label{eq:queue_startup}
\end{eqnarray}
Denote by $q_a$ the start-up threshold. The start-up delay $T_a$ is defined as
\begin{eqnarray}
T_a = \inf\{t\geq 0| Q_a(t) \geq q_a\}.
\label{eq:define_startupdelay}
\end{eqnarray}
The cumulative distribution of $T_a$ is expressed as
\begin{eqnarray}
\Psi_i(t;q_a) = \mathbb{P}\{T_a < t|I(0) = i\}
\label{eq:distribution_startupdelay}
\end{eqnarray}
if the tagged flow is in state $i$ upon arrival.
Let $q$ be the duration of buffered video content in seconds before the video playback.
When the media player starts the rendering, the queueing process $\{Q(t); t\geq 0\}$ is given by
\begin{eqnarray}
Q_b(t) = q - t {+} \sum_{l=1}^{N_e(t)} b_{I_l}(A_l {-} A_{l-1}) + b_{I_{N_e(t)}}(t{-}A_{N_e(t)}),
\label{eq:queue_service}
\end{eqnarray}
if the time axis starts at the instant of playing. Define $c_i := b_i - 1$ for all $i\in S$. Define
\begin{eqnarray}
T_b = \inf\{t\geq 0| Q_b(t) < 0\}
\label{eq:define_ruintime}
\end{eqnarray}
to be the time of observing empty buffer. Denote by $T_e (T_e<\infty)$ the completion time of downloading of
the tagged flow. If $T_b$ is less than $T_e$, a starvation event happens at the playout buffer.
Then, the ultimate starvation probability is computed as
\begin{eqnarray}
W_i(q_a) = \mathbb{P}\{T_b < T_e|I(0) = i, Q_b(0) = q_a\}
\label{eq:distribution_ruintime}
\end{eqnarray}
when the playback begins at state $i$, and stops at an arbitrary
state that meets an empty queue for the first time.
The ultimate starvation probability is the weighted sum of starvation probabilities
at all the ergodic entry states.
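All of the quantities defined above can also be estimated by a direct Monte Carlo simulation of the tagged flow, which is useful for sanity-checking the analysis that follows. The Python sketch below is a minimal illustration: the parameter values ($\lambda$, $\mu$, $K$, $C$, $Bitrate$, $q_a$) are assumptions of ours, and sessions whose download completes before playback starts are skipped in the statistics.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumptions, not values from the paper).
rng = np.random.default_rng(2)
lambda_, mu, K = 0.08, 0.1, 10        # arrival rate, C*theta_F, admission cap
C, bitrate, q_a = 2.0e6, 0.3e6, 4.0   # bps, bps, start-up threshold (s)

def one_session():
    rho = lambda_ / mu
    pi = rho ** np.arange(K) * (1 - rho) / (1 - rho ** K)  # entry distribution
    i = int(rng.choice(K, p=pi))      # other flows seen on arrival
    q, t, playing, t_start = 0.0, 0.0, False, None
    while True:
        b = C / (bitrate * (i + 1))   # fill rate, video-seconds per second
        drift = b - (1.0 if playing else 0.0)
        rates = np.array([lambda_ if i < K - 1 else 0.0,  # a flow joins
                          i / (i + 1) * mu,               # another flow leaves
                          mu / (i + 1)])                  # tagged download ends
        dt = rng.exponential(1.0 / rates.sum())
        if not playing and q + drift * dt >= q_a:
            # threshold crossed before the next event; by memorylessness we
            # may advance the clock and resample the next event time
            t += (q_a - q) / drift
            q, playing, t_start = q_a, True, t
            continue
        if playing and drift < 0 and q + drift * dt <= 0:
            return t_start, True      # starvation before download completion
        q += drift * dt
        t += dt
        ev = rng.choice(3, p=rates / rates.sum())
        if ev == 0:
            i += 1
        elif ev == 1:
            i -= 1
        else:
            return t_start, False     # download completed, no starvation

res = [one_session() for _ in range(20_000)]
played = [(d, s) for d, s in res if d is not None]  # playback actually began
print("P(starvation) =", np.mean([s for _, s in played]),
      " mean start-up delay =", np.mean([d for d, _ in played]))
\end{verbatim}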
\section{Complete QoE analysis for CBR streaming}
\label{sec:cbr}
In this section, we model the starvation probability and the prefetching
delay in a static channel
where the media flows join and leave the system dynamically. The key idea
is to investigate the queueing process of one ``tagged'' flow on the basis
of differential equations.
\subsection{Markov models of flow dynamics}
Our purpose here is to construct two Markov chains to characterize the dynamics
of the number of active flows. The first one models flow dynamics
before the ``tagged'' flow joins in the network. Based on this Markov process,
we can compute the stationary distribution of the number of active flows
observed by the ``tagged'' flow at the instant when it is admitted.
The second one describes the flow dynamics after the tagged flow is admitted.
This Markov process enables us to investigate how the playout buffer of the tagged user
changes.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.2in]{ps_markov0.eps}
\caption{Markov chain before the tagged flow joins}
\label{fig:markovchain0}
\end{figure}
We first look into the flow dynamics before the tagged flow joins.
When the NSNR scheduling
algorithm is used, the per-flow throughput is proportional to the reciprocal
of flow population. Given the Poisson arrival rate and the exponentially
distributed service time, we can model the flow dynamics
as a finite-state Markov chain $\mathbf{Z}_a:=\{0,1,\cdots,K\}$ shown in Fig.\ref{fig:markovchain0}.
The transition rate from $i$ to $i-1$ is $\mu_i:=C\theta_F$.
Note that the network capacity is a constant in the static channel.
Hence, we let $\mu_i = \mu$ for $i=1,2,\cdots, K$ and $\mu_0 = 0$.
Define $\rho:=\frac{\lambda}{\mu}$ to be the load of the channel.
Let $z_i^a$ be the stationary probability that there exist $i$ flows. We give
the expression of $z_i^a \;(i\in S\cup\{K\})$ directly because it is easy to compute.
\begin{eqnarray}
z_0^a = \frac{1-\rho}{1-\rho^{K{+}1}};\;\;\;\; z_i^a = \frac{\rho^i(1-\rho)}{1-\rho^{K{+}1}},\;\;\; \forall i=1,\cdots, K. \nonumber
\label{eq:stationarydistribution0}
\end{eqnarray}
The tagged user cannot be admitted at state $K$ due to the admission control at the BS. Therefore, if it joins
the network successfully, it observes $i$ other flows with probability $\pi_i$,
\begin{eqnarray}
\pi_i = \frac{z_i^a}{1-z_K^a} = \frac{\rho^i(1-\rho)}{1-\rho^K}, \;\;\; \forall i\in S.
\label{eq:stationarydistribution1}
\end{eqnarray}
After the tagged flow joins the network, the Markov process $\mathbf{Z}_a$ is altered.
The states are now the number of other flows observed by the tagged user, and the transition rates
are conditioned on the presence of the tagged flow.
Therefore, we model the flow dynamics observed by the tagged flow
through a finite-state Markov chain $\mathbf{Z}_b:=\{0,1,\cdots,K{-}1\}$ in Fig.\ref{fig:markovchain1}.
Denote by $\nu_i$ the transition rate from state $i$ to $i{-}1$. The per-flow throughput at state $i$
is $\frac{C}{i{+}1}$, so that $\nu_i:=\frac{iC\theta_F}{i{+}1} = \frac{i}{i{+}1}\mu$
for all $i\in S$.
For simplicity of notation, we denote by $\lambda_i$ the transition rate from state $i$ to $i+1$.
Obviously, $\lambda_i=\lambda$ for all $i\neq K{-}1$ and $\lambda_{K{-}1} = 0$.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.2in]{ps_markov1.eps}
\caption{Flow dynamics observed by tagged flow}
\label{fig:markovchain1}
\end{figure}
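The closed-form distributions above are easy to cross-check numerically. The following Python sketch (the values of $\lambda$, $\mu$ and $K$ are illustrative assumptions) builds the generator of $\mathbf{Z}_a$, solves the stationary equations directly, compares the result with the closed form, and evaluates the entry distribution $\pi_i$ together with the rates $\nu_i$ of $\mathbf{Z}_b$:
\begin{verbatim}
import numpy as np

lambda_, mu, K = 0.08, 0.1, 10        # illustrative values
Q = np.zeros((K + 1, K + 1))          # generator of Z_a
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lambda_         # flow arrival
    if i > 0:
        Q[i, i - 1] = mu              # flow departure (mu_i = mu)
    Q[i, i] = -Q[i].sum()
# stationary vector: solve z Q = 0 with sum(z) = 1
A = np.vstack([Q.T, np.ones(K + 1)])
rhs = np.zeros(K + 2); rhs[-1] = 1.0
z_num = np.linalg.lstsq(A, rhs, rcond=None)[0]
rho = lambda_ / mu
z_cf = rho ** np.arange(K + 1) * (1 - rho) / (1 - rho ** (K + 1))
print("max |z_num - z_cf| =", np.abs(z_num - z_cf).max())  # ~ machine precision
pi = z_cf[:K] / (1 - z_cf[K])                    # entry distribution pi_i
nu = np.arange(K) / (np.arange(K) + 1) * mu      # departure rates nu_i of Z_b
print("pi =", pi.round(4)); print("nu =", nu.round(4))
\end{verbatim}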
\subsection{Modeling prefetching delay distribution}
\label{subsec:prefetching}
We want to know how long the tagged user needs to wait in the prefetching phase.
Recall that $q_a$ is the start-up threshold.
The prefetching time is only meaningful in the case where the video duration
is \textbf{longer} than $q_a$.
In the prefetching phase, because the playout buffer
does not serve video frames, the queue length of the tagged flow evolves in
an infinitesimal time interval $[0,h]$, $h>0$, as
\begin{eqnarray}
Q(t+h) = Q(t) + b_ih.
\label{eq:prefetchingqueue1}
\end{eqnarray}
The distribution of the prefetching time is difficult to solve directly.
We resort to the following duality problem:
\medskip
\boxed{
\begin{minipage}{3.1in}
{\ensuremath{\mbox{\sc Duality Problem:}}}
What is the starvation probability by time $t$ if the queue is depleted with rate
$b_i (i\in S)$ and the duration of prefetched contents is $q_a$?
\end{minipage}
}
\medskip
In the duality problem, the queue dynamics in $[0, h]$ is modified as
\begin{eqnarray}
\tilde{Q}(t+h) = \tilde{Q}(t) - b_ih.
\label{eq:prefetchingqueue2}
\end{eqnarray}
We define $U_i(q,t)$ $(\forall i\in S)$ to be the probability of starvation
before time $t$, conditioned on the entry state $i$ and the initially prefetched
content $q$. We use differential equations to obtain $U_i(q,t)$.
In the infinitesimal time interval $[0,h]$, there are four possible events
\begin{itemize}
\item no change of the concurrent flows;
\item arrival of one flow;
\item departure of one flow (not the tagged one);
\item occurrence of more than one events.
\end{itemize}
\noindent Conditioning on the events occurring in $[0,h]$, we have
\begin{eqnarray}
U_i(q,t) \!\!\!&=&\!\!\! (1-\lambda_i h-\nu_ih) U_i(q-b_ih, t-h) \nonumber\\
\!\!\!&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \lambda_i h U_{i+1}(q-b_ih, t-h) \nonumber\\
\!\!\!&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \nu_i h U_{i-1}(q-b_ih, t-h) + o(h), \;\; \forall i\in S.
\label{eq:solving_startupdelay_eq1}
\end{eqnarray}
The above equation yields for $i\in S$
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\frac{1}{h}(U_i(q,t) - U_i(q-b_ih, t-h)) = -(\lambda_i+\nu_i)U_i(q-b_ih, t-h) \nonumber\\
&&\!\!\!\!\!\!\!\!\!\!\!\! + \lambda_i U_{i+1}(q{-}b_ih, t{-}h) + \nu_i U_{i-1}(q{-}b_ih, t{-}h) + o(h)/h.
\label{eq:solving_startupdelay_eq2}
\end{eqnarray}
When $h\rightarrow 0$, the left side of eq.\eqref{eq:solving_startupdelay_eq2}
gives the partial derivatives of $U_i(q,t)$ with respect to $q$ and $t$. In other words, eq.\eqref{eq:solving_startupdelay_eq2}
yields a set of linear partial differential equations (PDEs)
\begin{eqnarray}
\frac{\partial U_i}{\partial t}\!\!&=&\!\! -b_i \frac{\partial U_i}{\partial q} -(\lambda_i+\nu_i)U_i(q,t) \nonumber\\
&& + \lambda_i U_{i+1}(q,t) + \nu_i U_{i-1}(q,t), \;\; \forall i\in S,
\label{eq:solving_startupdelay_eq3}
\end{eqnarray}
with the initial condition
\begin{eqnarray}
U_i(q,0) = 0, \quad \forall q > 0
\label{eq:solving_startupdelay_eq4}
\end{eqnarray}
and the boundary conditions at both sides
\begin{eqnarray}
U_i(0,t) \!\!&=&\!\! 1,\;\;\; \forall \; t\geq 0,
\label{eq:solving_startupdelay_eq5}\\
\lim_{q\rightarrow \infty} U_i(q,t) \!\!&=&\!\! 0,\;\;\; \forall \; t\geq 0.
\label{eq:solving_startupdelay_eq6}
\end{eqnarray}
The initial condition in eq.\eqref{eq:solving_startupdelay_eq4} means
that starvation cannot happen at time 0 for $q > 0$.
The right-side boundary condition says that starvation will not happen
before $t$ if the initial prefetching is large enough.
Comparing eq.\eqref{eq:solving_startupdelay_eq4} with eq.\eqref{eq:solving_startupdelay_eq5},
we find that $U_i(q,t)$ is discontinuous at $(q,t)=(0,0)$.
This greatly increases the complexity of obtaining $U_i(q,t)$, as will be shown later.
Here, the c.d.f. of the start-up delay is obtained from the solution of the linear PDEs
by letting $q$ be $q_a$. To solve the linear PDEs, we first define the matrix
\begin{eqnarray}
\mathbf{M}_{S}=\left( \begin{array}{cccccc}
\lambda_0 & -\lambda_0 & 0 & \cdots & 0 & 0 \\
-\nu_1 & \lambda_1+\nu_1 & -\lambda_1 & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
0 & 0 & \cdots & \cdots & -\nu_{K{-}1} & \nu_{K{-}1} \end{array} \right).
\label{eq:eq_mus}
\end{eqnarray}
According to the lemma in Appendix B, the tridiagonal matrix $\mathbf{M}_{S}$ is diagonalizable.
Let $D_S$ be an invertible matrix and $\Lambda_S$ a diagonal matrix containing the eigenvalues
of $\mathbf{M}_{S}$; then $\mathbf{M}_{S}=D_S\Lambda_SD_S^{-1}$.
Define a vector function $\mathbf{F}(q,t)$ as
\begin{eqnarray}
\mathbf{F}_i(q,t) {=} 1-\Phi\Big(\frac{q-b_it}{\sqrt{\alpha t}}\Big)
, \quad \forall i\in S,
\label{eq:g_function}
\end{eqnarray}
where $\alpha$ is a very small positive constant and $\Phi(x) = (1/\sqrt{2\pi})\int_{-\infty}^{x}e^{-y^2/2}dy=\frac{1}{2}\textbf{erfc}(-\frac{x}{\sqrt{2}})$.
Then, the linear PDEs in Eq.\eqref{eq:solving_startupdelay_eq3} are solved by
\begin{eqnarray}
\mathbf{U}(q,t) = D_S\exp{(-\Lambda_St)}D_S^{-1}\cdot\{1-\frac{1}{2}\textbf{erfc}(-\frac{q{-}b_it}{\sqrt{2\alpha t}})\}.
\label{eq:pde_solution}
\end{eqnarray}
So far, we have derived the explicit c.d.f. of the start-up delay, which only involves a small-scale matrix decomposition.
A detailed analysis can be found in the Appendix.
\noindent \textbf{Remark:} The numerical integration of the PDEs may be unstable due to the discontinuity at the point
$(q,t)=(0,0)$. The approximate model using Brownian motion offers a closed-form expression, but is less accurate
than the numerical integration.
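For completeness, eq.\eqref{eq:pde_solution} is straightforward to evaluate numerically. The sketch below is a literal transcription with illustrative parameters of ours; we additionally assume that the eigenvalues of $\mathbf{M}_S$ are real (which holds in this numerical setting), and the caveat of the Remark near $(q,t)=(0,0)$ applies:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

lambda_, mu, K = 0.08, 0.1, 10        # illustrative values
C, bitrate, alpha = 2.0e6, 0.3e6, 1e-3
i_ = np.arange(K)
b = C / (bitrate * (i_ + 1))          # fill rates b_i
lam = np.where(i_ < K - 1, lambda_, 0.0)
nu = i_ / (i_ + 1) * mu
M_S = np.diag(lam + nu) - np.diag(lam[:-1], 1) - np.diag(nu[1:], -1)
w, D = np.linalg.eig(M_S)             # M_S = D diag(w) D^{-1}
w, D = w.real, D.real                 # eigenvalues assumed real (holds here)
D_inv = np.linalg.inv(D)

def U(q, t):
    """U_i(q, t); U(q_a, t) is the c.d.f. of the start-up delay T_a."""
    F = 1.0 - 0.5 * erfc(-(q - b * t) / np.sqrt(2.0 * alpha * t))
    return D @ (np.exp(-w * t) * (D_inv @ F))

print(U(4.0, 10.0).round(4))          # one entry per initial state i
\end{verbatim}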
We next analyze the probability that the prefetching process starts at state $i$ and ends at state $j$, for all $i,j\in S$.
Define
\begin{eqnarray}
V_{i,j}(q;q_a) := \mathbb{P}\{I(T_a)=j|I(0)=i,Q(0)=q\}.
\label{eq:solving_startupprocess1}
\end{eqnarray}
\noindent We can use the same approach as for $U_i(q,t)$ to solve for $V_{i,j}(q;q_a)$.
Note that we now use the queueing dynamics in eq.\eqref{eq:prefetchingqueue1} instead of eq.\eqref{eq:prefetchingqueue2}.
In the time interval $[0,h]$, there exists for all $i,j\in S$
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!V_{i,j}(q;q_a) = (1-\lambda_ih-\nu_ih)V_{i,j}(q+b_ih;q_a) \nonumber\\
&& \!\!\!\!\!\!\!\!+ \lambda_ihV_{i{+}1,j}(q{+}b_ih;q_a)+ \nu_i hV_{i{-}1,j}(q{+}b_ih;q_a)+ o(h).
\label{eq:solving_startupprocess1.5}
\end{eqnarray}
It is easy to see that $V_{i,j}(q;q_a)$ is the solution of the following system of differential equations
\begin{eqnarray}
b_i\dot{V}_{i,j}(q;q_a) \!\!\!&=&\!\!\! (\lambda_i+\nu_i)V_{i,j}(q;q_a) - \lambda_iV_{i{+}1,j}(q;q_a) \nonumber\\
&& - \nu_i V_{i{-}1,j}(q;q_a), \;\forall i,j\in S,
\label{eq:solving_startupprocess2}
\end{eqnarray}
with the boundary condition
\begin{eqnarray}
V_{i,j}(q_a;q_a) := \left\{\begin{matrix}
\;1 \;\;\; &&\textrm{ if }\; i=j ;\\
\;0 \;\;\; &&\;\;\textrm{otherwise }.
\end{matrix}\right.
\label{eq:solving_startupprocess3}
\end{eqnarray}
We interpret the boundary condition in the following way. If there exist $I(0)=i$ and $Q(0)=q_a$,
the prefetching duration is 0 and the prefetching process ends at state $i$.
Hence, $V_{i,j}(q_a;q_a)$ is 1 iff $i$ equals to $j$.
Define a matrix $\mathbf{M}_{V}$ as $\mathbf{M}_{V} = \mathbf{diag}\{\frac{1}{b_i}\}\cdot\mathbf{M}_S$.
We have the following property w.r.t the eigenvalues of $\mathbf{M}_{V}$.
\begin{lemma}
\label{lemma:no1} The matrix $\mathbf{M}_{V}$ has $K$ real non-negative eigenvalues, and is similar to a diagonal matrix.
\end{lemma}
\noindent Define $\mathbf{1}_j$ to be a column vector in which the $j^{th}$ element is 1 and all other elements are 0.
Eq. \eqref{eq:solving_startupprocess2} can be rewritten as
\begin{eqnarray}
\mathbf{\dot{V}}(q;q_a) = \mathbf{M}_{V} \mathbf{V}(q;q_a).
\label{eq:solving_startupprocess3.5}
\end{eqnarray}
Then, $\mathbf{V}(q;q_a)$ is solved by
\begin{eqnarray}
\mathbf{V}(q;q_a) = \exp{(\mathbf{M}_{V}q)}\cdot \mathbf{V}(0;q_a).
\label{eq:solving_startupprocess3.5}
\end{eqnarray}
According to Lemma \ref{lemma:no1}, we write $\mathbf{M}_{V} =D_{V}\Lambda_{V}D_{V}^{-1}$, where $D_{V}$ is an invertible matrix and
$\Lambda_{V}$ is the diagonal matrix containing all the eigenvalues of $\mathbf{M}_{V}$. Therefore, eq.\eqref{eq:solving_startupprocess3.5}
can be expressed as
\begin{eqnarray}
\mathbf{V}(q;q_a) = D_V\exp{(\Lambda_{V}q)}D_V^{-1}\cdot \mathbf{V}(0;q_a).
\label{eq:solving_startupprocess3.6}
\end{eqnarray}
Combining the boundary condition \eqref{eq:solving_startupprocess3} with eq.\eqref{eq:solving_startupprocess3.6}, we obtain
\begin{eqnarray}
\mathbf{V}(q;q_a) = D_V\exp{(\Lambda_{V}(q-q_a))}D_V^{-1}\cdot \mathbf{V}(q_a;q_a).
\label{eq:solving_startupprocess3.7}
\end{eqnarray}
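Numerically, since $\mathbf{V}(q_a;q_a)$ is the identity matrix, eq.\eqref{eq:solving_startupprocess3.7} evaluated at $q=0$ reduces to a single matrix exponential, $\mathbf{V}(0;q_a)=\exp(-\mathbf{M}_Vq_a)$, whose entry $(i,j)$ is the probability that a prefetching phase entered at state $i$ with an empty buffer ends at state $j$. A minimal sketch with illustrative parameters:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lambda_, mu, K = 0.08, 0.1, 10        # illustrative values
C, bitrate, q_a = 2.0e6, 0.3e6, 4.0
i_ = np.arange(K)
b = C / (bitrate * (i_ + 1))
lam = np.where(i_ < K - 1, lambda_, 0.0)
nu = i_ / (i_ + 1) * mu
M_S = np.diag(lam + nu) - np.diag(lam[:-1], 1) - np.diag(nu[1:], -1)
M_V = np.diag(1.0 / b) @ M_S
V0 = expm(-M_V * q_a)                 # V(0; q_a); row i, column j
print(V0.sum(axis=1))                 # each row sums to 1
\end{verbatim}
Note that each row of $\exp(-\mathbf{M}_Vq_a)$ sums to one, since $-\mathbf{M}_V$ has the structure of a generator: its rows sum to zero and its off-diagonal entries are non-negative.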
\subsection{Modeling starvation probability}
The modeling of the starvation probabilities should
take into account the departure of the tagged flow. Recall that
the CTMC in Fig. \ref{fig:markovchain1} assumes a persistent
tagged flow, which is not suitable for the playback process.
Before solving for the starvation probabilities,
we first modify the original CTMC by adding an absorbing state \textbf{A}, shown in Fig. \ref{fig:markovchain2}.
State \textbf{A} denotes the event that the tagged flow completes its download.
Because of the exponentially distributed video duration,
the transition from state $i$ to
state \textbf{A} is Poisson. Denote by $\varphi_i$ the transition
rate from state $i$ to \textbf{A}. At state $i$, the bandwidth of a flow
is $\frac{C}{i+1}$, resulting in $\varphi_i := \frac{\mu}{i+1}$.
Define $c_i := b_i - 1$.
The queue length of the tagged flow changes in an infinitesimal interval $h$ according to the rule
\begin{eqnarray}
Q(t+h) = Q(t) + c_ih.
\label{eq:queuedyn_playback}
\end{eqnarray}
If $c_i>0$, the bandwidth is sufficient for continuous playback of the tagged flow and $i$ other flows.
For mathematical convenience, we suppose
that $q$ is $0^{-}$ if buffer starvation happens.
When the tagged flow enters the absorbing state, it has downloaded the whole file with a non-empty playout
buffer. Thus, the starvation probability at state \textbf{A} is 0 for any $q\geq 0$.
Let $W_i(q)$ be the starvation probability with $q$ seconds of contents in the playout buffer at state $i$.
\begin{figure}[!htb]
\centering
\includegraphics[width=3.2in]{ps_markov2.eps}
\caption{Markov chain for user dynamics with an
absorbing state for the departure of the tagged flow}
\label{fig:markovchain2}
\end{figure}
We derive a system of ordinary differential equations for $W_i(q)$. In an
infinitesimal interval $[0,h]$, there are five possible events:
\begin{itemize}
\item no change of the concurrent flows;
\item arrival of one more flow;
\item departure of one flow (not the tagged flow);
\item the tagged flow entering the absorbing state;
\item occurrence of more than one event.
\end{itemize}
The above conditions give rise to a set of equations, where $\mu_i := \nu_i + \varphi_i$
denotes the total departure rate at state $i$:
\begin{eqnarray}
\!\!\!\!W_i(q) \!\!\!&=&\!\!\! (1-(\lambda_i+\mu_i)h)W_i(q+c_ih) \nonumber\\
\!\!\!\!\!\!\!\!\!\!\!&&\!\!\! +\lambda_ih W_{i+1}(q+c_ih) + \nu_{i}h W_{i-1}(q+c_ih) +o(h).
\label{eq:solving_starveprob1}
\end{eqnarray}
\end{eqnarray}
\noindent When $h\rightarrow 0$, we obtain
\begin{eqnarray}
c_i\dot{W}_i(q) = (\lambda_i+\mu_i)W_i(q) -\lambda_i W_{i+1}(q) - \nu_{i} W_{i-1}(q).
\label{eq:solving_starveprob2}
\end{eqnarray}
The above equations can be rewritten in the matrix form
\begin{eqnarray}
\mathbf{\dot{W}}(q) = \mathbf{M}_W\mathbf{W}(q)
\label{eq:matrixform1}
\end{eqnarray}
\noindent where $\mathbf{M}_W$ is expressed in eq.\eqref{eq:matrixform2}
\begin{eqnarray}
\left( \begin{array}{cccccc}
\frac{\lambda_0+\mu_0}{c_0} & -\frac{\lambda_0}{c_0} & 0 & \cdots & 0 & 0 \\
-\frac{\nu_1}{c_1} & \frac{\lambda_1+\mu_1}{c_1} & -\frac{\lambda_1}{c_1} & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
0 & 0 & \cdots & \cdots & -\frac{\nu_{K{-}1}}{c_{K{-}1}} & \frac{\lambda_{K{-}1}+\mu_{K{-}1}}{c_{K{-}1}} \end{array} \right).
\label{eq:matrixform2}
\end{eqnarray}
The solution to eq.\eqref{eq:matrixform1} is given directly by
\begin{eqnarray}
\mathbf{W}(q) =\exp{ (\mathbf{M}_Wq)}\cdot\mathbf{W}(0),
\label{eq:matrixform2.1}
\end{eqnarray}
where $\mathbf{W}(0)$ denotes the starvation probabilities with no initial prefetching.
The boundary conditions are $W_i(q)=0$ for all $i$ as $q$ approaches infinity.
Note that $W_i(0)=1$ holds for all $i$ with $c_i<0$, while $W_i(0)$ remains unknown for all $i$ with $c_i>0$.
Using the proof of Lemma \ref{lemma:no1}, we can show that $\mathbf{M}_W$ is similar to a diagonal matrix.
There exist an invertible matrix $D_W$ and a diagonal matrix $\Lambda_W$ such that
$\mathbf{M}_W:=D_W\Lambda_WD_W^{-1}$.
The starvation probabilities $\mathbf{W}(q) $ are expressed as
\begin{eqnarray}
\mathbf{W}(q) = D_W\exp{(\Lambda_Wq)}D_W^{-1}\cdot\mathbf{W}(0).
\label{eq:matrixform2.2}
\end{eqnarray}
The eigenvalues in $\Lambda_W$ are sorted in decreasing order.
According to the Gershgorin circle theorem \cite{matrixbook},
the signs of the eigenvalues are uncertain, since the centers of the Gershgorin
circles can be positive or negative. Based on the signs of $c_i$ for $i\in\mathbf{S}$, we obtain the following corollary.
\begin{corollary}
Suppose that $c_i$ is positive for $0\leq i<k$ and is negative for $k\leq i <K$.
The matrix $\mathbf{M}_W$ has $k$ positive eigenvalues and $K{-}k$ negative eigenvalues.
\end{corollary}
The unknowns in $\mathbf{W}(0)$ can be solved subsequently.
Define a vector $\bar{\mathbf{W}}:=D_W^{-1}\cdot\mathbf{W}(0)$.
When $q$ is infinitely large, $\mathbf{W}(q)$ is a zero vector, resulting in
$\exp{(\Lambda_Wq)}D_W^{-1}\cdot\mathbf{W}(0) = 0$.
Because the first $k$ eigenvalues in $\Lambda_W$ are positive, we must have
$\bar{W}_i=0$ for $i<k$. Hence, the unknowns $W_i(0)$ for $i<k$ can be derived.
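The elimination just described is easy to implement. The sketch below (ours; all rates are illustrative, and $\mu_i=\nu_i+\varphi_i$) builds $\mathbf{M}_W$, diagonalizes it, and solves the $k\times k$ linear system for the unknown entries of $\mathbf{W}(0)$:
\begin{verbatim}
import numpy as np

K = 10
lam = np.full(K, 0.12); lam[-1] = 0.0     # illustrative rates
nu = 0.0868 * np.arange(K)                # departures of other flows
phi = 0.0868 * np.ones(K)                 # tagged-flow completion rates
mu = nu + phi                             # total departure rate
c = 2.5e6 / ((np.arange(K) + 1) * 360e3) - 1.0   # c_i = b_i - 1

M_W = (np.diag(lam + mu) - np.diag(lam[:-1], 1)
       - np.diag(nu[1:], -1)) / c[:, None]

w, D = np.linalg.eig(M_W)                 # M_W = D diag(w) D^{-1}
E = np.linalg.inv(D)
pos = w.real > 0                          # k positive eigenvalues
unk = c > 0                               # W_i(0) unknown exactly here
assert pos.sum() == unk.sum()             # k of each, by the corollary

# Rows of E on positive modes applied to W(0) must vanish:
W0 = np.ones(K)
rhs = -E[np.ix_(pos, ~unk)].real @ np.ones(int((~unk).sum()))
W0[unk] = np.linalg.solve(E[np.ix_(pos, unk)].real, rhs)
print(np.round(W0, 4))                    # W_i(0) for all i
\end{verbatim}
To evaluate $\mathbf{W}(q)$ for $q>0$, one then propagates only the decaying modes, i.e. sets the (numerically tiny) coefficients on the positive eigenvalues to zero before multiplying by $\exp(\Lambda_W q)$.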
Next, we build a bridge to interconnect the prefetching
threshold and the starvation probability function $W_i(q)$.
For a given prefetching threshold $q_a$,
the starvation event takes place only when the video duration $T_{video}$ is longer than $q_a$.
That is to say, a flow with $T_{video} > q_a$ can be regarded as a tagged flow.
When the prefetching process is finished, the tagged flow enters the playback process.
Conditioned on the distribution of entry states $\mathbf{\pi}$,
the distribution of the states in which the playback process begins (or the prefetching process ends) is computed by
$\mathbf{\pi}\cdot \mathbf{V}(0;q_a)$.
Then, the starvation probability with the prefetching threshold $q_a$ is obtained by
\begin{eqnarray}
P_s(q_a) &=& \mathbb{P}\{T_{video} > q_a\}\cdot \mathbf{\pi}\cdot \mathbf{V}(0;q_a)\cdot\mathbf{W}(q_a) \nonumber\\
&=& \exp\big(-\theta q_a\big)\cdot \mathbf{\pi}\cdot \mathbf{V}(0;q_a)\cdot\mathbf{W}(q_a).
\end{eqnarray}
\subsection{Modeling P.G.F. of starvation events}
When a starvation event happens, the media player pauses until $q_b$ seconds of video contents are
re-buffered. A more interesting but challenging problem is how many starvations may happen in a streaming
session. In this section, we develop an approach to derive the probability generating function
(p.g.f.) of starvation events.
We define a \emph{path} as a sequence of prefetching and starvation events, as well as the
event of completing the downloading. Obviously, the probability of
a path depends on the number of starvations. We illustrate a
typical path with $L$ starvations in Fig. \ref{fig:samplepath}
that starts from a prefetching process and ends at a playback process.
We denote by $I_{l}^A$ the beginning state of the
$l^{th}$ prefetching, by $I_{l}^B$ the beginning state of the $l^{th}$ playback,
and by $I_{e}$ the end of downloading.
The end of a prefetching process
is exactly the beginning of a playback process. The end of a playback process
is also the beginning of a subsequent prefetching process
if the video has not been downloaded completely.
This path contains a sequence of events happening at the states
$\{I_1^A,I_1^B,I_2^A,I_2^B,\cdots, I_{L+1}^A, I_{L+1}^B, I_{e}\}$. The process between $I_l^A$ and $I_l^B$
is the $l^{th}$ prefetching process, while that between $I_l^B$ and $I_{l+1}^A$ is the
$l^{th}$ playback process, ($1\leq l\leq L$). The first starvation
takes place at the instant that the second prefetching process begins.
A starvation event (i.e. at a state $I_{l{+}1}^A,\; 1\leq l\leq L$) cannot happen at a state $i$ that
has $c_i\geq 0$.
\begin{figure*}[!htb]
\centering
\includegraphics[width=6in]{samplepath_flow.eps}
\caption{A path with $L$ starvations}
\label{fig:samplepath}
\end{figure*}
The sample path in Fig. \ref{fig:samplepath} demonstrates a roadmap to find the p.g.f. of starvation events.
We need to compute the transition probability along the path with all possible states.
Recall that the transition probabilities from state $I_l^A$ to $I_l^B$ have been computed
in section \ref{subsec:prefetching}. The only missing
part is the transition probabilities from state $I_l^B$ to $I_{l+1}^A$.
Denote by $X_{i,j}(q)$ the probability that a playback process starts at
state $i$ and meets with the empty buffer at state $j$ with the prefetching threshold $q$.
Define a matrix $\mathbf{X}(q):=\{X_{i,j}(q);i,j\in S\}$.
Denote by $\mathbf{X}_{j}(q)$ the vector of probabilities that the starvation takes place at state $j$
with the prefetching threshold $q$, i.e. ${\small\mathbf{X}_{j}(q):=[X_{0,j}(q), \cdots, X_{K{-}1,j}(q)]^T}$.
Let ${\small\mathbf{X}_{j}(0):=[X_{0,j}(0),\cdots, X_{K{-}1,j}(0)]^T}$ be the vector of those probabilities without
the prefetching.
Using the same argument, we get the differential equation of $X_{i,j}(q)$, $\forall i,j\in S$,
\begin{eqnarray}
c_i\dot{X}_{i,j}(q) = (\lambda_i+\mu_i)X_{i,j}(q) {-}\lambda_i X_{i+1,j}(q) {-} \nu_{i} X_{i-1,j}(q).
\label{eq:starvation_pgf1}
\end{eqnarray}
The solution of eq.\eqref{eq:starvation_pgf1} is directly given by
\begin{eqnarray}
\mathbf{X}_j(q) = D_W\exp{(\Lambda_Wq)}D_W^{-1}\cdot \mathbf{X}_j(0).
\label{eq:starvation_pgf2}
\end{eqnarray}
The computation of $\mathbf{X}_j(q)$ requires the knowledge of the boundary condition $\mathbf{X}_j(0)$.
Here, $X_{i,j}(0) = 1$ if $i=j$ and $c_i < 0$, while $X_{i,j}(0) = 0$ if $i\neq j$ and $c_i<0$;
moreover, since $c_i$ decreases in $i$, $c_{K{-}1}\geq 0$ implies that no starvation can ever occur, i.e. $X_{i,j}(0) = 0$ for all $i,j$.
The computation of the remaining $X_{i,j}(0)$ follows the same approach
as that in the computation of $W_i(0)$.
When replacing $q$ by $q_a$, we obtain the probability $X_{i,j}(q_a)$ that the first starvation
happens at state $j$, given $i$ other flows observed by the tagged flow at the beginning
of the playback process. The starvation probability
in a rebuffering process is calculated by $X_{i,j}(q_b)$, given the rebuffering threshold $q_b$.
The probability of having $L$ starvations can be expressed as the product of the probabilities
from the first prefetching to the last playback. The probability vector from $I_1^A$ to $I_1^B$
is obtained by
\begin{eqnarray}
\{\mathbb{P}_{I_1^A\rightarrow I_1^B}\} = \mathbf{\pi}\cdot \exp\big(-\theta q_a\big)\cdot\mathbf{V}(0;q_a), \; \forall
I_1^A, I_1^B \in S.
\label{eq:starvation_pgf5}
\end{eqnarray}
\noindent The probability vector from $I_1^A$ to $I_2^A$ is,
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\{\mathbb{P}_{I_1^A\rightarrow I_2^A}\} = \{\mathbb{P}_{I_1^A\rightarrow I_1^B}\} \cdot \mathbf{X}(q_a)\nonumber\\
&& = \mathbf{\pi}\cdot \exp\big(-q_a\theta\big)\cdot\mathbf{V}(0;q_a)\cdot \mathbf{X}(q_a), \forall
I_1^A, I_2^A \in S.
\label{eq:starvation_pgf6}
\end{eqnarray}
Recall that the starvation happens at state $I_2^A$, and the rebuffering process
ends at state $I_2^B$ with the prefetched video duration $q_b$.
We next compute the probability of having only one starvation denoted by $\mathbb{P}_{\mathrm{1 starv}}$. The possible
paths include $\{I_1^A,I_1^B,I_2^A, I_e\}$ and $\{I_1^A,I_1^B,I_2^A, I_2^B,I_e\}$.
The first part of $\mathbb{P}_{\mathrm{1 starv}}$ refers to the case that the remaining video duration
is less than the rebuffering threshold $q_b$. The second part refers to the case that
the remaining video duration is longer than $q_b$ and there is no starvation after the rebuffering process.
{\small
\begin{eqnarray}
\!\!\!\!\!\!\!&&\!\!\!\!\!\mathbb{P}_{\mathrm{1 starv}} = \{\mathbb{P}_{I_1^A\rightarrow I_2^A}\} \cdot \mathbf{1} \cdot\big(1-\exp(-q_b\theta) \big) + \{\mathbb{P}_{I_1^A\rightarrow I_2^B}\} \cdot (1-\mathbf{W}(q_b))\nonumber\\
\!\!\!\!\!\!\!&&\!\!\!\!\!= \mathbf{\pi}\cdot \exp\big(-q_a\theta\big)\cdot\mathbf{V}(0;q_a)\cdot \mathbf{W}(q_a) \cdot\big(1-\exp(-q_b\theta) \big)\nonumber\\
\!\!\!\!\!\!\!&&\!\!\!\!\! + \mathbf{\pi}\cdot \exp\big(-(q_a+q_b)\theta\big)\cdot\mathbf{V}(0;q_a)\cdot \mathbf{X}(q_a)\cdot \mathbf{V}(0;q_b)\cdot (1-\mathbf{W}(q_b)).
\label{eq:starvation_pgf7}
\end{eqnarray}
}
\noindent Here, the expression $\big(1-\exp(-q_b\theta)\big)$ is the probability of $I_2^A\rightarrow I_e$ in the first path,
and the expression $(1-\mathbf{W}(q_b))$ is that of $I_2^B\rightarrow I_e$ in the second path.
Similarly, we can deduce the probability $\mathbb{P}_{\mathrm{L starv}}$ of having $L$ ($L>1$)
starvations recursively:
{\small
\begin{eqnarray}
\!\!\!\!\!\!\!&&\!\!\!\!\!= \{\mathbb{P}_{I_1^A\rightarrow I_{L{+}1}^A}\} \cdot \mathbf{1} \cdot\big(1-\exp(-q_b\theta) \big) + \{\mathbb{P}_{I_1^A\rightarrow I_{L{+}1}^B}\} \cdot (1-\mathbf{W}(q_b))\nonumber\\
\!\!\!\!\!\!\!&&\!\!\!\!\!= \mathbf{\pi}\cdot \exp\big(-q_a\theta\big)\cdot\mathbf{V}(0;q_a)\mathbf{X}(q_a)\cdot \Big(\exp\big({-}q_b\theta\big)\mathbf{V}(0;q_b)\mathbf{X}(q_b)\Big)^{L{-}1}\nonumber\\
\!\!\!\!\!\!\!&&\!\!\!\!\!\cdot \mathbf{1}\cdot \big(1-\exp(-q_b\theta) \big) + \mathbf{\pi}\cdot \exp\big(-(q_a+q_b)\theta\big)\cdot\mathbf{V}(0;q_a)\mathbf{X}(q_a)\cdot\nonumber\\
\!\!\!\!\!\!\!&&\!\!\!\!\!\cdot \Big(\exp\big({-}q_b\theta\big)\mathbf{V}(0;q_b)\mathbf{X}(q_b)\Big)^{L{-}1}\cdot \mathbf{V}(0;q_b)\cdot (1-\mathbf{W}(q_b)).
\label{eq:finalpgf}
\end{eqnarray}
}
Though the expression in eq.\eqref{eq:finalpgf} looks complicated, it only involves repeated
products of $K\times K$ matrices that can be calculated easily.
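For reference, eq.\eqref{eq:finalpgf} can be evaluated in a few lines. The sketch below (ours) assumes that \texttt{V}, \texttt{X} and \texttt{W} are callables returning the matrices $\mathbf{V}(0;q)$, $\mathbf{X}(q)$ and the vector $\mathbf{W}(q)$ precomputed as described above:
\begin{verbatim}
import numpy as np

def prob_L_starvations(L, pi, theta, qa, qb, V, X, W):
    # Probability of exactly L >= 1 starvations, eq. (finalpgf).
    one = np.ones(len(pi))
    head = pi * np.exp(-theta * qa) @ V(qa) @ X(qa)
    loop = np.linalg.matrix_power(
        np.exp(-theta * qb) * V(qb) @ X(qb), L - 1)
    # download ends during the (L+1)-th prefetching:
    p_stop = (1 - np.exp(-theta * qb)) * (head @ loop @ one)
    # rebuffering completes and no further starvation occurs:
    p_play = np.exp(-theta * qb) * (head @ loop @ V(qb) @ (1 - W(qb)))
    return p_stop + p_play
\end{verbatim}
For $L=1$ the matrix power is the identity and the expression reduces to eq.\eqref{eq:starvation_pgf7}.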
\section{VBR Streaming: Modeling QoE}
\label{sec:vbr}
In this section, we investigate the QoE of variable bit rate streaming (VBR). We introduce
a diffusion process to model the variation of playback rate.
\subsection{Queueing Model of VBR Streaming}
In VBR, the frame size depends on the video scene. For instance,
complex segments of a video clip require more bits to render each frame than
simple segments. The playback process thus exhibits variation in the
service rate. The complex and the simple
segments occur randomly, producing a mean playback rate. In this context,
an important question is whether the jittering of the playback rate
significantly influences the starvation behavior or not.
In VBR streaming, the video file size is exponentially distributed with
the mean $1/\theta_F$. Therefore, the Markovian property
of flow departure still holds in Fig.\ref{fig:markovchain0}-\ref{fig:markovchain2}
and the transition rates remain the same as in Section \ref{sec:cbr},
whereas the video duration follows a general distribution.
We define the mean playback rate to be $Bitrate$. The
mean frame size is then $\frac{Bitrate}{25}$ with a frame rate of 25fps. Denote by $\sigma$
the standard deviation of the video frame size. The total variance of the video frames
served in one second is $25\sigma^2$.
We define an It\^{o} process $\{\mathcal{S}(t)\}$ to describe the total service
measured in the duration of video contents by time $t$. The It\^{o} process $\{\mathcal{S}(t)\}$ satisfies
the following stochastic differential equation
\begin{eqnarray}
d\mathcal{S}(t) = \mathcal{S}(t+h) - \mathcal{S}(t) = 1\cdot h + \bar{\sigma} d\mathcal{B}_h,
\label{eq:brownianmotion1}
\end{eqnarray}
\noindent where $\mathcal{B}$ is the standard Wiener process and the subscript $h$ denotes the duration.
The process $\mathcal{B}_h$ satisfies
$\mathcal{B}_h|_{h=0} = 0$ and $E[\mathcal{B}_h] =0$, and its increment satisfies
$d\mathcal{B}_h = \sqrt{h}\,\mathcal{N}(0,1)$ in distribution, where $\mathcal{N}(0,1)$ is the
standard Normal distribution.
In eq.\eqref{eq:brownianmotion1},
the parameter $\bar{\sigma}$ denotes the standard deviation of video playback
in a unit time. Hence, given the playback starting at time 0, the total variance of $\mathcal{S}(t)$
is $\mathrm{Var}[\mathcal{S}(t)] = \bar{\sigma}^2\mathrm{Var}[\mathcal{B}_t] = t\bar{\sigma}^2$.
At the unit time $t=1$ second, we have $\mathrm{Var}[\mathcal{S}(1)] = \bar{\sigma}^2$.
Remember that 25 frames are served in one second. The total variance of served bits is thus $25\sigma^2$.
When it is re-scaled by the video bitrate (measured in the duration of video contents),
the variance is expressed as $\frac{25\sigma^2}{Bitrate^2}$. Therefore, we obtain the mapping
$\bar{\sigma} = \frac{5\sigma}{Bitrate}$.
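For instance, with exponentially distributed frame sizes of mean $14400$ bits at $Bitrate=360$Kbps (the setting used later in Section \ref{sec:simu}), $\sigma=14400$ and hence $\bar{\sigma} = 5\cdot 14400/360000 = 0.2$, i.e. $\bar{\sigma}^2 = 0.04$.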
In this section, we integrate the playback perturbation with the fluid-level flow dynamics.
The method employed here is inspired by the ruin analysis in actuarial science \cite{NAA07:Lu,Dufresne}.
With the continuous time assumption, we use the diffusion process $\mathcal{S}(t) $ to
describe the queueing dynamics with the perturbation of playback rate.
The continuous time queueing process in the prefetching phase, $\{Q_a(t); t\geq 0\}$, is defined
as
\begin{eqnarray}
Q_a(t) = \sum_{l=1}^{N_e(t)} b_{I_l}(A_l - A_{l-1}) + b_{I_{N_e(t)}}(t-A_{N_e(t)}) + \bar{\sigma}\mathcal{B}_t.
\label{eq:queue_startup_diffu}
\end{eqnarray}
Similarly, the queueing process in the playback phase, $\{Q_b(t); t\geq 0\}$, is expressed as
\begin{eqnarray}
Q_b(t) = q {+} \sum_{l=1}^{N_e(t)} c_{I_l}(A_l {-} A_{l-1}) + c_{I_{N_e(t)}}(t{-}A_{N_e(t)}) {+} \bar{\sigma}\mathcal{B}_t.
\label{eq:queue_service}
\end{eqnarray}
\noindent For the VBR streaming, the starvation
can be caused by either the playback rate variation in small time scales or the flow dynamics in large
time scales.
\subsection{Starvation Probability}
The computation of the starvation probability uses a similar technique to that in Section \ref{sec:cbr}.
All possible events that take place in an infinitesimal time interval are taken into account.
Conditioned on the flow dynamics and throughput perturbation in $[0,h]$, we have
\begin{eqnarray}
W_i(q) \!\!&=&\!\! (1-\lambda_i h-\mu_{i} h)W_i(q+c_ih+d\mathcal{B}_h) \nonumber\\
\!\!&&\!\! + \lambda_ihW_{i+1}(q+c_ih+d\mathcal{B}_h) \nonumber \\
\!\!&&\!\!+ \nu_ihW_{i-1}(q+c_ih+d\mathcal{B}_h) + o(h), \; \forall i\in S.
\label{eq:starvprob_brownian1}
\end{eqnarray}
\noindent The above equations yield
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!&&1/h\cdot\big(W_i(q{+}c_ih{+}d\mathcal{B}_h) {-} W_i(q)\big) = (\lambda_i{+}\mu_i) W_i(q{+}c_ih{+}d\mathcal{B}_h) \nonumber\\
\!\!\!\!\!\!\!\!\!\!\!\!&& {-} \lambda_iW_{i{+}1}(q{+}c_ih{+}d\mathcal{B}_h) {-} \nu_iW_{i{-}1}(q{+}c_ih{+}d\mathcal{B}_h) {+} o(h)/h.
\label{eq:starvprob_brownian2}
\end{eqnarray}
As $h\rightarrow 0$, the expectation of the left-hand side of eq.\eqref{eq:starvprob_brownian2} is expressed as
\begin{eqnarray}
E[\frac{1}{h}\big(W_i(q+c_ih+d\mathcal{B}_h)- W_i(q)\big)] = c_i\dot{W}_i(q) + \frac{1}{2}\bar{\sigma}^2\ddot{W}_i(q),
\label{eq:starvprob_brownian3}
\end{eqnarray}
according to \cite{Dufresne}. Substituting \eqref{eq:starvprob_brownian3} into \eqref{eq:starvprob_brownian2}, we obtain
\begin{eqnarray}
&&a \ddot{W}_i(q) {+} c_i \dot{W}_i(q) {-} (\lambda_i{+}\mu_i)W_i(q){+} \lambda_iW_{i{+}1}(q) \nonumber\\
&& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + \nu_iW_{i-1}(q) = 0, \forall i\in S,
\label{eq:starvprob_brownian4}
\end{eqnarray}
\noindent where the double dot denotes the second-order derivative.
The constant $a$ equals $\frac{1}{2}\bar{\sigma}^2$.
The boundary conditions satisfy
\begin{eqnarray}
W_i(0) = 1, \;\; \forall i\in S. \\
\dot{W}_i(\infty) = 0, \;\; \forall i\in S.
\label{eq:starvprob_brownian5}
\end{eqnarray}
The starvation probability with no initial prefetching is 1, because the queueing process
oscillates very fast: the queue length will go ``below'' 0 immediately, almost surely.
When $q$ is infinitely large, the starvation probability $W_i(q)$ is 0. But $W_i(q)$
approaches 0 gradually, giving rise to the first-order derivative $\dot{W}_i(\infty) = 0$.
We denote by $\mathbf{Y}(q):=\{W_0(q),\cdots, W_{K{-}1}(q), \dot{W}_0(q),\cdots,\dot{W}_{K{-}1}(q)\}$.
We further define two matrices, $Y_3$ and $Y_4$, that have the following forms:
\begin{eqnarray}
Y_3 = \mathbf{diag}\{c_i/a\}\cdot \mathbf{M}_W \quad \textrm{and} \quad Y_4 = \mathbf{diag}\{-c_i/a\}. \nonumber
\end{eqnarray}
Then, equations in \eqref{eq:starvprob_brownian4} are rewritten in the matrix form
\begin{eqnarray}
\dot{\mathbf{Y}}(q) = \mathbf{M}_Y \mathbf{Y}(q)= \left[ \begin{array}{ccc}
\mathbf{0} & I \\
Y_3 & Y_4 \end{array} \right]\cdot \mathbf{Y}(q).
\label{eq:starvprob_brownian6}
\end{eqnarray}
The solution to eq.\eqref{eq:starvprob_brownian6} is thus given by
\begin{eqnarray}
\mathbf{Y}(q) = \exp{(\mathbf{M}_Yq)}\cdot \mathbf{Y}(0).
\label{eq:starvprob_brownian7}
\end{eqnarray}
Since $Y_3$ is similar to a symmetric tridiagonal matrix and $Y_4$ is a diagonal matrix,
we make the following conjecture.
\begin{conj}
\label{conj:no1} The matrix $\mathbf{M}_{Y}$ has $2K$ real eigenvalues, and can be expressed
as $\mathbf{M}_{Y}=D_Y\Lambda_YD_Y^{-1}$, where $D_Y$ is an invertible matrix and $\Lambda_Y$
is a diagonal matrix.
\end{conj}
On the basis of the above conjecture, eq.\eqref{eq:starvprob_brownian7} can be rewritten as
\begin{eqnarray}
\mathbf{Y}(q) = D_Y\exp{(\Lambda_Yq)}D_Y^{-1}\cdot \mathbf{Y}(0).
\label{eq:starvprob_brownian8}
\end{eqnarray}
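The boundary-value problem can be solved by the same mode-elimination as in the CBR case. The sketch below (ours; the rates and the diffusion constant are illustrative) assumes, in line with Conjecture \ref{conj:no1} and the structure of the problem, that exactly $K$ of the $2K$ eigenvalues are non-negative:
\begin{verbatim}
import numpy as np

K = 10
lam = np.full(K, 0.12); lam[-1] = 0.0     # illustrative rates
nu = 0.0868 * np.arange(K); mu = nu + 0.0868
c = 2.5e6 / ((np.arange(K) + 1) * 360e3) - 1.0
a = 0.02                                  # diffusion constant

M_W = (np.diag(lam + mu) - np.diag(lam[:-1], 1)
       - np.diag(nu[1:], -1)) / c[:, None]
M_Y = np.block([[np.zeros((K, K)), np.eye(K)],
                [np.diag(c / a) @ M_W, np.diag(-c / a)]])

w, D = np.linalg.eig(M_Y)
print(np.abs(w.imag).max())               # ~0 if Conjecture 1 holds
E = np.linalg.inv(D); grow = w.real >= 0
assert grow.sum() == K                    # assumed: K growing modes

# W(0) = 1 is known; the derivatives dW_i/dq(0) are chosen so that
# the coefficients of all growing modes vanish (decay at infinity).
Y0 = np.ones(2 * K)
Y0[K:] = np.linalg.solve(E[grow, K:].real,
                         -E[grow, :K].real @ np.ones(K))

q = 10.0
fac = np.zeros_like(w)
fac[~grow] = np.exp(w[~grow] * q)         # decaying modes only
Wq = (D @ (fac * (E @ Y0))).real[:K]      # W_i(q) for all i
print(np.round(Wq, 4))
\end{verbatim}
Propagating only the decaying modes avoids the numerical blow-up of $\exp(\Lambda_Y q)$ on the growing modes, whose coefficients vanish by construction.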
\section{Simulation}
\label{sec:simu}
In this section, we compare numerical experiments
with the developed analytical framework, both implemented in MATLAB. Our model exhibits excellent accuracy.
\subsection{Constant bit-rate streaming}
We consider a network that admits a maximum of ten simultaneous streaming flows
and has a capacity of 2.5Mbps.
Flows arrive to the network with a Poisson rate $\lambda = 0.12$.
Let the video duration
be exponentially distributed with the mean $60$ seconds. Then, we have
$\mu=0.1302$ and $\rho = 0.9216$ at the playback rate 360Kbps, and
$\mu=0.0868$ and $\rho = 1.3824$ at the playback rate 480Kbps.
The simulation lasts $5\times 10^5$ seconds.
\noindent\textbf{Starvation probabilities:} In this set of experiments,
we will illustrate the overall starvation probability, the
starvation probabilities when the playback process begins at
different states, as well as the p.g.f. of starvation events.
Figure \ref{fig:starvprob1} shows the overall starvation probabilities with
different settings of the start-up threshold. When it
increases from 0 to 20s of video contents, the starvation probability decreases.
The higher playback rate (e.g. 480Kbps) incurs larger starvation probabilities
in comparison with the lower playback rate (e.g. 360Kbps). Our mathematical models
match the simulations very well.
Figure \ref{fig:starvprob2} compares the starvation probabilities when
the playback process begins at different states. A higher state
refers to more coexisting flows (or congestions), and hence causing
a larger starvation probability. Note that the per-flow throughputs at states 7 and 9 are less than 360Kbps.
Without prefetching, the starvation event then happens for sure.
We further evaluate the probabilities of having one or two starvations in the whole
procedure. For clarity, we choose the same value for the start-up and re-buffering thresholds.
The starvation probabilities increase in the beginning and decrease afterwards
as $q_a$ (or $q_b$) increases from 0 to 30s of video content. This is because there are many starvations
with a very small start-up threshold and few starvations with a very large start-up threshold.
Our analytical model predicts the starvation probabilities accurately.
\begin{figure*}[!htb]
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{starvprob_thresh.eps}
\caption{Overall starvation probability vs. start-up threshold}
\label{fig:starvprob1}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{starvprob_states.eps}
\caption{Starvation probabilities at different playback states with a playback rate 360Kbps}
\label{fig:starvprob2}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{1or2starvprob_thresh.eps}
\caption{Probability of observing one and two starvations}
\label{fig:starvprob3}
\end{minipage}
\vspace{-0.3cm}
\end{figure*}
\begin{figure*}[!htb]
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{startup_delay2.eps}
\caption{CDF of start-up delay with $q_a=10$}
\label{fig:startupdelay}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{starv_prob_states_diffu.eps}
\caption{Starvation probabilities at all states with $a = 0.01$ and $0.5$,
computed by the models}
\label{fig:starv_vbr_1}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{startup_process_diffu.eps}
\caption{Probabilities that the prefetching process starts from a state (from 0 to 9)
and ends at state 2 or 7 with $a = 0.01$ and $0.5$.}
\label{fig:startup_vbr_2}
\end{minipage}
\vspace{-0.3cm}
\end{figure*}
\noindent\textbf{Start-up delay:} We illustrate the
distribution of start-up delays in Fig.\ref{fig:startupdelay}.
The start-up threshold is set to 10s. We highlight the
c.d.f. curves when the tagged flow sees $3,5,$ and $7$
other flows respectively after entering the network.
We use MATLAB PDE function \emph{pdepe} to compute the model in eq.\eqref{eq:solving_startupdelay_eq3}
numerically. Fig.\ref{fig:startupdelay} demonstrates accurate estimation
of start-up delay in the simulation. When the cumulative probability
is close to 1, the PDE model oscillates slightly. This is because the
initial condition $U_i(0,0)$ is discontinuous in eqs.\eqref{eq:solving_startupdelay_eq4}
and \eqref{eq:solving_startupdelay_eq6}. The dotted lines exhibit the c.d.f curves when
we adopt the Brownian motion approach to compute the explicit form. The parameter
$\alpha$ is chosen to be $0.1$ in our paper.
As shown in Fig.\ref{fig:startupdelay}, the explicit-form model provides a rough estimation of
the c.d.f. of start-up delay. However,
the explicit-form model has almost the same mean start-up delay as that of the experiments.
\begin{figure*}[!htb]
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{starv_vbr_simu.eps}
\caption{Starvation comparison among VBR of different frame size distributions and CBR model}
\label{fig:starv_vbr_3}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{starvprob_thresh_ff.eps}
\caption{Starvation probability VS start-up threshold with Rayleigh fading}
\label{fig:fastfading1}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.3\linewidth}
\centering
\includegraphics[width=2.4in, height = 1.9in]{starvprob_states_ff_new.eps}
\caption{Starvation probability at different states with Rayleigh fading}
\label{fig:fastfading2}
\end{minipage}
\end{figure*}
\subsection{Variable bit-rate streaming}
We evaluate the QoE metrics of VBR streaming with a different set of parameters.
The bandwidth is set to 2.0Mbps, and the flow arrival rate is set to 0.08.
Each video stream has a mean playback rate of 360Kbps and a frame rate of 25fps.
The sizes of the video files are exponentially distributed with the mean $2.16\times 10^7$ bits
(equivalent to 60s with the playback rate 360Kbps). Then, the traffic load
of the system is given by $\rho = 0.864$. The per-flow throughput
in states $5\sim 9$ is insufficient to support the mean playback rate.
We first investigate how the playback variance influences
the prefetching and the playback processes. Fig.\ref{fig:starv_vbr_1}
shows the starvation probabilities when the start-up threshold
and the variance change. When $a=0.01$, the starvation probabilities
computed from the VBR model are the same as those computed from the
CBR model, while they differ greatly for $a=1$. For the case $a=1$,
the jittering of the playback rate influences the starvation probability
more with $q_a=2$ than with $q_a=8$. Fig.\ref{fig:startup_vbr_2}
compares the probabilities that the prefetching process
ends at states 2 and 7, respectively. From this set of experiments,
we can see that even $a=0.5$ does not obviously influence the prefetching.
Fig.\ref{fig:starv_vbr_3} compares the numerical results of VBR streaming
with the model for CBR streaming.
In our simulation, the mean frame size is 14400 bits. According to \cite{MASI08},
the video frame size roughly follows Erlang distribution.
If the Erlang distribution is the sum of $k$ i.i.d. exponentially distributed r.v.s.,
the mean of these r.v.s. is $14400/k$. We consider two cases in this set of
experiments, $k=1$ (i.e. exponential r.v.) and $k=3$. The resulting variances
are $\bar{\sigma}^2 = 0.04$ (i.e. $a = 0.02$) for $k=1$ and $\bar{\sigma}^2 = 0.013$
(i.e. $a=0.0066$) for $k=3$.
The simulation time is $3\times 10^6$ playback slots. From Fig.\ref{fig:starv_vbr_3},
we are surprised to see
that the Erlang distributions of video frames do not obviously influence
the starvation probabilities. The analytical framework for CBR streaming is
good enough to model the starvation behavior for VBR streaming.
\section{Extension to Fast Fading}
\label{sec:extension}
This section models the starvation behavior of CBR streaming
when users experience fast channel fading.
We compute the first two moments of the bit arrival process and
show how these parameters can be fed into our analytical
framework.
{\bf Network description.}
Due to the change of radio condition (e.g. user mobility, or a car passing by the user),
the signal strength is no longer a constant at different scheduling slots. To explore
the multiuser diversity gain, the base station adopts the normalized SNR scheduling algorithm
for allocating time slots to coexisting flows.
We begin with the scenario with a fixed population of $i$ users (or flows)
served by a single base station. In each slot, the users measure their channel
qualities and feed them back to the BS. Based on the channel quality indications,
the BS transmits to only one of the users every slot.
Denote by $\gamma_{j,n}$ the instantaneous signal
to noise ratio (SNR) of user $j$, ($1\leq j\leq i$), at slot $n$.
As in most previous work,
we assume that all the users experience Rayleigh fast-fading.
Denote by $\bar{\gamma}_j$ the average SNR of user $j$. Then, the received SNR
of user $j$ is an exponentially distributed random variable with the
following probability density function
$ g_j(\gamma) = \frac{1}{\bar{\gamma}_j}\exp(-\frac{\gamma}{\bar{\gamma}_j}).
$
The NSNR scheduler selects the user that has the highest relative SNR for transmission,
$
j^*_n = \arg\max_j\{\gamma_{j,n}/\bar{\gamma}_j,\; j=1,2,\cdots,i\},
$
where $j^*_n$ is the scheduled user at slot $n$. In this section, we consider
the case of homogeneous average SNRs (i.e. $\bar{\gamma}_j=\bar{\gamma}$ for all $j$). Therefore, the NSNR scheduler is equivalent to
the maximum sum rate (MSR) scheduler, which gives the largest per-user throughput.
Since the SNRs of different users are independently distributed, the scheduled SNR,
denoted by $\gamma^*$, has the following probability density function \cite{JSAC07:Chang}
$ g^*(\gamma) = \frac{i}{\bar{\gamma}}\exp(-\frac{\gamma}{\bar{\gamma}}) \big(1-\exp(-\frac{\gamma}{\bar{\gamma}})\big)^{i{-}1}. $
Denote by $f(\gamma)$ the data rate of a user with the SNR $\gamma$. Here, $f(\cdot)$ can be a linear function in the low-SNR regime
and a logarithmic function in the high SNR regime if the modulation scheme is continuous. For discrete modulations,
$f(\cdot)$ is a step function of $\gamma$. Without loss of generality, we let $f(\gamma) = \log_2(1+\gamma)$.
{\bf Analysis of throughput process.}
The fast fading along with NSNR scheduling brings variation of bit arrivals to the receiver.
The analytical framework for VBR streaming can be naturally extended to
this scenario. The only modification is that the jittering of the playback rate
is replaced by that of the bit arrivals. Therefore, we need the knowledge
of the mean throughput and its variance measured
in the duration of video contents. To achieve this goal, we must first obtain
the mean throughput and its variance measured in bits.
Denote by $r_i^*$ the transmission rate of the user with the best SNR at a slot
in each Hz when
there are $i$ active flows in the cell. Denote by $r_{i}$ the transmission rate
to \underline{\emph{one particular flow}} at a slot per Hz. Given the assumption that all the flows have the same
average SNR, each flow has an equal probability of being scheduled. Hence, we have
\begin{eqnarray}
r_{i} := \left\{\begin{matrix}
\;r_i^* \;\; &&\textrm{ w.p. } \;\;\frac{1}{i} ;\\
\;0 \;\; &&\;\textrm{ w.p. } \;\;\frac{i-1}{i}.
\end{matrix}\right.
\label{eq:pf_thru1}
\end{eqnarray}
For the r.v. $r_i^*$, its mean and variance are computed by
\begin{eqnarray}
E[r_i^*] \!&=&\! \int_0^{\infty} f(\gamma)\cdot g^*(\gamma) d\gamma, \label{eq:pf_thru2}
\end{eqnarray}
\begin{eqnarray}
\mathrm{Var}[r_i^*] \!&=&\! \int_0^{\infty} f(\gamma)^2\cdot g^*(\gamma) d\gamma - (E[r_i^*])^2.
\label{eq:pf_thru3}
\end{eqnarray}
Eqs. \eqref{eq:pf_thru1}-\eqref{eq:pf_thru3} yield
\begin{eqnarray}
E[r_i] \!&=&\! \frac{1}{i} E[r_i^*], \label{eq:pf_thru4}\\
\mathrm{Var}[r_i] \!&=&\! E[r_i^2] - (E[r_i])^2 = \frac{1}{i}E[(r_i^*)^2]-\frac{1}{i^2}(E[r_i^*])^2 \nonumber\\
\!&=&\! \frac{1}{i}\mathrm{Var}[r_i^*] + (E[r_i^*])^2(\frac{1}{i}-\frac{1}{i^2}).
\label{eq:pf_thru5}
\end{eqnarray}
Denote by $D_s$ the duration of scheduling slot (usually 2ms), and by $B$ the width of wireless spectrum
in Hz. Then, the mean and the variance of per-flow throughput measured in the \emph{duration of video contents} are
$\frac{B\cdot D_s\cdot E[r_i]}{Bitrate}$ and $(\frac{B\cdot D_s}{Bitrate})^2\cdot \mathrm{Var}[r_i]$
respectively in one slot.
Let $R_i$ be the r.v. of \underline{\emph{per-flow throughput in one second}} that is measured by the duration of
video contents. In one second, the total throughput of a flow at one Hz is the sum of throughput in $\frac{1}{D_s}$ slots.
Therefore, the r.v. $R_i$ is the sum of $\frac{1}{D_s}$ i.i.d. r.v.s corresponding to the per-slot throughput.
We can express the mean and the variance of $R_i$ as follows:
\begin{eqnarray}
\!\!\!E[R_i] \!\!\!\!&=&\!\! \frac{1}{D_s}\cdot \frac{B\cdot D_s\cdot E[r_i]}{Bitrate} = \frac{B\cdot E[r_i^*]}{i\cdot Bitrate}, \label{eq:pf_thru6}\\
\mathrm{Var}[R_i] \!\!\!\!&=&\!\! \frac{1}{D_s}\cdot (\frac{B\cdot D_s}{Bitrate})^2\cdot \mathrm{Var}[r_i] \nonumber\\
\!\!\!\!\!\!\!\!&=&\!\!\!\! \big(\frac{1}{i}\mathrm{Var}[r_i^*] {+} (E[r_i^*])^2(\frac{1}{i}{-}\frac{1}{i^2})\big)\cdot\frac{B^2\cdot D_s}{Bitrate^2}.
\label{eq:pf_thru7}
\end{eqnarray}
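The moments in eqs.\eqref{eq:pf_thru2}-\eqref{eq:pf_thru7} reduce to one-dimensional quadratures. A short sketch (ours; the parameters mirror the numerical example below) with $f(\gamma)=\log_2(1+\gamma)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gbar = 10 ** (5 / 10)          # average SNR of 5 dB
B, Ds, bitrate = 1e6, 2e-3, 480e3

def g_star(g, i):              # pdf of the scheduled SNR with i flows
    return (i / gbar) * np.exp(-g / gbar) \
           * (1 - np.exp(-g / gbar)) ** (i - 1)

f = lambda g: np.log2(1 + g)

for i in (1, 5, 10):
    m1, _ = quad(lambda g: f(g) * g_star(g, i), 0, np.inf)
    m2, _ = quad(lambda g: f(g) ** 2 * g_star(g, i), 0, np.inf)
    ER = B * m1 / (i * bitrate)                       # eq. (pf_thru6)
    VR = ((m2 - m1 ** 2) / i
          + m1 ** 2 * (1 / i - 1 / i ** 2)) \
         * B ** 2 * Ds / bitrate ** 2                 # eq. (pf_thru7)
    print(i, round(ER, 4), round(VR, 4))
\end{verbatim}
The printed $E[R_i]$ and $\mathrm{Var}[R_i]$ should reproduce the magnitudes quoted in the numerical examples below.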
In general, the frequency width $B$ is 1$\sim$5 MHz, the bit-rate is usually greater than
200 Kbps, and $D_s$ equals 0.002s. Then, $\mathrm{Var}[R_i]$ is usually on the order of $10^{-2}$.
If starvation happens at state $i$,
$E[R_i]$ is usually less than 1, which means that $\frac{B}{Bitrate}$ needs to be small.
However, the small $\frac{B}{Bitrate}$ results in the small variance $\mathrm{Var}[R_i]$.
That is to say, if the variance of the bit arrival process is large, there might be no starvations.
On the contrary, if the starvations appear, the variance is usually small so that its impact
on the starvation is negligible. For this reason, we directly use the framework without diffusion approximation
to model the streaming QoE in a fast fading channel.
{\bf Markov model of flow dynamics.}
To analyze the interaction between NSNR scheduling and the flow dynamics,
a fluid-level capacity model is required. When the average SNR
of all active users are the same, the per-flow throughput in each slot
is i.i.d. and only depends on the quantity of flows (see eq.\eqref{eq:pf_thru2}).
Given the exponentially distributed video size, we can model the flow dynamics
as a Markov process.
The Markov processes in Fig.\ref{fig:markovchain0}-\ref{fig:markovchain2}
contain transitions rates such as $\mu_i,\nu_i$ and $\varphi_i$. However,
it is not direct to feed the parameters of this section into the above Markov processes.
In Fig.\ref{fig:markovchain0}, state $i$ refers to the number of flows in the system.
The departure rate is computed by $\mu_i = i\theta E[R_i]$ for $i\in S\cup\{K\}$,
recalling that $E[R_i]$ is the average per-flow throughput measured in seconds of video content per second.
It is easy to obtain the stationary distribution of having $i$ flows by
{\small
\begin{eqnarray}
z_i^a \!\!&=&\!\! \frac{\lambda^i}{\prod_{l=1}^{i}\mu_l}
\left[ 1+\sum_{j=1}^{K}\frac{\lambda^j}{\prod_{l=1}^{j}\mu_l}
\right]^{-1} , \;\;\; \forall i=0,\cdots,K,\nonumber
\label{eq:pf_stationary1}
\end{eqnarray}
}
(with the convention that $\prod$ over an empty set is 1).
When a tagged user joins in the system and is also admitted, it observes $i$
other flows with the following stationary distribution $\{\pi\}:$
\begin{eqnarray}
\pi_i = \frac{z_i^a}{1-z_K^a} = \frac{\frac{\lambda^i}{\prod_{l=1}^{i}\mu_l}}{1+\sum_{j=1}^{K-1}\frac{\lambda^j}{\prod_{l=1}^{j}\mu_l}},\;\; \forall i\in S. \nonumber
\label{eq:pf_stationary2}
\end{eqnarray}
The Markov processes shown in Fig.\ref{fig:markovchain1}-\ref{fig:markovchain2}
are conditioned on the existence of the tagged flow. At state $i$, the per-user
throughput is $E[R_{i+1}]$ because there are $i$ flows plus the tagged one.
Hence, the transition rate $\nu_i$ is computed by $\nu_i := i\theta\cdot E[R_{i+1}]$
for all $i\in S$. The transition rate $\varphi_i$ is expressed as $\varphi_i:=\theta\cdot E[R_{i+1}]$.
Define $\tilde{\mu}_i$ as the total departure rate at state $i$ that has
\begin{eqnarray}
\tilde{\mu}_i:=\varphi_i+\nu_i = (i+1)\theta E[R_{i+1}] = \mu_{i+1},
\label{eq:pf_transitionrate1}
\end{eqnarray}
in the presence of the tagged flow. The constants $b_i$ and $c_i$ are obtained by
\begin{eqnarray}
b_i = E[R_{i+1}] \;\; \textrm{ and } \;\; c_i = b_i - 1,\;\; \forall i\in S.
\label{eq:pf_transitionrate2}
\end{eqnarray}
Substituting the above parameters to the framework in section \ref{sec:cbr},
we can derive the approximated QoE metrics in a fast fading channel with flow dynamics.
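Concretely, feeding the fast-fading parameters into the framework amounts to a few array operations. The sketch below (ours) uses the mean throughputs quoted in the numerical example that follows:
\begin{verbatim}
import numpy as np

K, theta, lam = 10, 1 / 90, 0.07
ER = np.array([3.5749, 2.3702, 1.7844, 1.4369, 1.2061,
               1.0412, 0.9174, 0.8207, 0.7432, 0.6794])
# ER[i] = E[R_{i+1}]: per-flow throughput with i+1 flows in total

mu = np.arange(1, K + 1) * theta * ER    # mu_i = i * theta * E[R_i]
za = np.concatenate(([1.0], np.cumprod(lam / mu)))
za /= za.sum()                           # stationary distribution z^a
pi = za[:K] / (1 - za[K])                # entry distribution pi_i

nu = np.arange(K) * theta * ER           # nu_i = i * theta * E[R_{i+1}]
b = ER; c = b - 1                        # eq. (pf_transitionrate2)
print(np.round(pi, 4))
\end{verbatim}
The vectors \texttt{lam}, \texttt{nu}, \texttt{b} and \texttt{c} can then be plugged into the sketches of Section \ref{sec:cbr}.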
{\bf Numerical Examples.}
Consider a wireless channel with a frequency width of 1MHz.
The average SNR of the users is 5dB. The base station allows at most 10 flows simultaneously,
and schedules the transmission to one of them in every slot of duration 0.002s.
The video duration is exponentially distributed with the mean of 90 seconds and the
video bit rate is chosen to be 480Kbps. Then, the mean throughputs
are $\{$3.5749, 2.3702, 1.7844, 1.4369, 1.2061, 1.0412, 0.9174, 0.8207, 0.7432, 0.6794$\}$
times the playback rate at states from 0 to 9. In other words, the mean throughputs
at states 6$\sim$9 are insufficient to support continuous playback.
The variances at all states are
$\{$0.0083, 0.0144, 0.0144, 0.0134, 0.0124, 0.0114, 0.0105, 0.0098, 0.0091, 0.0086$\}$,
which are small enough. We consider two flow arrival rates, $\lambda = 0.07$ and $\lambda = 0.09$.
For $\lambda = 0.07$, the traffic load $\rho$ is greater than 1 at states 0$\sim$5
and less than 1 at states 6$\sim$9. For the latter case, we have $\rho>1$ at all the states.
Each set of simulation lasts $2\times 10^7$ time slots.
In Fig.\ref{fig:fastfading1} we compare the starvation probabilities measured
from a Rayleigh fading channel, and those computed from the model without
considering throughput variation. The simulation matches the model quite well,
which means that the flow-level dynamics have a dominant impact on the playback interruption,
while the impact of throughput variation due to Rayleigh fading is negligible.
In Fig.\ref{fig:fastfading2} we examine the starvation probabilities when the playback process
begins at different states. We test two start-up thresholds, $q_a =\{5, 10\}$, and two flow arrival
rates, $\lambda=\{0.07, 0.09\}$. One can observe
that the starvation probabilities do not differ much in high states (e.g. 8 and 9).
However, the starvation probabilities in the states with mean throughput around 1
are distinguishable, state 6 being an example.
With $\lambda=0.09$, a tagged flow sees the congested network (more other flows)
with a higher probability, and also encounters a higher probability of starvation afterwards.
\section{Conclusions and Further Extensions}
\label{sec:conclusion}
In this work, we developed an analytical framework to compute the
QoE metrics of media streaming service in wireless data networks.
Our framework takes into account the dynamics of the playout buffer at
three time scales: the scheduling duration, the
video playback variation, and the flow arrivals and departures.
We show that the proposed models can accurately predict the distribution of prefetching delay
and the probability generating function of buffer starvations.
The analytical results demonstrate that the flow dynamics have a dominant influence
on the QoE metrics compared to the jittering in the throughput and the video playback rate.
\textbf{Further Extensions:} Our analytical framework can be adapted to the following scenarios: i) hyper-exponential video length distribution,
ii) heterogeneous channel gains, and iii) mixed data and
streaming flows. The heterogeneity of video durations, channel gains, and traffic types
requires the classification of flows. The heterogeneous
video duration is usually modeled by the hyper-exponential
distribution. Users requesting the videos of the same exponential distribution fall
in one class. The same argument holds in the case of heterogeneous SNRs among users.
We can group the users with more or less the same average SNR in the same class (e.g. see \cite{Borst}).
The service times are still exponentially distributed, but with different parameters
in different user classes.
When classes are introduced, the Markov process is thus
modified to contain multi-dimensional states,
representing the number of (observed) flows in each class.
We can then construct the PDEs and the ODEs on top of them.
\bibliographystyle{abbrv}
\section{Introduction}
It has been known for a long time that there is a wonderful interplay between discrete time stochastic
processes and partial differential equations, see the seminal paper by Courant, Friedrichs and
Lewy \cite{CFL}. Since then, the pioneering work of Feller revealed deep connections to second
order differential equations with ``complicated'' boundary conditions, see the monograph by Mandl
\cite{mandl} for further details.
Our intention with this work is to go back to the roots and explore the connections of large
systems of ordinary differential equations to parabolic partial differential equations with
various (Wentzell, Robin) boundary conditions from a rather particular point of view: given a large
system of ordinary differential equations, we construct an ``approximating'' partial
differential equation, give estimates on the accuracy of this approximation, and show that in some
cases it is much easier to handle the parabolic equation than the large ODE system.
The main motivation of this theoretical investigation is to approximate a dynamic process on a
network by a partial differential equation. The network is given by an undirected graph and the
process is specified by the possible states of the nodes and the transition probabilities. Typical
examples are epidemic processes and opinion propagation on networks. Analysing the mean field approximation
for the expected number of infected nodes in an epidemic process on a large network we were led to
a first order PDE approximation in our previous work B\'atkai et al.
\cite{Batkai-Kiss-Sikolya-Simon}. In a recent paper, in which regular random, Erd\H
os-R\'enyi, bimodal random, and Barab\'asi-Albert graphs are studied, we have shown that a suitable
choice of the coefficients in the master equation leads to an ODE approximation with tridiagonal
transition rate matrices (see Nagy, Kiss and Simon \cite{NagySimon}). Our study in this paper aims at
approximating dynamic processes on networks, for which the transition rate matrix of the underlying
Markov chain has a tridiagonal structure (with a possible extension to similar matrices).
The paper is organized as follows. First, we introduce our general notation and setup along with a
standard heuristic derivation of an approximating PDE with dynamic boundary conditions via finite differences. It is followed by a different finite difference approximation to yield an approximating PDE with Robin boundary conditions.
Then in Section 4 we present the operator semigroup theoretic setup with the general approximation theorems needed, and we show how to use the well-developed operator matrix approach to differential operators
with Wentzell boundary conditions due to Engel and coauthors \cite{Engel,Engel1, Batkai-Engel} to
prove error estimates. Finally, in the last section we illustrate our results
with two examples: the first one is the propagation of two opinions along a cycle graph, called a
voter-like model; the second is an $SIS$-type epidemic propagation on a complete graph.
\section{Dynamic boundary conditions}
\noindent
In this section we fix our notation, collect the main definitions, derive the first approximating partial differential equation, and give the main heuristics which lie behind our approximation.
Let $N\in\NN$ be a large, fixed integer, and $a, b$ and $c$ real-valued functions on $[-\frac{1}{N},1+\frac{1}{N}]$.
For $0\leq k\leq N$, let $a_k:=a(\frac{k}{N})$, $b_k:=b(\frac{k}{N})$, and $c_k:=c(\frac{k}{N})$.
Consider the following tridiagonal matrix
\[
A_N:=\left(\begin{array}{ccccccc}
b_0 & c_1& 0&\cdots&0&0&0\\
a_0 & b_1&c_2&\cdots&0&0&0\\
0&a_1&b_2&&0 &0&0\\
\vdots& & &\ddots& & & \vdots\\
0& 0&0& &b_{N-2}&c_{N-1}&0\\
0& 0&0& &a_{N-2}&b_{N-1}&c_N\\
0& 0&0 &\cdots&0 &a_{N-1}&b_N\\
\end{array}\right)
\]
and the corresponding (ODE) system
\begin{equation}\label{eq:ode}
\left\{
\begin{aligned}
\dot{x}(t)&=A_Nx(t)\\
x(0)&=v \in\CC^{N+1}
\end{aligned}
\right.
\end{equation}
on $\CC^{N+1}$.
We wish to approximate the solution $x(t)$ to this (ODE) by considering it as a discretisation of a continuous function $u(t,z)$ on the interval $[0,1]$, i.e.,
\[
u\left(t,\frac{k}{N}\right)=x_k(t)
\]
for $0\leq k\leq N$. Now we derive an approximate (PDE) for the function $u(\cdot,\cdot)$ using the (ODE) given above.
For any $1\leq k\leq N-1$ we have:
\begin{eqnarray*}
\partial_t u\left(t,\frac{k}{N}\right)&=&\dot x_k(t)= a_{k-1} x_{k-1}(t) + b_k x_k(t) + c_{k+1} x_{k+1}(t)\\
&=& \frac{1}{2}a_{k-1}\left(x_{k-1}(t) -2x_k(t)+x_{k+1}(t)\right)\\
&&- a_{k-1}\left(\frac{x_{k+1}(t)-x_{k-1}(t)}{2}\right)\\
&&+(a_{k-1}+b_k+c_{k+1})x_k(t)\\
&&+ \frac{1}{2}c_{k+1}\left(x_{k-1}(t) -2x_k(t)+x_{k+1}(t)\right) \\
&&+ c_{k+1}\left(\frac{x_{k+1}(t)-x_{k-1}(t)}{2}\right)\\
&=& \frac{1}{2}a_{k-1}\left(u\left(t,\frac{k-1}{N}\right) -2u\left(t,\frac{k}{N}\right)+u\left(t,\frac{k+1}{N}\right)\right)\\
&&- a_{k-1}\left(\frac{u\left(t,\frac{k+1}{N}\right)-u\left(t,\frac{k-1}{N}\right)}{2}\right)\\
&&+(a_{k-1}+b_k+c_{k+1})u\left(t,\frac{k}{N}\right)\\
&&+ \frac{1}{2}c_{k+1}\left(u\left(t,\frac{k-1}{N}\right) -2u\left(t,\frac{k}{N}\right)+u\left(t,\frac{k+1}{N}\right)\right)\\
&&+ c_{k+1}\left(\frac{u\left(t,\frac{k+1}{N}\right)-u\left(t,\frac{k-1}{N}\right)}{2}\right).
\end{eqnarray*}
By considering the approximations
\[
u\left(t,\frac{k-1}{N}\right) -2u\left(t,\frac{k}{N}\right)+u\left(t,\frac{k+1}{N}\right)= \frac{1}{N^2}\left(\partial_{zz}u\left(t,\frac{k}{N}\right)+O\left(\frac{1}{N^2}\right)\right)
\]
and
\[
\frac{u\left(t,\frac{k+1}{N}\right)-u\left(t,\frac{k-1}{N}\right)}{2}=\frac{1}{N}\left(\partial_z u\left(t,\frac{k}{N}\right)+O\left(\frac{1}{N^2}\right)\right),
\]
using the functions $a, b$ and $c$, and writing $h:=\frac{1}{N}$, we obtain the approximate (PDE)
\begin{equation}\label{eq:pde_main}
\left\{
\begin{aligned}
\partial_t u(t,z)&\simeq \frac{h^2}{2}\left(a\left(z-h\right)+c\left(z+h\right)\right) \partial_{zz} u(t,z)\\
&+h(c(z+h)-a(z-h))\partial_z u(t,z)\\
& +(a(z-h)+b(z)+c(z+h)) u(t,z),
\end{aligned}
\right.
\end{equation}
valid for $z\in(0,1)$. Note that the approximation is of order $h^3$.
On the boundary, similar transformations yield the first order boundary equations
\begin{equation}\label{eq:pde_boundary_left}
\partial_{t} u(t,0)\simeq hc(h)\partial_z u(t,0) + (c(h)+b(0))u(t,0)
\end{equation}
and
\begin{equation}\label{eq:pde_boundary_right}
\partial_t u(t,1)\simeq -ha(1-h)\partial_z u(t,1) + (a(1-h)+b(1))u(t,1).
\end{equation}
Note that here the approximations are only of order $h^2$.
The initial condition $u(0,z)$ is to be chosen as a suitable interpolation of the values $x_k(0)=v_k$ at $z=\frac{k}{N}$ ($0\leq k\leq N$).
\section{Robin boundary condition}
Motivated by stochastic processes, we restrict ourselves here to the important special case where the column sums of the matrix $A_N$ are zero, i.e., $a_0=-b_0,\ b_k=-(a_k+c_k), k=1,2\ldots,N-1$, and $c_N=-b_N$.
Our aim is to find a PDE with suitable boundary
condition the appropriate discretisation of which results in
(\ref{eq:ode}), and which preserves the integral of the initial function.
Let us seek the PDE in the form
\begin{equation}\label{eq:pde}
\partial_tu(t,z)=\partial_{zz}(\alpha(z)u(t,z))+\partial_z(\beta(z)u(t,z)),
\end{equation}
where $z\in (-\frac{1}{2N},1+\frac{1}{2N})$ and
$t\in (0,T]$, and the functions $\alpha$ and $\beta$ are to be
defined. For the derivation of the boundary conditions we take
into account the requirement that
\[\int_{-\frac{1}{2N}}^{1+\frac{1}{2N}}u(t,z)\mathrm{d}z=const.\ \ \forall t\in [0,T].\]
Integrating (\ref{eq:pde}) on $[-\frac{1}{2N},1+\frac{1}{2N}]$ we
obtain the equality
\begin{align*}
0=\partial_t\left(\int_{-\frac{1}{2N}}^{1+\frac{1}{2N}}u(t,z)\mathrm{d}z\right)&=
\partial_z(\alpha u)\left(1+\frac{1}{2N},t\right)-\partial_z(\alpha u)\left(-\frac{1}{2N},t\right)\\
&\quad+(\beta u)\left(1+\frac{1}{2N},t\right)-(\beta u)\left(-\frac{1}{2N},t\right),
\end{align*}
which obviously holds if
\begin{equation}\label{eq:bc1}
\partial_z(\alpha u)\left(-\frac{1}{2N},t\right)+(\beta u)\left(-\frac{1}{2N},t\right)=0, \text{ and }
\end{equation}
\begin{equation}\label{eq:bc2}
\partial_z(\alpha u)\left(1+\frac{1}{2N},t\right)+(\beta u)\left(1+\frac{1}{2N},t\right)=0
\end{equation}
hold. Consider now the continuous problem (\ref{eq:pde}) with boundary
conditions (\ref{eq:bc1})-(\ref{eq:bc2}) and an initial condition
$u(0,z)$ obtained from a suitable interpolation of $v$ in
(\ref{eq:ode}).
Denote the approximation of the solution at the point $z=kh$ by $x_k(t),
k=0,1\ldots, N$. We seek the functions $\alpha$ and $\beta$ such
that by approximating appropriately the derivatives w.r.t. the variable $z$ in
(\ref{eq:pde}), for the functions $x_0(t),x_1(t),\ldots,$ $x_N(t)$
we obtain a system of ODE's of the form (\ref{eq:ode}).
Let us approximate the partial derivatives w.r.t. $z$ for the mesh
points of the indices $k=0,1,2,\ldots, N$ by central
differences. To this aim we define two virtual mesh points:
$-\frac{1}{N}$ and $1+\frac{1}{N}$, where the corresponding
solutions will be denoted by $x_{-1}(t)$ and $x_{N+1}(t)$,
respectively. Then
\begin{equation} \label{eq:u_k}
x_k'(t)=\frac{\alpha_{k-1}x_{k-1}-2\alpha_k x_k+\alpha_{k+1}x_{k+1}}{h^2} + \frac{\beta_{k+1}x_{k+1}-\beta_{k-1}x_{k-1}}{2h}
\end{equation}
for $k=0,1,2,\ldots, N$. Eliminate $x_{-1}$ in the equation for
$k=0$ by considering the left-hand side boundary condition
(\ref{eq:bc1}). To do so, we approximate the derivative w.r.t. $z$
by central difference, while the function value by the arithmetic
mean of the two neighboring values, $x_{-1}$ and $x_1$, to obtain
\[
\frac{\alpha_0x_0-\alpha_{-1}x_{-1}}{h}+\frac{\beta_0x_0+\beta_{-1}x_{-1}}{2}=0.
\]
From this we have
\[
\frac{\alpha_0x_0}{h^2}+\frac{\beta_0x_0}{2h}=\frac{\alpha_{-1}x_{-1}}{h^2}-\frac{\beta_{-1}x_{-1}}{2h},
\]
which yields
\begin{equation} \label{eq:u0}
x_0'(t)=\left(-\frac{\alpha_0}{h^2}+\frac{\beta_0}{2h}\right)x_0+ \left(\frac{\alpha_1}{h^2}+\frac{\beta_1}{2h}\right)x_1.
\end{equation}
Comparing \eqref{eq:u0} to the first equation of \eqref{eq:ode}, we have
\begin{equation*}
b_0=-\frac{\alpha_0}{h^2}+\frac{\beta_0}{2h} \text{ and } c_1=\frac{\alpha_1}{h^2}+\frac{\beta_1}{2h}.
\end{equation*}
Comparing the further equations of (\ref{eq:u_k}) with system (\ref{eq:ode}),
we obtain the relations
\begin{equation} \label{ak_ck}
a_k=\frac{\alpha_k}{h^2}-\frac{\beta_k}{2h},\quad c_k=\frac{\alpha_k}{h^2}+\frac{\beta_k}{2h}.
\end{equation}
It is easy to see that $a_0=-b_0,\ b_k=-(a_k+c_k), k=1,2\ldots,N-1$, and
similar considerations on the right boundary show that $c_N=-b_N.$
The functions $\alpha$ and $\beta$ can be determined from the
equations \eqref{ak_ck} to obtain
\[
\alpha_k=\frac{(a_k+c_k)h^2}{2},\quad \beta_k=(c_k-a_k)h,
\]
from which
\begin{equation*}
\alpha(z)=\frac{(a(z)+c(z))h^2}{2} \text{ and } \beta(z)=(c(z)-a(z))h
\end{equation*}
follows.
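As a quick sanity check (ours; the coefficient functions are illustrative), the following Python sketch assembles $A_N$ with zero column sums, verifies that the flow $e^{tA_N}$ conserves the total mass $\sum_k x_k$, and evaluates $\alpha$ and $\beta$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 100; h = 1.0 / N
z = np.arange(N + 1) * h
a = 2.0 + np.sin(np.pi * z)       # illustrative coefficients
c = 2.0 + np.cos(np.pi * z)

A = np.zeros((N + 1, N + 1))
A[np.arange(1, N + 1), np.arange(N)] = a[:-1]   # sub-diagonal a_k
A[np.arange(N), np.arange(1, N + 1)] = c[1:]    # super-diagonal c_k
A[0, 0] = -a[0]; A[N, N] = -c[N]                # a_0=-b_0, c_N=-b_N
A[np.arange(1, N), np.arange(1, N)] = -(a[1:N] + c[1:N])
assert np.allclose(A.sum(axis=0), 0)            # zero column sums

x0 = np.exp(-50 * (z - 0.5) ** 2)
xT = expm(5.0 * A) @ x0
print(x0.sum(), xT.sum())         # equal: total mass is conserved

alpha = (a + c) * h ** 2 / 2      # coefficients of the limiting PDE
beta = (c - a) * h
\end{verbatim}
Since the column sums of $A_N$ vanish, $\mathbf{1}^TA_N=0$ and hence $\mathbf{1}^Te^{tA_N}=\mathbf{1}^T$, which is the discrete analogue of the integral-preservation requirement above.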
It remains to determine the order of approximation of this scheme.
For $k=1,2,\ldots,N-1$ the approximation is obviously of order $h^3$, as in the previous example.
However, at the points $z=0$ and $z=1$ (corresponding to $k=0$ and
$k=N$) it is only of order $h^2$: writing $u_k(t):=u(t,kh)$ and using that
$\alpha=O(h^2)$ and $\beta=O(h)$, for $z=0$ we have
\begin{align*}
&\left(-\frac{\alpha_0}{h^2}+\frac{\beta_0}{2h} \right)u_0(t)+ \left(\frac{\alpha_1}{h^2}+\frac{\beta_1}{2h} \right)u_1(t)\\
&=\frac{\alpha_{-1}u_{-1}(t)-2\alpha_0u_0(t)+\alpha_1u_1(t)}{h^2} +\frac{\beta_1u_1(t)-\beta_{-1}u_{-1}(t)}{2h}\\
&\qquad +\frac{1}{h}\left ( \frac{\alpha_0u_0(t)-\alpha_{-1}u_{-1}(t)}{h} + \frac{\beta_0u_0(t)+\beta_{-1}u_{-1}(t)}{2}\right )\\
&=\partial_{zz}(\alpha u)(0,t)+O(h^4)+\partial_z(\beta u)(0,t)+O(h^3)+\frac{1}{h}\big(\partial_z(\alpha u)(-h/2,t)+O(h^4)\\
&\qquad+(\beta u)(-h/2,t)+O(h^3)\big)\\
&=\partial_{zz}(\alpha u)(0,t)+\partial_z(\beta u)(0,t)+O(h^3)+\frac{1}{h}\left(0+O(h^3)\right)\\
&=\partial_{zz}(\alpha u)(0,t)+\partial_z(\beta u)(0,t)+O(h^2).
\end{align*}
For $z=1$ similar relations hold.
In the following we consider the exact PDE and its solution, the latter being the approximation of the exact solution to the ODE at the points $\tfrac{k}{N}$, and show estimates on how good this approximation is.
\section{Theorems}
\noindent
Now we give a rather general setup to prove the desired estimates on the approximation. We use the theory of operator semigroups; our general references are Engel and Nagel \cite{EN:00}, Zagrebnov \cite{Z}, and B\'atkai et al. \cite{Batkai-Csomos-Farkas-Ostermann}.
\begin{assumptions}\label{c:apro1.ass:approx_space}
Let $X_n$, $X$ be Banach spaces and assume that there are bounded linear operators $P_n:X\to X_n$, $J_n:X_n\to X$ with the following properties:
\begin{itemize}
\item There is a constant $K>0$ with $\|P_n\|,\, \|J_n\|\leq K$ for all $n\in\NN$,
\item $ P_n J_n = I_n$, the identity operator on $X_n$, and
\item $J_nP_n f\to f$ as $n\to\infty$ for all $f\in X$.
\end{itemize}
\end{assumptions}
\begin{assumptions}\label{c:apro1.ass:approx_gener}
Suppose that the operators $A_n$, $A$ generate strongly continuous semigroups $T_n$ and $T$ on $X_n$ and $X$, respectively, and that there are constants $M\geq 0$, $\omega\in\RR$ such that the stability condition
\begin{equation}\label{c:apro1.eq:stability}
\|T_n(t)\|\leq M\ee^{\omega t} \qquad \text{ holds for all } n\in\NN,\, t\geq 0.
\end{equation}
\end{assumptions}
We will make use of a special variant of the Trotter-Kato theorem, which we cite here for convenience, see the lectures by B\'atkai, Csom{\'o}s, Farkas and Ostermann \cite[Proposition 3.8]{Batkai-Csomos-Farkas-Ostermann}.
\begin{proposition}\label{prop:appr_first_gen}
Suppose that Assumptions \ref{c:apro1.ass:approx_space} hold, that there is a dense subset $Y\subset D(A)$ invariant under the semigroup $T$ such that $P_nY\subset \dom(A_n)$, and that $Y$ is a Banach space with some norm $\|\cdot\|_Y$ satisfying
\begin{equation*}
\|T(t)\|_Y \leq M \ee^{\omega t}.
\end{equation*}
If there are constants $C>0$ and $p\in \NN$ with the property that for all $f\in Y$
\begin{equation*}
\|A_nP_n f - P_nAf\|_{X_n}\leq C\frac{\|f\|_Y}{n^p},
\end{equation*}
then for each $t>0$ there is $C'>0$ such that
\begin{equation*}
\|T_n(t)P_n f - P_nT(t)f\|_{X_n}\leq C'\frac{\|f\|_Y}{n^p}.
\end{equation*}
Moreover, this convergence is uniform in $t$ on compact intervals.
\end{proposition}
This result can be slightly improved in case analytic semigroups are involved.
\begin{lemma}\label{lem:anal_approx}
Suppose that the conditions of Proposition \ref{prop:appr_first_gen} are satisfied and that $A$ generates an analytic semigroup. If there is $\varepsilon\in (0,1)$ and there are spaces $Y\hookrightarrow \dom(A)\hookrightarrow Z\hookrightarrow X$ such that $T(s)Z\subset Y$ for all $s>0$ and
\begin{equation*}
\|T(s)\|_{\mathcal{L}(Z,Y)}\leq \frac{M}{s^{1-\varepsilon}}
\end{equation*}
holds, then
\begin{equation*}
\|T_n(t)P_n f - P_nT(t)f\|_{X_n}\leq C'\frac{\|f\|_Z}{n^p}
\end{equation*}
for all $n\in \NN$ and $f\in Z$.
\end{lemma}
Note that this condition is for example satisfied if there is $\alpha\in (0,1)$ so that $Y=\dom((I-A)^{1+\alpha})$ and $\dom(A)\hookrightarrow Z\hookrightarrow \dom((I-A)^{\alpha+\varepsilon})$ holds.
\begin{proof}
As in the proof of B\'atkai, Kiss, Sikolya and Simon \cite[Lemma 5]{Batkai-Kiss-Sikolya-Simon}, we have the representation
\begin{equation*}
\left(P_nT(t)-T_n(t)P_n\right)f = \int_0^t T_n(t-s)\left(P_nA-A_nP_n\right)T(s)f\dd s
\end{equation*}
for all $f\in \dom(A)$. Hence, using the analyticity of the semigroup $T$, we obtain the norm estimate
\begin{multline*}
\left\|P_nT(t)f-T_n(t)P_n f\right\| \leq \int_0^t M\ee^{\omega (t-s)}\|(P_nA-A_nP_n)T(s)f\| \dd s \\
\leq \int_0^t M' C\frac{\|T(s)f\|_Y}{n^p}\dd s \leq M''\frac{\|f\|_Z}{n^p}\int_0^t \frac{1}{s^{1-\varepsilon}}\dd s,
\end{multline*}
where the constants $M'$ and $M''$ depend only on $t>0$.
\end{proof}
We have seen in the calculations of the previous sections that our approximation is of third order in the interior of the interval and of second order on the boundary. Let us now formalize these calculations and put them into the general framework presented above.
\subsection{Dynamic boundary condition}
Our aim is now to show that, for sufficiently smooth initial values, the derived partial differential equation \eqref{eq:pde_main} with dynamic boundary conditions \eqref{eq:pde_boundary_left} and \eqref{eq:pde_boundary_right} is the right approximation to the ordinary differential equation \eqref{eq:ode}.
As a first step, we have to associate to the partial differential equation \eqref{eq:pde_main} with boundary conditions \eqref{eq:pde_boundary_left} and \eqref{eq:pde_boundary_right} a Banach space $X$ and a generator $A$. Following the approach of Engel \cite{Engel, Engel1} or B\'atkai and Engel \cite{Batkai-Engel}, we introduce the spaces
\begin{equation*}
X:= C[0,1],
\end{equation*}
and
\[
\widetilde{X}:=\left\{\left(
\begin{smallmatrix}
f\\y
\end{smallmatrix}\right)
\in X\times\CC^2
\left|y=(f(0),f(1))^T\right.\right\}.
\]
Let us also consider the operators
\begin{multline*}
(D_m f)(z):= \frac{h^2}{2}\left(a\left(z-h\right)+c\left(z+h\right)\right) f''(z) \\ +h(c(z+h)-a(z-h))f'(z) +(a(z-h)+b(z)+c(z+h)) f(z)
\end{multline*}
defined on its maximal possible domain, $\dom(D_m):=C^{2}[0,1]$, and
\begin{equation*}
Bf:=\left(\begin{smallmatrix} hc(h) f'(0) + (c(h)+b(0))f(0) \\ -ha(1-h)f'(1) + (a(1-h)+b(1))f(1) \end{smallmatrix}\right)
\end{equation*}
defined on $\dom(D_m)$ and mapping to $\CC^2$. The associated operator is then
\begin{equation*}
A f = D_m f \quad \text{ with } \dom(A) :=\left\{f\in \dom(D_m)\,:\, \left(\begin{smallmatrix} D_mf(0), D_mf(1)\end{smallmatrix} \right)^T=Bf \right\}.
\end{equation*}
Further, for a function $f\in C[0,1]$ we introduce the notation
\begin{equation*}
f_N:=(f(0),f(\tfrac{1}{N}),\ldots,f(1))^T\in\CC^{N+1}.
\end{equation*}
After all these preparations, we can state the main result of this Section.
\begin{theorem}\label{thm:main}
Consider the ordinary differential equation given by \eqref{eq:ode} and the approximating partial differential equation \eqref{eq:pde_main} with dynamic boundary conditions \eqref{eq:pde_boundary_left} and \eqref{eq:pde_boundary_right}, where $v= u_N(0)$. If there is $\varepsilon\in (0,\tfrac{1}{2})$ such that $u(0,\cdot)\in \dom((I-A)^{\frac{1}{2}+\varepsilon})$, then for all $T>0$ there is $C=C(T)>0$ such that for all $t\in (0,T]$ we get
\begin{equation}\label{eq:thm_main}
\|u_N(t,\cdot)- x(t)\|_{\infty} \leq \frac{C}{N^2}\|u(0,\cdot)\|_{\dom((I-A)^{\frac{1}{2}+\varepsilon})}.
\end{equation}
\end{theorem}
\begin{proof}
By Engel \cite{Engel} or B\'atkai and Engel \cite[Remark 4.4]{Batkai-Engel}, the operator $A$ generates an analytic semigroup of angle $\frac{\pi}{2}$ in the space $X$, and this semigroup gives the solutions of the partial differential equation \eqref{eq:pde_main} with dynamic boundary conditions \eqref{eq:pde_boundary_left} and \eqref{eq:pde_boundary_right}.
Further, we introduce the spaces
\begin{equation*}
X_N:=\CC^{N-1}\times \CC^2
\end{equation*}
and define the operators $P_N:\widetilde{X}\to X_N$ as
\begin{equation*}
P_N(f,y):=\big((f(\tfrac{1}{N}),\ldots,f(\tfrac{N-1}{N}))^T,y\big),
\end{equation*}
the first component collecting the interior grid values of $f$,
and take $J_N$ to be a suitable interpolation.
Clearly, these operators and spaces satisfy the conditions in Assumptions \ref{c:apro1.ass:approx_space}.
Abusing the matrix notation, define now on $X_N\simeq\CC^{N+1}$ the operator
\begin{equation*}
\tilde A_N:=\left(\begin{array}{ccccccc}
b_1&c_2&\cdots&0&0&a_0 &0\\
a_1&b_2&&0 &0&0&0\\
\vdots& & &\ddots& & & \vdots\\
0& 0& &b_{N-2}&c_{N-1} &0 &0\\
0& 0& &a_{N-2}&b_{N-1} &0 &c_N\\
c_1&0&\cdots&0&0& b_0& 0\\
0& 0 &\cdots &0 &a_{N-1} &0&b_N\\
\end{array}\right).
\end{equation*}
Taking $\tbinom{f}{y}\in\dom(A)$ (i.e., $f\in \dom(D_m)$ and $y_1=f(0)$, $y_2=f(1)$), we see that
\begin{equation*}
\tilde A_N P_N \tbinom{f}{y} =
\begin{pmatrix}
a_0f(0)+b_1 f(\tfrac{1}{N})+ c_2 f(\tfrac{2}{N})\\
a_1 f(\tfrac{1}{N})+ b_2 f(\tfrac{2}{N})+c_3 f(\tfrac{3}{N})\\
\vdots \\
a_{N-2} f(\tfrac{N-2}{N})+ b_{N-1} f(\tfrac{N-1}{N})+c_N f(1)\\
b_0f(0)+c_1 f(\tfrac{1}{N})\\
a_{N-1} f(\tfrac{N-1}{N})+b_N f(1)
\end{pmatrix}
\end{equation*}
and that
\begin{align*}
(P_N A\tbinom{f}{y})_k &=
\tfrac{1}{2N^2} \big(a(\tfrac{k-1}{N})+c(\tfrac{k+1}{N})\big)f''(\tfrac{k}{N}) +\tfrac{1}{N}\big(c(\tfrac{k+1}{N})-a(\tfrac{k-1}{N})\big)f'(\tfrac{k}{N}) \\
&\quad+ \big(a(\tfrac{k-1}{N})+b(\tfrac{k}{N})+c(\tfrac{k+1}{N})\big)f(\tfrac{k}{N})
\end{align*}
for $k=1,2,\ldots,N-1$, and
\begin{align*}
(P_N A\tbinom{f}{y})_N &= \tfrac{1}{N}c(\tfrac{1}{N})f'(0)+ \big(c(\tfrac{1}{N})+ b(0)\big)f(0),\\
(P_N A\tbinom{f}{y})_{N+1} &= -\tfrac{1}{N}a(\tfrac{N-1}{N})f'(1)+ \big(a(\tfrac{N-1}{N})+ b(1)\big)f(1).
\end{align*}
By the calculations of the previous section, we see that there is $C>0$ such that
\begin{align*}
|(P_N A\tbinom{f}{y})_k - (\tilde A_NP_N \tbinom{f}{y})_k| &\leq \frac{C}{N^3}\|f'''\|_{\infty},\\
|(P_N A\tbinom{f}{y})_N - (\tilde A_NP_N \tbinom{f}{y})_N| &\leq \frac{C}{N^2}\|f''\|_{\infty},\\
|(P_N A\tbinom{f}{y})_{N+1} - (\tilde A_NP_N \tbinom{f}{y})_{N+1}| &\leq \frac{C}{N^2}\|f''\|_{\infty}
\end{align*}
hold. Since $A$ generates an analytic semigroup, it leaves $Y=C^3[0,1]$ invariant. Hence, Proposition \ref{prop:appr_first_gen} is applicable with $Y=C^3[0,1]$ and we obtain the desired estimate for all $u(0,\cdot)\in Y$. To improve this result and relax the regularity assumption on the initial value, we use the analyticity of the semigroup and Lemma \ref{lem:anal_approx} with $\alpha=\tfrac{1}{2}$.
Introducing the notation $B=I-A$, our aim now is to show that $\dom(B^{3/2})\subset C^2[0,1]\cap C^3(0,1)$. Since $\dom(B)\subset C^2[0,1]$, it is enough to show that $\dom(B^{1/2})\subset C^1(0,1)$.
Let $f\in \dom(B^{1/2})$ such that $g=B^{1/2}f$. Then by Engel and Nagel \cite[Corollary II.5.28]{EN:00},
\begin{equation*}
f=\int_0^\infty \frac{1}{\sqrt\lambda} R(\lambda+1,A)g \dd\lambda.
\end{equation*}
Further, by checking the explicit representation of the resolvent as in the proof in Engel and Nagel \cite[Theorem VI.4.5]{EN:00}, we see that the resolvent is given by the combination of exponential terms and a convolution term. Since we are in the interior of the domain, we can drop the exponential terms because they do not disturb regularity and concentrate on the convolution term. Hence we may assume that
\begin{equation*}
f=\int_0^\infty \frac{1}{\sqrt\lambda}\frac{1}{2\sqrt{\lambda+1}}\int_0^1 e^{-\sqrt{\lambda+1}|\cdot-s|}g(s) \dd s\dd\lambda.
\end{equation*}
Rewriting, we obtain
\begin{align*}
f(x)&=\int_0^\infty \frac{1}{2\sqrt\lambda\sqrt{\lambda+1}}\left\{\int_0^x e^{-\sqrt{\lambda+1}(x-s)}g(s) \dd s\right. \\
&\qquad\qquad + \left.\int_x^1 e^{-\sqrt{\lambda+1}(s-x)}g(s) \dd s\right\}\dd\lambda\\
&=\int_0^\infty \frac{1}{2\sqrt\lambda\sqrt{\lambda+1}}\left\{e^{-\sqrt{\lambda+1}x}\int_0^x e^{\sqrt{\lambda+1}s}g(s) \dd s \right. \\ &\qquad\qquad + \left.e^{\sqrt{\lambda+1}x} \int_x^1 e^{-\sqrt{\lambda+1}s}g(s) \dd s\right\}\dd\lambda.
\end{align*}
Formally differentiating with respect to $x$ under the integral sign, we obtain
\begin{eqnarray*}
\int_0^\infty \frac{1}{2\sqrt\lambda\sqrt{\lambda+1}}
&&\left\{\frac{-1}{\sqrt{\lambda+1}}e^{-\sqrt{\lambda+1}x}\int_0^x e^{\sqrt{\lambda+1}s}g(s) \dd s+g(x)\right.\\
&&+\left.\frac{1}{\sqrt{\lambda+1}}e^{\sqrt{\lambda+1}x} \int_x^1 e^{-\sqrt{\lambda+1}s}g(s) \dd s-g(x)\right\}\dd\lambda\\
=\int_0^\infty\frac{1}{2\sqrt{\lambda(\lambda+1)}}&&
\left\{\frac{-1}{\sqrt{\lambda+1}}e^{-\sqrt{\lambda+1}x}\int_0^x e^{\sqrt{\lambda+1}s}g(s) \dd s\right.\\
&&+\left.\frac{1}{\sqrt{\lambda+1}}e^{\sqrt{\lambda+1}x} \int_x^1 e^{-\sqrt{\lambda+1}s}g(s) \dd s\right\}\dd\lambda.
\end{eqnarray*}
Since this improper integral converges uniformly in $x$ on any closed subinterval of $(0,1)$ and its integrand depends continuously on $x$, the function $f$ is indeed continuously differentiable on $(0,1)$.
\end{proof}
\subsection{Robin boundary condition}
Using an analogous argument, we can prove the approximating property of the PDE with Robin boundary conditions. To this end, we introduce the space $X=C[-\frac{h}{2},1+\frac{h}{2}]$ and the operator
\begin{equation*}
Af(z):=\frac{\dd^2}{\dd z^2}\left(h^2\frac{a(z)+c(z)}{2}f(z)\right)+\frac{\dd}{\dd z}\left(h\frac{c(z)-a(z)}{2}f(z)\right),
\end{equation*}
with domain
\begin{multline*}
D(A):=\big\{f\in C^1[-\tfrac{h}{2},1+\tfrac{h}{2}]\cap C^2(-\tfrac{h}{2},1+\tfrac{h}{2})\,:\\
\frac{\dd}{\dd z}\big(h^2\frac{a(z)+c(z)}{2}f(z)\big)+\big(h\frac{c(z)-a(z)}{2}f(z)\big) = 0 \\
\text{ for } z=-\tfrac{h}2, 1+\tfrac{h}{2}\big\}.
\end{multline*}
Further, as before, for a function $f\in C[0,1]$ we use the notation
\begin{equation*}
f_N:=(f(0),f(\tfrac{1}{N}),\ldots,f(1))^T\in\CC^{N+1}.
\end{equation*}
\begin{theorem}\label{thm:main2}
Consider the ordinary differential equation given by \eqref{eq:ode} and the approximating partial differential equation \eqref{eq:pde} with Robin-type boundary conditions \eqref{eq:bc1} and \eqref{eq:bc2}, where $v= u_N(0)$. If there is $\varepsilon\in (0,\tfrac{1}{2})$ such that $u(0,\cdot)\in \dom((I-A)^{\frac{1}{2}+\varepsilon})$, then for all $T>0$ there is $C=C(T)>0$ such that for all $t\in (0,T]$ we get
\begin{equation}\label{eq:thm_main2}
\|u_N(t,\cdot)- x(t)\|_{\infty} \leq \frac{C}{N^2}\|u(0,\cdot)\|_{\dom((I-A)^{\frac{1}{2}+\varepsilon})}.
\end{equation}
\end{theorem}
\begin{proof}
The proof can be carried out in a completely analogous way as for the previous theorem. For second order differential operators with Robin-type boundary conditions we refer to the works by Arendt and coauthors \cite{AMPR,AW} or Warma \cite{W}.
\end{proof}
\section{Applications}
The main motivation of the previous theoretical investigation is to approximate a dynamic process on a network with a partial differential equation and to justify empirical observations. The network is usually given by an undirected graph and the process can be specified by the possible states of the nodes and the transition rates. The latter give the probability per unit time that the state of a node changes from one state to another, depending on the states of the neighbouring nodes. In certain classes of models the complete state space can be reduced (using e.g. mean field approximations or structural symmetries), leading to tridiagonal systems.
In this section we show in two cases how the theory can be applied. The first one is the propagation of two opinions along a cycle graph, called a voter-like model; the second is an $SIS$-type epidemic propagation on a complete graph. As usual in the literature, in both models the natural Markov process is conditioned on not reaching the absorbing state(s).
\subsection{Voter-like model on a cycle graph}
Let us consider a cycle graph with $N+2$ nodes, i.e. we have a connected graph in which each node has two neighbours. A node can be in one of two states, let us denote them by 0 and 1. These states represent two opinions propagating along the edges of the graph (see Holley and Liggett \cite{HolleyLiggett}). If a node is in state 0 and has $k$ neighbours in state 1 ($k=0,1,2$), then its state changes to 1 with probability $k\tau \Delta t$ in a small time interval $\Delta t$. This expresses that opinion 1 invades that node. The opposite can also happen, that is, a node in state 1 can switch to opinion 0 with probability $k\gamma \Delta t$ in a small time interval $\Delta t$ if it has $k$ neighbours in state 0. The parameters $\tau$ and $\gamma$ characterize the strengths of the two opinions. The model originates in physics, where in a network of interacting particles each node holds either spin 1 or $-1$ (see Vazquez and Eguiluz \cite{VazquezEguiluz}). In a single event, a randomly chosen node adopts the spin of one of its neighbours, also chosen at random.
Assuming that at the initial instant the territories of the two opinions are connected sets, the underlying conditioned Markov chain can be given as follows. The state space is the set $\{ 0,1, 2, \ldots , N\}$, where a number $k$ represents the state in which there are $k+1$ nodes in state 1 and they form a connected arc along the cycle graph. Starting from state $k$ the system can move either to state $k+1$ or to $k-1$, since at a given instant only one node can change its state (by using the usual assumption that the changes at the nodes can be given by independent Poisson processes). When the system moves from state $k$ to $k+1$ then a new node in state 1 appears at one of the two ends of the arc of state 1 nodes. Hence the rate of this transition is $2\tau$, expressing that a node in state 0 and having a single neighbour in state 1 becomes a state 1 node, and this can happen at both ends of the state 1 territory. Similarly, the rate of transition from state $k$ to $k-1$ is $2\gamma$. Let us denote by $x_k(t)$ the probability that the system is in state $k$. The above transition rates lead to the differential equation
\begin{equation*}
\dot x_k(t) = 2\tau x_{k-1} (t) - 2(\tau+\gamma) x_k(t) + 2\gamma x_{k+1}(t) .
\end{equation*}
(For $k=0$ and for $k=N$ the equations contain only two terms.) Thus our system of ODEs takes the form given in (\ref{eq:ode}) with $a\equiv 2\tau$, $c\equiv 2\gamma$ and $b|_{[1/N,1-1/N]}\equiv -2(\tau+\gamma)$, $b(0)=-2\tau$, $b(1)=-2\gamma$, yielding the differential equation
\begin{equation}\label{eq:odeVoter}
\dot{x}(t)=A_v x(t)
\end{equation}
with the matrix
\begin{equation*}
A_v=2\left(\begin{array}{ccccccc}
-\tau & \gamma& 0&\cdots&0&0&0\\
\tau &-(\tau+\gamma)&\gamma&\cdots&0&0&0\\
0&\tau&-(\tau+\gamma)&&0 &0&0\\
\vdots& & &\ddots& & & \vdots\\
0& 0&0& &-(\tau+\gamma)&\gamma&0\\
0& 0&0& &\tau&-(\tau+\gamma)&\gamma\\
0& 0&0 &\cdots&0 &\tau&-\gamma\\
\end{array}\right)
\end{equation*}
subject to the initial condition $x(0)=v \in\CC^{N+1}$. Using (\ref{eq:pde_main}) the corresponding approximating PDE is then given by:
\begin{equation}\label{eq:pdeVoter}
\left\{\begin{aligned}
\partial_t u(t,z)&= (\tau+\gamma)h^2\partial_{zz} u(t,z)+2(\gamma-\tau)h\partial_z u(t,z)\\
\partial_t u(t,0)&= 2\gamma h\partial_z u(t,0) + 2(\gamma-\tau) u(t,0)\\
\partial_t u(t,1)&= -2\tau h\partial_z u(t,1) - 2(\gamma-\tau) u(t,1).
\end{aligned}\right.
\end{equation}
To illustrate the effectiveness of our method numerically, we consider the special case of $\tau=\gamma=\alpha/2$, leading to the simplified equations
\begin{equation}\label{eq:pdeVoter2}
\left\{
\begin{aligned}
\partial_t u(t,z)&=\alpha h^2\partial_{zz} u(t,z)\\
\partial_t u(t,0)&=\alpha h\partial_z u(t,0)\\
\partial_t u(t,1)&=-\alpha h\partial_z u(t,1),
\end{aligned}
\right.
\end{equation}
where the associated generator has all its eigenvalues in $(-\infty,0]$.
Wishing to apply the Fourier method, we look for the solution in the form
\[
u(t,z)=\sum_{j=0}^\infty c_je^{\lambda_jt}w_j(z).
\]
It is enough to find the eigenvalues $\lambda_j$ and eigenfunctions $w_j$. The PDE and the boundary conditions then yield the system of equations
\begin{equation*}
\left\{
\begin{aligned}
\lambda w&=\alpha h^2 w''\\
\lambda w(0)&=\alpha hw'(0)\\
\lambda w(1)&=-\alpha hw'(1).
\end{aligned}
\right.
\end{equation*}
The first equation yields
\[
w_j(z)=c_{1,\lambda_j} \cos (\omega_j z/h) +c_{2,\lambda_j} \sin(\omega_j z/h),
\]
with $\lambda_j=-\alpha\omega_j^2$, $\omega_j\geq 0$. Substituting into the first boundary condition we obtain $-\omega_j c_{1,\lambda_j}=c_{2,\lambda_j}$, allowing us to choose $c_{1,\lambda_j}=1$ and hence write
\[
w_j(z)=\cos (\omega_j z/h) -\omega_j \sin(\omega_j z/h).
\]
Now substituting into the second boundary condition we obtain
\[
\tan\left(\frac{\omega_j}{h}\right)=\frac{2\omega_j}{\omega_j^2-1}.
\]
This has exactly one solution in each interval $\left((2j-1)h\frac{\pi}{2},(2j+1)h\frac{\pi}{2}\right)$ for $j\geq 0$.
The constants $c_j$ are determined by the initial condition
\[
u(0,z)=\sum_{j=0}^\infty c_j w_j(z).
\]
Introducing the infinite matrix $G=(\langle w_k,w_l\rangle)_{k,l}$, where $\langle\cdot,\cdot\rangle$ is the $L^2$ scalar product, and the vectors $U=(\langle w_j,u(0,\cdot)\rangle)_j$ and $c=(c_j)_j$, this leads to the equation
\begin{equation}\label{eq:Fourier}
Gc=U
\end{equation}
for the Fourier coefficients of the solution.
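The eigenvalue search and the truncated system (\ref{eq:Fourier}) are straightforward to implement. A minimal Python sketch follows; the truncation order, grid and initial profile are illustrative choices (not exactly those used for the figures), and the root finding uses the pole-free form $\sin(\omega/h)(\omega^2-1)=2\omega\cos(\omega/h)$ of the eigenvalue equation.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve
from scipy.optimize import brentq

N, alpha = 1000, 1.0            # tau = gamma = alpha/2
h = 1.0 / N
J = 40                          # number of eigenfunctions kept

def f(w):                       # pole-free eigenvalue equation
    return np.sin(w/h)*(w**2 - 1.0) - 2.0*w*np.cos(w/h)

omegas = [0.0]                  # omega_0 = 0, constant eigenfunction
for j in range(1, J):
    a, b = (2*j - 1)*h*np.pi/2, (2*j + 1)*h*np.pi/2
    omegas.append(brentq(f, a + 1e-12, b - 1e-12))
omegas = np.array(omegas)
lams = -alpha*omegas**2         # lambda_j = -alpha*omega_j^2

z = np.linspace(0.0, 1.0, 4001)
W = np.array([np.cos(w*z/h) - w*np.sin(w*z/h) for w in omegas])

u0 = np.where(np.abs(z - 0.2) < 0.01, 50.0, 0.0)  # sample initial datum
G = np.trapz(W[:, None, :]*W[None, :, :], z, axis=2)  # Gram matrix
U = np.trapz(W*u0, z, axis=1)
c = solve(G, U)                 # Fourier coefficients from Gc = U

def u(t):                       # truncated series solution
    return (c*np.exp(lams*t)) @ W
\end{verbatim}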
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{fig1.eps}
\caption{The probability distribution $x_k(t)$, $k=0,1, 2, \ldots
, N$ at time $t=500$ obtained from system (\ref{eq:odeVoter})
(circles) and the solution $z\mapsto u(t,z)$ of the PDE
(\ref{eq:pdeVoter}) at time $t=500$ (continuous line), with
initially 200 nodes in state 1 with probability 1, and with
$N=1000$, $\tau=0.5$, $\gamma=0.5$.} \label{fig1}
\end{center}
\end{figure}
In Figure \ref{fig1} the solution of system (\ref{eq:odeVoter}) is compared to the
solution of the PDE (\ref{eq:pdeVoter}) when $\tau=\gamma=0.5$, the latter was
plotted using the Fourier method with the first 40 eigenfunctions. The first 40
eigenvalues were determined by using Newton's method within each interval given
above, and then we solved equation (\ref{eq:Fourier}) restricted to the first 40
variables. We observed that on our desktop computer MATLAB needed 15.72 seconds
to obtain the ODE solution at $t=100$,
while the Fourier method needed only 0.016 seconds to solve the PDE.
We also compared the solutions of the ODE and the PDE for the
Robin-type boundary condition. For the voter-like model, equation
\eqref{eq:pde} takes the form
\begin{equation}\label{eq:Voter_pde_Robin}
\partial_tu(t,z)=(\tau+\gamma)h^2\partial_{zz}u(t,z)+2(\gamma-\tau)h\partial_zu(t,z),
\end{equation}
with $z\in \left(-\frac{1}{2N},1+\frac{1}{2N}\right), t\in (0,T]$, and the Robin-type boundary conditions read
as
\begin{eqnarray}\label{eq:leftbc_Voter}
(\tau+\gamma)h\partial_zu\left(t,-\frac{1}{2N}\right)
+2(\gamma-\tau)u\left(t,-\frac{1}{2N}\right)=0,
\\\label{eq:rightbc_Voter}
(\tau+\gamma)h\partial_zu\left(t,1+\frac{1}{2N}\right)
+2(\gamma-\tau)u\left(t,1+\frac{1}{2N}\right)=0
\end{eqnarray}
for $t\in [0,T]$. The system (\ref{eq:odeVoter}) was solved with
MATLAB's ode45 solver, while the partial differential equation
was solved with MATLAB's pdepe solver. The results of the comparison are
shown in Fig.~\ref{abra_Robin} at time $t=500$ for two different
parameter choices.
\begin{figure} [h!]
\begin{center}
\mbox{\epsfig{file=Robin_1.eps,height=7cm,width=0.5\textwidth}\epsfig{file=Robin_2.eps,height=7cm,width=0.5\textwidth}}
\caption{The probability distribution $x_k(t)$, $k=0,1, 2, \ldots
, N$ at time $t=500$ obtained from system (\ref{eq:odeVoter})
(circles) and the solution $z\mapsto u(t,z)$ of the PDE
(\ref{eq:Voter_pde_Robin}) with boundary conditions
(\ref{eq:leftbc_Voter}) and (\ref{eq:rightbc_Voter}) at time
$t=500$ (continuous line) with initially 200 nodes in state 1
with probability 1, and with $N=1000$, for $\tau=\gamma=0.5$
(left panel) and for $\tau=0.7$, $\gamma=0.3$ (right panel).}
\label{abra_Robin}
\end{center}
\end{figure}
\subsection{$SIS$ disease transmission model on a complete graph}
The second motivation of our study comes from epidemiology where a paradigm disease transmission model is the
simple susceptible-infected-susceptible ($SIS$) model on a completely connected graph with $N+1$
nodes, i.e. all individuals are connected to each other. From the disease dynamic viewpoint, each
individual is either susceptible ($S$) or infected ($I$) -- a susceptible one with $k$ infected neighbours becomes infected
at rate $k\tau$, and an infected one recovers at rate $\gamma$ and becomes susceptible again. Since the graph is complete, the state space is the set $\{ 0,1, 2, \ldots , N\}$, where a number $k$ represents the state in which there are $k$ infected nodes. Starting from state $k$ the system can move either to state $k+1$ or to $k-1$, since at a given instant only one node can change its state. When the system moves from state $k$ to $k+1$ then a susceptible node becomes infected. Hence the rate of this transition is $k(N-k)\tau$, expressing that any of the $N-k$ susceptible nodes can become infected and each of them has $k$ infected neighbours (since the graph is complete). The rate of transition from state $k$ to $k-1$ is $k\gamma$, because any of the $k$ infected nodes can recover. Let us denote by $x_k(t)$ the probability that the system is in state $k$, i.e. there are $k$ infected nodes. The above transition rates lead to the differential equation
$$
\dot x_k(t) = (k-1)(N-k+1)\tau x_{k-1} (t) - (k(N-k)\tau+k\gamma) x_k(t) + (k+1)\gamma x_{k+1}(t) .
$$
(For $k=0$ and for $k=N$ the equations contain only two terms.) Thus our system of ODEs takes the form given in (\ref{eq:ode}) with $a_k=k(N-k)\tau$, $c_k=k\gamma$ and $b_k=-a_k-c_k$, that is $a(z)=N^2\tau z(1-z)$, $c(z)=N\gamma z$ and $b(z)=-a(z)-c(z)$. We note that an approximation of this system by a first order PDE was investigated in B{\'a}tkai, Kiss, Sikolya and Simon \cite{Batkai-Kiss-Sikolya-Simon}. According to (\ref{eq:pde_main}), and writing $\alpha=N\tau$ below, our method yields the following second order approximation
\begin{eqnarray*}
\partial_t u(t,z)&=&\frac{\alpha (z-h)(1-z+h)+\gamma (z+h)}{2}h\partial_{zz} u(t,z)\\
&&+(\gamma (z+h)-\alpha (z-h)(1-z+h))\partial_z u(t,z)\\
&&+(\alpha (2z-1-h) +\gamma ) u(t,z)\\
\partial_t u(t,0)&=&\gamma h\partial_z u(t,0) + \gamma u(t,0)\\
\partial_t u(t,1)&=&-\alpha (1-h)h\partial_z u(t,1) + \alpha (1-h) u(t,1).
\end{eqnarray*}
Our theorem implies that the solution of this PDE approximates the solution of the corresponding ODE (\ref{eq:ode}) to order $1/N^2$. We note that the commonly used first order PDE approximates the ODE only to order $1/N$. The advantage of that first order PDE is that it can be solved analytically, yielding the well-known mean-field approximation for the expected number of infected nodes, see B\'atkai et al. \cite{Batkai-Kiss-Sikolya-Simon}. Our second order PDE cannot be solved analytically, hence only a numerical approximation can be obtained by using our method. It is the subject of future work to derive PDE approximations for epidemic propagation on different random graphs and compare their solutions to those of the original ODE system.
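For completeness, the tridiagonal system (\ref{eq:ode}) for this model is easy to assemble and integrate numerically. A minimal Python sketch is given below; the values of $N$, $\tau$, $\gamma$ and the initial state are illustrative, and the conditioning on non-absorption discussed in the text is omitted for brevity.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, tau, gamma = 200, 1.0/200, 1.0

k = np.arange(N + 1)
a = k*(N - k)*tau               # rate of k -> k+1
c = k*gamma                     # rate of k -> k-1
b = -a - c                      # diagonal entries

# dx_k/dt = a_{k-1} x_{k-1} + b_k x_k + c_{k+1} x_{k+1}
A = np.diag(b) + np.diag(a[:-1], -1) + np.diag(c[1:], 1)

x0 = np.zeros(N + 1); x0[10] = 1.0   # start with 10 infected nodes
sol = solve_ivp(lambda t, x: A @ x, (0.0, 50.0), x0, method="BDF")
\end{verbatim}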
\section*{Acknowledgments}
\noindent
Supported by the OTKA grant Nr. K81403 and by the European Research Council Advanced Researcher Grant No. 227701 (Leader: L. Lov\'asz).
\section{Introduction}
The theoretical proposal \cite{bernevig2006a,kane2005,bernevig2006,
qi2008,fu2007a,moore2007,roy2006,zhang2009a} and experimental
discovery of the topological insulators
\cite{koenig2008,hsieh2008,xia2009,chen2009} have provoked an
intensive research effort in condensed matter physics. Topological
insulators (TI) with time-reversal symmetry are generally
characterized by a topological term in the electromagnetic action
with a quantized coefficient\cite{qi2008}. These states have been
theoretically predicted and experimentally observed in both two and
three dimensions, including the two-dimensional (2D) HgTe/HgCdTe
quantum wells \cite{bernevig2006a, koenig2008}, and bulk
three-dimensional materials Bi$_2$Te$_3$, Bi$_2$Se$_3$ and
Bi$_{1-x}$Sb$_x$
\cite{zhang2009a,chen2009,xia2009,fu2007a,hsieh2008,roushan2009}.
They exhibit robust gapless modes at boundaries, {\it e.g.\/} a 1D
helical edge mode for 2D TIs, and a 2D helical surface mode for 3D
TIs with odd numbers of Dirac cones. Due to time reversal symmetry,
backscattering is forbidden for the helical edge and surface states,
and an analysis of interaction effects for the 1D helical edge modes
shows they are stable against weak and intermediate strength
interactions \cite{wu2006,xu2006}. Bi$_2$Te$_3$ and Bi$_2$Se$_3$
have been predicted to have bulk band gaps exceeding room
temperature\cite{zhang2009a}, which makes them promising for future
applications.
Zhang {\it et al.} predict that the surface states of Bi$_2$Te$_3$
consist of a single Dirac cone at the $\Gamma$ point, and that the
Dirac cone evolves into a hexagonal shape at higher
energy\cite{zhang2009a}. Furthermore, near the Dirac point, the spin
of the electron lies perpendicular to the momentum. Angle-resolved
photo-emission spectroscopy (ARPES) measurements performed on the
surface of Bi$_2$Te$_3$ have confirmed these predictions in
detail\cite{chen2009,hsieh2009}. The typical shape of the Fermi
surface is a snowflake-like warped hexagon. The low-energy
\textsf{O}(2) symmetry of the Dirac cone is broken due to the
$C_{3v}$ symmetry of the underlying lattice\cite{zhang2009a}, and
can be modeled by a warping term in the effective
model\cite{fu2009}. Another powerful surface probe, spectroscopic
scanning tunneling microscopy (STM), is sensitive to quasi-particle
interference (QPI) around impurities, and provides an important tool
to study electronic structures in unconventional materials, such as
high T$_{\rm c}$ cuprates \cite{hanaguri2007,wang2003}. It can
provide information in momentum space through real space measurement
with a high energy resolution. Recently, several groups have
performed STM measurements on surface states of Bi$_2$Te$_3$ and
Bi$_{1-x}$Sb$_x$
\cite{roushan2009,alpichshev2009,zhang2009,gomes2009}.
Backscattering induced by non-magnetic impurities between
time-reversal (TR) partners with opposite momenta is forbidden due
to their opposite spin configurations. This is confirmed by the
real space Friedel oscillation pattern and by analysis of the QPI
characteristic scattering wavevector.
In this paper, we perform a detailed QPI analysis of the surface states of the topological insulator
Bi$_2$Te$_3$. A general TR-invariant impurity potential including scalar and spin-orbit scattering
components is studied using the standard $T$-matrix formalism.
The scattering on the iso-energy surface depends strongly on both the momentum and the
spin orientation. Scattering between TR partners vanishes as a consequence of TR symmetry.
The scattering is dominated by wavevectors which connect regions of
extremal curvature on the Fermi surface, weighted by the spin overlap of the connected states.
STM experiments\cite{alpichshev2009,zhang2009} have yielded rich information about the QPI structure.
In addition to the absence of backscattering, the STM experiments
also observed recovered scattering\cite{alpichshev2009} at one wavevector ($\vec{k}_{nest}$ in their notation, $\vec{q}_2$ in ours),
and an extinction\cite{zhang2009} (i.e. a near absence
of scattering) at another ($\vec{q}_3$ in their notation and ours), both at wavevectors that do not connect TR states.
Below, we offer a novel explanation
of this experimental puzzle. Our results are in excellent overall agreement with the QPI experiment in Bi$_2$Te$_3$.
\section{Surface Dirac model with warping term}
The $\vec{k}\cdot\vec{p}$ Hamiltonian for the surface Dirac cone was
first derived in Ref. \onlinecite{zhang2009a}. The bare Hamiltonian is written as ${\cal H}_0=\int\!d^2\!k\,
\psi^\dagger({\vec k})\,H({\vec k})\,\psi({\vec k})$, where
$\psi^\dagger(\vec{k})=(c^\dagger_{\vec{k}\uparrow},c^\dagger_{\vec{k}\downarrow})$.
With the addition of the cubic warping term \cite{fu2009},
\begin{equation}
H({\vec k})=v\big(\vec{k}\times\vec{\sigma}\big)\cdot\hat{z} +\lambda k^3\cos3\phi_{\vec k}\>\sigma^z\ .
\end{equation}
The azimuthal angle of ${\vec k}$ is $\phi_{\vec k}=\tan^{-1}(k_y/k_x)$,
where the $\Gamma$-$K$ direction is taken as the $\hat{x}$ axis.
Following Ref. \onlinecite{fu2009}, the
quadratic terms are dropped since they do not significantly change the
shape of the constant energy contour, and the characteristic energy
and wavevector scales are defined as: $E^*=v\,k_c$ and
$k_c=\sqrt{v/\lambda}$. This Hamiltonian can be diagonalized by
introducing
\begin{eqnarray}
\hat{U}({\vec k})=\left(
\begin{array}{cc}
\cos(\theta_{\vec k}/2) & i e^{-i\phi_{\vec k}}\sin(\theta_{\vec k}/2) \\ & \\
i e^{i\phi_{\vec k}}\sin(\theta_{\vec k}/2) & \cos(\theta_{\vec k}/2) \\
\end{array}\right)\ ,
\label{eigenfunctions}
\end{eqnarray}
where
$\tan \theta_{\vec k}=k^2_c/( k^2\cos 3\phi_{\vec k})$. One then finds $H({\vec k})=E({\vec k})\,U({\vec k})\,\sigma^z\,U^\dagger({\vec k})$,
with eigenvalues $E_\pm=\pm E({\vec k})$ where
\begin{equation}
E({\vec k})=\sqrt{(vk)^2 + (\lambda k^3\cos 3\phi_{\vec k})^2}\ .
\end{equation}
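The constant-energy contours of $E({\vec k})$ can be traced numerically with a few lines of code. A minimal Python sketch, written in units $v=\lambda=1$ so that $k_c=E^*=1$ (the angular resolution and bisection depth are illustrative choices), is:
\begin{verbatim}
import numpy as np

def E(k, phi):                   # upper band of the warped Dirac cone
    return np.sqrt(k**2 + (k**3*np.cos(3.0*phi))**2)

def contour(E0, nphi=720):       # solve E(k,phi) = E0 by bisection
    phis = np.linspace(0.0, 2.0*np.pi, nphi)
    ks = np.empty(nphi)
    for i, phi in enumerate(phis):
        lo, hi = 0.0, E0         # E(k,phi) >= k, so the root is <= E0
        for _ in range(60):
            mid = 0.5*(lo + hi)
            lo, hi = (mid, hi) if E(mid, phi) < E0 else (lo, mid)
        ks[i] = 0.5*(lo + hi)
    return ks*np.cos(phis), ks*np.sin(phis)

kx, ky = contour(1.5)            # the E = 1.5 E^* snowflake contour
\end{verbatim}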
In fig. \ref{fig:fs}(a) we plot the isoenergy contour $E=1.5\, E^*$, which
qualitatively reproduces the snowflake Fermi surface observed in the
first-principles calculation and the ARPES
experiment \cite{zhang2009a,chen2009,fu2009}. As for the scattering
process, we take
\begin{eqnarray}
{\cal H}_{\rm imp}=\!\!\int \! d^2k\,d^2k'\>V_{{\vec k}-{\vec k}'}\,
\psi^\dagger({\vec k}')\left[{\mathbb I} + i c \,{\vec k}\times{\vec k}'
\cdot\vec{\sigma}\right]\psi(\vec{k}). \label{hs}
\end{eqnarray}
For a single short-ranged scatterer we may approximate
$V_{{\vec k}-{\vec k}'}\approx V_0$. The second term
corresponds to spin-orbit scattering, with the coefficient $c$
describing its strength relative to the potential scattering.
It is convenient to project the potential onto the eigenbasis of ${\cal H}_0$, so
\begin{eqnarray}
\hat{V}_{{\vec k},{\vec k}'} \equiv V_0\>\hat{U}^\dagger({\vec k}')
\left[{\mathbb I}+ i c\,{\vec k}\times{\vec k}'
\cdot\vec{\sigma}\right]\hat{U}({\vec k}). \label{vkk}
\end{eqnarray}
For simplicity, we first consider the $c=0$ case (pure scalar potential scattering), returning
later to the general spin-orbit case ($c\neq 0$). Since the spectrum is particle-hole symmetric,
let us focus on a definite (positive) sign of the energy. The QPI will then be dominated by scatterings inside
the positive energy band, whose effective scattering potential is:
\begin{equation}
\hat{V}_{{\vec k},{\vec k}'}^{(11)}=V_0\bigg[\cos\frac{\theta_{\vec k}}{2}\cos\frac{\theta_{{\vec k}'}}{2}
+\sin\frac{\theta_{\vec k}}{2}\sin\frac{\theta_{{\vec k}'}}{2}\,e^{i(\phi_{\vec k}-\phi_{{\vec k}'})}\bigg]\ .
\label{v11}
\end{equation}
This effect also appears in the QPI analysis of the orbital-band systems
where orbital hybridization brings strong momentum dependence
to the scattering process \cite{lee2009}.
\begin{figure}
\includegraphics{fs.eps}
\caption{\label{fig:fs} (Color online) (a) The iso-energy contour
near the $\Gamma$ point for $E=1.5 E^*$ with snow-flake shape.
The $\hat{x}$ and $\hat{y}$ axes are chosen to be the $\Gamma$-$K$
and $\Gamma$-$M$ directions respectively, and $k_c=\sqrt{v/\lambda}$.
The red and brown (dark gray) dots refer to the valley and the tip
points on the contour, and the arrows indicate six
representative scattering wavevectors. $k_L$ and $k_U$ are solutions
of $E_+(k_L,\theta=0)=E_+(k_U,\theta=\pi/2)=E$ which are the
boundary of the truncation for the $\vec{k}$-integration used
in this paper.
(b) The spin orientations of the eigenfunctions for $\alpha_+$ band at
valley and tip points. The dotted lines refer to the
mirror-symmetric lines ($\Gamma$-M), and the system has a
three-fold rotational symmetry.
The arrows indicate the spin configuration in the $xy$ plane and
the solid circles (crosses) refer to $S_z$ pointing along $+\hat{z}$ ($-\hat{z}$).
At the tip points the spin lies entirely in the $xy$ plane, while
$S_z$ has the largest magnitude at the valley points,
with staggered signs.}
\end{figure}
\section{Effect of spin orientation on the QPI pattern}
The points of extremal curvature on the Fermi surface are divided into two groups, arising from the `valleys' ($k=k_L$, positive curvature)
and `tips' ($k=k_U$, negative curvature). We define the complexified points $A=k_L\,e^{i\pi/3}$, $B=k_L$, $C=k_L e^{-i\pi/3}$,
$W=k_U e^{5\pi i/6}$, $X=k_U e^{-5\pi i/6}$, and $Y=k_U e^{-i\pi/2}$. Then from eqn. \ref{v11} we obtain
$\big|V^{(11)}_{AB}\big|^2=\frac{3V_0^2}{4}\sin^2\vartheta$, $\big|V^{(11)}_{AC}\big|^2=\frac{V_0^2}{4}+\frac{3V_0^2}{4}\cos^2\vartheta$,
and $V^{(11)}_{A{\bar A}}=0$, where ${\bar A}=-A$, corresponding to scattering through the vectors ${\vec q}_3$, ${\vec q}_2$, and ${\vec q}_1$,
respectively, with $\tan\vartheta=(k_c/k_L)^2$. We also find $\big|V^{(11)}_{WX}\big|^2=\frac{3V_0^2}{4}$,
$\big|V^{(11)}_{WY}\big|^2=\frac{V_0^2}{4}$, and $V^{(11)}_{W{\bar W}}=0$. These processes are depicted in fig. \ref{fig:fs}(a).
While $V^{(11)}_{A{\bar A}}=V^{(11)}_{W{\bar W}}=0$ is a direct consequence of TR symmetry, the other processes through scattering
vectors ${\vec q}_{2,3,5,6}$ are in general finite. Their amplitude variation may be understood in terms of the spin orientation of the eigenfunctions
throughout the Brillouin zone, ${\vec S}({\vec k})=(-\sin\theta_{\vec k}\sin\phi_{\vec k} \,,\, \sin\theta_{\vec k}\cos\phi_{\vec k} \,,\, \cos\theta_{\vec k})$, depicted in fig. \ref{fig:fs}(b).
Bi$_2$Te$_3$ has the symmetry of $C_{3v}$, {\it i.e.\/} three-fold
rotational symmetry plus the three reflection lines ($\Gamma$-$M$ plus two equivalent lines). Therefore at the tips $S^z({\vec k})$
must vanish since $\sigma^z$ is odd under the mirror operation. $S^z({\vec k})$ has the largest magnitude at the valleys, but with
staggered signs, as shown in the figure. Since scalar potential scattering does not flip electron spin, its matrix element
is largest when ${\vec S}({\vec k})\cdot{\vec S}({\vec k}')$ is large and positive, {\it i.e.\/} high spin overlap.
This echoes the experimental finding of Pascual {\it et al.}\cite{pascual} that in the QPI pattern on Bi(110),
only the scattering processes preserving the spin orientation are visible.
One major difference, however, between Bi(110) and Bi$_2$Te$_3$ is that the former has multiple Fermi surfaces and scattering
processes preserving the spin orientation do exist at finite $\vec{q}$, while the latter has only one Fermi surface and therefore
no such scattering can exist.
At the tips, the spin lies in-plane, with $\theta_{\vec k}=\frac{\pi}{2}$,
independent of the scanning energy $E$. It can be checked that ${\vec S}({\vec k}+{\vec q}_5)\cdot{\vec S}({\vec k}) > {\vec S}({\vec k}+{\vec q}_6)\cdot{\vec S}({\vec k})$, hence
$\big|V^{(11)}_{WX}\big|^2 > \big|V^{(11)}_{WY}\big|^2$. For scatterings between the valleys, ${\vec S}({\vec k})\cdot{\vec S}({\vec k}')$
depends crucially on $S^z({\vec k})$ and $S^z({\vec k}')$. Accounting for the valley-to-valley oscillation in ${\vec S}({\vec k})$, we conclude that as
the scanning energy increases, $\big|V^{(11)}_{AC}\big|^2$ grows while $\big|V^{(11)}_{AB}\big|^2$ shrinks.
This simple argument gives a qualitative explanation for the absence of the ${\vec q}_3$ scattering in the STM experiment \cite{zhang2009}.
For typical experimental parameters \cite{fu2009}, $E/E^*\approx 1.5$ and $k_L/k_c\approx 1$. In this case we estimate
that the scalar potential scattering gives
$\big|V^{(11)}_{WX}\big|^2 : \big|V^{(11)}_{AC}\big|^2 : \big|V^{(11)}_{AB}\big|^2 : \big|V^{(11)}_{WY}\big|^2 \approx 6:5:3:2$.
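This ratio follows directly from the matrix elements above; as a quick numerical check (in units of $V_0^2$):
\begin{verbatim}
import numpy as np

theta = np.arctan(1.0)   # tan(vartheta) = (k_c/k_L)^2 with k_L ~ k_c

V_WX = 3.0/4.0
V_AC = 1.0/4.0 + (3.0/4.0)*np.cos(theta)**2
V_AB = (3.0/4.0)*np.sin(theta)**2
V_WY = 1.0/4.0

print(np.array([V_WX, V_AC, V_AB, V_WY])*8)   # -> [6. 5. 3. 2.]
\end{verbatim}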
\section{Numerical Results}
To specifically compute the QPI image, we employ a $T$-matrix
approach \cite{balatsky2006} for multiband systems \cite{lee2009}.
In the operator basis $\Psi({\vec k})=U({\vec k})\,\psi({\vec k})$, the Green's function is
written in matrix form as
\begin{eqnarray}
\hat{G}({\vec k},{\vec k}',\omega)&=&\hat{G}_0({\vec k},\omega)\,
\delta_{{\vec k},{\vec k}'} + \hat{G}_0({\vec k},\omega)\,
\hat{T}_{{\vec k},{\vec k}'}(\omega) \, \hat{G}_0({\vec k}',\omega)
\label{gtg}
\end{eqnarray}
where the $T$-matrix satisfies
\begin{eqnarray}
\hat{T}_{{\vec k},{\vec k}'}(\omega)=\hat{V}_{{\vec k},{\vec k}'} +\int\!\!d^2p\>
\hat{V}_{{\vec k},{\vec p}} \, \hat{G}_0({\vec p},\omega) \,
\hat{T}_{{\vec p},{\vec k}'}(\omega)\ , \label{tmatrix}
\end{eqnarray}
and
$\big[\hat{G}_{0,\sigma}({\vec k},\omega)\big]_{ab}=
\big[\omega+i\delta-E_a({\vec k})\big]^{-1}\delta_{a,b}$ are the bare Green's
functions. In spectroscopic imaging STM \cite{balatsky2006}, the
conductance ($dI/dV$) measured by the STM is proportional to the
local density of states defined as
\begin{equation}
\rho(\vec{r},\omega)=\rho_\uparrow(\vec{r},\omega)+\rho_\downarrow(\vec{r},\omega)\ ,
\end{equation}
where $\rho_\sigma(\vec{r},\omega)={\rm Im}
G_\sigma(\vec{r},\vec{r},\omega)$ is the local density of states for
spin $\sigma$. The QPI image in the Brillouin zone
$\rho(\vec{q},\omega)$ is then obtained by performing the Fourier
transformation of the conductance $dI/dV$. As a result, we can
calculate $\rho({\vec q},\omega)$ using the $T$-matrix formalism by:
\begin{eqnarray}
\rho({\vec q},\omega) &=& \int \!\!d^2r \> e^{i {\vec q}\cdot{\vec r}}\,\rho({\vec r},\omega) \nonumber\\
&=&{1\over 2i}\int \!\! d^2k\>\textsf{Tr} \bigg[\hat{U}({\vec k})\,\hat{G}({\vec k},{\vec k}+{\vec q},\omega)\,
\hat{U}^\dagger({\vec k}+{\vec q}) \nonumber\\
&&\quad -\Big(\hat{U}({\vec k})\,\hat{G}({\vec k},{\vec k}-{\vec q},\omega)\,
\hat{U}^\dagger({\vec k}-{\vec q})\Big)^* \bigg]
\label{rhoq}
\end{eqnarray}
where the trace is taken with respect to the matrix index.
Because physically STM measures the local density of states in the spin basis of $\hat{\psi}({\vec k})$, while our $T$-matrix theory here is developed
in the eigenbasis of $\hat{\Psi}({\vec k})$, the $\textsf{SU}(2)$ rotation matrices $\hat{U}({\vec k})$ are introduced in the last line of eq. \ref{rhoq}
to transform back to the physical spin basis. Because the first term in eq. \ref{gtg} contributes only at ${\vec q}=0$, where $\rho({\vec q}=0)$ contains the total
density of states without the impurity and is therefore much larger than $\rho({\vec q}\neq 0)$,
we only plot $|\rho({\vec q}\neq 0)|$ in order to reveal the weaker structures of the QPI induced by the impurity scattering.
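As an illustration of the bookkeeping involved, the following Python sketch evaluates the leading (first Born) contribution to $\rho({\vec q}\neq 0)$ for the pure potential case $c=0$: it replaces the full $T$-matrix in eq. \ref{gtg} by the bare $\hat V$ and keeps only the first trace term of eq. \ref{rhoq}, with illustrative values for $V_0$, the broadening $\eta$ and the $k$-grid, so it conveys the structure of the calculation rather than reproducing fig. \ref{fig:qpi}. Units are $v=\lambda=1$.
\begin{verbatim}
import numpy as np

kc, V0, eta, omega = 1.0, 0.1, 0.05, 1.5

def angles(kx, ky):
    k2, phi = kx**2 + ky**2, np.arctan2(ky, kx)
    th = np.arctan2(kc**2, k2*np.cos(3.0*phi))  # tan(th)=kc^2/(k^2 cos3phi)
    return th, phi

def U(kx, ky):                    # the SU(2) rotation U(k)
    th, phi = angles(kx, ky)
    c, s = np.cos(th/2), np.sin(th/2)
    return np.array([[c, 1j*np.exp(-1j*phi)*s],
                     [1j*np.exp(1j*phi)*s, c]])

def G0(kx, ky):                   # diagonal bare propagator
    k, phi = np.hypot(kx, ky), np.arctan2(ky, kx)
    E = np.sqrt(k**2 + (k**3*np.cos(3.0*phi))**2)
    return np.diag([1.0/(omega + 1j*eta - E),
                    1.0/(omega + 1j*eta + E)])

def rho_born(qx, qy, L=2.0, n=100):
    ks = np.linspace(-L, L, n)
    acc = 0.0 + 0.0j
    for kx in ks:
        for ky in ks:
            V = V0*U(kx+qx, ky+qy).conj().T @ U(kx, ky)  # scalar potential
            M = (U(kx, ky) @ G0(kx, ky) @ V
                 @ G0(kx+qx, ky+qy) @ U(kx+qx, ky+qy).conj().T)
            acc += np.trace(M)
    return acc.imag*(2.0*L/n)**2
\end{verbatim}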
We solve eq. \ref{tmatrix} numerically, using 2D polar coordinates.
Since the dominant scattering processes are between ${\vec k}$ points on the constant energy contour $E_+(k,\theta)=E$
(we focus on $E>0$ here), we perform the integration within the range $k_L\leq k\leq k_U$ with $k_L$ and $k_U$ indicated in Fig.
\ref{fig:fs}(a). The resulting QPI images are plotted in fig. \ref{fig:qpi} for $c=0$ with $E=1.5\, E^*$ fixed. For this choice of
parameters, $k_L/k_c= 1.029$ and $k_U/k_c=1.5$. As shown in fig. \ref{fig:qpi}(a), ${\vec q}_5$ and ${\vec q}_2$
indicated by the red (dark gray) and green (light gray) circles are the strongest features while ${\vec q}_3$ (indicated by the white circle) is almost invisible.
The reason why ${\vec q}_5$ is even stronger than ${\vec q}_2$, although they have comparable scalar scattering potentials, is the
difference in the density of states.
Because the tip points shown in fig. \ref{fig:fs}(a) have a larger density of states than the valley points,
the weight of ${\vec q}_5$ is larger than that of ${\vec q}_2$, resulting in the stronger features observed for ${\vec q}_5$.
The strong features near ${\vec q}=0$ correspond to small-${\vec q}$ scatterings around the tip and valley points,
which have also been seen in experiments. Our results reproduce satisfactorily the experimental findings and are also
consistent with the analysis from the spin-orientation selection rule discussed above.
As the scanning energy increases further, the surface states along the $\Gamma-M$ direction start to merge into the
conduction band of the bulk states. In this case, the tips of the constant energy contour will be
mixed up with these bulk bands, which weakens the ${\vec q}_5$ scattering but enhances the
small ${\vec q}$ scatterings near the $\Gamma$ point. This is consistent with the experiment \cite{zhang2009}, showing
that the area of the strong features near the $\Gamma$ point becomes much larger once the scanning energy exceeds the bottom
of the conduction band.
\begin{figure}
\includegraphics{qpi.eps}
\caption{\label{fig:qpi} (Color online) The quasiparticle interference image for
(a) $c=0$ and (b) $c=0.5$ with $E=1.5\, E^*$ and $V_0/E^*=0.1$. In
this case, $k_L/k_c=1.029$ and $k_U/k_c=1.5$. (a) The strongest
large ${\vec q}$ scatterings are ${\vec q}_5$ and ${\vec q}_2$ indicated by the red (dark gray) and green (light gray) circles
(and their symmetric points). ${\vec q}_3$ (indicated by the white circle)
is too weak to be seen. (b) For $c=0.5$, new QPI features with large
momenta are visible.}
\end{figure}
\section{Spin-orbit impurity scattering}
Now we briefly comment on the effect of the spin-orbit scattering given in eq. \ref{hs}, which in principle exists in any realistic system.
Since surface states of the topological insulator Bi$_2$Te$_3$ are
two-dimensional, the spin-orbit scattering potential only has one component:
\begin{equation}
{\cal H}^{\rm SO}_{\rm imp}=i c V_0\!\!\int\!\! d^2k\,d^2k'\>kk'\sin(\phi_{{\vec k}'}-\phi_{\vec k}) \,\psi^\dagger({\vec k}')\,\sigma^z\,\psi({\vec k}).
\end{equation}
Backscattering is still forbidden because of the $\sin(\phi_{{\vec k}'}-\phi_{\vec k})$ factor.
Although $\sigma^z$ does not flip spin, the angle-dependence $\sin(\phi_{{\vec k}'}-\phi_{\vec k})$ gives
rise to an additional suppression beyond that from the spin-orientation
selection rule discussed in the case of scalar impurity scattering.
Moreover, because the matrix element is linear in $kk'$, the spin-orbit
scattering tends to enhance the scatterings between quasiparticles
with large momenta. All these additional effects due to the spin-orbit scattering can be
roughly seen in a straightforward calculation from eq. \ref{vkk}:
\begin{eqnarray}
\big|V^{(11)}_{A{\bar A}}\big|^2 &=& \big|V^{(11)}_{W{\bar W}}\big|^2 = 0 \\
\big|V^{(11)}_{AC}\big|^2 &=&\frac{V_0^2}{4}\Big[\big(1-\frac{3}{2}ck^2_L \big)^2
+3\cos^2 \vartheta \big(1+\frac{1}{2}ck^2_L\big)^2\Big]{\vphantom{\sum_N^N}}\nonumber\\
\big|V^{(11)}_{AB}\big|^2 &=&\frac{3V_0^2}{4}\sin^2 \vartheta\big(1-\frac{1}{2}ck^2_L\big)^2\nonumber\\
\big|V^{(11)}_{WX}\big|^2 &=&\frac{3V_0^2}{4}\big(1-\frac{1}{2}ck^2_U\big)^2{\vphantom{\sum_N^N}}\nonumber\\
\big|V^{(11)}_{WY}\big|^2 &=&\frac{V_0^2}{4}\big(1-\frac{3}{2}ck^2_U\big)^2\ . \nonumber
\end{eqnarray}
Nonzero $c$ brings in new interferences which could lead to unusual
suppressions or enhancements for some scattering wavevectors, depending
not only on the magnitude and sign of $c$, but also on the scanning energy $E$.
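To make this concrete, these expressions can be evaluated numerically; for instance, taking $c=0.5$ and the contour values $k_L/k_c=1.029$, $k_U/k_c=1.5$ used above (the printed numbers in the trailing comment are rounded):
\begin{verbatim}
import numpy as np

c, kL, kU = 0.5, 1.029, 1.5        # in units of k_c
th = np.arctan(kL**-2)             # tan(vartheta) = (k_c/k_L)^2

V_AC = 0.25*((1 - 1.5*c*kL**2)**2
             + 3*np.cos(th)**2*(1 + 0.5*c*kL**2)**2)
V_AB = 0.75*np.sin(th)**2*(1 - 0.5*c*kL**2)**2
V_WX = 0.75*(1 - 0.5*c*kU**2)**2
V_WY = 0.25*(1 - 1.5*c*kU**2)**2
print(V_AC, V_AB, V_WX, V_WY)      # -> 0.64  0.19  0.14  0.12 (V_0^2)
\end{verbatim}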
In fig. \ref{fig:qpi}(b) we show the QPI image for $c=0.5$. While the main features are still
similar to those of fig. \ref{fig:qpi}(a), new prominent features associated with larger
momentum scatterings are visible. Since the matrix elements for spin-orbit scattering are larger for
quasiparticles with larger momentum, this term will become more and more important as the scanning
energy $E$ increases. A detailed analysis of the spin-orbit scattering will be presented in a future publication.
In comparison with the results in ref. \cite{zhang2009}, we find that spin-orbit scattering from the Ag impurity
atoms is not very important in this particular experiment.
\section{Conclusion}
In conclusion, we have analyzed the quasiparticle interference
induced by nonmagnetic impurities on the surface of the topological insulator Bi$_2$Te$_3$ using
a $T$-matrix approach. While the backscattering is completely
forbidden by time-reversal symmetry, other scatterings are allowed, resulting in the QPI patterns observed in
STM experiments \cite{alpichshev2009,zhang2009}. We have shown further that the scattering strengths depend crucially
on the spin orientations of the eigenfunctions.
Since nonmagnetic impurities cannot flip the spin, the scalar scattering matrix element between two eigenstates is larger the larger their spin overlap.
Combined with the variation of the density of states, we have shown that
some of the scatterings might be too weak to be seen
in comparison with the strongest ones, and our results successfully reproduce the QPI pattern observed in experiments.
We have further discussed the effect of the spin-orbit scattering on the QPI pattern.
While the backscattering is still forbidden, we find that the spin-orbit scattering enhances several new features at large momentum,
and the detailed QPI features depend strongly on the sign and strength of the spin-orbit scattering potential.
We are grateful to Xi Chen, Liang Fu, Aharon Kapitulnik, Qin Liu, Xiaoliang Qi, Qikun Xue for insightful discussions. CW and WCL
are supported by ARO-W911NF0810291. SCZ is supported by the
Department of Energy, Office of Basic Energy Sciences, Division of
Materials Sciences and Engineering, under contract
DE-AC02-76SF00515.
{\it Note added} -- While this paper was nearing completion, we learned of a related work by Zhang {\it et al.}\cite{zhang20092}.
\section{Introduction}
Systems made of correlated electrons confined in
semiconductor nanoscopic dot and ring structures, so-called quantum dots (QDs)
and rings (QRs) respectively, have been the subject of intense theoretical and
experimental research, see e.g. Refs. \onlinecite{Jac98,Lip08} and references therein.
From the latter point of view, for quantum dots it has been proved \cite{Tar96} the
possibility to tune over a wide range the number of electrons contained in the system,
as well as to control both the size and the shape of the dots by means of external gate
voltages, goal that has not been achieved yet for ring geometries
due to the higher complexity of their fabrication process,
\cite{Gar97,Lor00,Fuh01} which involves several experimental techniques
such as atomic force microscopy,\cite{Hel99} strain-induced self-organization \cite{Gar97}
or droplet molecular beam epitaxy. \cite{Gon05}
The interest of QRs arises from their peculiar behavior in the presence of a perpendicularly
applied magnetic field ($B$), which is very distinct from that observed in QDs and shows up as
an oscillatory behavior of their energy levels as a function of $B$.
This property, together with the fact that in narrow enough QRs the electrons experience a
nearly one-dimensional Coulomb repulsion, leads to the integer and fractional
Aharonov-Bohm effects, usually associated with the appearance of the so-called persistent currents
in the ring. \cite{Kle07} These quantum-interference phenomena have been experimentally reported
\cite{Ihn05} and have motivated a series of theoretical works whose number is steadily
increasing,
see e.g., Refs. \onlinecite{Cha94,Kri94,Emp99,Lin01,Aic06,Liu08} and references
therein.
One of the most appealing possibilities offered by electron systems confined in semiconductor
heterostructures is their ability to form coupled entities, usually referred to as ``artificial
molecules'', in which the role of the constituent ``atoms'' is played by single quantum dots or
rings and that have analogies with natural molecules such as the hybridization of the
electronic states forming molecular-like orbitals. In addition, these artificially coupled
systems present important advantages such as a tunable ``interatomic'' coupling by means, e.g.,
of the modification of the relative position/size of the constituents.
This fact has, besides its intrinsic interest, potential relevance to quantum information
processing schemes since basic quantum gate operations require controllable coupling between
qubits. In this sense, artificial molecules based on two coupled QDs, called quantum dot
molecules (QDMs) have been proposed
as scalable implementations for quantum computation purposes and have received great attention
from the scientific community in the last years --see e.g. Refs. \onlinecite{Bli96,Sch97,Ron01,
Hol02,Ota05,Par00,Pal95,Anc03,Aus04,Bel06} and references therein.
Also, molecular-beam epitaxy techniques have recently allowed the synthesis of
{\it quantum ring molecules} (QRMs) in the form of concentric double QRs \cite{Man05,Kur05} and
vertically stacked layers of self-assembled QRs, \cite{Sua04,Gra05} the optical and structural
properties of the latter having also been characterized by photoluminescence spectroscopy and
by atomic force microscopy, respectively.
This has sparked theoretical studies on the structure and optical response of both
vertically and concentrically coupled QRs of different complexity and scope, revealing properties
different from those of their dot counterparts due to the non-simply connected ring topology.
For instance, studies on the single-electron spectrum of vertical QRMs \cite{Ahn00,Li04} have shown
that the electronic structure of these systems is more sensitive to the inter-ring distance than that
of coupled QDs. As a consequence, in ring molecules quantum tunneling effects are enhanced since less
tunneling energy is required to enter the molecular-type phase. Also, the consideration
of ``heteronuclear'' artificial molecules constituted by slightly different QRs offers the interesting
possibility to control the effective coupling of direct-indirect excitons \cite{Dia07}
by means of the application of a magnetic field and taking advantage of the fact that charge
tunneling between states with distinct angular momentum is strongly suppressed by orbital selection
rules. To this end, some authors have considered the case of QRMs made of strictly one-dimensional,
zero-thickness QRs and have used diagonalization techniques to address the few-electron
problem. \cite{Ahn00,Sza07,Dia07,Sza08}
The simultaneous effect of both electric and magnetic fields applied to a single-electron QRM has
also been studied \cite{Pia07} --see also Ref. \onlinecite{Ahn00}-- and the optical response of
QRMs where the thickness of the constituent QRs is taken into
account has been obtained.\cite{Cli05} In addition, the spatial correlation between electron pairs
in vertically stacked QRs that are only electrostatically coupled has been shown to undergo oscillations
as a function of the magnetic flux,
with strongly correlated situations between ground states with odd angular momentum
turning out to occur even at large inter-ring distances.
\cite{Sza07} More recently, the structure of a QRM made of two vertically stacked quantum
rings has been addressed at zero magnetic field for a few tens of electrons within
the local spin-density functional theory (LSDFT) neglecting\cite{Cas06} and incorporating\cite{Mal06}
the vertical thickness of the constituent QRs.
In this work we address the ground state (gs) of two thick, vertically coupled
identical quantum rings forming ``homonuclear'' QRMs populated with up to 40 electrons and
pierced by a perpendicularly applied magnetic field.
We extend in this way our previous study,\cite{Mal06} addressing the appearance and physical
interplay between the spin and isospin\cite{Pal95} degrees of freedom as a function of
both the intensity of the magnetic field and the inter-ring separation.
Modelling systems charged with such a large number of electrons requires the employment of
methodologies that minimize the computational cost.
Here we have made use of the LSDFT,\cite{Emp99,Aic06} whose accuracy for the
considered values of the magnetic
field has been assessed\cite{Anc03} by comparing the obtained results for a single QD
with those given by the current-spin-density functional theory (CSDFT),\cite
{Fer94} which in principle is better-suited for high magnetic fields, and also with
exact results for artificial molecules.\cite{Ron99}
This paper is organized as follows.
In Sec. II we briefly introduce the LSDFT and the model used to represent the vertical QRMs.
In Sec. III we discuss the obtained results for some selected configurations,
and a summary is given in Sec. IV.
\section{Density functional calculation for many-electron vertical
quantum ring homonuclear molecules}
The axial symmetry of the system allows one to work in cylindrical
coordinates. The confining potential $V_{cf}(r,z)$ has been taken
parabolic in the $xy$-plane with a repulsive core around the origin, plus
a symmetric double quantum well in the $z$-direction, each one with width
$w$, depth $V_0$, and separated by a distance $d$.
To improve on the convergence of the calculations, the double-well
profile has been slightly rounded off, as illustrated
in Fig. 2 of Ref. \onlinecite{Anc03}.
The potential thus reads $V_{cf}(r,z)=V_r(r)+V_z(z)$, where
\begin{eqnarray}
V_r(r) &=&
V_0 \,\Theta(R_0-r) +
\frac{1}{2}\, m \,\omega_0^2\, (r - R_0)^2 \, \Theta (r-R_0)
\nonumber
\\
V_z(z)&=&V_0\left\{
\begin{array}{ll}
\frac{1}{1+e^{(z+d/2+w)/\sigma}}-
\frac{1}{1+e^{(z+d/2)/\sigma}}
& \; {\rm if} \; z \le 0\\
\frac{1}{1+e^{(z-d/2)/\sigma}}-
\frac{1}{1+e^{(z-d/2-w)/\sigma}}
& \; {\rm if} \; z > 0 \; , \\
\end{array}\right.
\label{eq2}
\end{eqnarray}
with $\sigma=2\times 10^{-3}$ nm, and $\Theta(x)=1$ if $x>0$ and zero
otherwise.
The convenience of using a hard-wall confining potential to describe the
effect of the inner core in QRs is endorsed by several works in the
literature.\cite{Li01} We have taken $R_0=10$ nm, $V_0=$ 350 meV,
$\hbar \omega_0=6$ meV and $w=5$ nm.
These parameters determine the confinement for the electrons together with
the distance between the constituent quantum wells that is varied to study
QRMs in different inter-ring coupling regimes.
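For reference, the confining potential is straightforward to evaluate numerically. A direct transcription into Python follows (lengths in nm, energies in meV); the conversion constant $\hbar^2/m\approx 1137$ meV\,nm$^2$ for GaAs with $m^*=0.067$, used below to express $m\omega_0^2$, is our own estimate rather than a value quoted in the text.
\begin{verbatim}
import numpy as np
from scipy.special import expit        # expit(x) = 1/(1 + exp(-x))

R0, V0, w, sigma = 10.0, 350.0, 5.0, 2.0e-3
m_w0sq = 6.0**2 / 1137.0               # m*omega0^2 in meV/nm^2

def V_r(r):
    # hard repulsive core for r < R0, lateral parabola outside
    return np.where(r < R0, V0, 0.5*m_w0sq*(r - R0)**2)

def V_z(z, d):
    # smoothed symmetric double well: width w, separation d
    f = lambda x: expit(-x/sigma)      # = 1/(1 + exp(x/sigma))
    return V0*np.where(z <= 0,
                       f(z + d/2 + w) - f(z + d/2),
                       f(z - d/2) - f(z - d/2 - w))
\end{verbatim}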
For small electron numbers ($N$), it is justified to take $\omega_0$
to be $N$-independent. However, in a more realistic scheme its value
should be tuned according to the number of electrons contained in the
system, relaxing the confinement as the latter is increased.
In the case of quantum dots an $N^{-1/4}$-dependence has often been used,
which arises from the $r$-expansion near the origin of the Coulomb potential
created by a two-dimensional uniform positive charge distribution --jellium
model-- and which is generalized to the case of quantum dot molecules as
$\omega_0 = \kappa N^{-1/4}_{B}$, $N_B$ being the
number of electrons filling bonding orbitals --see below. The rationale
for this generalization is given in Ref. \onlinecite{Aus04}.
It is clear that the mentioned $N$-dependence would be harder to justify
for QRs, and in fact no alternative law is known for a single QR that could
be generalized to the case of QRMs. For this reason, in this work we have taken
$\omega_0$ to be $N$-independent, which is to some extent less
realistic for the largest values of $N$ we have considered.
Considering the $N$-electron system placed in a magnetic field parallel
to the $z$-axis, within the LSDFT in the effective mass, dielectric constant
approximation, the Kohn-Sham equations \cite{Pi01,Anc03} in cylindrical coordinates
read
\begin{eqnarray}
& & \left[-\frac{1}{2} \left( \frac{\partial^2}{\partial r^2}
+ \frac{1}{r} \frac{\partial}{\partial r} - \frac{l^2}{r^2}
+ \frac{\partial^2}{\partial z^2} \right) - \frac{\omega_c}{2}\,l
+ \frac{1}{8} \omega_c^2 r^2 + V_{cf}(r,z) \right.
\nonumber
\\
& &
\label{eq1}
\\
&+& \left. V_H + V_{xc} + \left( W_{xc}
+\frac{1}{2} g^* \mu_B B\right) \eta_{\sigma} \right]
u_{n l \sigma}(r,z) =
\varepsilon_{n l \sigma} u_{n l \sigma}(r,z) \,\, ,
\nonumber
\end{eqnarray}
where the single-particle (sp) wave functions have been taken to be
of the form $\phi_{n l\sigma}(r,z,\theta,\sigma)=
u_{n l\sigma}(r,z) e^{-\imath l \theta} \chi_{\sigma}$ with
$n =0, 1, 2, \ldots$, $l =0, \pm 1, \pm 2, \ldots$, $-l$ being
the projection of the single-particle orbital angular momentum
on the symmetry axis, and $\sigma$=$\uparrow$$(\downarrow)$
representing spin-up(down) states.
The vector potential has been chosen in the symmetric gauge,
namely ${\bf A}= B (-y,x,0)/2$; $\mu_B = \hbar e/(2 m_e c)$ and
$\omega_c =e B/c$ are, respectively, the Bohr magneton and the
cyclotron frequency, and $\eta_{\sigma}$=$+1(-1)$ for
$\sigma$=$\uparrow$$(\downarrow)$;
$V_H(r,z)$ is the direct Coulomb potential, and $V_{xc}={\partial
{\cal E}_{xc}(n,m)/\partial n}\vert_{gs}$ and
$W_{xc}={\partial {\cal E}_{xc}(n,m)/\partial m}\vert_{gs}$
are the variations of the exchange-correlation
energy density ${\cal E}_{xc}(n,m)$ in terms of the electron
density $n(r,z)$ and of the local spin-magnetization
$m(r,z)\equiv n^{\uparrow}(r,z)-n^{\downarrow}(r,z)$ taken at the gs.
${\cal E}_{xc}(n,m) \equiv {\cal E}_{x}(n,m) + {\cal E}_{c}(n,m)$
has been built from three-dimensional homogeneous electron gas
calculations; this yields a well-known,\cite{Lun83} simple analytical
expression for the exchange contribution ${\cal E}_{x}(n,m)$.
For the correlation term ${\cal E}_{c}(n,m)$ we have used
the parametrization proposed by Perdew and Zunger.\cite{Per81}
Details about how the Kohn-Sham and the Poisson equations
have been solved can be found in Ref. \onlinecite{Pi01}.
Notice the use in Eq. (\ref{eq1}) of effective atomic units
$\hbar=e^2/\epsilon=m=$1, where $\epsilon$ is the dielectric constant
and $m$ the electron effective mass. In units
of the bare electron mass $m_e$ one has $ m = m^* m_e$,
the length unit being the effective
Bohr radius $a_0^* = a_0\epsilon/m^*$
and the energy unit the effective Hartree $H^* = H m^*/\epsilon^2$.
In the numerical applications we have considered
GaAs quantum rings, for which we have taken $\epsilon$ = 12.4, and
$m^*$ = 0.067; this yields $a^*_0 \sim$ 97.9 ${\rm \AA}$ and
$H^*\sim$ 11.9 meV, the effective gyromagnetic constant being $g^*=-0.44$.
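As a quick numerical check of these conversions, the following sketch (ours; the bare Bohr radius and Hartree energy below are standard constants, not quantities taken from this work) reproduces the quoted effective units for GaAs.
\begin{verbatim}
# Minimal sketch: effective atomic units for GaAs.
a0_angstrom = 0.529177      # bare Bohr radius (Angstrom)
hartree_meV = 27211.4       # bare Hartree energy (meV)

eps, m_star = 12.4, 0.067   # GaAs dielectric constant, effective mass
a0_eff = a0_angstrom * eps / m_star     # a0* = a0 eps / m*
H_eff = hartree_meV * m_star / eps**2   # H* = H m* / eps^2

print(round(a0_eff, 1))  # ~97.9 Angstrom
print(round(H_eff, 1))   # ~11.9 meV
\end{verbatim}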
To label the gs configurations (``phases'') we use an adapted version of the
ordinary spectroscopy notation,\cite{Ron99} namely $^{2S+1}L^{\pm}_{g,u}$,
where $S$ and $L$ are the total $|S_z|$ and $|L_z|$,
respectively. The superscript $+(-)$ corresponds to symmetric (antisymmetric)
states under reflection with respect to the $z=0$ plane bisecting the QRMs,
and the subscript $g(u)$ refers to positive(negative) parity states.
All these are good quantum numbers even in the presence of an axial magnetic field.
By analogy with natural molecules, symmetric and antisymmetric states
are referred to as bonding (B) and antibonding (AB) orbitals, respectively.
We have defined the ``isospin'' quantum number $I_z$ --bond order in Molecular
Physics-- as\cite{Par00,Anc03,Ron99} $I_z = (N_B -N_{AB})/2$, $N_{B(AB)}$ being
the number of occupied bonding(antibonding) sp states.
\section{Results}
Due to the large number of variables needed to characterize a given
QRM configuration (electron number, magnetic
field and inter-ring distance), we restrict ourselves to a limited
range of values for these variables, aiming to present
calculations that illustrate some characteristic properties
of the systems under study.
For the sake of comparison, we have also addressed one single QR
symmetrically located with respect to the $z=0$ plane with the
same thickness (5 nm) and radial confinement as the coupled rings.
Fig. \ref{fig1} shows the Kohn-Sham sp levels for one single ring hosting $N$=40
electrons as a function of $l$ for different values of the applied magnetic field.
As is well known, these levels are $\pm l$-degenerate at $B=0$. In this particular
case, the gs has $S_z=1$, and it is made up of symmetric (with respect to $z=0$) sp
states with up to $n=3$. In the non-interacting single-electron model, in which
the Coulomb energy is not considered and consequently the sp wave functions
factorize into a $r$-dependent and a $z$-dependent part with associated
quantum numbers $n_r$ and $n_z$, i.e.,
$u_{nl}(r,z) \rightarrow {\cal U}_{n_r}(r) {\cal Z}_{n_z}(z)$, one would say that
the gs is made up of sp states with $n_z=0$ and radial quantum numbers up to $n_r=3$.
When $B\neq$ 0, the $\pm l$-degeneracy is lifted and
the $l<0$ sp levels become progressively depopulated
in favor of those with $l>0$ as the magnetic field increases until eventually
--at about $\sim$ 4 T-- only $l>0$ orbitals are filled. At this point,
only a few states with $n=2$ are occupied, and the ring has $S_z=0$. From this
value of $B$ on, the simultaneous filling of increasingly higher-$l$ states and
those close to $l=0$ gives rise to configurations containing only states with $n=1$
and with large values of the total spin (e.g., $S_z=9$ for $B=8$ T). Eventually, the system
becomes fully spin-polarized at $B \sim$ 13.5 T. It is worth noticing the conspicuous
bending of the ``Landau bands'' (sets of bonding or antibonding states characterized
by the same $n$ and spin, and different value of $l$),
instead of displaying a fairly flat region, as it happens
when the in-plane confinement is produced by a jellium-like potential,\cite{Emp99}
but not with our present choice of a $N$-independent parabola.
It is also worth stressing that, due to the much stronger confinement in the vertical
direction as compared to that in the radial one, only symmetric states
are occupied.
Analogously, the energy levels corresponding to QRMs with $N=40$ and inter-ring
distances $d=2$, 4 and 6 nm are shown in Figs. \ref{fig2}-\ref{fig4}.
One can see the gradual evolution of the system
as $d$ increases; indeed, at $d=2$ nm the spectrum is very similar to that
of the single ring, with only bonding sp states being occupied.
As $d$ increases, a few antibonding orbitals become populated at small $B$'s, as
one can see from the top panels of Fig. \ref{fig3}, corresponding to $d=4$ nm,
but eventually for increasing values of $B$ the QRMs have again ground states where
only bonding states are populated, as can be seen from the bottom panels of the
same figure. For this inter-ring distance, the fully spin-polarized state is reached
at $B \sim$ 13.75 T. Finally,
for the largest ring separation considered, namely $d=6$ nm, a large amount
of antibonding orbitals become occupied giving rise to small $I_z$'s instead
of the fairly large isospin values found for similar configurations at smaller
distances (compare the bottom panels in Fig. \ref{fig4} with those in
Figs. \ref{fig2} and \ref{fig3}).
In particular, the fully spin-polarized gs is found at about $B\sim$ 7 T with
$I_z=2$, whereas for $d=2$ and 4 nm it appears near $B=14$ T
and has the maximum possible isospin value, namely $I_z=20$.
At $d=6$ nm, the maximum-spin state naturally consists of two distinct bands, one
made up of bonding and another of antibonding states. These configurations are the
QRM-analogues of the maximum density droplet (MDD) configurations found for QDMs
at similar inter-dot distances, called respectively MDD$_B$ and MDD$_{AB}$ in
Ref. \onlinecite{Aus04}.
Increasing further the magnetic field causes the progressive occupation of
higher-$l$ orbitals, which provokes the depopulation of the
antibonding band and the consequent increase of $I_z$. For the highest
considered magnetic field ($B\sim$ 14 T), some antibonding orbitals are
still occupied, yielding $I_z=17$.
These results are a consequence of the evolution with $d$ of the energy
difference between bonding and antibonding states, $\Delta_{SAS}$,
which accurately varies as a function of the inter-ring distance
according to the law $\Delta_{SAS}= \Delta_0 e^{-d/d_0}$, already found
for QDMs.\cite{Par00}
In our case, from the difference in energy of single-electron (bonding and
antibonding) QRMs we have obtained $\Delta_0=82$ meV and $d_0=1.68$ nm,\cite{Mal06}
values which have turned out to be unaffected by the applied magnetic field.
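To make this competition concrete, the sketch below (our illustration) evaluates $\Delta_{SAS}(d)$ with the fitted values above; the confinement energy $\hbar\omega_0\sim 5$ meV is an assumed figure, roughly one third of the 15 meV of Ref.~\onlinecite{Mal06}, consistent with the remark made below.
\begin{verbatim}
import numpy as np

# Minimal sketch: Delta_SAS(d) = Delta_0 exp(-d/d_0) with the fitted
# Delta_0 = 82 meV and d_0 = 1.68 nm; hbar*omega_0 ~ 5 meV is an
# assumed value (about one third of the 15 meV of Ref. Mal06).
Delta0, d0, hw0 = 82.0, 1.68, 5.0
for d in (2.0, 4.0, 5.0, 6.0):                  # inter-ring distances (nm)
    dsas = Delta0 * np.exp(-d / d0)
    regime = "strong" if hw0 <= dsas else "weak/intermediate"
    print(d, round(dsas, 1), regime)
\end{verbatim}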
Clearly, the value of $\hbar \omega_0$ as compared to $\Delta_{SAS}$, which allows
one to discern between the strong ($\hbar \omega_0 \lesssim \Delta_{SAS}$) and the weak
($\hbar \omega_0 \gg \Delta_{SAS}$) quantum mechanical coupling regimes,
has a crucial influence on the actual filling of bonding
and antibonding sp states at a given inter-ring distance. Indeed, increasing
$\omega_0$ while keeping constant the double well structure may favor the population
of antibonding orbitals for large enough values of $N$. \cite{Anc03} This can be
understood from the non-interacting electron model, in which the single-electron
energies are the sum of two independent terms, one arising from the $z$-localization
and characterized by the quantum number $n_z$, and another, which increases as
$\omega_0$ does, arising from the $r$-localization and depending on $l$ and the
radial quantum number $n_r$. If $N$ is large enough,
the QRMs can minimize their energy by populating antibonding states with
low values of $n_r$ and $l$ instead of going on populating bonding states
with higher quantum numbers. This explains why some antibonding states were
filled even for $d=2$ nm at $B=0$ and $N=40$ for the QRMs of Ref. \onlinecite{Mal06},
where $\omega_0$ was taken to be 15 meV, a value almost three times larger than the
one considered in the present work.\cite{Anc03}
This particular structure of the bonding and antibonding bands at high magnetic fields
may have some observable effects on the far-infrared response of QRMs. Indeed,
since the dipole operator cannot connect bonding with antibonding sp states,
for QRMs in the weak coupling limit one would expect the dipole spectrum to
display additional fragmentation in the characteristic edge modes of the ring
geometry \cite{Emp99} due to the contribution of the antibonding electron-hole
pairs (see e.g. the bottom panels of Fig. \ref{fig4}).
Figure \ref{fig5} shows the evolution with $d$ of the gs energy and the
molecular phase of QRMs made up of $N=8$ electrons and submitted to
magnetic fields of different intensities.
Notice that even moderate values of $B$ give rise to ground states with
large total angular momentum, which increases as the magnetic
field does. For this reason, we have denoted it by its actual value instead
of employing the usual notation with capital Greek letters, except for the
cases with $L_z=0$.
Similar conclusions can be drawn for all the values of the magnetic field
we have considered; on the one hand, for the studied inter-ring distances,
the energy of the molecular phases increases with $d$ due to the enhancement
of the energy of the bonding states, \cite{Pi01} which dominates over the
decrease of the Coulomb energy --for larger distances the constituent QRs are
so far apart that eventually this decrease dominates and the tendency is reversed.
On the other hand, one can see that the first phase transitions are always
found at the largest inter-ring distances since, as happens for QDMs in the
few-electron limit, they are due to the replacement of an occupied bonding sp
state by an empty antibonding one. This also explains why in most of the cases,
and especially for the highest magnetic fields, the total angular momentum of
the QRMs in the weakest coupling regime is reduced: the filled antibonding
orbitals have lower $l$'s than the replaced bonding states.
We have determined the magnetic field that gives rise to
ring molecules with fully spin-polarized gs, and show it
in Fig. \ref{fig6} as a function of $N$
for different inter-ring distances going from the strong
to the weak quantum mechanical coupling regimes.
The isospin value of each configuration is also indicated.
The number of electrons, $N= 8\times M$
with $M=1$ to 5, was chosen with the aim of
producing closed-shell structures at $B=0$ in the weak coupling limit.
One can see that the results for $d=2$ and 4 nm are very close,
with only noticeable differences for $N=32$. This can be
understood from the bottom panels of Figs. \ref{fig2} and \ref{fig3}, which
show that for rather large magnetic fields only bonding orbitals are
occupied for both ring separations.
Contrarily, from Fig. \ref{fig4} one can see that in weaker coupling regimes
the filling of antibonding states favors the full spin-polarization of the
QRMs at low $B$ intensities as compared to those needed when the rings are
closer to each other, which explains the differentiated results corresponding
to $d=5$ and 6 nm in Fig. \ref{fig6}.
When antibonding orbitals are populated, the variation of the magnetic field
yields numerous transitions between different molecular phases with different
isospin that are more complex
than those observed in vertically coupled QDs. This particular behavior
is mainly due to the periodic destabilization suffered by the lowest-$l$
occupied orbitals induced by the magnetic field, which is a direct
consequence of the Aharonov-Bohm effect and makes it rather difficult
to find a pattern among the observed evolutions for the different electronic
populations.
The spin and isospin phases as a function of the magnetic field are shown in
Fig. \ref{fig7} for $d=6$ nm corresponding to $N=8, 16,$ and 24.
It can be seen that in all cases at $B=0$ the QRMs have $I_z=2$
and $S_z=1$; when $B$ is increased, non-monotonic spin and isospin oscillations
with $\Delta I_z=\pm 1$ and $\Delta S_z=\pm 1$ and 2 appear, respectively.
Two facts, also present in QDMs, \cite{Anc03,Aus04} are worth stressing:
on the one hand, molecular phase changes from $-$($+$) to $+$($-$) ground states
--recall that, as explained in Sec. II, this sign is related to the symmetry of
the molecular configuration-- involve $\Delta I_z=+1$(-1) flips;
on the other hand, quite often the transitions
in both magnitudes take place simultaneously, except obviously when the QRMs
reach full spin-polarization, from which point on the isospin increases in
one-unit jumps until the system is made up of only bonding states.
The comparison of the isospin phases for QRMs with $d=4$ and 6 nm is
presented in Fig. \ref{fig8} for $N=32$ and 40. Clearly, the highest
values of $I_z$ appear for the smallest inter-ring distances, as
expected from the single-particle levels shown in Fig. \ref{fig3},
corresponding to $d=4$ nm and $N=40$, in which only a few antibonding
orbitals are occupied for low values of $B$. Indeed, one can see from
the bottom panels of Fig. \ref{fig8} that for this inter-ring distance
magnetic fields of about 5 T are enough to yield configurations with
the maximum isospin value $N/2$, whereas for the QRMs with $d=6$ nm
such values of $B$ still correspond to small $I_z$'s due to the large
amount of filled antibonding states.
We have also calculated the addition energies, defined by
\begin{equation}
\Delta_2(N)= E(N+1)- 2 E(N)+ E(N-1) \;\;\; ,
\end{equation}
$E(N)$ being the total energy of the $N$-electron system, for QRMs made of up to
14 electrons at different inter-ring distances, submitted to several
magnetic fields, as a function of $N$. For the sake of comparison, we have also
calculated $\Delta_2(N)$ for the corresponding single rings. The results for
$B=0, 3,$ and 6 T are shown in Figs. \ref{fig9}-\ref{fig11}, respectively, in
which the bottom panels correspond to the single ring.
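For reference, the second difference defining $\Delta_2(N)$ is straightforward to evaluate from a table of total energies; the sketch below (ours, with placeholder energies rather than LSDFT results) shows the computation.
\begin{verbatim}
import numpy as np

# Minimal sketch: addition energies Delta_2(N) = E(N+1) - 2E(N) + E(N-1)
# from a table of total energies E(1..Nmax).  The values below are
# placeholders, not LSDFT results.
E = np.array([1.0, 2.3, 4.1, 6.2, 8.8, 11.9, 15.3, 19.2])
delta2 = E[2:] - 2.0 * E[1:-1] + E[:-2]    # entry k corresponds to N = k + 2
for N, d2 in enumerate(delta2, start=2):
    print(N, round(d2, 3))
\end{verbatim}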
From Fig. \ref{fig9} one can see that at zero magnetic field the single-QR addition
spectrum presents the usual intense peaks at $N=2, 6$ and 10 with zero total
spin, and those at $N=4$ and 8 with $S_z=1$ satisfying Hund's rule.
Similar results are found for the QRMs with $d=2$ and 4 nm, indicating that
such systems behave as a single ring owing to the strong quantum mechanical
coupling corresponding to these inter-ring distances --notice that the spin
values coincide for all the configurations but that with $N=13$. This fact contrasts
with the results found for the vertical ring molecules of Ref. \onlinecite{Mal06},
where at $d=4$ nm the spectrum clearly reflected an intermediate coupling situation
due to the filling of the first antibonding orbitals. As commented before, for the
systems studied in the present paper such states are only occupied for larger
inter-ring separations (or $N$'s of the order of 30).
The spectrum corresponding to $d=6$ nm is shown in the top panel of the same
figure. One can see that, although some of the marked peaks are preserved, in
particular those at $N=2$ and 8, the ones at $N=4,6$ no longer
exist --notice that for 6 electrons the spectrum now presents a minimum
and also that a new peak is found at $N=5$.
This intricate structure can be understood from the corresponding
single-particle energy levels. Indeed, it appears that the QRMs with
$N\leq 4$ are made up of only bonding states, the first antibonding state being
filled when $N=5$.
From $N\geq 7$ on, the QRMs have always occupied both B and AB orbitals but,
however, the intermediate 6-electron configuration has again only symmetric states.
This alternate behavior evidences that 6 nm is not a separation large enough for
the QRMs to be in the weak coupling limit, but rather corresponds to an intermediate regime.
Notice also that, from the results of Ref. \onlinecite{Mal06}, in the weak coupling limit
one would expect to find clearly marked peaks at the same $N$ values as for the
single ring multiplied by two --i.e. at $N=4$, 12 and 20, indicating that the rings
are so far apart that they behave as
isolated entities. We have checked that for our QRMs to present such a spectrum, we
should consider inter-ring distances of about 10 nm.
The different spin values for $d=6$ nm as compared to those in the strong coupling
regime can also be explained from the sp levels. For example, the $2S_z=3$ assignment
of the QRM with $N=5$ is due to the above-mentioned filling of an antibonding
--spin-up with $l=0$-- orbital replacing the spin-down $|l|=1$ state occupied
for $d=2$ and 4 nm.
Analogously, the configuration with $S_z=1$ (instead of $S_z=0$) for $N=10$
can also be explained from the sp levels: in the strong coupling limit,
the QRM is formed by the spin-degenerate sp levels with $l=0,|1|$ and $|2|$, but
this closed-shell configuration is prevented by the filling of the antisymmetric
orbitals at $d=6$ nm. Finally, the reverse situation occurs at $N=8$, where
the closing of the antibonding $l=0$ and $|1|$ shells contrasts with the Hund's-rule
configurations found for the strongly coupled molecules.
Fig. \ref{fig10} shows the addition energies corresponding to the situation in which
a magnetic field of 3 T is applied to the rings.
As at $B=0$, the spectra of the single system and of the molecules with
$d=2$ and 4 nm are rather similar --notice the different energy scales,
the most remarkable difference being the salient minimum that appears for the single
QR at $N=5$.
For the above-mentioned inter-ring distances, peaks with $S_z=0$ are found at
$N=2,4,8,10$ and 12, as well as a peak at $N=6$ with $S_z=2$, although they
are not as clearly marked as at $B=0$.
It turns out that, even in the presence of a magnetic field, when the
single-particle energy levels no longer display the $\pm l$-degeneracy,
the QRMs can adopt configurations that are somewhat analogous to those
characteristic of the situation at $B=0$, namely the closed-shell
ones and those fulfilling Hund's rule. Indeed, for e.g. $d=4$ nm and $N=10$,
the ring molecule is made up of the spin-degenerate bonding states with $l=0-4$
(instead of those with $|l|=0-2$ of the $B=0$ case). Similarly, at $N=6$
the occupied orbitals are the spin-up and -down ones with $l=1$ and 2, and the
spin-up ones with $l=0$ and 3 (instead of the spin-degenerate states with
$|l|=0-1$ filled at zero magnetic field).
For larger inter-ring separations, the occupancy of the first antibonding
orbitals washes out these structures and the addition spectrum becomes
flatter and irregular.
One can also notice the different spin assignments between the single and
the coupled systems, especially for the lowest-populated configurations. In
particular, the single QRs with $N\leq 5$ turn out to be fully spin-polarized,
which can be attributed to the combined effect of the magnetic field and a
relatively strong exchange-correlation interaction characteristic of few-electron
single quantum rings. The relatively higher spin values at $d=6$ nm for $N\geq7$
are due to the filling of the antibonding states.
Finally, the addition energies for $B=6$ T are shown in Fig. \ref{fig11}. It can be
seen that in all cases the only clearly marked peak is the one at $N=2$, with the
rest of the spectra being rather flat, following the trend observed at $B=3$ T.
Nevertheless, some weak peaks are still found and can be interpreted
as in the previous cases, e.g. the one at $N=8$ for
$d=4$ nm with $2S_z=2$: the system fills the spin-up and -down
states with $l=2-4$ and the spin-up ones with $l=1$ and 5.
One can also notice that the faint peak of the 4-electron configurations
of both the single ring and the QRM with $d=2$ nm becomes a minimum
at larger inter-ring distances.
Concerning the spin, the single QRs and the QRMs with $d=2$ and 4 nm turn
out to be fully polarized for $N\leq 7,5$ and 3, respectively, whereas the
filling of the antibonding states favors the full spin-polarization of
molecules with the largest ring separation for all the considered electron
numbers.
\section{Summary}
Within the local spin-density functional theory, we have addressed the
ground state of quantum ring molecules containing up to 40 electrons,
with different inter-ring distances, and submitted to perpendicular
magnetic fields. In the strong coupling regime the energy levels and
the addition energies of the QRMs are similar to those of a single QR,
although some differences are found due to the effect of the magnetic
field, which has a tendency to wash out the clearly marked peaks
characteristic of the $B=0$ case as well as to yield flatter addition spectra.
However, even at $B\neq 0$, some peaks are still present and they can be
interpreted as at zero magnetic field.
When the ring separation is increased until the first antibonding orbitals
are occupied, the addition spectra become irregular and the ring molecules
are fully spin-polarized at relatively low magnetic fields.
The filling of such states yields isospin oscillations as a function of $B$,
increasing in one-unit jumps once the corresponding molecular configurations
reach the maximum spin value.
Despite the lack of experimental results to compare ours with, we believe
that the ones herewith presented may be helpful in the analysis of future
experiments on vertically coupled QRs concerning, e.g., the realization of
single-electron transistor (SET) measurements, where the evolution of the
chemical potential $\mu(N)$ with the magnetic field can be experimentally
identified as the variation of the position of the current peaks as a function
of the applied field, showing irregularities arising from phase transitions.
\section*{ACKNOWLEDGMENTS}
This work has been performed under grants FIS2005-01414 from DGI (Spain),
Generalitat Valenciana FPI (MR), and projects UJI-Bancaixa
P1-1B2006-03 (Spain) and 2005SGR00343 from Generalitat de Catalunya.
\section{Introduction}
Count time series is an active area of current research, with several recent review papers and books appearing on the topic \citep{fokianos2012count, davis2016handbook, weiss2018introduction, davis2021count}. Gaussian models, which are completely characterized by the series' mean and autocovariance structure, may inadequately describe count series, especially when the counts are small. This paper uses a recent advance in count modeling in \cite{jia2021count} to develop a very general count time series model with seasonal characteristics. Specifically, a transformation technique is used to convert a standardized seasonal correlated Gaussian process into a seasonal count time series. The modeling paradigm allows any marginal count distribution to be produced, has very general correlation structures that can be positive or negative, and can be fitted via likelihood methods. While our most basic setup produces strictly stationary count series (in a periodic sense), nonstationary extensions, particularly those involving covariates, are easily achieved.
With $T$ denoting the known period of the data, our objective is to model a time series $\{ X_t \}$ in time $t$ that has a count marginal distribution and periodic properties with known period $T$. A seasonal notation uses $X_{dT+\nu}$ to denote the series during the $\nu$th season of cycle $d$. Here, $\nu \in \{ 1, 2, \ldots, T \}$ is the seasonal index and $d \in \{ 0, 1, \ldots, n/T-1 \}$. We assume that there are $n$ total observations, taken at the times $1, 2, \ldots, n$. To avoid trite work with edge effects, we assume that $n/T$ is a whole number.
We seek to construct count series having the cumulative distribution function $F_\nu(x)= P[X_{dT+\nu} \leq x ]$ for each cycle $d$ --- this stipulation imposes a periodic marginal distribution on the series. In fact, our constructed series will be strictly periodically stationary: for each $k \geq 1$ and all times $t_1 < t_2 < \ldots < t_k$, the joint distribution of $(X_{t_1}, \ldots, X_{t_k})^\prime$ coincides with that of $(X_{t_1+T}, \ldots, X_{t_k+T})^\prime$. We use notations such as $\{ X_t \}$ and $\{ X_{dT+\nu} \}$ interchangeably, the latter being preferred when seasonality is emphasized.
Some previous seasonal count time series models are now reviewed. The most widely used seasonal count time series models to date develop periodic versions of discrete integer-valued autoregressions (PINAR models) --- see \cite{monteiro2010integer, santos2019periodic}, and \cite{bentarzi2020some}. For example, a first order PINAR series $\{ X_t \}$ obeys the difference equation
\[
X_{dT+\nu} = p(\nu) \circ X_{dT+\nu-1} + \epsilon_{dT+\nu}.
\]
Here, $p(\nu) \in [0,1]$ for each season $\nu$ and $\circ$ denotes the classical thinning operator: for an independent and identically distributed (IID) sequence of zero-one Bernoulli trials $\{ B_i \}_{i=1}^\infty$ and a count-valued random variable $C$ that is independent of $\{ B_i \}_{i=1}^\infty$, $p \circ C := \sum_{i=1}^C B_i$. The noises $\{ \epsilon_{dT+\nu} \}$ are periodic IID count-valued random variables having finite second moments.
The PINAR model class has drawbacks. Even in the stationary case, PINAR models cannot produce some marginal distributions. \cite{joe2016markov} quantifies the issue in the stationary case, showing that only marginal distributions in the so-called discrete self-decomposable family can be achieved. Another issue is that PINAR models must have non-negative correlations. Negatively correlated count series do arise \citep{kachour2009first, livsey2018multivariate, jia2021count}. Likelihood inference for PINAR and INAR models can be challenging; moreover, adding covariates to the models is non-trivial. See \cite{joe2019likelihood} for more in the stationary setting.
A different method for constructing seasonal count series uses a periodic renewal point processes as in \cite{fralix2012renewal} and \cite{livsey2018multivariate}. Here, a zero-one binary sequence $\{ B_t \}_{t=1}^\infty$ is constructed to be periodically stationary and $\{ B_{1,t} \}, \{ B_{2,t} \}, \ldots$ denote IID copies of $\{ B_t \}_{t=1}^\infty$. The periodic count series is constructed via the superposition
\[
X_t = \sum_{i=1}^{N_{t}} B_{i,t}.
\]
Here, $\{ N_t \}_{t=1}^\infty$ is a periodic IID sequence of count valued random variables independent of the $\{ B_{i,t} \}$. For example, to obtain a correlated sequence $\{ X_t \}$ with Poisson marginal distributions, $\{ N_t \}$ is taken as independent Poisson in $t$, with $N_{dT+\nu}$ having the mean $\lambda_{\nu} > 0$. Then it is easy to see that $X_{dT+\nu}$ is Poisson distributed with mean $\lambda_{\nu} P(B_{\nu}=1)$. \cite{fralix2012renewal, lund2016renewal}, and \cite{livsey2018multivariate} show how to produce the classical count marginal distributions (Poisson, binomial, and negative binomial) with this setup and consider $\{ B_t \}$ processes constructed by clipping Gaussian processes.
While binary-based models typically have negative correlations whenever $\{ B_t \}$ does, it can be difficult to achieve some marginal distributions. A prominent example of this is the often sought generalized Poisson marginal. Perhaps worse, likelihood methods of parameter inference appear intractable --- current parameter inference methods use Gaussian pseudo-likelihoods, which only use the series' mean and covariance structure. See \cite{davis2021count} for additional detail.
Before proceeding, a clarification needs to be made. The models constructed here posit a particular count marginal distribution for the data {\em a priori}. This differs from dynamic linear modeling goals, where count models are often built from conditional specifications. For a time-homogeneous AR(1) example, a dynamic linear model might employ the state space setup $X_t | \alpha_t \sim \mbox{Poisson} (e^{\alpha_t})$, where $\alpha_t = \beta \alpha_{t-1} + \eta_t$, $|\beta| < 1$, and $\{ \eta_t \}$ is zero mean Gaussian noise. Such a setup produces a conditional Poisson distribution, not a series with a Poisson marginal distribution. In fact, as \cite{asmussen_foss_2014} show, the marginal distribution in the above Poisson state space setup can be far from Poisson.
Additional work on periodic count series is contained in \cite{morina2011statistical, monteiro2015periodic, bentarzi2017periodic, aknouche2018periodic, santos2019theory, aknouche2018periodic}, and \cite{ouzzani2019mixture}. Most of these references take one of the above approaches. Motivated by \cite{jia2021count}, this paper presents a different approach.
The rest of this paper proceeds as follows. The next section reviews periodic time series methods, focusing on periodic autoregressive moving-average (PARMA) and seasonal autoregressive moving-average (SARMA) difference equation structures. Section 3 clarifies our model and its properties. Section 4 narrates parameter estimation methods and Section 5 studies these techniques via simulation. Section 6 analyzes a twenty year segment of weekly rainy day counts in Seattle, Washington. Section 7 concludes with comments and remarks.
\section{Periodic Time Series Background}
This section briefly reviews periodic (seasonal) time series. Our future count construction uses a series $\{ Z_t \}$, standardized to $E[ Z_t ] \equiv 0$ and $\mbox{Var}(Z_t) \equiv 1$, and having Gaussian marginal distributions. While the mean of $\{ Z_t \}$ is zero, periodic features in the autocorrelation function of $\{ Z_t \}$, which we denote by $\rho_{Z}(t,s)=\mbox{Cov}(Z_t,Z_s)$, will become paramount.
We call $\{ Z_t \}$ a PARMA($p,q$) series if it obeys the periodic ARMA($p,q$) difference equation
\begin{equation}
\label{eq:PARMA}
Z_{dT+\nu} =
\sum_{k=1}^p \phi_k(\nu) Z_{dT+\nu-k} +
\eta_{dT+\nu} + \sum_{k=1}^ q \theta_k(\nu) \eta_{dT+\nu-k}.
\end{equation}
Here, $\{ \eta_t \}$ is a zero mean white noise sequence with the periodic variance $\mbox{Var}(\eta_{dT+\nu})=\sigma^2(\nu)>0$. The autoregressive order is $p$ and the moving-average order is $q$, which are taken constant in the season $\nu$ for simplicity. The autoregressive and moving-average coefficients are $\phi_1(\nu), \ldots, \phi_p(\nu)$ and $\theta_1(\nu), \ldots, \theta_q(\nu)$, respectively, during season $\nu$. We tacitly assume that model parameters are identifiable from the covariance of the series. This may require more than the classical causality and invertibility conditions \citep{reinsel2003elements}. Gaussian PARMA solutions are strictly periodically stationary with period $T$ as long as the autoregressive polynomial does not have a root on the complex unit circle --- see \cite{lund1999modeling} for quantification. Not all PARMA parameters are free due to the restriction $\mbox{Var}(Z_t) \equiv 1$; the following example delves further into the matter.
\vspace{.12in} \noindent {\bf Example 2.1} A PAR(1) series with period $T$ obeys the recursion
\begin{equation}
\label{eq:PAR1}
Z_t = \phi(t) Z_{t-1} + \eta_t,
\end{equation}
where $\{\eta_t \}$ is zero mean white noise with $\mbox{Var}(\eta_t)= \sigma^2(t)$. The quantities $\phi(t)$ and $\sigma^2(t)$ are assumed periodic in $t$ with period $T$. This difference equation is known to have a unique (in mean square) and causal solution whenever there is a stochastic contraction over an entire cycle: $| \prod_{\nu=1}^T \phi(\nu) | < 1$ \citep{lund1999modeling}.
To impose $\mbox{Var}(Z_t) \equiv 1$, take a variance on both sides of (\ref{eq:PAR1}) and set $\mbox{Var}(Z_t) = \mbox{Var}(Z_{t-1})=1$ to infer that $\sigma^2(t) = 1 - \phi^2(t)$, which we tacitly assume is positive for all $t$. This uses $\mbox{Cov}(Z_{t-1}, \eta_t)=0$, which follows by causality. The covariance structure of $\{ Z_t \}$ can now be extracted as
\[
\rho_{Z}(t,s) = \prod_{i=0}^{t-s-1} \phi(t-i)
\]
for $s < t$. $\clubsuit$.
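A minimal simulation sketch of this example follows (our illustration; the coefficients $\phi(\nu)$ are arbitrary choices satisfying the contraction condition). It draws a long unit-variance PAR(1) sample and compares a sample correlation with the product formula above.
\begin{verbatim}
import numpy as np

# Minimal sketch: unit-variance PAR(1) with sigma^2(t) = 1 - phi^2(t);
# check rho_Z(t, t-2) = phi(t) phi(t-1) at season-0 times.
rng = np.random.default_rng(0)
T, phi = 4, np.array([0.8, -0.3, 0.5, 0.6])   # |prod(phi)| < 1
n, burn = 200_000, 500                        # burn divisible by T

z = np.zeros(n + burn)
for t in range(1, n + burn):
    p = phi[t % T]
    z[t] = p * z[t - 1] + np.sqrt(1.0 - p**2) * rng.standard_normal()
z = z[burn:]                                  # index i has season i % T

t_idx = np.arange(T, n, T)                    # times in season 0
model = phi[0] * phi[(0 - 1) % T]             # phi(t) phi(t-1)
sample = np.mean(z[t_idx] * z[t_idx - 2])     # Var(Z_t) = 1
print(round(model, 3), round(sample, 3))
\end{verbatim}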
Another class of periodic models in use today are the SARMA series. SARMA series are actually time-stationary models, but have comparatively large autocorrelations at lags that are multiples of the period $T$. The most basic SARMA($p,q$) series $\{ Z_t \}$ obeys a difference equation driven at lags that are multiples of the period $T$:
\begin{equation}
\label{eq:SARMA}
Z_t =
\sum_{k=1}^p \phi_k Z_{t-kT} +
\eta_t + \sum_{k=1}^q \theta_k \eta_{t-kT},
\end{equation}
where $\{ \eta_t \}$ is zero mean independent noise with a constant variance. In this setup, $\rho_Z(t,s)=0$ unless $t-s$ is a whole multiple of the period $T$. As such, many authors allow $\{ \eta_t \}$ to have additional correlation, specifically a zero mean ARMA($p^*,q^*$) series. This results in a model that can have non-zero correlations at any lag; however, the model is still stationary and does not have any true periodic features. Since the model is stationary, we write $\rho_Z(t,s)= \rho_Z(t-s)$.
\vspace{.12in} \noindent {\bf Example 2.2} A SAR(1) series with period $T$ and AR(1) $\{ \eta_t \}$ obeys the difference equation pair
\begin{equation}
\label{e:system}
Z_t = \phi Z_{t-T} + \eta_t; \quad
\eta_t = \alpha \eta_{t-1} + \epsilon_t,
\end{equation}
where $\{ \epsilon_t \}$ is zero mean white noise with variance $\sigma^2_\epsilon$, $|\phi|<1$, and $|\alpha|< 1$. Combining these two difference equations results in a stationary and causal AR($T+1$) model for $\{ Z_t \}$.
Imposing $\mbox{Var}(Z_t) \equiv 1$ and taking a variance in the first equation in (\ref{e:system}) gives
\[
1 = \phi^2 + \mbox{Var}(\eta_t) +
2 \phi\mbox{Cov}(Z_{t-T}, \eta_t).
\]
To proceed, use equation (\ref{e:system})'s causal solutions $\eta_t = \sum_{k=0}^\infty \alpha^k \epsilon_{t-k}$ and $Z_{t-\ell} = \sum_{m=0}^{\infty}\phi^m \eta_{t-mT-\ell}$ to get
\begin{equation}
\label{inter1}
\mbox{Cov}( \eta_t, Z_{t-\ell} ) = \sigma^2_\epsilon \frac{\alpha^{\ell}}{(1-\alpha^2)(1-\phi\alpha^T)}
\end{equation}
for any $\ell > 0$. Combining the last two equations, we see that taking
\begin{equation}
\label{varset}
\sigma_\epsilon^2 = \frac{(1-\phi^2)(1-\alpha^2)(1-\phi\alpha^T)}{1+\phi\alpha^T}
\end{equation}
yields $\mbox{Var}(Z_t) \equiv 1$.
To extract the covariance structure of $\{ Z_t \}$, multiply both sides of (\ref{e:system}) by $Z_{t-h}$ and take expectations to get the Yule-Walker type equations
\begin{eqnarray*}
\rho_Z(0) &=& \phi \rho_Z(T) + E(Z_t\eta_t)\\
&\vdots&\\
\rho_Z(T) &=& \phi\rho_Z(0) + E(Z_{t-T}\eta_t).
\end{eqnarray*}
This system can be rewritten in vector form as
\[
\begin{bmatrix}
1&0&\cdots&0&-\phi\\
0&1&\cdots&-\phi&0\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
0&-\phi&\cdots&1&0\\
-\phi&0&\cdots&0&1
\end{bmatrix}\begin{bmatrix}
\rho_Z(0)\\
\rho_Z(1)\\
\vdots\\
\rho_Z(T-1)\\
\rho_Z(T)
\end{bmatrix}=\begin{bmatrix}
E(\eta_tZ_t)\\
E(\eta_tZ_{t-1})\\
\vdots\\
E(\eta_tZ_{t-T+1})\\
E(\eta_tZ_{t-T})
\end{bmatrix}.
\]
One can show that the inverse of the matrix in the above linear system exists. From this, (\ref{inter1}), (\ref{varset}), and some very tedious algebraic manipulations, one can extract
\[
\rho_Z(h)=\frac{\alpha^h+\phi\alpha^{T-h}}
{1 + \phi\alpha^T}, \quad 0 \leq h \leq T.
\]
For the $h>T$ model correlations, multiply the first equation in (\ref{e:system}) by $Z_{t-h}$ for $h > T$ and take expectations to get the recursion $\rho_Z(h) = \phi \rho_Z(h-T) + E(\eta_t Z_{t-h})$. This can be solved with (\ref{inter1}) and (\ref{varset}) to get
\[
\rho_Z(h) = \phi^{a}\frac{\alpha^b+\phi\alpha^{T-b}}{1+\phi\alpha^T}+\frac{1-\phi^2}{1+\phi\alpha^T}\sum_{k=0}^{a-1}\phi^k\alpha^{h-Tk}, \quad h>T,
\]
where $a = \lfloor h/T \rfloor$ and $b=h-aT$. $\clubsuit$.
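The closed form above is easy to check numerically. The sketch below (our illustration, with arbitrary $\phi$, $\alpha$, and $T$) simulates the system (\ref{e:system}) with $\sigma^2_\epsilon$ taken from (\ref{varset}) and compares sample correlations with the formulas.
\begin{verbatim}
import numpy as np

# Minimal sketch: SAR(1) driven by AR(1) noise; compare sample
# autocorrelations with the closed-form rho_Z(h).
rng = np.random.default_rng(1)
phi, alpha, T = 0.6, 0.4, 4
sig2 = (1 - phi**2) * (1 - alpha**2) * (1 - phi * alpha**T) / (1 + phi * alpha**T)

n, burn = 500_000, 1000
eps = np.sqrt(sig2) * rng.standard_normal(n + burn)
eta, z = np.zeros(n + burn), np.zeros(n + burn)
for t in range(1, n + burn):
    eta[t] = alpha * eta[t - 1] + eps[t]
    z[t] = (phi * z[t - T] if t >= T else 0.0) + eta[t]
z = z[burn:]

def rho(h):                                   # closed form, any h >= 1
    a, b = divmod(h, T)
    head = phi**a * (alpha**b + phi * alpha**(T - b)) / (1 + phi * alpha**T)
    tail = (1 - phi**2) / (1 + phi * alpha**T) * sum(
        phi**k * alpha**(h - T * k) for k in range(a))
    return head + tail

for h in (1, T, T + 2):
    print(h, round(rho(h), 4), round(np.mean(z[h:] * z[:-h]), 4))
\end{verbatim}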
PARMA and SARMA methods are compared in detail in \cite{lund2011choosing}. PARMA models are usually more applicable since the immediate past of the process is typically more influential than past process lags at multiples of the period $T$. Applications in the environment \citep{vecchia1985periodic, bloomfield1994periodic, lund1995climatological, tesfaye2004identification} tend to be PARMA; SARMA structures are useful in economics \citep{franses1994multivariate, franses2004periodic, hurd2007periodically}. PARMA reviews are \cite{gardner1975characterization, lund1999modeling}, and \cite{gardner2006cyclostationarity}; statistical inference for PARMA models is studied in \cite{lund2000recursive, basawa2001large, basawa2004first, lund2006parsimonious, shao2004least}, and \cite{shao2006mixture}. SARMA inference is addressed in \cite{chatfield1973box}.
\section{Methodology}
Our methods extend the work in \cite{jia2021count} with Gaussian transformations (copulas) to the periodic setting. Let $\{ X_t \}$ denote the time series to be constructed, which takes values in the count support set $\{0, 1, 2, \ldots \}$. Our construction works with a latent Gaussian series $\{ Z_t \}$ with zero mean and a unit variance at all times. Then $X_t$ is obtained from $Z_t$ via
\begin{equation}
\label{groundzero}
X_{dT+\nu}= F_{\nu}^{-1}\left( \Phi(Z_{dT+\nu}) \right),
\end{equation}
where $\Phi(\cdot)$ is the cumulative distribution function (CDF) of the standard normal distribution and $F_\nu(\cdot)$ is the desired marginal distribution for $X_t$ during season $\nu$. Here, $F_\nu^{-1}$ is the quantile function
\begin{equation}
F_{\nu}^{-1}(u) = \inf\left\{x: F_{\nu}(x) \geq u \right \}.
\label{eq:quantile}
\end{equation}
As \cite{jia2021count} shows, this construction leaves $X_{dT+\nu}$ with the marginal distribution $F_\nu$ for every $d$ and $\nu$. This model is very flexible: any marginal distribution can be achieved for any desired season $\nu$, even continuous ones. The marginal distribution $F_\nu$ can have the same form or be different for distinct seasons $\nu$. Any marginal distribution whatsoever can be achieved; when count distributions are desired, the quantile definition in (\ref{eq:quantile}) is the version of the inverse CDF that produces the desired marginals.
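To make the construction concrete, the following sketch (our illustration; the seasonal Poisson means and PAR(1) coefficients are arbitrary) generates a short periodic count series via (\ref{groundzero}). For discrete distributions, scipy's quantile function follows the convention in (\ref{eq:quantile}).
\begin{verbatim}
import numpy as np
from scipy import stats

# Minimal sketch: periodic Poisson counts from a latent unit-variance
# PAR(1) series via X_t = F^{-1}(Phi(Z_t)) with seasonal means.
rng = np.random.default_rng(0)
T = 4
lam = np.array([2.0, 5.0, 9.0, 3.0])   # seasonal Poisson means (arbitrary)
phi = np.array([0.7, 0.5, 0.6, 0.4])   # PAR(1) coefficients (arbitrary)

n = 12
z = np.zeros(n)
z[0] = rng.standard_normal()
for t in range(1, n):
    p = phi[t % T]
    z[t] = p * z[t - 1] + np.sqrt(1 - p**2) * rng.standard_normal()

x = stats.poisson.ppf(stats.norm.cdf(z), lam[np.arange(n) % T]).astype(int)
print(x)
\end{verbatim}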
\subsection{Properties of the Model}
Toward ARMA and PARMA model order identification, if $\{ Z_t \}$ is an $m$-dependent series, then $Z_{t_1}$ and $Z_{t_2}$ are independent when $|t_1-t_2|>m$ since $\{ Z_t \}$ is Gaussian. By (\ref{groundzero}), $X_{t_1}$ and $X_{t_2}$ are also independent and $\{ X_t \}$ is also $m$-dependent. From the characterization of stationary moving averages (Proposition 3.2.1 in \cite{Brockwell_Davis_1991}) and periodic moving-averages in \cite{Shao_Lund_2004}, we see that if $\{ Z_t \}$ is a periodic moving average of order $q$, then $\{ X_t \}$ is also a periodic moving average of order $q$. Unfortunately, analogous results for autoregressions do not hold. For example, if $\{ Z_t \}$ is a periodic first order autoregression, $\{ X_t \}$ may not be a periodic autoregression of any order \citep{jia2021count}.
We now derive the covariance structure of $\{ X_t \}$ via Hermite expansions. Let $\gamma_{X}(t,r) = \mbox{Cov}(X_t, X_r)$ be the covariance of $\{ X_t \}$ at times $t$ and $r$, where $r \leq t$. Then $\gamma_X(t,r)$ can be related to $\rho_Z(t,r)$ via Hermite expansions. To do this, let $G_\nu(x)=F^{-1}_{\nu}(\Phi(x))$ and write the Hermite expansion of $G_\nu(\cdot)$ as
\begin{equation}
G_\nu(z) = g_0(\nu) +\sum_{k=1}^{\infty}g_k(\nu)H_k(z).
\end{equation}
Here, $g_k(\nu)$ is the $k$th Hermite coefficient for season $\nu$, whose calculation is described below, and $H_k(z)$ is the $k$th Hermite polynomial defined by
\begin{equation}
H_k(z) = (-1)^ke^{z^2/2}\dfrac{d^k}{dz^k}\left( e^{-z^2/2} \right).
\label{eq:Hermite}
\end{equation}
The first three Hermite polynomials are $H_0(x) \equiv 1$, $H_1(x) = x$, and $H_2(x) = x^2-1$. Higher order polynomials can be found via the recursion $H_k(x)=xH_{k-1}(x) - H_{k-1}^\prime (x)$, which follows from (\ref{eq:Hermite}).
The polynomials $H_k$ and $H_j$ are orthogonal with respect to the standard Gaussian measure if $k \neq j$: $E[ H_k(Z)H_j(Z)]= 0$ for a standard normal $Z$ unless $j=k$ (in which case $E[H_k(Z)^2]= k!$). The Hermite coefficients are computed from
\begin{equation}
\label{gks}
g_k(\nu) = \dfrac{1}{k!}\int_{-\infty}^{\infty} G_{\nu}(t)H_k(t)\phi(t)dt, \quad k=1, 2, \ldots,
\end{equation}
where $\phi(t) = \Phi^\prime(t)= e^{-t^2/2}/\sqrt{2 \pi}$ is the standard normal density.
Lemma 2.1 in \cite{han2016correlation} shows that
\begin{equation}
\label{link_correlation}
\gamma_X(t,r) = \sum_{k=1}^{\infty}k!g_k(s(t))g_k(s(r))
\rho_Z(t,r)^k,
\end{equation}
where $s(t) = t - T \lfloor (t+1)/T \rfloor$ denotes the season corresponding to time $t$. Let $\sigma_X^2(t) = \gamma_X(t,t) = \sum_{k=1}^{\infty}k!g_k^2(s(t))$ denote the variance of $X_t$. Then the ACF of $\{ X_t \}$ is
\begin{equation}
\label{eq:link}
\rho_X(t,r)=\frac{\gamma_X(t,r)}{\sigma_X(t)\sigma_X(r)}
=\sum_{k=1}^{\infty}
\frac{k!g_k(s(t))g_k(s(r))}{\sigma_X(t)\sigma_X(r)}
\rho_Z(t,r)^k=\sum_{k=1}^{\infty}\ell_k
\rho_Z(t,r)^k:=L(\rho_Z(t,r)),
\end{equation}
which is a power series in $\rho_Z(t,r)$ with $k$th coefficient
\begin{equation}
\label{eq:link_coefficient}
\ell_k :=
\frac{k!g_k(s(t))g_k(s(r))}{\sigma_X(t)\sigma_X(r)}.
\end{equation}
\cite{jia2021count} call $L(\cdot)$ a link function and $\ell_k$ a link coefficient. When $\{ Z_t \}$ is stationary and $F_\nu$ does not depend on $\nu$, they show that $L(0)=0$, $L(1)=1$, and $L(-1)=\mbox{Corr} (G(Z_0),G(-Z_0))$. It is not true that $L(-1)=-1$ in general, nor is $L(1)=1$ in the periodic case; indeed, stationary or periodically stationary count processes with arbitrarily positive or negative correlations may not exist. For example, the pair $(Z, -Z)$, where $Z$ is standard normal, has correlation $-1$, but two Poisson random variables, both having mean $\lambda$, whose correlation is $-1$, do not exist.
The model produces the most flexible correlation structures possible in a pairwise sense. Specifically, consider two distinct seasons $\nu_1$ and $\nu_2$ and suppose that $F_{\nu_1}$ and $F_{\nu_2}$ are the corresponding marginal distributions for these seasons. Then Theorems 2.1 and 2.5 in \cite{whitt1976bivariate} show that the bivariate random pair $(X_{\nu_1}, X_{\nu_2})$ having the marginal distributions $F_{\nu_1}$ and $F_{\nu_2}$, respectively, with the largest correlation has form $X_{\nu_1}= F^{-1}_{\nu_1}(U)$ and $X_{\nu_2}= F_{\nu_2}^{-1}(U)$, where $U$ is a uniform[0,1] random variable. To achieve the largest correlation, one simply takes $\{ Z_t \}$ to have unit correlation at these times; that is, take $Z_{\nu_1}= Z_{\nu_2}$. Since $\Phi(Z_{\nu_1})$ is uniform[0,1], the claim follows. For negative correlations, the same results in \cite{whitt1976bivariate} also show that the most negatively correlated pair that can be produced has the form $X_{\nu_1}=F_{\nu_1}^{-1}(U)$ and $X_{\nu_2}=F_{\nu_2}^{-1}(1-U)$. This is produced with a Gaussian series having $\mbox{Corr}(Z_{\nu_1}, Z_{\nu_2})=-1$, which is obtained by selecting $Z_{\nu_2}=-Z_{\nu_1}$. Then $\Phi(Z_{\nu_1})$ is again uniform[0,1] and $\Phi(Z_{\nu_2})=\Phi(-Z_{\nu_1})=1-\Phi(Z_{\nu_1})$, since $\Phi(-x)=1-\Phi(x)$ for all real $x$.
The previous paragraph implies that one cannot construct more general autocorrelation functions for count series than what has been constructed above --- they do not exist. Negatively correlated count series do arise \citep{kachour2009first, livsey2018multivariate, jia2021count} and can be described with this model class. In the stationary case where the marginal distribution $F_\nu$ is constant over all seasons $\nu$, a series $\{ X_t \}$ with $\mbox{Corr}(X_t,X_{t+h})=1$ for all $h$ is achieved by taking $Z_t \equiv Z$, where $Z$ is standard normal. This unit correlation property will not carry over to our periodic setting. For example, a random pair $(X_{\nu_1}, X_{\nu_2})$ having a Poisson marginal with mean $\lambda_{\nu_1}$ during season $\nu_1$ and a Poisson marginal with mean $\lambda_{\nu_2}$ during season $\nu_2$ with a unit correlation does not exist when $\lambda_{\nu_1} \ne \lambda_{\nu_2}$. This said, the model can produce any correlation structures within ``the range of achievable correlations". As such, the model class here is quite flexible.
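The Whitt bounds are simple to approximate by Monte Carlo; the sketch below (our illustration, for a Poisson(2)/Poisson(7) pair) estimates the maximal and minimal achievable correlations via the comonotone and antithetic constructions just described.
\begin{verbatim}
import numpy as np
from scipy import stats

# Minimal sketch: achievable correlation range for a Poisson(2)/Poisson(7)
# pair via X1 = F1^{-1}(U), X2 = F2^{-1}(U) (max) and F2^{-1}(1-U) (min).
rng = np.random.default_rng(0)
u = np.clip(rng.uniform(size=1_000_000), 1e-12, 1 - 1e-12)  # avoid ppf(0)
x1 = stats.poisson.ppf(u, 2.0)
print(np.corrcoef(x1, stats.poisson.ppf(u, 7.0))[0, 1])       # ~max corr
print(np.corrcoef(x1, stats.poisson.ppf(1 - u, 7.0))[0, 1])   # ~min corr
\end{verbatim}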
The amount of autocorrelation that $\{ X_t \}$ inherits from $\{ Z_t \}$ is now discussed. An implication of the result below, which establishes monotonicity of the link function by showing that its derivative is positive, is that the larger the autocorrelations are in $\{ Z_t \}$, the larger the autocorrelations are in $\{ X_t \}$. We state the result below and prove it in the Appendix.
\noindent {\bf Proposition 3.1:} {\it For a fixed $t$ and $r$, let $L(\cdot)$ denote the link function in (\ref{eq:link}). Then for $u \in (-1, 1)$, the derivative of the link is positive and has form}
\begin{equation}
\label{e:link-derivative-again}
L'(u) = \frac{
\sum_{j_1=0}^\infty \sum_{j_2=0}^\infty
e^{-\frac{1}{2(1-u^2)}\left[ \Phi^{-1}(C_{j_1}(s(t)))^2 + \Phi^{-1}(C_{j_2}(s(r)))^2 - 2 u \, \Phi^{-1}(C_{j_1}(s(t)))\, \Phi^{-1}(C_{j_2}(s(r)))\right]}}
{2\pi\, \sigma_X(t)\sigma_{X}(r) \sqrt{1-u^2}}.
\end{equation}
Here,
\begin{equation}
\label{eq:c_n}
C_{j}(\nu)=\mathbb{P}[ X_\nu \leq j]
\end{equation}
denotes the cumulative probabilities of $X_\nu$ at season $\nu$.
\subsection{Calculation and Properties of the Hermite Coefficients}
An important numerical task entails calculating $g_k(\nu)$, which only depends on $F_{\nu}(\cdot)$ by (\ref{gks}). To do this, rewrite $G_\nu(z)$ in the form
\begin{equation}
G_\nu(z)=\sum_{j=0}^{\infty}
j \mathbb{1}_
{ [C_{j-1}(\nu) \leq \Phi(z) < C_{j}(\nu) ]}
=\sum_{j=1}^{\infty}j
\mathbb{1}_{\left[ \Phi^{-1}(C_{j-1}(\nu)),\Phi^{-1}(C_j(\nu)) \right)}(z),
\end{equation}
where the convention $C_{-1}=0$ is made. We also take $\Phi^{-1}(0)= -\infty$ and $\Phi^{-1}(1)=\infty$. Then for $k \geq 1$, integration by parts yields
\begin{eqnarray*}
g_k(\nu) &=& \frac{1}{k!}\sum_{j=0}^{\infty}j\,\mathbb{E}\left[ \mathbb{1}_{\left[ \Phi^{-1}(C_{j-1}(\nu)),\Phi^{-1}(C_j(\nu)) \right)}(Z_0)H_k(Z_0) \right]\\
&=&\frac{1}{k!}\sum_{j=0}^{\infty}\frac{j}{\sqrt{2\pi}}\int_{\Phi^{-1}(C_{j-1}(\nu))}^{\Phi^{-1}(C_{j}(\nu))}H_k(z)e^{-z^2/2}dz\\
&=&\frac{1}{k!}\sum_{j=0}^{\infty}\frac{j}{\sqrt{2\pi}}\int_{\Phi^{-1}(C_{j-1}(\nu))}^{\Phi^{-1}(C_{j}(\nu))}(-1)^k\left( \frac{d^k}{dz^k}e^{-z^2/2} \right)dz\\
&=&\frac{1}{k!}\sum_{j=0}^{\infty}\frac{j}{\sqrt{2\pi}}(-1)^k\left( \frac{d^{k-1}}{dz^{k-1}}e^{-z^2/2} \right)\Bigg|_{z=\Phi^{-1}(C_{j-1}(\nu))}^{z=\Phi^{-1}(C_j(\nu))}\\
&=&\frac{1}{k!}\sum_{j=0}^{\infty}\frac{j}{\sqrt{2\pi}}(-1)e^{-z^2/2}H_{k-1}(z)\Bigg|_{z=\Phi^{-1}(C_{j-1}(\nu))}^{z=\Phi^{-1}(C_j(\nu))}.\\
\end{eqnarray*}
Simplifying this telescoping sum gives
\begin{equation}
\label{eq:g_k}
g_k(\nu) = \dfrac{1}{k!\sqrt{2\pi}}\sum_{j=0}^{\infty}
e^{-[\Phi^{-1}(C_{j}(\nu))]^2/2}H_{k-1}(\Phi^{-1}(C_{j}(\nu))).
\end{equation}
Notice that the summands in (\ref{eq:g_k}) are zero whenever $\Phi^{-1}(C_j(\nu))=\pm\infty$. Lemma 2.1 in \cite{jia2021count} shows that the expansion converges whenever $\mathbb{E}[ X_t^p ] < \infty$ for some $p>1$. This condition automatically holds for time series, which implicitly require a finite second moment. For count distributions with a finite support, $C_j(\nu)$ becomes unity for large $j$. For example, a Binomial marginal distribution with 7 trials is considered in our later application. Here, the summation can be reduced to $j \in \{ 0, 1, \ldots, 7 \}$. For count distributions on a countably infinite support, approximating (\ref{eq:g_k}) requires truncation of an infinite series. This is usually not an issue: numerically, $C_j(\nu)$ quickly converges to unity as $j \rightarrow \infty$ for light tailed distributions --- or equivalently, $e^{-\Phi^{-1}(C_{j}(\nu))^2/2}H_{k-1}(\Phi^{-1}(C_{j}(\nu)) \rightarrow 0$. In addition to (\ref{eq:g_k}), $g_k(\nu)$ can also be approximated by Gaussian quadrature; see {\it gauss.quad.prob} in the package {\it statmod} in {\it R}. However, the approximation in (\ref{eq:g_k}) is more appealing in terms of simplicity and stability \citep{jia2021count}.
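A direct implementation of (\ref{eq:g_k}) is short; the sketch below (ours, for a Poisson marginal with an arbitrary mean) truncates the sum over $j$ where $C_j(\nu)$ is numerically one and checks the variance identity $\sum_k k!\,g_k^2(\nu)=\mbox{Var}(X_\nu)$.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy import stats

# Minimal sketch: Hermite coefficients g_k for a Poisson(lam) marginal
# via the telescoped formula; check sum_k k! g_k^2 ~ Var(X) = lam.
def g_coefs(lam, K=25, jmax=60):
    Cj = stats.poisson.cdf(np.arange(jmax + 1), lam)
    zj = stats.norm.ppf(np.clip(Cj, 0.0, 1.0 - 1e-16))  # Phi^{-1}(C_j)
    w = np.exp(-zj**2 / 2.0)                             # -> 0 as C_j -> 1
    g = np.zeros(K + 1)
    for k in range(1, K + 1):
        Hm2, Hm1 = np.zeros_like(zj), np.ones_like(zj)   # builds H_{k-1}
        for m in range(1, k):                  # H_m = z H_{m-1} - (m-1) H_{m-2}
            Hm2, Hm1 = Hm1, zj * Hm1 - (m - 1) * Hm2
        g[k] = np.sum(w * Hm1) / (factorial(k) * np.sqrt(2 * np.pi))
    return g

g = g_coefs(3.0)
print(sum(factorial(k) * g[k]**2 for k in range(1, len(g))), "vs", 3.0)
\end{verbatim}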
\section{Parameter Inference}
This section develops likelihood methods of inference for the model parameters via particle filtering and sequential Monte Carlo techniques. With many count time series model classes, likelihoods are intractable \citep{davis2021count}. Accordingly, researchers have defaulted to moment and composite likelihood techniques. However, if the count distributional structure truly matters, likelihood methods should ``feel" this structure and return superior parameter estimates. Gaussian pseudo-likelihood estimates, which are based only on the mean and autocovariance of the series, are developed in \cite{jia2021count} in the stationary case. \cite{jia2021count} presents an example where Gaussian pseudo-likelihood estimates perform well and an example where they perform poorly.
For notation, let $\boldsymbol{\theta}$ contain all parameters in the $T$ marginal distributions $F_1, \ldots, F_T$ and $\boldsymbol{\eta}$ contain all parameters governing the evolution of $\{ Z_t \}$. The data $\{ x_1, x_2, \ldots, x_n \}$ denote our realization of the series.
The likelihood function $\mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\eta})$ is simply a high dimensional multivariate normal probability. To see this, use (\ref{groundzero}) to get
\begin{equation}
\label{eq:likilihood}
\boldsymbol{\mathcal{L}}( \boldsymbol{\theta},\boldsymbol{\eta})
= \mathbb{P}(X_1=x_1, \cdots, X_n = x_n)
= \mathbb{P}\left( Z_1 \in (a_1,b_1], \cdots, Z_n \in (a_n,b_n] \right)
\end{equation}
for some numbers $\{ a_i \}_{i=1}^n$ and $\{ b_i \}_{i=1}^n$ (these are clarified below but are not important here). The covariance matrix of $(Z_1, \ldots , Z_n)$ only depends on $\boldsymbol{\eta}$, not on $\boldsymbol{\theta}$. Unfortunately, evaluating a high dimensional multivariate normal probability is numerically challenging for large $n$. While methods to handle this problem exist \citep{kazianka2010copula, kazianka2013approximate, bai2014efficient}, they often contain substantial estimation bias.
An alternative approach, which is the one taken here, uses simulation methods to approximate the likelihood. General methods in this category include the quasi-Monte Carlo methods of \cite{genz2002comparison} and the prominent Geweke–Hajivassiliou–Keane (GHK) simulator of \cite{geweke1991efficient} and \cite{hajivassiliou1996simulation}. The performance of these methods, along with an additional ``data cloning" approach, are compared in \cite{han2020maximum}, where the author shows that estimators from these methods are similar, but that the GHK methods are much faster, having a numerical complexity as low as order $mn$. Here, $m$ is the pre-selected number of sample paths to be simulated (the number of particles). As we will subsequently see, the GHK simulator works quite well for large $m$.
\cite{jia2021count} propose a sequential importance sampling method that uses a modified GHK simulator. In essence, importance sampling is used to evaluate integrals by drawing samples from an alternative distribution and averaging their corresponding weights. Suppose that we seek to estimate $\int_{\mathcal{D}}f(\boldsymbol{x})d\boldsymbol{x}$. Then
\[
\int_{\mathcal{D}}f(\boldsymbol{x})d\boldsymbol{x} = \int_{\mathcal{D}}\dfrac{f(\boldsymbol{x})}
{q(\boldsymbol{x})}q(\boldsymbol{x})d\boldsymbol{x},
\]
where $f(\boldsymbol{x})/q(\boldsymbol{x})$ is called the weight and the proposed distribution $q$ is called the importance distribution. Without loss of generality, we assume that $q(\boldsymbol{x}) > 0$ whenever $\boldsymbol{x} \in \mathcal{D}$ and that $q(\boldsymbol{x})=0$ for $\boldsymbol{x} \in \mathcal{D}^c$. Then the importance sampling estimate of the integral is the law of large numbers justified average
\[
\int_{\mathcal{D}}\dfrac{f(\boldsymbol{x})}{q(\boldsymbol{x})}q(\boldsymbol{x})d\boldsymbol{x} \approx \dfrac{1}{m}\sum_{k=1}^{m}\dfrac{f(\boldsymbol{x}^{(k)})}{q(\boldsymbol{x}^{(k)})},
\]
where $\{\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(m)}\}$ are $m$ IID samples drawn from the proposed distribution $q$. With the notation $z_{1:n}=(z_1, \ldots, z_n)$, notice that the likelihood in (\ref{eq:likilihood}) has form
\begin{equation}
\label{eq:likelihood_int}
\int_{ \{ z_t \in (a_t, b_t], t=1, \ldots, n \} } \boldsymbol{\phi}_{\boldsymbol{\eta}}\left( z_{1:n}\right)dz_1 \ldots dz_n = \int_{\{z_t\in (a_t, b_t], t=1, \ldots, n \}}\dfrac{\boldsymbol{\phi}_{\boldsymbol{\eta}}\left(z_{1:n}\right)}{q(z_{1:n})}q(z_{1:n})dz_1 \ldots dz_n.
\end{equation}
Observe that $\{ a_i \}_{i=1}^n$ and $\{ b_i\}_{i=1}^n$ only depend on $\boldsymbol{\theta}$ and the data $\{x_1, x_2, \ldots, x_n\}$. Specifically,
\[
a_t = \Phi^{-1}(C_{x_t-1}(s(t))) \quad \mbox{and} \quad b_t = \Phi^{-1}(C_{x_t}(s(t))),
\]
where $C_{n}(v)$ is defined in (\ref{eq:c_n}) and $s(t)$ is the season at time $t$. Here, it is best to choose a proposed distribution $q$ such that 1) $q(z_{1:n}) > 0$ for $z_t \in (a_t, b_t]$ and $q(z_{1:n})=0$ otherwise; 2) the weight $\boldsymbol{\phi}_{\boldsymbol{\eta}} \left( z_{1:n}\right)/q(z_{1:n})$ is easy to compute; and 3) $\{ Z_t \}$ can be efficiently drawn from $q$. Our GHK simulator satisfies all three conditions.
To develop our GHK sampler further, we take advantage of the latent Gaussian structure in the PARMA or SARMA series $\{ Z_t \}$. In simple cases, $\{ Z_t \}$ may even be a Markov chain. The GHK algorithm samples $Z_t$, depending on its previous history $Z_{t-1}, \ldots, Z_1$ and $X_t$, from a truncated normal density. Specifically, let $p_{\boldsymbol{\eta}(t)} \left( z_t | z_{t-1}, \cdots, z_{1}, x_{t} \right)$ denote the truncated normal density of $Z_t$ given the history $Z_{t-1}, \ldots, Z_1$ and $X_t=x_t$. Then
\begin{equation}
\label{eq:p_density}
p_{\boldsymbol{\eta}(t)} \left( z_t | z_{t-1}, \ldots, z_{1}, x_{t} \right) = \dfrac{1}{r_t}
\left[
\dfrac{\phi(\frac{z_t-\hat{z}_t}{r_t})}{\Phi(\frac{b_t-\hat{z}_t}{r_t})-\Phi(\frac{a_t-\hat{z}_t}{r_t})}
\right], \quad a_t < z_t < b_t,
\end{equation}
where $\hat{z}_t$ and $r_t$ are the one-step-ahead mean and standard deviation of $Z_t$ conditioned on $z_1, z_2, \ldots, z_{t-1}$. Again, $a_t$ and $b_t$ only depend on $x_t$. Here, we choose the importance sampling distribution
\begin{equation}
q_{\boldsymbol{\eta}}(z_{1:n}|x_{1:n}) = p_{\boldsymbol{\eta}(1)}(z_1|x_1)
\prod_{t=2}^{n}p_{\boldsymbol{\eta}(t)}
\left( z_t | z_{t-1}, \ldots, z_{1}, x_{t} \right).
\end{equation}
Elaborating further, let $\mathcal{N}(\mu, \sigma^2; a, b)$ denote a normal random variable with mean $\mu$ and variance $\sigma^2$ that is known to lie in $(a,b]$, where $a < b$. Then $Z_1$ is first drawn from $\mathcal{N}(0, 1; a_1, b_1)$. Thereafter, $Z_2, Z_3, \ldots, Z_n$ are sequentially sampled from the distribution in (\ref{eq:p_density}). The proposed importance sampling distribution is efficient to sample, has the desired distributional support, and induces an explicit expression for the weights:
\[
\frac{\boldsymbol{\phi}_{\boldsymbol{\eta}(t)}\left( z_{1:n}\right)}{q_{\boldsymbol{\eta}(t)}(z_{1:n}|x_{1:n})}
=
\dfrac{p_{\boldsymbol{\eta}(1)}(z_1)}{p_{\boldsymbol{\eta}(1)}(z_1|x_1)} \prod_{t=2}^{n} \dfrac{ p_{\boldsymbol{\eta}(t)}\left(z_t\big|z_{t-1}, \ldots,z_{1} \right) }{p_{\boldsymbol{\eta}(t)}\left( z_t\big| z_{t-1}, \ldots, z_1, x_t \right)}.
\]
Using (\ref{eq:p_density}) gives
\[
\dfrac{ p_{\boldsymbol{\eta}}\left(z_t\big|z_{t-1},\ldots,z_{1} \right) }{p_{\boldsymbol{\eta}}\left( z_t\big| z_{t-1},\ldots,z_1,x_t \right)} = \Phi\left(\frac{b_t-\hat{z}_t}{r_t}\right)-\Phi\left(\frac{a_t-\hat{z}_t}{r_t}\right).
\]
Therefore,
\[
\frac{\boldsymbol{\phi}_{\boldsymbol{\eta}}\left( z_{1:n}\right)}{q(z_{1:n})} =
\left[
\Phi\left( b_1 \right) - \Phi\left( a_1 \right) \right] \prod_{t=2}^{n}\left[ \Phi\left( \dfrac{b_t-\hat{z}_t }{r_t} \right) - \Phi\left(\dfrac{a_t-\hat{z}_t }{r_t} \right) \right].
\]
Define the initial weight $w_1 = \Phi(b_1) - \Phi(a_1)$. We then recursively update the weights via
\[
w_t = w_{t-1}\left[\Phi\left( \frac{b_t-\hat{z}_t }{r_t} \right) - \Phi\left(\frac{a_t-\hat{z}_t }{r_t} \right)\right]
\]
at time $t$ during the sequential sampling procedure. At the end of the sampling, we obtain
\[
w_n = \frac{\boldsymbol{\phi}_{\boldsymbol{\eta}}( z_{1:n})}{q_{\boldsymbol{\eta}}(z_{1:n}|x_{1:n})}.
\]
In the classic GHK simulator, $\hat{Z}_t$ and $r_t^2$ are obtained from a Cholesky decomposition of the covariance matrix of $\{ Z_t \}$. Here, they are based on the PARMA or SARMA model for $\{ Z_t \}$.
The full sequential importance sampling procedure is summarized below.
\begin{itemize}
\item [1] Initialize the process by sampling $Z_1$ from the
$\mathcal{N}\left(0,1; \Phi^{-1}(C_{x_1-1}(s(1))), \Phi^{-1}(C_{x_1}(s(1))) \right)$
distribution. Define the weight $w_1$ by
\begin{equation}
w_1 = C_{x_{1}}(s(1)) - C_{x_{1}-1}(s(1)).
\end{equation}
\item [2] Now iterate steps 2 and 3 over $t=2, 3, \ldots, n$. Conditioned on $Z_1, \ldots, Z_{t-1}$, generate
\begin{equation}
Z_{t} \stackrel {{\cal D}} {=}
\mathcal{N}\left( \hat{Z}_{t}, r_t^2; \Phi^{-1}(C_{x_{t}-1}(s(t))), \Phi^{-1}(C_{x_{t}}(s(t))) \right).
\end{equation}
For example, in the PAR(1) model, $\hat{Z}_t = \phi(t) Z_{t-1}$ for $t \geq 2$, with the startup condition $\hat{Z}_1=0$; $r_t^2=1-\phi^2(t)$ for $t > 1$ with the startup $r_1=1$.
\item [3] Define the weight $w_t$ via
\begin{equation}
w_t = w_{t-1}~\left[ \Phi\left( \dfrac{\Phi^{-1}(C_{x_{t}}(s(t)))-\hat{Z}_t }{r_t} \right) - \Phi\left(\dfrac{\Phi^{-1}(C_{x_{t}-1}(s(t)))-\hat{Z}_t }{r_t} \right) \right]
\end{equation}
\end{itemize}
The above generates a fair draw of a single ``particle path" $\{ Z_t \}$ with the property that the $\{ X_t \}$ series generated from $\{ Z_t \}$ yields the observations $x_1, \ldots, x_n$. Repeating this process $m$ independent times gives $m$ simulated process trajectories. Let $\{ {\bf Z}^{(1)}, \ldots, {\bf Z}^{(m)} \}$ be these trajectories and denote their corresponding weights at time $n$ by $\{ w_n^{(k)} \}_{k=1}^m$.
The importance sampling estimate is given by
\[
\hat{\mathcal{L}}^{\mbox{GHK}}\left( \boldsymbol{\theta},\boldsymbol{\eta} \right)=\dfrac{1}{m}\sum_{k=1}^{m}w_{n}^{(k)}.
\]
A large $m$ provides more accurate estimation.
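To make the procedure concrete, the following sketch estimates the log of $\hat{\mathcal{L}}^{\mbox{GHK}}$ for the Poisson marginal with a PAR(1) $\{ Z_t \}$ studied in the next section. This is our own illustrative code, not the authors' implementation: the names \texttt{ghk\_loglik}, \texttt{lam}, and \texttt{phi} are ours, seasons are indexed $0,\ldots,T-1$, and log-weights replace the raw weight products for numerical stability. The fixed seed implements the common random numbers device discussed next.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, poisson

def ghk_loglik(x, T, lam, phi, m=500, seed=1):
    """Log of the estimated likelihood for counts x under a Poisson
    marginal (seasonal mean lam(nu)) and a PAR(1) latent process
    (seasonal coefficient phi(nu)).  The fixed seed supplies common
    random numbers, so the output is smooth in the parameters."""
    n = len(x)
    u = np.random.default_rng(seed).uniform(1e-12, 1 - 1e-12, (n, m))
    z = np.zeros(m)                    # previous latent values
    logw = np.zeros(m)                 # running log-weights
    for t in range(n):
        nu = t % T
        zhat = phi(nu) * z if t > 0 else np.zeros(m)      # one-step mean
        r = np.sqrt(1 - phi(nu) ** 2) if t > 0 else 1.0   # one-step sd
        a = norm.ppf(poisson.cdf(x[t] - 1, lam(nu)))      # -inf if x[t] = 0
        b = norm.ppf(poisson.cdf(x[t], lam(nu)))
        lo, hi = norm.cdf((a - zhat) / r), norm.cdf((b - zhat) / r)
        logw += np.log(np.maximum(hi - lo, 1e-300))       # weight update
        z = zhat + r * norm.ppf(lo + u[t] * (hi - lo))    # truncated draw
    c = logw.max()                     # stable log of the particle average
    return c + np.log(np.mean(np.exp(logw - c)))
\end{verbatim}
Passing \texttt{ghk\_loglik} (negated) to an off-the-shelf L-BFGS-B routine then reproduces the estimation scheme described next.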
The popular ``L-BGSF-B" gradient step and search method is used to optimize the estimated likelihood $\hat{\mathcal{L}}^{\mbox{GHK}}(\boldsymbol{\theta}, \boldsymbol{\eta})$; other optimizers may also work. However, $\hat{\mathcal{L}}^{\mbox{GHK}}(\boldsymbol{\theta},\boldsymbol{\eta})$ is ``noisy" due to the sampling. One popular fix to this smooths the estimated likelihood by generating a set of random quantities in the particle filtering through transformation and keeps them constant across the computations for different sets of parameters. This method, called common random numbers (CRNs), makes the simulated likelihood $\hat{\mathcal{L}}^{\mbox{GHK}}(\boldsymbol{\theta},\boldsymbol{\eta})$ relatively smooth in its parameters; see \cite{kleinman1999simulation} and \cite{glasserman1992some} for more on CRNs. In practice, the CRN point estimator behaves similarly to those for regular likelihoods; moreover, the Hessian-based covariance matrix, which is based on the derivative of $\hat{\mathcal{L}}^{\mbox{GHK}}(\boldsymbol{\theta}, \boldsymbol{\eta})$, behaves much better numerically when CRNs are used. As the next section demonstrates, this procedure will yield standard errors that are very realistic.
Turning to model diagnostics, the probability integral transform (PIT) is used to assess model fit. PIT methods, proposed in \cite{dawid1984present}, check the statistical consistency between probabilistic forecasts and the observations. Under the ideal scenario where the observations are drawn from a continuous predictive distribution, PIT residuals are uniformly distributed over $[0, 1]$; PIT histograms tend to be $U$-shaped when the observations are over-dispersed relative to the predictive distribution. Unfortunately, these properties do not carry over directly to discrete count data. To remedy this, \cite{czado2009predictive} propose a nonrandomized PIT residual for which uniformity still holds. Quantifying this, write the conditional cumulative distribution function of $X_t$ as
\begin{equation}
P_t(y):=\mathbb{P} \left( X_t \leq y | X_1 = x_1,\ldots,X_{t-1}=x_{t-1} \right), y \in \left\{ 0, 1, \ldots \right\} .
\end{equation}
Then the nonrandomized mean PIT residual is defined as $\bar{F}(u)= n^{-1}\sum_{t=1}^{n}F_t(u|x_t)$, where
\begin{equation}
F_t(u|y)=\left\{\begin{array}{cl}
0,&\hbox{if }u\leq P_t(y-1)\\
\dfrac{u-P_t(y-1)}{P_t(y)-P_t(y-1)},&\hbox{if }P_t(y-1)<u<P_t(y)\\
1,&\hbox{if }u\geq P_t(y)
\end{array}
\right..
\end{equation}
The quantity $P_t(y)$ can be approximated during the particle filtering algorithm; specifically,
\begin{equation}
\hat{P}_t(y) = \sum_{i=0}^{y}w_{i,t}(\hat{Z}_t),
\end{equation}
where
\[
w_{i,t}(z)=\Phi\left(\dfrac{\Phi^{-1}(C_i\left(s(t)\right))-z}{r_t} \right)-\Phi\left(\dfrac{\Phi^{-1}(C_{i-1}\left(s(t)
\right))-z}{r_t} \right).
\]
The weight $w_{i,t}(z)$ can be obtained at time $t$ during the particle filtering algorithm.
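Assuming the one-step-ahead probabilities $P_t(x_t-1)$ and $P_t(x_t)$ have been collected from the filter, a minimal sketch of the mean PIT computation (the array names are our own) is:
\begin{verbatim}
import numpy as np

def mean_pit(P_lo, P_hi, grid=np.linspace(0, 1, 11)):
    """Fbar(u) on a grid; differencing consecutive values of Fbar
    gives the heights of a 10-bin nonrandomized PIT histogram."""
    P_lo, P_hi = np.asarray(P_lo), np.asarray(P_hi)
    F = np.empty_like(grid)
    for i, u in enumerate(grid):
        # F_t(u|x_t), clipped to [0,1], exactly as in the display above
        Ft = np.clip((u - P_lo) / np.maximum(P_hi - P_lo, 1e-300), 0, 1)
        F[i] = Ft.mean()
    return F
\end{verbatim}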
\section{Simulations}
This section presents a simulation study to evaluate the performance of our estimation methods. Periodic time series models often have a large number of parameters. One way of consolidating these parameters into a parsimonious tally involves placing Fourier parametric constraints on the model parameters \citep{lund2006parsimonious, anderson2007fourier}, as is done below.
\subsection{Poisson Marginals}
Our first simulation examines the classical Poisson count distribution with the PAR(1) $\{ Z_t \}$ in Example 2.1. Here, $F_\nu$ is taken as a Poisson marginal with mean $\lambda(\nu)$, where the first-order Fourier constraint
\[
\lambda(\nu) = a_1 + a_2\cos\left( \dfrac{2\pi(\nu-a_3)}{T} \right)
\]
is imposed to consolidate the $T$ mean parameters into three; the constraint $|a_2| < a_1$ keeps $\lambda(\nu)$ positive. The periods $T=10$ and $T=50$ are studied, the latter chosen to roughly correspond to our subsequent application to weekly rainy day counts. Our $\{ Z_t \}$ process obeys
\[
Z_t = \phi(t)Z_{t-1} + \epsilon_t \sqrt{1-\phi(t)^2},
\]
with the AR coefficient $\phi(\nu)$ also being constrained by a first-order Fourier series that induces a causal model:
\begin{equation}
\label{seasonalphi}
\phi(\nu) = b_1 + b_2\cos\left( \dfrac{2\pi(\nu-b_3)}{T} \right).
\end{equation}
These specifications ensure that $\{ Z_t \}$ is a standard normal process ($E[Z_t] \equiv 0$ and $\mbox{Var}(Z_t) \equiv 1$). The parameters chosen must be legitimate in that $\lambda(\nu)$ must be positive for each $\nu$ and the PAR(1) model must be causal. A six-parameter scheme that obeys these constraints is $a_1 = 10, a_2 = 5, a_3 = 5; b_1 = 0.5, b_2 = 0.2$, and $b_3 = 5$, which is now studied.
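For reference, a series from this model can be simulated by generating the PAR(1) latent process and applying the transform $X_t = F_\nu^{-1}(\Phi(Z_t))$. The sketch below (our own illustrative code and function name) does this for the parameter scheme above.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, poisson

def simulate_poisson_par1(n, T, a=(10, 5, 5), b=(0.5, 0.2, 5), seed=0):
    rng = np.random.default_rng(seed)
    nu = np.arange(n) % T
    lam = a[0] + a[1] * np.cos(2 * np.pi * (nu - a[2]) / T)
    phi = b[0] + b[1] * np.cos(2 * np.pi * (nu - b[2]) / T)
    z = np.empty(n)
    z[0] = rng.standard_normal()       # Z_t is standard normal for all t
    for t in range(1, n):
        z[t] = phi[t] * z[t - 1] + np.sqrt(1 - phi[t] ** 2) * rng.standard_normal()
    return poisson.ppf(norm.cdf(z), lam).astype(int)  # X_t = F_nu^{-1}(Phi(Z_t))
\end{verbatim}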
For each MLE optimization, $m=500$ independent particles are used along with series lengths of $n=100$ and $n=300$. CRN techniques are used to the ensure that the likelihood is relatively smooth with respect to its parameters. This is an essential step --- see \cite{masarotto2017gaussian, han2018gckrig} for more on CRNs. Identifiability issues with the phase shift Fourier parametrizations arise since $a\cos(\pi/2 - b) = -a\cos(b - \pi/2)$; because of this, we impose $a_3, b_3 \in [0,T)$. Finally, The popular quasi-Newton method L-BFGS-B is implemented to optimize the likelihoods \citep{steenbergen2006maximum}.
Figure 1 shows boxplots of parameter estimators aggregated from $500$ independent series of various lengths and periods. The sample means of the parameter estimators are all close to their true values. When $T=50$ and $n=100$, there are only two complete cycles of data to estimate parameters from, so some extra variability is expected there. For standard errors of these parameters, Table \ref{PoissonPAR(1)} reports two values: 1) sample standard deviations of the parameter estimators over the 500 runs (denominator of 499), and 2) the average (over the 500 runs) of standard errors obtained by inverting the Hessian matrix at the maximum likelihood estimate for each run (denominator of 500). Additional simulations (not shown here) with larger sample sizes and $T=50$ show that any biases recede as $n$ increases.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm]{boxPlot_Simulation1_Analysis_try.pdf}
\caption{Box plots of parameter estimators for a Poisson marginal distribution with a PAR(1) $\{ Z_t \}$. All estimators appear approximately unbiased --- the dashed lines demarcate true parameter values.}
\end{figure}
\begin{table}[hbt!]
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\headrow
\hline
\multicolumn{9}{|c|}{Model 1} \\
n&T&&$ \hat{a_1} $&$ \hat{a_2} $&$ \hat{a_3} $&$ \hat{b_1} $&$ \hat{b_2} $&$ \hat{b_3} $\\
&&mean&9.98375 &5.00087 &5.00443 &0.49213 &0.23474 &4.98639\\
100&10&SD&0.60068 &0.63494 &0.18490 &0.07818 &0.09896 &0.90434\\
&&$ \hat{E}(I'(\theta)^2) $&0.57199 &0.60025 &0.17375 &0.07720 &0.10282 &0.90060\\ \hline
&&mean&9.98127 &5.05051 &5.01370 &0.46899 &0.25285 &5.24743\\
100&50&SD&0.58975 &0.79520 &1.14622 &0.08861 &0.11470 &3.50616\\
&&$ \hat{E}(I'(\theta)^2) $&0.59190 &0.76871 &1.11326 &0.08268 &0.11469 &4.51175\\ \hline
&&mean&9.98652 &5.01761 &5.00714 &0.49608 &0.20812 &4.98741\\
300&10&SD&0.33694 &0.36520 &0.10039 &0.04173 &0.05550 &0.48356\\
&&$ \hat{E}(I'(\theta)^2) $&0.32824 &0.35012 &0.09958 &0.04151 &0.05569 &0.45228\\ \hline
&&mean&9.97370 &4.98609 &5.00089 &0.49386 &0.20998 &4.89220\\
300&50&SD&0.33449 &0.45038 &0.62028 &0.04188 &0.05726 &2.09052\\
&&$ \hat{E}(I'(\theta)^2) $&0.34588 &0.45153 &0.65046 &0.04156 &0.05584 &2.33228\\ \hline
\end{tabular}
\caption{Standard errors for the parameter estimators for a Poisson marginal distribution with a PAR(1) $\{ Z_t \}$. The results show the sample standard deviation (SD) of the parameter estimators from 500 independent series, and the average of the 500 standard errors obtained by inverting the Hessian matrix ($\hat{E}(I^\prime(\theta)^2)$) at the maximum likelihood estimate over these same runs.}
\label{PoissonPAR(1)}
\end{threeparttable}
\end{table}
We next consider the same Poisson marginal case, but now change $\{ Z_t \}$ to the SAR(1) series in Example 2.2. The $a_1$, $a_2$, and $a_3$ chosen for this simulation are the same as above. The SAR(1) parameters chosen are $\phi=0.5$ and $\alpha=0.3$. Figure 2 shows boxplots of the parameter estimators akin to those in Figure 1. The overall performance is again very good --- interpretations of the results are similar to those for the PAR(1) model above. Table \ref{PoissonSAR(1)} shows our two types of standard errors and again reveals nice agreement.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm]{boxPlot_Simulation2_Analysis_try.pdf}
\caption{Box plots of parameter estimators for the Poisson marginal distribution with a SAR(1) $\{ Z_t\}$. All estimators appear approximately unbiased --- the dashed lines demarcate true parameter values.}
\end{figure}
\begin{table}[hbt!]
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\headrow
\hline
\multicolumn{8}{|c|}{Model 2} \\
n&T&&$ \hat{a_1} $&$ \hat{a_2} $&$ \hat{a_3} $&$ \hat{\phi} $&$ \hat{\alpha} $\\
&&mean&9.94650 &5.07987 &4.99206 &0.47876 &0.28482\\
100&10&SD&0.66096 &0.87087 &0.26175 &0.08455 &0.10242\\
&&$ \hat{E}(I'(\theta)^2) $&0.65459 &0.80021 &0.25874 &0.08242 &0.09947\\ \hline
&&mean&9.96430 &5.07165 &4.99623 &0.49003 &0.27121\\
100&50&SD&0.51019 &0.71742 &1.16749 &0.11527 &0.09630\\
&&$ \hat{E}(I'(\theta)^2) $&0.50809 &0.68786 &1.10774 &0.10345 &0.09874\\ \hline
&&mean&9.99120 &5.06420 &4.99278 &0.49535 &0.29260\\
300&10&SD&0.41347 &0.50127 &0.15764 &0.04399 &0.05155\\
&&$ \hat{E}(I'(\theta)^2) $&0.40803 &0.50201 &0.15827 &0.04269 &0.05426\\ \hline
&&mean&9.99133 &5.01493 &5.01603 &0.49874 &0.29189\\
300&50&SD&0.37005 &0.49076 &0.78444 &0.04674 &0.05392\\
&&$ \hat{E}(I'(\theta)^2) $&0.36755 &0.49687 &0.79666 &0.04543 &0.05403\\ \hline
\end{tabular}
\caption{Standard errors for the parameter estimators for the Poisson marginal distribution with a SAR(1) $\{ Z_t \}$. The results show the sample standard deviation (SD) of the parameter estimators from 500 independent series, and the average of the 500 standard errors obtained by inverting the Hessian matrix ($\hat{E}(I^\prime(\theta)^2)$) at the maximum likelihood estimate over these same runs.}
\label{PoissonSAR(1)}
\end{threeparttable}
\end{table}
\subsection{A Markov Chain Induced Marginal Distribution}
Another marginal distribution that we consider is derived from a two-state Markov chain (TSMC) model. This distribution will fit our weekly rainy day counts well in the next section. Consider a Markov transition matrix ${\bf Q}$ on two states with form
\[
{\bf Q}=
\left[ \begin{array}{cc}
\alpha & 1- \alpha \\
1-\beta & \beta \\
\end{array}
\right] .
\]
Here, $\alpha \in (0,1)$ is interpreted as the probability that day $t+1$ is dry given that day $t$ is dry; analogously, $\beta \in (0,1)$ is the probability that day $t+1$ is rainy given that day $t$ is rainy. Let $\{ M_t \}_{t=0}^7$ be a Markov chain with these transition probabilities. The marginal distribution that we consider for $\{ X_t \}$ has the form
\[
\mathbb{P}( X_t =k ) = \mathbb{P}_{M_0} \left( \sum_{d=1}^7 M_d = k \right), \quad k \in \{ 0, 1, 2, 3, 4, 5, 6, 7 \},
\]
where $M_0 \in \{ 0, 1 \}$. Here, $M_0=0$ signifies that the day before the week started was dry and $M_0=1$ signifies that the day before the week started was rainy. This marginal distribution, while difficult to derive in explicit form, allows for dependence in the day-to-day rain values, improving on a Binomial model with seven trials that models successive days as independent.
It is not easy to derive an explicit form for this marginal distribution; however, it can be built up numerically by letting the number of days in the week be a variable $L$ and recursing on it:
\begin{eqnarray}
\mathbb{P}_0 \left( \sum_{t=1}^L M_t = k \right) &=&
(1-\alpha) \mathbb{P}_1 \left( \sum_{t=1}^{L-1} M_t = k-1 \right) +
\alpha \mathbb{P}_0 \left( \sum_{t=1}^{L-1} M_t = k \right); \\
\mathbb{P}_1 \left( \sum_{t=1}^L M_t = k \right) &=&
\beta \mathbb{P}_1 \left( \sum_{t=1}^{L-1} M_t = k-1 \right) +
(1-\beta) \mathbb{P}_0 \left( \sum_{t=1}^{L-1} M_t = k \right).
\end{eqnarray}
These recursions start with the probabilities for a one-day week: $\mathbb{P}_0(M_1=1)=1-\alpha$; $\mathbb{P}_0(M_1=0)=\alpha$; $\mathbb{P}_1(M_1=1)=\beta$, and $\mathbb{P}_1(M_1=0)=1-\beta$. We take the initial state of the chain to be random with the stationary distribution
\[
\mathbb{P}(M_0=0)= \frac{1-\beta}{2-\alpha-\beta}; \quad \mathbb{P}(M_0=1)=\frac{1-\alpha}{2-\alpha-\beta}.
\]
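The recursion is straightforward to implement. The sketch below (our own illustrative code) returns the pmf of the weekly count for fixed $\alpha$ and $\beta$, starting the chain from its stationary distribution.
\begin{verbatim}
import numpy as np

def tsmc_pmf(alpha, beta, L=7):
    p0 = np.zeros(L + 1); p1 = np.zeros(L + 1)
    p0[0], p0[1] = alpha, 1 - alpha        # one-day week, M_0 = 0 (dry)
    p1[0], p1[1] = 1 - beta, beta          # one-day week, M_0 = 1 (rainy)
    for length in range(2, L + 1):         # grow the week one day at a time
        q0 = np.zeros(L + 1); q1 = np.zeros(L + 1)
        for k in range(length + 1):
            prev1 = p1[k - 1] if k > 0 else 0.0
            q0[k] = (1 - alpha) * prev1 + alpha * p0[k]
            q1[k] = beta * prev1 + (1 - beta) * p0[k]
        p0, p1 = q0, q1
    pi0 = (1 - beta) / (2 - alpha - beta)  # stationary P(M_0 = 0)
    return pi0 * p0 + (1 - pi0) * p1
\end{verbatim}
With $\alpha=\beta=1/2$, successive days are independent and \texttt{tsmc\_pmf} reduces to the Binomial$(7,1/2)$ pmf, a useful check.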
To allow for periodicities in the above TSMC structure, we again parametrize $\alpha$ and $\beta$ as short Fourier series:
\[
\alpha(\nu)=a_1 + a_2\cos\left( \dfrac{2\pi(\nu-a_3)}{T} \right); \quad
\beta(\nu)=b_1 + b_2\cos\left( \dfrac{2\pi(\nu-b_3)}{T} \right).
\]
Our first TSMC simulation considers the PAR(1) $\{ Z_t \}$ in (\ref{seasonalphi}), with the Fourier coefficients there relabeled $c_1, c_2, c_3$ to avoid clashing with the $\beta(\nu)$ parameters. This is a nine-parameter model. The parameter values considered are $a_1 = 0.4, a_2 = 0.2, a_3 = 5; b_1 = 0.5, b_2 = 0.3, b_3 = 5; c_1 = 0.2, c_2 = 0.1$, and $c_3 = 5$, which induce a causal $\{ Z_t \}$ and legitimate Markov chain transitions (all transition probabilities lie in $[0,1]$). Figure \ref{fig:TSMC+PAR(1)} shows boxplots of the estimated parameters over 500 independent series of various lengths and periods. Table \ref{tab:TSMC+PAR(1)} shows standard errors computed from the two methods previously described. For the most part, the results are satisfying. Some of the phase-shift parameters' ``Hessian inverted" standard errors are larger than the corresponding sample standard deviations. A phase-shift parameter is the argument where its associated cosine wave is maximal and lies in $[0,T)$. Because of this larger support set, such parameters naturally have more variability than, say, parameters supported in $(-1,1)$. Also, when $n=100$ and $T=50$, there are only two complete cycles from which to estimate the location of this maximum --- this is statistically difficult. Additional simulations (not reported) show that these discrepancies recede as the sample size gets larger.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm]{boxPlot_Simulation3_Analysis_try.pdf}
\caption{Box plots of parameter estimators for the TSMC marginal distribution with a PAR(1) $\{ Z_t \}$. All estimators appear approximately unbiased --- the dashed lines demarcate true parameter values.}
\label{fig:TSMC+PAR(1)}
\end{figure}
\begin{table}[hbt!]
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\headrow
\hline
\multicolumn{12}{|c|}{Model 3} \\
n&T&&$ \hat{a_1} $&$ \hat{a_2} $&$ \hat{a_3} $&$ \hat{b_1} $&$ \hat{b_2} $&$ \hat{b_3} $&$ \hat{c_1} $&$ \hat{c_2} $&$ \hat{c_3} $\\
&&mean&0.389 &0.205 &4.997 &0.490 &0.307 &4.985 &0.199 &0.105 &4.963\\
100&10&SD&0.041 &0.060 &0.472 &0.036 &0.048 &0.295 &0.076 &0.134 &1.221\\
&&$ \hat{E}(I'(\theta)^2) $&0.043 &0.059 &0.550 &0.035 &0.045 &0.280 &0.108 &0.178 &2.275\\ \hline
&&mean&0.391 &0.199 &5.009 &0.490 &0.302 &5.109 &0.192 &0.072 &5.994\\
100&50&SD&0.040 &0.057 &1.876 &0.032 &0.050 &1.137 &0.072 &0.153 &4.153\\
&&$ \hat{E}(I'(\theta)^2) $&0.044 &0.065 &3.210& 0.036& 0.049& 1.603& 0.112& 0.195& 14.718\\ \hline
&&mean&0.394& 0.200& 5.013& 0.496& 0.301& 5.002& 0.194& 0.114& 4.969\\
300&10&SD&0.024& 0.033& 0.306& 0.021& 0.026& 0.159& 0.052& 0.084& 1.095\\
&&$ \hat{E}(I'(\theta)^2) $&0.025& 0.033& 0.291& 0.020& 0.026& 0.159& 0.060& 0.087& 1.376\\ \hline
&&mean&0.395& 0.201& 4.989& 0.495& 0.304& 5.017& 0.191& 0.108& 5.989\\
300&50&SD&0.024& 0.035& 1.269& 0.020& 0.026& 0.733& 0.053 &0.085& 4.130\\
&&$ \hat{E}(I'(\theta)^2) $&0.025& 0.034& 1.506& 0.020& 0.027& 0.823& 0.060& 0.097& 8.054\\ \hline
\end{tabular}
\caption{Standard errors for the parameter estimators for the TSMC marginal distribution with a PAR(1) $\{ Z_t \}$. The results show the sample standard deviation (SD) of the parameter estimators from 500 independent series, and the average of the 500 standard errors obtained by inverting the Hessian matrix ($\hat{E}(I^\prime(\theta)^2)$) at the maximum likelihood estimate over these same runs.}
\label{tab:TSMC+PAR(1)}
\end{threeparttable}
\end{table}
Finally, we consider the TSMC marginal distribution with a SAR(1) $\{ Z_t \}$. Figure \ref{fig:TSMC+SAR(1)} shows boxplots of the estimated parameters over 500 independent series of various lengths and periods. Table \ref{tab:TSMC+SAR(1)} shows standard errors computed by our two methods. Again, the performance is good --- the interpretation of the results is analogous to that given before.
\begin{figure}[hbt!]
\centering
\includegraphics[width=14cm]{boxPlot_Simulation4_Analysis_try.pdf}
\caption{Box plots of parameter estimators for the TSMC marginal distribution with a SAR(1) $\{ Z_t \}$. All estimators appear approximately unbiased --- the dashed lines demarcate true parameter values.}
\label{fig:TSMC+SAR(1)}
\end{figure}
\begin{table}[hbt!]
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\headrow
\hline
\multicolumn{11}{|c|}{Model 4} \\
n&T&&$ \hat{a_1} $&$ \hat{a_2} $&$ \hat{a_3} $&$ \hat{b_1} $&$ \hat{b_2} $&$ \hat{b_3} $&$ \hat{\phi} $&$ \hat{\alpha} $\\ \hline
&&mean&0.372 &0.200 &4.995 &0.480 &0.299 &5.010 &0.447 &0.260\\
100&10&SD&0.061 &0.068 &0.605 &0.050 &0.056 &0.356 &0.107 &0.111\\
&&$ \hat{E}(I'(\theta)^2) $&0.055 &0.063 &0.624 &0.046 &0.053 &0.335 &0.102 &0.109\\ \hline
&&mean&0.378 &0.199 &4.890 &0.482 &0.299 &5.106 &0.469 &0.250\\
100&50&SD&0.049 &0.060 &2.807 &0.044 &0.048 &1.564 &0.123 &0.104\\
&&$ \hat{E}(I'(\theta)^2) $&0.049 &0.062 &3.602 &0.040 &0.050 &1.572 &0.121 &0.107\\ \hline
&&mean&0.388 &0.201 &5.006 &0.493 &0.302 &5.004 &0.477 &0.289\\
300&10&SD&0.033 &0.038 &0.317 &0.030 &0.032 &0.202 &0.056 &0.063\\
&&$ \hat{E}(I'(\theta)^2) $&0.033 &0.038 &0.335 &0.028 &0.032 &0.193 &0.054 &0.060\\ \hline
&&mean&0.390 &0.199 &5.030 &0.492 &0.302 &5.016 &0.483 &0.281\\
300&50&SD&0.033 &0.040 &1.689 &0.027 &0.031 &1.061 &0.055 &0.059\\
&&$ \hat{E}(I'(\theta)^2) $&0.032 &0.038 &1.706 &0.027 &0.032 &0.960 &0.056 &0.060\\ \hline
\end{tabular}
\caption{Standard errors for the parameter estimators for the TSMC marginal distribution with a SAR(1) $\{ Z_t \}$. The results show the sample standard deviation (SD) of the parameter estimators from 500 independent series, and the average of the 500 standard errors obtained by inverting the Hessian matrix ($\hat{E}(I^\prime(\theta)^2)$) at the maximum likelihood estimate over these same runs.}
\label{tab:TSMC+SAR(1)}
\end{threeparttable}
\end{table}
\section{Application}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{Plot_Data_Analysis.pdf}
\caption{Top: The Seattle weekly rainy day counts from 2016-2019 only; Middle: Weekly sample means and variance for the rainy day counts from 2000-2019; Bottom left and bottom right: Sample ACF and PACF of all observations.}
\label{fig:dataspecs}
\end{figure}
This section applies our techniques to a series of weekly rainy day counts in the Seattle, Washington area recorded from 01-Jan-2000 to 31-Dec-2019. The data were collected at the Seattle Tacoma airport weather station and are available at \url{http://www.ncdc.noaa.gov}. Here, any day receiving a non-zero amount of precipitation is counted as a rainy day. As such, $X_t \in \{0, 1, 2, 3, 4, 5, 6 , 7 \}$ for each $t$. For convenience, we only analyze the first 364 days in a year, inducing a period of $T=52$ in the series. Any inaccuracies incurred by neglecting the one or two remaining days each year are minimal. Figure \ref{fig:dataspecs} summarizes our data. The top plot graphs the weekly rainy day counts from the last four years of the series only (for visual clarity), from the first week in 2016 to the 52nd week in 2019. One sees a clear seasonal cycle, with summer weeks experiencing significantly less rain than winter weeks. The middle plot in the figure displays the sample mean and variance of the weekly counts over the entire 20-year data period, aggregated by week of year. For example, the mean and variance for the first week of January are the sample mean and variance (denominator of 19) over the 20 first weeks occurring from 2000 to 2019. The sample mean and variance have roughly sinusoidal structures and are minimal during the summer months. The bottom plots in the figure show sample autocorrelations (ACF) and partial autocorrelations (PACF) of the series. The pattern in the ACF is indicative of a periodic mean in the series that has not been removed.
Several marginal distributions for this series merit exploration. The binomial distribution with seven trials is a classic structure for such data. However, this distribution is underdispersed (variance is smaller than the mean), which does not jibe with the data patterns seen in the middle plot of Figure \ref{fig:dataspecs}. Another distribution considered is the TSMC distribution of the last section. This distribution can be overdispersed, and as we will subsequently see, fits our series quite well. A final marginal distribution considered is the generalized Poisson marginal truncated to the support set $\{ 0, 1, 2, 3, 4, 5, 6, 7 \}$. For clarity, the generalized Poisson marginal we use has distribution
\begin{eqnarray*}
\mathbb{P}(Y=k)&=&\frac{e^{-(\lambda+\eta k)} \lambda(\lambda+\eta k)^{k-1}}{k!}, \quad k=0,1,\ldots; \\
\mathbb{E}(Y)&=&\mu=\frac{\lambda}{1-\eta}; \\
\mbox{Var}(Y)&=&\sigma^2=\frac{\lambda}{(1-\eta)^3}
\end{eqnarray*}
for a count variable $Y$, with $\lambda>0$ and $\eta \in [0,1)$. When $\eta=0$, $Y$ is Poisson($\lambda)$ and is equi-dispersed. First-order Fourier cosine constraints are placed on the mean and variance pair $(\mu(\nu), \sigma^2(\nu))$ and then mapped back to the parameter pair $(\lambda(\nu), \eta(\nu))$.
For structures of $\{ Z_t \}$, we consider PAR(1), AR(1), and SAR(1) models (see Section 2). The PAR(1) structure uses the first order Fourier cosine consolidation in (\ref{seasonalphi}) for $\{ Z_t \}$. The AR(1) $\{ Z_t \}$ is simply a standard AR(1) series with a unit variance. The SAR(1) form for $\{ Z_t \}$ is the two parameter model in Example 2.2. For parameters in the marginal distributions, the success probabilities are
\[
p(\nu)=a_1 + a_2\cos\left( \frac{2\pi(a_3-\nu)}{T} \right)
\]
in the binomial fits;
\[
\alpha(\nu)=a_1 + a_2\cos\left( \frac{2\pi(a_3-\nu)}{T} \right), \quad
\beta(\nu)=b_1 + b_2\cos\left( \frac{2\pi(b_3-\nu)}{T} \right),
\]
for the TSMC fits, and
\begin{eqnarray*}
\mu(\nu) = a_1 + a_2\cos\left( \frac{2\pi(a_3-\nu)}{T} \right),&& \quad \sigma^2(\nu)=b_1 + b_2\cos\left( \frac{2\pi(b_3-\nu)}{T} \right), \\
\eta(\nu) = 1 - \sqrt{\frac{\mu(\nu)}{\sigma^2(\nu)}}, && \quad \lambda(\nu) = \mu(\nu)\sqrt{\frac{\mu(\nu)}{\sigma^2(\nu)}}
\end{eqnarray*}
in the truncated generalized Poisson fit.
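For concreteness, the generalized Poisson pmf, the moment-to-parameter mapping, and one natural truncation (renormalizing the pmf over $\{0,\ldots,7\}$; the precise truncation mechanics are not spelled out above, so this detail is our assumption) can be coded as follows.
\begin{verbatim}
import numpy as np
from math import exp, factorial

def gp_pmf(k, lam, eta):
    # generalized Poisson pmf; reduces to Poisson(lam) when eta = 0
    return exp(-(lam + eta * k)) * lam * (lam + eta * k) ** (k - 1) / factorial(k)

def moments_to_params(mu, sig2):
    # invert mu = lam/(1-eta) and sig2 = lam/(1-eta)^3; needs sig2 >= mu
    eta = 1.0 - np.sqrt(mu / sig2)
    lam = mu * np.sqrt(mu / sig2)
    return lam, eta

def truncated_gp_pmf(lam, eta, support=range(8)):
    p = np.array([gp_pmf(k, lam, eta) for k in support])
    return p / p.sum()                 # renormalize on {0,...,7}
\end{verbatim}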
Table \ref{Tab:AICBIC} displays AIC and BIC scores for the various fitted $\{ Z_t \}$ structures and marginal distributions. The best marginal distribution is the TSMC; the truncated generalized Poisson marginal distribution is a close second. The generalized Poisson is known to be a very flexible count distribution \citep{ver2007quasi} that fits many observed series well. Of note is that an AR(1) latent $\{ Z_t \}$ is preferred to either a PAR(1) or SAR(1) structure. This does not mean that the final fitted model is non-periodic; indeed, the parameters in the marginal distribution $F_\nu$ depend heavily on the week of year $\nu$. However, the seasonality in the PAR(1) and SAR(1) $\{ Z_t \}$ does not make an appreciable difference --- a stationary AR(1) $\{ Z_t \}$ is sufficient.
\begin{table}[h!]
\centering
\begin{threeparttable}
\begin{tabular}{cccccc}
\headrow
Marginal Distribution & Criterion & WN &AR(1) & PAR(1) & SAR(1)\\
Binomial&AIC&4278.628&{\bf 4227.775}&4229.708&4229.458 \\
&BIC&4293.469&{\bf 4247.563}&4259.390&4254.193\\
Two State Markov Chain (TSMC)&AIC&3888.114&{\bf 3853.589}&3856.244&3855.624\\
&BIC&3917.796&{\bf 3888.218}&3900.766&3895.200\\
Truncated Overdispersed Poisson&AIC&3885.032&{\bf 3853.840}&3856.995&3855.672\\
&BIC&3914.714&{\bf 3888.469}&3901.518&3895.248\\
\hline
\end{tabular}
\end{threeparttable}
\caption{AIC and BIC statistics for models with binomial, TSMC, and truncated Poisson marginal distributions. The lowest AIC/BIC for each marginal distribution are bolded. The TSMC marginal distribution with an AR(1) $\{ Z_t \}$ is judged optimal.}
\label{Tab:AICBIC}
\end{table}
Table \ref{Paraest} shows the estimated parameters in the fitted model. Based on asymptotic normality, which is expected but has not been proven, all parameters except $b_3$ appear to be significantly non-zero. A zero $b_3$ is plausible: $b_3=0$ implies that the maximal variability of the weekly rainy day counts occurs at the beginning of the calendar year, which roughly corresponds to the meteorological height of winter. Standard errors were estimated by inverting the Fisher information matrix at the likelihood estimates. For completeness, Table \ref{paraest2} shows parameter estimates and standard errors for the truncated generalized Poisson fit. The interpretation of these results is similar to that above.
\begin{table}[h!]
\caption{Estimates and standard errors of the TSMC AR(1) model. The L-BFGS-B algorithm was used to optimize particle filtering likelihoods.}
\centering
\begin{threeparttable}
\begin{tabular}{cccccccc}
\headrow
Parameters&$ a_1 $&$ a_2 $&$ a_3 $&$ b_1 $&$ b_2 $&$ b_3 $&$ \phi $\\ \hline
Point Estimates&0.737 &-0.163 & 4.687 & 0.648& 0.132 & 1.660 & 0.198\\
Standard Error&0.011& 0.014& 0.674 &0.013 &0.018 &1.039& 0.032\\
\hline
\end{tabular}
\end{threeparttable}
\label{Paraest}
\end{table}
\begin{table}[h!]
\caption{Estimates and standard errors of the generalized Poisson-AR(1) fit. The L-BFGS-B algorithm is used to optimize particle filtering likelihoods.}
\centering
\begin{threeparttable}
\begin{tabular}{cccccccc}
\headrow
Parameters&$ a_1 $&$ a_2 $&$ a_3 $&$ b_1 $&$ b_2 $&$ b_3 $&$ \phi $\\ \hline
Point Estimates&3.999 &2.975 &3.977 &8.155 &6.926 &3.955& 0.195\\
Standard Error&0.263 &0.298 &0.448& 1.586& 1.685 &0.926& 0.033\\
\hline
\end{tabular}
\end{threeparttable}
\label{paraest2}
\end{table}
Moving to a residual analysis, Figure \ref{resid1} shows diagnostics for the TSMC marginal with an AR(1) $\{ Z_t \}$. The top left plot shows the raw residuals and the bottom left and right plots show sample ACFs and PACFs of these residuals. No major issues are seen. The top right graph shows a QQ plot of these residuals for a standard normal distribution. Some departure from normality is noted in the two tails of the plot.
Finally, Figure \ref{resid2} shows PIT histograms of the residuals for the binomial and TSMC fits with AR(1) $\{ Z_t \}$. There are obvious departures from uniformity for the binomial marginal --- this marginal distribution does not seem to describe the data well. The histogram for the two-state Markov chain marginal is roughly uniform; hence, we have a good fitting model and the slight lack of normality in $\{ Z_t \}$ in Figure \ref{resid1} does not appear overly problematic.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{Plot_res_Analysis.pdf}
\caption{Top left: TSMC + AR(1) residuals. Top right: A QQ plot of these residuals. Bottom left: The sample ACF of these residuals. Bottom right: The sample PACF of these residuals.}
\label{resid1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{Plot_ModelCheck_Analysis.pdf}
\caption{Left: a binomial marginal PIT histogram. Right: a TSMC marginal PIT histogram. The binomial marginal does not fit as well as the TSMC marginal.}
\label{resid2}
\end{figure}
\section{Concluding Comments}
The above paper constructs a very general model for seasonal count time series through a latent Gaussian process transformation. Any marginal distribution can be achieved, and the correlation structure is as flexible as possible, including negative correlations. Estimation of the model parameters through likelihood techniques can be conducted via particle filtering. The methods were shown to work well on simulated data and capably handled a periodic count sequence supported on $\{ 0, 1, 2, 3, 4, 5, 6, 7 \}$. There, we found that the latent Gaussian process did not need to be periodic, but the marginal distribution of the data contained periodicities. The fitted model was very parsimonious, containing only seven parameters.
Extensions of the above techniques to handle covariates merit exploration. For this, we suggest allowing $\boldsymbol{\theta}$ to depend on the covariates as in \cite{jia2021count}; modifying the latent Gaussian process itself to handle covariates generally causes trouble. Multivariate versions of the methods are also worth studying.
\section{Appendix}
{\bf Proof of Proposition 3.1:} We follow reasoning similar to \cite{pipiras2017long} and \cite{jia2021count}. We begin with a generalization of the Price Theorem (Theorem 5.8.5 in \cite{pipiras2017long}), stated as follows and easily proven. Let $G_{\nu_1}$ and $G_{\nu_2}$ be two continuously differentiable functions. Then their link function has a derivative of the form
\begin{equation}
\label{e:price}
L'(u) = \frac{1}{\sqrt{\mbox{Var}(X_1) \mbox{Var}(X_2)}} E [ G_{\nu_1}'(Z_1) G_{\nu_2}'(Z_2) ] \Big|_{\mbox{\scriptsize Corr}(Z_1,Z_2)=u}.
\end{equation}
Here, $Z_1$ and $Z_2$ are a correlated Gaussian pair, each component standardized, and with $\mbox{Corr}(Z_1,Z_2)=u$.
In our application, $G_{\nu_1}$ and $G_{\nu_2}$ are non-negative and non-decreasing since they are cumulative distribution functions. But because our data are counts, $G_{\nu_1}$ and $G_{\nu_2}$ are step functions and not necessarily differentiable on the integers. To remedy this, we approximate $G_{\nu_1}$ and $G_{\nu_2}$ by differentiable functions and take limits in the approximation.
To do this, let $U \stackrel{\cal D}{=}{\cal N}(0,1)$. For any $\epsilon >0$ and $\ell \in \{ 1, 2\}$,
\begin{eqnarray}
G_{\epsilon,\nu_\ell}(x) & := & E [ G_{\nu_\ell}(x + \epsilon U)] = \int_{- \infty}^\infty G_{\nu_\ell}(z)
\frac{e^{-\frac{(x-z)^2}{2\epsilon^2}}}{\sqrt{2 \pi} \epsilon} dz \nonumber \\
& = & \sum_{j=0}^\infty j \int_{\Phi^{-1}(C_{j-1}(\nu_\ell))}^{\Phi^{-1}(C_j(\nu_\ell))}
\frac{e^{-\frac{(x-z)^2}{2\epsilon^2}}}{\sqrt{2 \pi} \epsilon} dz \nonumber \\
& = & \sum_{j=0}^\infty j \int_{\Phi^{-1}(C_{j-1}(\nu_\ell))-x}^{\Phi^{-1}(C_j(\nu_\ell))-x}
\frac{e^{-\frac{w^2}{2\epsilon^2}}}{\sqrt{2 \pi}\epsilon} dw.
\label{e:G-epsilon}
\end{eqnarray}
The ``kernel''
\begin{equation}
\label{addon1}
\frac{e^{-\frac{(x-z)^2}{2\epsilon^2}}}
{\sqrt{2\pi}\epsilon}
\end{equation}
acts like Dirac's delta function $\delta_{\{ x \}}(z)$ at $z=x$ as $\epsilon \downarrow 0$. Note that $G_{\epsilon,\nu_\ell}(x)$ is non-decreasing and differentiable with first derivative
\begin{equation}
\label{e:G-epsilon-derivative}
G_{\epsilon,\nu_\ell}'(x) = \frac{1}{\sqrt{2\pi}\epsilon} \sum_{j=0}^\infty j \Big[ e^{-\frac{(\Phi^{-1}(C_{j-1}(\nu_{\ell}))-x)^2}{2\epsilon^2}} - e^{-\frac{(\Phi^{-1}(C_{j}(\nu_{\ell}))-x)^2}{2\epsilon^2}} \Big] = \frac{1}{\sqrt{2\pi}\epsilon} \sum_{j=0}^\infty e^{-\frac{(\Phi^{-1}(C_{j}(\nu_{\ell}))-x)^2}{2\epsilon^2}},
\end{equation}
and define $X_{\ell}^{(\epsilon)} = G_{\epsilon,\nu_\ell}(Z_{\ell})$ for $\ell \in \{ 1, 2 \} $. Equation (\ref{e:link-derivative-again}) gives
\begin{eqnarray}
L_\epsilon'(u) & = & \frac{1}{\sqrt{\mbox{Var}(X_1^{(\epsilon)})\mbox{Var}(X_2^{(\epsilon)})}}E [G_{\epsilon,\nu_1}'(Z_1) G_{\epsilon,\nu_2}'(Z_2) ] \Big|_{\mbox{\scriptsize Corr}(Z_1,Z_2)=u} \nonumber \\
& = & \frac{1}{\sqrt{\mbox{Var}(X_1^{(\epsilon)})\mbox{Var}(X_2^{(\epsilon)})}}
\int_{-\infty}^\infty
\int_{-\infty}^\infty G_{\epsilon,\nu_1}'(Z_1) G_{\epsilon,\nu_2}'(Z_2)\frac{1}{2\pi\sqrt{1-u^2}} e^{-\frac{1}{2(1-u^2)}\big(z_1^2 + z_2^2 - 2 u z_1 z_2\big)} dz_1dz_2 \nonumber \\
& = & \frac{1}{\sqrt{\mbox{Var}(X_1^{(\epsilon)})\mbox{Var}(X_2^{(\epsilon)})}} \sum_{j_1=0}^\infty \sum_{j_2=0}^\infty \int_{-\infty}^\infty \int_{- \infty}^\infty
\frac{1}{\sqrt{2\pi}\epsilon} e^{-\frac{(\Phi^{-1}(C_{j_1}(\nu_1))-z_1)^2}{2\epsilon^2}}
\frac{1}{\sqrt{2\pi}\epsilon} e^{-\frac{(\Phi^{-1}(C_{j_2}(\nu_2))-z_2)^2}{2\epsilon^2}} \times \nonumber \\
& & \quad \quad \ \frac{1}{2\pi\sqrt{1-u^2}} e^{-\frac{1}{2(1-u^2)}\big(z_1^2 + z_2^2 - 2 u z_1 z_2\big)} dz_1dz_2.
\end{eqnarray}
Noting again that the quantity in (\ref{addon1}) acts like a Dirac's delta function $\delta_{\{x\}}(z)$, the limit as $\epsilon \downarrow 0$ should be
\begin{equation}
L'(u) = \frac{1}{\sqrt{\mbox{Var}(X_1)\mbox{Var}(X_2)}} \sum_{j_1=0}^\infty \sum_{j_2=0}^\infty \frac{1}{2\pi \sqrt{1-u^2}}
e^{-\frac{1}{2(1-u^2)}\big( \Phi^{-1}(C_{j_1}(\nu_1))^2 + \Phi^{-1}(C_{j_2}(\nu_2))^2 - 2 u \Phi^{-1}(C_{j_1}(\nu_1)) \Phi^{-1}(C_{j_2}(\nu_2))\big)},
\end{equation}
which is always non-negative. The existence and form of $L^\prime(u)$ stems from the fact that we are differentiating a power series with absolutely convergent coefficients inside its radius of convergence. That $\sum_{k=0}^\infty |\ell_k| < \infty$ follows from (\ref{eq:link_coefficient}), the Cauchy-Schwarz inequality, and the finiteness of $\sum_{k=0}^\infty k! g_k(\nu_1)^2$ and $\sum_{k=0}^\infty k!g_k(\nu_2)^2$.
We now show that $L_\epsilon'(u)$ converges to $L'(u)$. For this, we first need an expression for the Hermite coefficients of $G_{\epsilon,\nu_\ell}(\cdot)$, denoted by $g_{\epsilon,k}(\nu_\ell)$ for $\ell \in \{ 1, 2 \}$. These will be compared to the Hermite coefficients $ g_{k}(\nu_\ell) $ of $G_{\nu_\ell}$.
Taylor expanding the Hermite polynomial $H_k(x+y) = \sum_{d=0}^k {k \choose d} y^{k-d} H_d(x)$ implies
\begin{eqnarray*}
G_{\epsilon,\nu_\ell}(x) & = & E [G_{\nu_\ell}(x+\epsilon U)] =
E \left[ \sum_{k=0}^\infty g_{k}(\nu_\ell) H_k(x+\epsilon U) \right] \\
& = & E \left[ \sum_{k=0}^\infty g_{k}(\nu_\ell) \sum_{d=0}^k {k \choose d} (\epsilon U)^{k-d} H_d(x) \right] \\
& = & \sum_{d=0}^\infty H_d(x) \sum_{k=d}^\infty g_{k}(\nu_\ell) \epsilon^{k-d} {k \choose d} E[U^{k-d}].
\end{eqnarray*}
After changing summation indices and using that $E[U^p] = 0$ if $p$ is odd, and equal to $(p-1)!!$ if $p$ is even, where $k!!=1 \times 3 \times \cdots \times k$ when $k$ is odd, we get
\begin{equation}
\label{e:gk-epsilon}
g_{\epsilon,k}(\nu_\ell) = g_{k}(\nu_\ell) + \sum_{q=1}^\infty g_{k+2q}(\nu_\ell) \epsilon^{2q} {k+2q \choose k} (2q-1)!! = g_{k}(\nu_\ell) + \sum_{q=1}^\infty g_{k+2q}(\nu_\ell) \epsilon^{2q}\frac{(k+2q)!}{k!2^qq!}.
\end{equation}
Then
\begin{eqnarray}
\label{e:gk1gk2-epsilon}
\nonumber
|g_{k}(\nu_1)g_{k}(\nu_2) - g_{\epsilon,k}(\nu_1)g_{\epsilon,k}(\nu_2)| &\leq & |g_{k}(\nu_1)| \sum_{q=1}^\infty |g_{k+2q}(\nu_1)| \epsilon^{2q}\frac{(k+2q)!}{k!2^qq!} + |g_{k}(\nu_2)| \sum_{q=1}^\infty |g_{k+2q}(\nu_2)| \epsilon^{2q}\frac{(k+2q)!}{k!2^qq!} \\
&&+ \Big(\sum_{q=1}^\infty |g_{k+2q}(\nu_1)| \epsilon^{2q}\frac{(k+2q)!}{k!2^qq!}\Big) \Big(\sum_{q=1}^\infty |g_{k+2q}(\nu_2)| \epsilon^{2q}\frac{(k+2q)!}{k!2^qq!}\Big).
\end{eqnarray}
Use the Cauchy-Schwarz inequality to obtain the bounds
\[
\sum_{q=1}^\infty |g_{k+2q}(\nu_\ell)| \epsilon^{2q}\frac{(k+2q)!}{k!2^qq!} \leq
\left( \sum_{q=1}^\infty g_{k+2q}^2(\nu_\ell) (k+2q)! \right)^{1/2}
\left( \sum_{q=1}^\infty \epsilon^{4q} \frac{(k+2q)!}{(k!)^2(2^qq!)^2} \right)^{1/2}
\]
\[
\leq \frac{M_{k,\ell}}{(k!)^{1/2}}
\left( \sum_{q=1}^\infty \epsilon^{4q} \frac{(k+2q)!}{k!(2q)!} \right)^{1/2}\quad\quad \forall\ell\in\{1,2\} ,
\]
where $M_{k,\ell}$ is some finite constant that converges to zero as $k \rightarrow \infty$. Since $\mbox{\rm Var}(X_\ell) = \sum_{k=1}^\infty k! g_{k}^2(\nu_\ell)$ is finite and $(2^qq!)^2$ is of the same order as $(2q)!$, $\sum_{q=1}^\infty g_{k+2q}^2(\nu_\ell) (k+2q)! \to 0$ as $k \rightarrow \infty$. We use the fact that $\sum_{p=0}^\infty x^{p} {k+p\choose p} = (1-x)^{-k-1}$ for $|x|<1$ to obtain a bound for $\sum_{p=1}^\infty \epsilon^{2p} {k+p \choose p}$. Then (\ref{e:gk1gk2-epsilon}) gives
\begin{equation}
\label{e:gk-epsilon-gk-2}
|g_{k}(\nu_1)g_{k}(\nu_2) - g_{\epsilon,k}(\nu_1)g_{\epsilon,k}(\nu_2)| \leq \sum_{\ell=1}^{2}\frac{ M_{k,\ell}|g_{k}(\nu_\ell)|}{(k!)^{1/2}} \left[ (1-\epsilon^2)^{-k-1} -1 \right]^{1/2} + \frac{M_{k,1}M_{k,2}}{k!} [(1-\epsilon^2)^{-k-1} -1].
\end{equation}
Now take the first derivative of the link function in (\ref{eq:link}) to obtain
\[
L'(u) = \frac{1}{\sqrt{\mbox{Var}(X_1)\mbox{Var}(X_2)}} \sum_{k=1}^\infty g_{k}(\nu_1)g_{k}(\nu_2)k!ku^{k-1},
\]
where the series converges absolutely for $u \in (-1,1)$ since the ``extra'' $k$ gets dominated by $u^{k-1}$. Similarly,
\[
L_\epsilon'(u) = \frac{1}{\sqrt{\mbox{Var}(X_1)\mbox{Var}(X_2)}}\sum_{k=1}^\infty g_{\epsilon,k}(\nu_1)g_{\epsilon,k}(\nu_2)k!ku^{k-1}.
\]
The above expression agrees with Theorem 5.1.10 in \cite{pipiras2017long}. To show that the difference between $L'_{\epsilon}(u)$ and $L'(u)$ converges to zero as $\epsilon \downarrow 0$, use
\begin{eqnarray}
\nonumber
&&|L'(u) - L_\epsilon'(u)| \leq \Big| \frac{1}{\sqrt{\mbox{Var}(X_1)\mbox{Var}(X_2)}}- \frac{1}{\sqrt{\mbox{Var}(X_1^{(\epsilon)})\mbox{Var}(X_2^{(\epsilon)})}}\Big| \sum_{k=1}^\infty g_{k}(\nu_1)g_{k}(\nu_2)k! k|u|^{k-1} \\
&&+\frac{1}{\sqrt{\mbox{Var}(X_1^{(\epsilon)})\mbox{Var}(X_2^{(\epsilon)})}} \sum_{k=1}^\infty |g_{k}(\nu_1)g_{k}(\nu_2) - g_{\epsilon,k}(\nu_1) g_{\epsilon,k}(\nu_2)|k! k|u|^{k-1}.
\end{eqnarray}
From (\ref{e:gk-epsilon-gk-2}), we see that $|g_{k}(\nu_1)g_{k}(\nu_2) - g_{\epsilon,k}(\nu_1)g_{\epsilon,k}(\nu_2)| \rightarrow 0$ as $\epsilon \downarrow 0$. Hence, $\sum_{k=1}^\infty |g_{k}(\nu_1)g_{k}(\nu_2) - g_{\epsilon,k}(\nu_1) g_{\epsilon,k}(\nu_2)|k! k|u|^{k-1}$ converges to zero by the dominated convergence theorem as $\epsilon \downarrow 0$. Using (\ref{link_correlation}), we conclude that $\mbox{Var}(X_1^{(\epsilon)})\rightarrow \mbox{Var}(X_1)$ and $\mbox{Var}(X_2^{(\epsilon)}) \rightarrow \mbox{Var}(X_2)$ as $\epsilon \downarrow 0$. Therefore,
\[
\left|
\frac{1}{\sqrt{\mbox{Var}(X_1)\mbox{Var}(X_2)}}-
\frac{1}{
\sqrt{\mbox{Var}(X_1^{(\epsilon)})
\mbox{Var}(X_2^{(\epsilon)})}
}
\right|
\rightarrow 0
\quad \mbox{as} \quad \epsilon \downarrow 0
\]
follows by continuity of the function $x^{-1/2}$ away from $x=0$ (the limiting variances are tacitly assumed positive to avoid degeneracy).
The main purpose of this work is to answer, for a specific family of continuous random trees (CRTs for short), the following general question about measured metric spaces. If $m(r)$ denotes the measure assigned to the ball centered at some fixed distinguished point and with radius $r\geq 0$, is the non-decreasing function $m$ absolutely continuous with respect to the Lebesgue measure on $\intervallefo{0}{\infty}$?
When the answer is positive, the density $m'(r)$ can then be viewed as the measure of the sphere with radius $r$. When further the metric space is a continuum tree, the density $m'$ is sometimes known as the \textit{profile} of the tree.
This question has been answered by Haas \cite{H04} for the class of self-similar fragmentation trees, which notably includes Aldous' CRT. Recall that a conservative self-similar fragmentation describes the evolution of a branching particle system such that at every branching event, the sum of the masses of the children coincides with the mass of the parent, and self-similarity refers to the property that the evolution of a particle with mass $x>0$ is a scaling transformation (depending on an index $\alpha\in\mathbb{R}$) of that of a particle with unit mass. Informally, Haas and Miermont \cite{HM04} associated to a conservative self-similar fragmentation with index $\alpha<0$ a self-similar continuous random tree which is further naturally equipped with a root and a probability mass measure, and Haas \cite{H04} proved that under some very minor hypotheses, the non-decreasing function $m$ is then absolutely continuous if $\alpha>-1$, and singular if $\alpha\leq-1$.
The present work should be viewed as a generalization of \cite{H04} to self-similar \textit{growth-fragmentations}, introduced by Bertoin \cite{B17}. As the name suggests, the latter extend pure fragmentations by incorporating a growth element in the dynamics of particles, and this deeply changes the behaviour of the system. Rembart and Winkel \cite{RW16} recently constructed the CRTs which describe the genealogy of self-similar growth-fragmentations with index $\alpha<0$, whereas the so-called intrinsic area measure was introduced in \cite{BBCK16}.
The motivation of the present work is not just getting a formal extension of the results of Haas; it also stems from the connection between random surfaces and growth-fragmentations as we shall now explain informally. It was pointed out in \cite{BBCK16} and \cite{BCK15} that for certain random surfaces with a boundary, the process obtained by slicing the surface at fixed distances from the boundary and measuring the lengths of the resulting cycles yields a self-similar growth-fragmentation with negative index. One might then expect that, just as for smooth surfaces, the area $A(r)$ of the components at distance at most $r$ from the boundary can then be recovered by integrating the total cycle lengths at height $0\leq r'\leq r$; that is, that the non-decreasing function $r\mapsto A(r)$ is absolutely continuous with density given by the total cycle lengths. It turns out that this intuition is wrong in general, and it is thus natural to wonder whether nonetheless the absolute continuity of $A(\cdot)$ holds.
The law of a growth-fragmentation is determined by the index of self-similarity and the so-called cumulant function $\kappa$ (more details are given in Section \ref{Section preliminaries}).
Our main result is stated in terms of $\alpha$ and the smallest root $\omega_-$ of $\kappa$. More precisely, whilst the critical value is $-1$ for pure fragmentations, we show that the genealogical CRT of a growth-fragmentation has an absolutely continuous profile as soon as $\alpha>-\omega_-$, whereas it is singular if $\alpha\leq -\omega_-$. In particular, we shall see that for the whole family of random maps considered in \cite{BBCK16}, the function $t\mapsto A(t)$ is absolutely continuous.
The paper is organized as follows. In Section \ref{Section preliminaries}, we recall the setting of \cite{BBCK16}. This includes the definition of a growth-fragmentation and its CRT. We recall the construction of the intrinsic area measure from the branching random walk which tracks, generation by generation, the collection of particle sizes at birth. A loose description of the spinal decomposition is also given.
Section \ref{Section Regularity of the intrinsic area process} is divided into four subsections. The first one contains our main result. The second subsection is a toolbox that recalls basic properties of the major ingredients of the proof, which is given in the third subsection. A simple corollary on the number of fragments is stated in the fourth subsection.
We dwell on the absolutely continuous case in Section \ref{Section Approximation of the density} and we see that, modulo a few adjustments, the proof of Haas adapts to show that the profile can be approximated by small (or equivalently relatively large) fragments.
Finally, Section \ref{Section Hausdorff dimension} is devoted to the Hausdorff dimension of $\mathrm{d}A$ when singular. We obtain the lower bound from Frostman's Lemma, and derive the upper bound from the Hausdorff dimension of the leaves of the CRT, obtained by Rembart and Winkel \cite{RW16}.
The Appendix contains two technical lemmas, including the Feller property of the growth-fragmentation (Lemma \ref{Lemma Feller}), which is needed for the arguments of Haas to apply in Section \ref{Section Approximation of the density}.
\section{Preliminaries}\label{Section preliminaries}
\paragraph{The cell-system.}
We consider a positive self-similar Markov process $X$ with index $\alpha< 0$, in the sense that its law $\mathbb{P}_x$ started from $X_0=x>0$ is the same as that of $(xX_{tx^{\alpha}})_{t\geq 0}$ under $\mathbb{P}_1$. We assume that $X$ converges almost surely to 0. Lamperti's transformation \cite{L72} enables us to view $X$ as a time-changed version of $\exp(\xi)$, where $\xi$ is a L\'evy process. As a consequence, the lifetime of $X$, i.e. the first hitting time of the absorbing state 0, is given by an exponential functional of $\xi$ (we shall provide more details later on).
We follow Bertoin's construction \cite{B17} of the cell-system driven by $X$: let $\chi_{\emptyset}:=(\chi_{\emptyset}(t))_{t\geq 0}$ have law $\mathbb{P}_1$. The process $\chi_{\emptyset}$ is viewed as the size of the Eve cell $\emptyset$, evolving in time. Its birth-time $b_{\emptyset}$ is taken to be 0. Let $(b_i)_{i\geq 1}$ be an exhausting sequence of its negative jump times and let $(\Delta_i)_{i\geq 1}$ be the corresponding sequence of the absolute values of the sizes of its negative jumps (the existence is ensured by the fact that $\chi_{\emptyset}$ converges to 0 almost surely).
Each negative jump is interpreted as the birth of a new cell; that is, at time $b_i$ a cell labeled $i$ is born and evolves independently of the other cells, with law $\mathbb{P}_{\Delta_i}$. The other generations are defined recursively in the same manner, using the Ulam-Harris-Neveu notation, that is every cell is labeled by some $u\in\mathbb{U}:=\bigcup_{n\geq 0}\mathbb{N}^n$. We denote by $|u|$ its generation and by $u(k)$ its ancestor at generation $k\leq|u|$ (by convention, $|\emptyset|=0$).
For $x>0$, we denote $\mathcal{P}_x$ the law of the cell-system starting from a single cell of initial size $x$. Similarly, if $\underline{x}:=(x_1,x_2,\cdots)$ is a non-increasing null sequence, $\mathcal{P}_{\underline{x}}$ is the distribution of a cell-system starting from independent cells of sizes $x_1,x_2,\cdots$.
\paragraph{The branching random walk.}
Define the collections of logarithms of cells at birth, indexed by generations, as
\begin{align*}
\mathcal{Z}_n:=\left\{\!\!\left\{\mathrm{ln}\chi_u(0):u\in\mathbb{N}^n\right\}\!\!\right\},
\end{align*}
where $\left\{\!\!\left\{\cdots\right\}\!\!\right\}$ refers to multiset, meaning that the elements are repeated according to their multiplicities.
Thanks to self-similarity, $(\mathcal{Z}_n)_{n\geq 1}$ is a branching random walk.
Let $(b,\sigma^2,\Lambda)$ be the characteristics of the L\'evy process $\xi$ and assume that there exists $p>0$ such that $\int_1^\infty e^{py}\Lambda(\mathrm{d}y)<\infty$ (we also assume that $\Lambda(\intervalleoo{-\infty}{0})>0$ as there are no children otherwise). We thus have that the Laplace exponent of $\xi$, given by $\psi(q):=\log\mathbb{E}(\exp(q\xi(1)))$ is finite at least on $\intervalleff{0}{p}$ (see e.g. \cite{K14} Theorem 3.6); we set $\psi(q)=\infty$ whenever the expectation is infinite. The so-called \textit{cumulant function} is defined as
\begin{align}\label{equation cumulant kappa}
\kappa:q\mapsto\psi(q)+\int_{\intervalleoo{-\infty}{0}}(1-e^y)^q\Lambda(\mathrm{d}y),\qquad q>0.
\end{align}
The mean Laplace transform of $\mathcal{Z}_1$ is then given by $q\mapsto 1-\kappa(q)/\psi(q)$, when this makes sense (see \cite{B17} Lemma 3). Hence, as soon as $\kappa(q)=0$, the process $(\sum_{|u|=n}\chi_u(0)^q)_{n\geq 0}$ is a martingale.
We thus naturally assume that there exists $\omega_->0$ such that\footnote{In the context of branching random walks, this assumption is known as the Cram\'er hypothesis and $\omega_-$ is called the Malthusian parameter.}
\begin{align}\label{Cramer hypothesis}
\kappa(\omega_-)=0,\qquad
-\infty<\kappa'(\omega_-)<0.
\end{align}
The so-called \textit{intrinsic martingale} introduced in \cite{BBCK16} is then defined as
\begin{align*}
\mathcal{M}(n):=\sum_{|u|=n}\chi_u(0)^{\omega_-},\qquad n\geq 0.
\end{align*}
This martingale is moreover uniformly integrable with mean 1 under $\mathcal{P}_1$ (see \cite{BBCK16} Lemma 2.3).
We shall also denote $\omega_+:=\sup\left\{q\geq 0:\kappa(q)<0\right\}$, which is strictly greater than $\omega_-$ thanks to \eqref{Cramer hypothesis}. (In \cite{BBCK16}, $\omega_+$ is a second root of $\kappa$, which if it exists, is consistent with our definition.)
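For a concrete feel for $\kappa$ and its roots, consider the following toy example (entirely our own illustration): $\xi$ has drift $b$, Brownian coefficient $\sigma$, and negative jumps of the fixed size $y_0=\ln(1/2)$ arriving at rate $c$, so that each jump halves the cell and the newborn child receives half of the parent's pre-jump mass. The roots $\omega_-$ and $\omega_+$ are then easily found numerically.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

b, sigma, c, y0 = -0.3, 0.5, 1.0, np.log(0.5)   # illustrative values

def psi(q):    # Laplace exponent of xi (drift + Brownian + jumps at rate c)
    return b * q + 0.5 * sigma**2 * q**2 + c * (np.exp(q * y0) - 1.0)

def kappa(q):  # psi(q) + int (1 - e^y)^q Lambda(dy), Lambda = c * delta_{y0}
    return psi(q) + c * (1.0 - np.exp(y0)) ** q

omega_minus = brentq(kappa, 1e-6, 1.0)   # kappa(0+) = c > 0 and kappa(1) < 0
omega_plus = brentq(kappa, 1.0, 50.0)    # kappa(q) -> +infinity as q grows
\end{verbatim}
Since $\kappa$ is convex here, it crosses zero downward at $\omega_-$, so the Cram\'er hypothesis \eqref{Cramer hypothesis} holds for these values.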
Finally, we rule out the case where $\xi$ is the negative of a subordinator (so that $X$ would have non-increasing paths), as this induces pure fragmentation processes, which are fully addressed by \cite{H04}.
\paragraph{The Ulam tree, the CRT and the intrinsic area measure.}
In \cite{BBCK16}, the authors define a random measure on the boudary of the Ulam tree $\partial\mathbb{U}$, which is the set of infinite integer sequences, endowed with the distance $d(\ell,\ell'):=\exp(-\sup\{n\geq 0:\ell(n)=\ell'(n)\})$ which makes it a complete metric space (recall that $\ell(n)$ denotes the ancestor of $\ell$ at generation $n$). Specifically, for every $u\in\mathbb{U}$ with $|u|=n$, let $B(u):=\{\ell\in\partial\mathbb{U}:\ell(n)=u\}$ be the ball in $\partial\mathbb{U}$ generated by $u$. The \textit{intrinsic area measure} on $\partial\mathbb{U}$ is then defined by
\begin{align*}
\mathcal{A}(B(u)):=\lim_{k\to\infty}\sum_{|v|=k,v(n)=u}\chi_v(0)^{\omega_-}.
\end{align*}
(This is well-defined thanks to the uniform integrability of $(\mathcal{M}(n))_{n\geq 0}$ and the branching property.)
The total mass is denoted $\mathcal{M}:=\lim_{n\to\infty}\mathcal{M}(n)=\mathcal{A}(\partial\mathbb{U})$.
Rembart and Winkel \cite{RW16} built a CRT from the cell-system, which is very similar to $\partial\mathbb{U}$.
The construction is as follows: construct a first segment of length equal to the lifetime $\zeta_{\emptyset}$ of $\emptyset$, endowed with a metric corresponding to the age of the cell. It means that each point of this branch corresponds to the Eve cell at a particular time of its life; the root $\rho$ is thus naturally taken to be the point corresponding to 0. On this branch, at every jump location $b_i$, glue a new branch of length equal to the lifetime $\zeta_i$ of the cell $i$, with the corresponding metric. This yields a CRT $(\mathcal{T}_1,d_1)$. For all $n\geq 1$, to obtain $(\mathcal{T}_{n+1},d_{n+1})$, repeat this procedure on every branch $u\in\mathbb{N}^n$ at locations $\left\{b_{uj}:j\geq 1\right\}$.
Theorem 1.7 in \cite{RW16} shows that, whenever $\psi(-\alpha)<0$, $(\mathcal{T}_n,d_n)_{n\geq 1}$ converges almost surely in the Gromov-Hausdorff topology to some compact CRT $(\mathcal{T},d)$. Even though it is not explicitly given in their construction, there is a very natural way to simultaneously define the analogue of the intrinsic area measure on $\mathcal{L}(\mathcal{T})$, the set of leaves of $\mathcal{T}$, which we now introduce. Fix $n\geq 1$ and consider $\mathcal{T}_n$. For every $u\in\mathbb{N}^n$ and $j\geq 1$, put a mass $\chi_{uj}(0)^{\omega_-}$ at location $b_{uj}$ on the branch of $u$. This defines a measure $\mathcal{A}_n$ on $\mathcal{T}_n$, with total mass given by $\mathcal{M}(n+1)$. As for the Ulam tree, it is clear that $(\mathcal{A}_n)_{n\geq 1}$ converges weakly toward a measure $\mathcal{A}_{\mathcal{T}}$ with total mass $\mathcal{M}$, supported on $\mathcal{L}(\mathcal{T})$. The correspondence between $\mathcal{T}$ and $\overline{\mathbb{U}}:=\mathbb{U}\cup\partial\mathbb{U}$ is straightforward from the two constructions: every $x\in\mathcal{T}$ corresponds to either a unique $\chi_u(t)$ for some $u\in\mathbb{U}, t\in\intervalleff{0}{\zeta_u}$, or a unique $\ell\in\partial\mathbb{U}$. In particular, $\mathcal{A}_{\mathcal{T}}$ and $\mathcal{A}$ are essentially the same; this is even clearer when looking at the masses at given heights.
Recall that the height function on $\mathcal{T}$ is defined as the distance to the root
\begin{align*}
\mathrm{ht}(x):=d(\rho,x).
\end{align*}
We then define $A_{\mathcal{T}}:\mathbb{R}_+\to\mathbb{R}_+$ by
\begin{align}\label{equation def A}
&A_{\mathcal{T}}:t\mapsto\mathcal{A}_{\mathcal{T}}(\left\{\ell\in\mathcal{L}(\mathcal{T}):\mathrm{ht}(\ell)\leq t\right\}).\nonumber\\
\shortintertext{This coincides exactly with}
&A:t\mapsto\mathcal{A}(\left\{\ell\in\partial\mathbb{U}:\zeta_\ell\leq t\right\}),
\end{align}
where $\zeta_\ell:=\lim_{n\to\infty}b_{\ell(n)}$. (Actually, $\mathcal{L}(\mathcal{T})$ also contains cells at death-times, but they do not generate area since there are only countably many.)
Since the cell system carries more information, we shall rather work with $A$ than with $A_{\mathcal{T}}$.
The elements of $\mathcal{T}\setminus\mathcal{L}(\mathcal{T})$ at a fixed height $t\geq 0$ correspond to the collection of cells alive at time $t$:
\begin{align*}
\mathbf{X}(t):=\left\{\!\!\left\{\chi_u(t-b_u):u\in\mathbb{U},b_u\leq t<b_u+\zeta_u\right\}\!\!\right\}.
\end{align*}
This is the definition of Bertoin \cite{B17} of the growth-fragmentation process induced by the cell-system.
Shi \cite{S17} showed that the distribution of $\mathbf{X}$ is characterized by the pair $(\kappa,\alpha)$.\footnote{However, this is not the case of the distribution of the cell-system.}
The lifetime of $\mathbf{X}$ is defined as $\zeta:=\inf\left\{t>0:\mathbf{X}(t)=\emptyset\right\}$.
In \cite{B17}, it is shown that the Cram\'er hypothesis \eqref{Cramer hypothesis} ensures that the following properties hold:
\begin{enumerate}[label=\textbullet]
\item Almost surely, for any fixed $\epsilon>0$, there are finitely many fragments larger than $\epsilon$ in $\mathbf{X}(t)$ for all $t\geq 0$.
\item $\zeta<\infty$ almost surely. (\cite{B17} Corollary 3)
\item $\mathbf{X}$ enjoys the self-similarity and branching properties, as stated in \cite{B17} Theorem 2.
\end{enumerate}
As a consequence, $\mathcal{T}$ satisfies a Markov-branching type property, that we express in terms of $A$ as follows: let $t\geq 0$ and let $(A_i)_{i\geq 1}$ be a sequence of i.i.d. copies of $A$, independent of $(\mathbf{X}(u);u\leq t)$, then for all $s\geq 0$:
\begin{align}\label{equation branching + self-similarity of A}
A(t+s)-A(t)\stackrel{d}{=}\sum_{i\geq 1}X_i(t)^{\omega_-}A_i(sX_i(t)^{\alpha}),
\end{align}
where for $i\geq 1$, $X_i(t)$ denotes the size of the $i$th largest fragment in $\mathbf{X}(t)$ (being possibly 0).
\paragraph{Spinal decomposition}
We now give an informal description of the spinal decomposition induced by $\mathcal{A}$, introduced in \cite{BBCK16} Section 4. The statements are provided without proof; the reader is referred to that paper for a rigorous treatment.
We introduce a probability measure $\widehat{\mathcal{P}}_1$ describing the joint distribution of a cell-system and a random leaf $\sigma\in\partial\mathbb{U}$. Under $\widehat{\mathcal{P}}_1$, the law of the cell-system is absolutely continuous with respect to $\mathcal{P}_1$, with density $\mathcal{M}$. The random leaf $\sigma$ is then tagged according to the intrinsic area. In particular we have
\begin{lem}\label{Lemma A and random leaf}
Under $\widehat{\mathcal{P}}_1$ and conditionally on the cell-system, the probability measure $\mathrm{d}A/\mathcal{M}$ satisfies
\begin{align*}
\frac{\mathrm{d}A(t)}{\mathcal{M}}
=\widehat{\mathcal{P}}_1\left(\zeta_\sigma\in\mathrm{d}t|(\chi_u)_{u\in\mathbb{U}}\right).
\end{align*}
\end{lem}
Let $\phi:q\mapsto\kappa(\omega_-+q),\ q\geq 0$. It is known that $\phi$ can be viewed as the Laplace exponent of a L\'evy process\footnote{This fact is stated in \cite{BBCK16} Lemma 2.1 for $q\mapsto\kappa(\omega_++q)$, however it is also true for $\phi$ by the same arguments.} that we denote $\eta$. We then define the positive self-similar Markov process $(Y_t)_{t\geq 0}$ with index $\alpha$, associated with $\eta$ by Lamperti's transformation, that is
\begin{align*}
(Y_t)_{t\geq 0}:=\left(\exp\left(\eta(\tau_t)\right)\right)_{t\geq 0},
\end{align*}
where the time-change $\tau_t$ is defined for all $t\geq 0$ by
\begin{align}\label{Definition time-change tau Lamperti}
\tau_t:=\inf\left\{s\geq 0:\int_0^s e^{-\alpha\eta(u)}\mathrm{d}u\geq t\right\}.
\end{align}
The absorption time of $Y$ is thus given by the following exponential functional
\begin{align}\label{equation I exponential functional}
I=\int_0^\infty e^{-\alpha\eta(t)}\mathrm{d}t.
\end{align}
Since $\kappa'(\omega_-)<0$ by \eqref{Cramer hypothesis}, we know that $\eta$ drifts to $-\infty$ and $I<\infty$ almost surely. We shall denote $\widehat{\mathbb{P}}_x$ the law of $Y$ starting from $x>0$.
The \textit{spine} $(\sigma(t))_{t\geq 0}$ is the process following the size of the ancestors of $\sigma$ in time. Remark that we can write $\zeta_{\sigma}=\inf\left\{t>0:\sigma(t)=0\right\}$. We thus call $\zeta_\sigma$ the \textit{lifetime} of $\sigma$ to emphasize that we will look at $\sigma$ as a random process rather than a random element of a random metric space. In this direction, we have
\begin{lem}\label{Lemma spine distributed as Y}
Under $\widehat{\mathcal{P}}_1$, the spine $\sigma$ is distributed as $Y$ under $\widehat{\mathbb{P}}_1$. In particular, it holds that $\zeta_{\sigma}\stackrel{d}{=}I$.
\end{lem}
Lemma \ref{Lemma A and random leaf} relates $A$ to the lifetime of the spine, which in turn is distributed as the variable $I$ by Lemma \ref{Lemma spine distributed as Y}. Let $\mathcal{C}_0^\infty(\mathbb{R}_+^*)$ be the set of infinitely differentiable functions on $\mathbb{R}_+^*$ vanishing together with their derivatives at infinity.
Equation \eqref{equation I exponential functional} plays a crucial role to obtain distributional properties of $I$. The next lemma collects some that we shall extensively use throughout the rest of this work.
\begin{lem}\label{properties of k}
The variable $I$ has a bounded density in $\mathcal{C}_0^\infty(\mathbb{R}_+^*)$, which we denote by $k$. Further, $\lim_{x\to 0+}k(x)=0$ and
\begin{align*}
\widehat{\mathbb{E}}_1\left(I^{-1}\right)=\alpha\kappa'(\omega_-)<\infty.
\end{align*}
\end{lem}
These results are already known. Hence in the following proof, we only provide references and check that the hypotheses of the cited theorems are fulfilled.
\begin{proof}
Theorem 3.9 in \cite{BLM08} ensures the existence of $k$.
Recently, Patie and Savov \cite{PS16} have shown that $k$ is infinitely differentiable.
The L\'evy measure $\Pi$ of $\eta$ is given (see \cite{BBCK16} Section 4.3) by
\begin{align}\label{equation measure Pi}
\Pi(\mathrm{d}y):=e^{\omega_-y}(\Lambda+
\widetilde{\Lambda})(\mathrm{d}y),
\end{align}
where $\widetilde{\Lambda}$ is the push-forward of $\Lambda$ by $y\mapsto\mathrm{1}_{\left\{y<0\right\}}\log(1-e^y)$.
We see that if $\Lambda(\mathbb{R}_-)=\infty$, then $\Pi$ also has infinite total mass. Notice that we have either $\Lambda(\mathbb{R}_-)=\infty$, or $\sigma^2>0$, or that $\eta$ is a compound Poisson process with a non-negative drift. Theorem 2.4.(3)\footnote{In \cite{PS16}, the authors use the equivalent convention that the process drifts to $+\infty$ and they take the negative of the exponential to define $I$.} in \cite{PS16} thus shows that $k\in\mathcal{C}_0^\infty(\mathbb{R}_+^*)$ (see in particular Remark 2.5 in the same paper). Finally, the limit at 0 of $k$ is given in \cite{PS16} by Theorem 2.15.
The statement on the moment of order $-1$ can be found in \cite{CPY97} Proposition 3.1(iv).
\end{proof}
We conclude this section by recalling the following essential fact on the spinal decomposition: conditionally on $(\sigma(t))_{t\geq 0}$, a child depends only on the spine through its own initial value, given by the size $x$ of the negative jump which generated it, and then evolves with law $\mathbb{P}_x$, independently of $(\sigma(t))_{t\geq 0}$ and of the other children.
\underline{\textit{Notation}}:
In the sequel, the expectations under $\mathbb{P}_x,\widehat{\mathbb{P}}_x,\mathcal{P}_x,\widehat{\mathcal{P}}_x$ are denoted respectively by $\mathbb{E}_x,\widehat{\mathbb{E}}_x,\mathcal{E}_x,\widehat{\mathcal{E}}_x$.
\section{Existence of the profile}\label{Section Regularity of the intrinsic area process}
\subsection{Main result}
The following theorem answers the question of the regularity of $t\mapsto A(t)$ in terms of $\alpha$.
\begin{thm}\label{absolute continuity}
$\mathrm{d}A$ is almost surely singular with respect to the Lebesgue measure if and only if $\alpha\leq -\omega_-$, whereas $\mathrm{d}A$ is almost surely absolutely continuous whenever $\alpha>-\omega_-$.
\end{thm}
We recover Theorem 4 of Haas \cite{H04}, since in the pure fragmentation case $\omega_-=1$ (her result does not require dislocations to be binary though). Recall that this theorem can be read as a statement on the $A$ stemming from either $\partial\mathbb{U}$ or $\mathcal{T}$, as explained in Section \ref{Section preliminaries}.
\subsection{Toolbox}\label{Subsection toolbox}
We introduce in this subsection the main tools of the proof of Theorem \ref{absolute continuity}.
Let $\mu$ be a measure on $\mathbb{R}$. We denote its Fourier-Stieltjes transform
\begin{align*}
\mathcal{F}_\mu(\theta):=\int_{\mathbb{R}}e^{i\theta x}\mu(\mathrm{d}x),\qquad \theta\in\mathbb{R}.
\end{align*}
Recall from Plancherel's Theorem that
\begin{equation}\label{condition density}
\mu(\mathrm{d}x)\ll\mathrm{d}x\quad\text{with}\quad\mu(\mathrm{d}x)/\mathrm{d}x\in\mathbb{L}^2(\mathrm{d}x)\qquad\Leftrightarrow\qquad\mathcal{F}_{\mu}\in\mathbb{L}^2(\mathrm{d}x).
\end{equation}
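As a simple illustration of \eqref{condition density} (a standard sanity check, not a statement from the literature): if $\mu$ is the Lebesgue measure on $\intervalleff{0}{1}$, then $\mathcal{F}_\mu(\theta)=(e^{i\theta}-1)/(i\theta)$ is bounded and of order $1/|\theta|$ at infinity, hence belongs to $\mathbb{L}^2(\mathrm{d}x)$, in agreement with the fact that $\mu$ has the density $\mathds{1}_{\intervalleff{0}{1}}\in\mathbb{L}^2(\mathrm{d}x)$. On the other hand, for the Dirac mass $\mu=\delta_0$ one has $|\mathcal{F}_\mu|\equiv 1\notin\mathbb{L}^2(\mathrm{d}x)$, as expected for a singular measure.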
We shall use \eqref{condition density} to prove the next lemma, which is the main ingredient in the proof of the absolute continuity of $\mathrm{d}A$.
It was used in \cite{H04}, but somewhat implicitly. We state it in a general setting.
Let $\mathbf{P}$ be a probability measure on some generic measurable space. Let $(E,d,\mu,\rho)$ be a random measured metric space, where $\mu$ is a measure on $E$ with finite total mass $\mathbf{P}$-almost surely and $\rho\in E$ is a distinguished element.
Let $B(\rho,r)$ be the open ball centered at $\rho$ with radius $r>0$.
Let $\gamma,\gamma'$ be two random variables in $E$ such that $\gamma$ and $\gamma'$ are conditionally independent given $(E,d,\mu,\rho)$, with conditional law $\mu(\cdot)/\mu(E)$.
\begin{lem}\label{Lemma absolute continuity for random metric space}
If the law of $\nabla:=d(\rho,\gamma)-d(\rho,\gamma')$ has a density $h$ which is bounded in a neighbourhood of 0, then $m:r\mapsto \mu(B(\rho,r))$ is absolutely continuous with respect to the Lebesgue measure, with a density in $\mathbb{L}^2(\mathrm{d}x)$ $\mathbf{P}$-almost surely.
\end{lem}
\begin{proof}
Recalling \eqref{condition density}, we look at
\begin{align*}
\frac{|\mathcal{F}_{\mathrm{d}m}(\theta)|^2}{\mu(E)^2}
&=\frac{\mathcal{F}_{\mathrm{d}m}(\theta)}{\mu(E)}\cdot\frac{\mathcal{F}_{\mathrm{d}m}(-\theta)}{\mu(E)}
=\int_0^\infty\int_0^\infty\frac{\mathrm{d}m(x)}{\mu(E)}\cdot\frac{\mathrm{d}m(y)}{\mu(E)}e^{i\theta(x-y)}\\
&=\mathbf{E}\left(e^{i\theta\nabla}|(E,d,\mu,\rho)\right),
\end{align*}
where $\mathbf{E}$ is the expectation operator induced by $\mathbf{P}$.
We see in particular that $\theta\mapsto\mathbf{E}\left(e^{i\theta\nabla}\right)\geq 0$. Theorem 9 in \cite{BC49} ensures that if $\nabla$ has a density bounded in a neighbourhood of $0$, then its Fourier transform is integrable, that is
\begin{align*}
\int_{\mathbb{R}}\mathbf{E}\left(e^{i\theta\nabla}\right)\mathrm{d}\theta=\mathbf{E}\left(\int_{\mathbb{R}}\frac{|\mathcal{F}_{\mathrm{d}m}(\theta)|^2}{\mu(E)^2}\,\mathrm{d}\theta\right)<\infty.
\end{align*}
We conclude by Plancherel's Theorem \eqref{condition density}.
\end{proof}
We shall use this Lemma for two suitable choices of $\mathbf{P}$, taking $(E,d,\mu,\rho)$ as $(\mathcal{T},d,\mathcal{A},0)$, the distance $d$ being the age (see Section \ref{Section preliminaries}). This means in particular that $m=A$.
We state an easy but important consequence of Lemma \ref{Lemma A and random leaf} in the next lemma. Recall that $\zeta$ denotes the lifetime of $\mathbf{X}$.
\begin{lem}\label{Lemma A(epsilon)}
The function $t\mapsto A(t)$ is strictly increasing on $\intervalleoo{0}{\zeta}$ and it holds that
\begin{align*}
\mathcal{E}_1\left(A(\epsilon)\right)=o(\epsilon),\quad\epsilon\to 0.
\end{align*}
\end{lem}
\begin{proof}
Fix $\epsilon>0$. We write
\begin{align*}
\mathcal{E}_1(A(\epsilon))
&=\widehat{\mathcal{E}}_1\left(\frac{A(\epsilon)}{\mathcal{M}}\right)=\widehat{\mathcal{P}}_1\left(\zeta_{\sigma}\leq \epsilon\right),
\end{align*}
where the last identity is seen from Lemma \ref{Lemma A and random leaf}.
Lemma \ref{Lemma spine distributed as Y} combined with the fact that $k(x)\to 0$ as $x\to 0+$ from Lemma \ref{properties of k} entail that
\begin{align*}
\mathcal{E}_1\left(A(\epsilon)\right)
=o(\epsilon),\quad\epsilon\to 0.
\end{align*}
Arguments very similar to those of \cite{H04} Proposition 10$(iv)$ show that $A$ is strictly increasing on $\intervalleoo{0}{\zeta}$.
\end{proof}
\subsection{Proof of Theorem \ref{absolute continuity}}\label{Subsection proof}
In the case of self-similar pure fragmentations, Haas \cite{H04} exploited the unit interval representation to tag two fragments by sampling two independent uniform random variables on $\intervalleff{0}{1}$. In the context of Lemma \ref{Lemma absolute continuity for random metric space}, it means that the measure $\mu$ is uniform over the leaves, given the tree.
Recall that we work on $\overline{\mathbb{U}}$. In our case, $\mathcal{A}$ is not uniform on $\partial\mathbb{U}$. However it is not required to apply Lemma \ref{Lemma absolute continuity for random metric space}.
We divide the proof into two subsections. Even though the second one alone would be enough to prove Theorem \ref{absolute continuity} in full generality, it involves considerations that can be avoided in some cases; for the sake of clarity, we first treat the simpler case separately.
\subsubsection{The case $\omega_+/\omega_->2$ and $\alpha>-\omega_-$.}
We assume throughout this subsection that $\omega_+/\omega_->2$ and $\alpha>-\omega_-$. The reason is that thanks to Lemma 2.3 in \cite{BBCK16}, $\mathcal{E}_1(\mathcal{M}^2)<\infty$. We can thus define a probability measure $\check{\mathcal{P}}_x$ absolutely continuous with respect to $\widehat{\mathcal{P}}_x$ with density $\mathcal{M}/\widehat{\mathcal{E}}_x(\mathcal{M})=x^{\omega_-}\mathcal{M}/\mathcal{E}_x(\mathcal{M}^2)$, which also means that $\check{\mathcal{P}}_x$ has density $\mathcal{M}^2/\mathcal{E}_x(\mathcal{M}^2)$ with respect to $\mathcal{P}_x$. In particular, we can choose $\mathbf{P}=\check{\mathcal{P}}_1$ and try to apply Lemma \ref{Lemma absolute continuity for random metric space}. (This argument does not apply when $\omega_+/\omega_-\leq 2$ since we then know, still from \cite{BBCK16} Lemma 2.3, that $\mathcal{E}_1(\mathcal{M}^2)=\infty$.)
In this subsection, we write $C=1/\mathcal{E}_1(\mathcal{M})$.
\begin{lem}\label{Lemma density nabla integrable case}
Consider $\nabla$ as in Lemma \ref{Lemma absolute continuity for random metric space}. Under $\check{\mathcal{P}}_1$, $\nabla$ has a density $h:\mathbb{R}\to\mathbb{R}_+\cup\left\{\infty\right\}$, given by
\begin{align*}
h(x)=C\widehat{\mathbb{E}}\left(\sum_{s>0}|\Delta_-Y(s)|^{\omega_-+\alpha}Y(s)^{\alpha}\int_0^\infty\mathrm{d}uk(u|\Delta_-Y(s)|^{\alpha})k((u-x)Y(s)^{\alpha})\right),
\end{align*}
where $k$ denotes the density of the law of $I$ from Lemma \ref{properties of k}.
\end{lem}
\begin{proof}
In what follows, we shall implicitly use Tonelli's Theorem several times, since all the terms involved in the proof are non-negative.
Let $f:\mathbb{R}\to\mathbb{R}_+$ be any non-negative measurable function. We have
\begin{align*}
\check{\mathcal{E}}_1\left(f(\nabla)\right)
&=C\widehat{\mathcal{E}}_1\left(\mathcal{M}f(\nabla)\right)
=C\widehat{\mathcal{E}}_1\left(\mathcal{M}f(\zeta_\sigma-\zeta_{\sigma'})\right)\\
&=C\widehat{\mathcal{E}}_1\left(\mathcal{M}\widehat{\mathcal{E}}_1\left(\left.f(\zeta_\sigma-\zeta_{\sigma'})\right|(\chi_u)_{u\in\mathbb{U}},\sigma\right)\right)\\
&=C\widehat{\mathcal{E}}_1\left(\int_{\partial\mathbb{U}}\mathcal{A}(\mathrm{d}\ell)f(\zeta_\sigma-\zeta_{\ell})\right),
\end{align*}
where we used the conditional independence of $\sigma$ and $\sigma'$ given $(\chi_u)_{u\in\mathbb{U}}$. We now choose a subtree among those generated by the children of the spine $(\sigma(s))_{s\geq 0}$ (the ancestors of $\sigma$), according to the intrinsic area. Denoting $\partial\mathbb{U}_s$ the leaves of the tree descending from the negative jump (if any) of the spine at time $s$, it reads as
\begin{align*}
\check{\mathcal{E}}_1\left(f(\nabla)\right)
&=C\widehat{\mathcal{E}}_1\left(\sum_{s>0}\widehat{\mathcal{E}}_1\left(\left.\mathcal{M}_s\int_{\partial\mathbb{U}_s}\frac{\mathcal{A}(\mathrm{d}\ell)}{\mathcal{M}_s}f(\zeta_\sigma-\zeta_{\ell})\right|(\sigma(t))_{t\geq 0}\right)\right)\\
&=C\widehat{\mathcal{E}}_1\left(\sum_{s>0}\widehat{\mathcal{E}}_{1}\left(\left.f(\zeta_\sigma-s-\zeta_{\widehat{\sigma}})\right|(\sigma(t))_{t\geq 0}\right)\right),
\end{align*}
where $\widehat{\sigma}$ is a random leaf of $\partial\mathbb{U}_s$ tagged according to the restriction of the intrinsic area on this subtree. Hence, $\zeta_{\widehat{\sigma}}$ is distributed as the lifetime of a spine under $\widehat{\mathcal{P}}_{|\Delta_-\sigma(s)|^{\omega_-}}$, conditionally on $(\sigma(t))_{t\geq 0}$. More precisely, Theorem 4.7 in \cite{BBCK16} ensures that $\zeta_{\widehat{\sigma}}$ depends on $\sigma$ only through $\Delta_-\sigma(s)$, and is independent of what happens to the spine at other times. The scaling property yields that the above is equal to
\begin{align}\label{Equation second spine rescaled}
&C\widehat{\mathcal{E}}_1\left(\sum_{s>0}|\Delta_-\sigma(s)|^{\omega_-}\widehat{\mathcal{E}}_{1}\left(\left.f\left(\zeta_\sigma-s-|\Delta_-\sigma(s)|^{-\alpha}\zeta_{\widehat{\sigma}}\right)\right|(\sigma(t))_{t\geq 0}\right)\right),
\end{align}
where now the law of $\zeta_{\widehat{\sigma}}$ is independent of $\sigma$ and is that of $I$ under $\widehat{\mathbb{P}}_1$ by Lemma \ref{Lemma spine distributed as Y}. We thus get
\begin{align*}
\check{\mathcal{E}}_1\left(f(\nabla)\right)
&=C\widehat{\mathcal{E}}_1\left(\sum_{s>0}|\Delta_-\sigma(s)|^{\omega_-}\int_0^\infty\mathrm{d}xk(x)f\left(\zeta_\sigma-s-|\Delta_-\sigma(s)|^{-\alpha}x\right)\right)\\
&=C\widehat{\mathbb{E}}_1\left(\sum_{s>0}|\Delta_-Y(s)|^{\omega_-}\int_0^\infty\mathrm{d}xk(x)f\left(I-s-|\Delta_-Y(s)|^{-\alpha}x\right)\right),
\end{align*}
by Lemma \ref{Lemma spine distributed as Y}.
Since every term in the sum is positive, we can use the optional projection Theorem (\cite{DM2} Theorem 57) with respect to the natural filtration of $Y$ (see also Theorem 43 in the same book for a definition of optional projection).
The right-hand side above becomes
\begin{align*}
&C\widehat{\mathbb{E}}_1\left(\sum_{s>0}|\Delta_-Y(s)|^{\omega_-}\int_0^\infty\mathrm{d}xk(x)\int_0^\infty\mathrm{d}yk(y)f\left(Y(s)^{-\alpha}y-|\Delta_-Y(s)|^{-\alpha}x\right)\right)\\
&\hspace{2cm}=C\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}|\Delta_-Y(s)|^{\omega_-+\alpha}Y(s)^{\alpha}\int_0^\infty\mathrm{d}xk(x|\Delta_-Y(s)|^{\alpha})\\
&\hspace{7.5cm}\times\int_0^\infty\mathrm{d}yk(yY(s)^{\alpha})f\left(y-x\right)\Bigg),
\end{align*}
which gives the claim.
\end{proof}
We now provide the proof of the absolute continuity of $\mathrm{d}A$.
\begin{proof}[Proof of Theorem \ref{absolute continuity}, case $\alpha>-\omega_-$ and $\omega_+/\omega_->2$.]
By the lemmas \ref{Lemma absolute continuity for random metric space} and \ref{Lemma density nabla integrable case}, it is sufficient to show that the supremum of $h$ is finite, that is
\begin{align*}
\sup_{x\in\mathbb{R}}\widehat{\mathbb{E}}\left(\sum_{s>0}|\Delta_-Y(s)|^{\omega_-+\alpha}Y(s)^{\alpha}\int_0^\infty\mathrm{d}uk(u|\Delta_-Y(s)|^{\alpha})k((u-x)Y(s)^{\alpha})\right)<\infty.
\end{align*}
For all $x\in\mathbb{R}$, we can bound $h(x)$ after a suitable change of variable by
\begin{align}\label{Equation upper bound h abs cont}
h(x)
\leq C\widehat{\mathbb{E}}\left(\sum_{s>0}|\Delta_-Y(s)|^{\omega_-}(Y(s)\vee|\Delta_-Y(s)|)^{\alpha}\right)||k||_\infty \int_0^\infty k(u)\mathrm{d}u,
\end{align}
the last integral being equal to 1 since $k$ is a density.
It remains to show that the expectation is finite. Since the summands only involve the values of $Y$ and of its jumps, and not the times at which they occur, the sum is unchanged under the Lamperti time-change, so we can assume without loss of generality that $Y$ is homogeneous, that is $Y(s)=\exp(\eta(s))$ for all $s\geq 0$. The compensation formula for Poisson point processes then yields that the expectation in the right-hand side above is equal to
\begin{align*}
&\int_0^\infty\mathrm{d}s\widehat{\mathbb{E}}_1\left(e^{(\omega_-+\alpha)\eta(s)}\right)\int_{\intervalleoo{-\infty}{0}}\Pi(\mathrm{d}y)(1-e^y)^{\omega_-}(e^y\vee(1-e^y))^{\alpha}\\
&\hspace{3cm}\leq\int_0^\infty\mathrm{d}se^{\kappa(2\omega_-+\alpha)s}\int_{\intervalleoo{-\infty}{0}}\Lambda(\mathrm{d}y)e^{\omega_-y}(1-e^y)^{\omega_-}2^{-\alpha}.
\end{align*}
The last integral is finite by definition of $\omega_-$, as well as the first one when $\kappa(2\omega_-+\alpha)<0$, that is $-\omega_-<\alpha<\omega_+-2\omega_-$. Since $\alpha>-\omega_-$ and $\omega_+/\omega_->2$ by assumption, we always have $-\omega_-<\alpha<0<\omega_+-2\omega_-$, which ends the proof.
\end{proof}
\subsubsection{The case $\omega_+/\omega_-\leq 2$ and $\alpha>-\omega_-$.}
The main issue when $\omega_+/\omega_-\leq 2$ is that the measure $\check{\mathcal{P}}$ defined earlier has now infinite total mass, which prevents us from using Lemma \ref{Lemma absolute continuity for random metric space}. We overcome this by defining a new measure $\check{\mathcal{P}}^{(K)}$ in such a way that the two random leaves are tagged among those whose ancestors' sizes have never been too large, which, as we will see, entails that $\check{\mathcal{P}}^{(K)}$ has finite total mass.
In this direction, let $K>0$ be a constant, which will eventually tend to $\infty$. Define $B_K$ as
\begin{align*}
B_K:=\left\{\ell\in\partial\mathbb{U}:\ell^*\leq K\right\},
\end{align*}
where $\ell^*:=\sup_{t\geq 0}\ell(t)$. Let $\mathcal{A}_K$ be the restriction of the measure $\mathcal{A}$ to $B_K$, and $\mathcal{M}_K:=\mathcal{A}_K(B_K)$.
We now define for all $x>0$ the probability measure $\widehat{\mathcal{P}}_x^{(K)}$ such that it is absolutely continuous with respect to $\mathcal{P}_x$, with density $\mathcal{M}_K/\mathcal{E}_x(\mathcal{M}_K)$. In the same vein, we would like to define $\check{\mathcal{P}}_x^{(K)}$ to be absolutely continuous with respect to $\widehat{\mathcal{P}}_x^{(K)}$ with density $\mathcal{M}_K/\widehat{\mathcal{E}}_x^{(K)}(\mathcal{M}_K)$. To ease the expressions, we shall write $C$ for the finite and strictly positive constants that appear when changing measures. Even though $C$ may vary from line to line, it is always explicit and depends only on the starting point and on $K$, which will play no role in the proofs.
We need the following
\begin{lem}\label{Lemma 2nd moment truncated intrinsic area}
For all $x>0$, it holds that $\widehat{\mathcal{E}}_x^{(K)}(\mathcal{M}_K)<\infty$, or equivalently $\mathcal{E}_x(\mathcal{M}_K^2)<\infty$. In particular, $\left\{\check{\mathcal{P}}_x^{(K)};x>0\right\}$ is a family of probability measures.
\end{lem}
\begin{proof}
By self-similarity, it is enough to show that $\mathcal{E}_1(\mathcal{M}_K^2)<\infty$. Let $\sigma_K$ denote a random leaf sampled on $B_K$ with conditional law $\mathcal{A}_K(\cdot)/\mathcal{M}_K$ given $(\chi_u)_{u\in\mathbb{U}}$. Similarly to the previous subsection, we write
\begin{align*}
\mathcal{E}_1\left(\mathcal{M}_K^2\right)
&=C\widehat{\mathcal{E}}_1^{(K)}\left(\mathcal{M}_K\right)
=C\widehat{\mathcal{E}}_1^{(K)}\left(\sum_{s>0}|\Delta_- \sigma_K(s)|^{\omega_-}\widehat{\mathcal{E}}_1^{(K)}\left(\mathcal{M}_{K,s}|(\sigma_K(t))_{t\geq 0}\right)\right)\\
\shortintertext{where $\mathcal{M}_{K,s}$ is the rescaled truncated mass generated by the cell born at time $s$ (truncated when cells reach a size $K/|\Delta_-\sigma_K(s)|^{\omega_-}$ by the scaling property). Since $\mathcal{M}_{K,s}$ cannot be greater than the non-truncated area, which has expectation 1, we get}
\mathcal{E}_1\left(\mathcal{M}_K^2\right)
&\leq C\widehat{\mathcal{E}}_1^{(K)}\left(\sum_{s>0}|\Delta_- \sigma_K(s)|^{\omega_-}\right)=C\widehat{\mathcal{E}}_1\left(\sum_{s>0}\mathds{1}_{\left\{\sigma^*\leq K\right\}}|\Delta_- \sigma_K(s)|^{\omega_-}\right)\\
&=C\widehat{\mathbb{E}}_1\left(\sum_{s>0}\mathds{1}_{\left\{Y^*\leq K\right\}}|\Delta_- Y(s)|^{\omega_-}\right).
\end{align*}
As previously we assume without loss of generality that $Y$ is homogeneous. Now we fix $p\in\intervalleoo{0}{\omega_+-\omega_-}$ and we bound the latter from above by
\begin{align*}
CK^{\omega_--p}\widehat{\mathbb{E}}_1\left(\sum_{s>0}e^{p\eta(s-)}(1-e^{\Delta_-\eta(s)})^{\omega_-}\right).
\end{align*}
The compensation formula yields that the expectation is equal to
\begin{align*}
\int_0^\infty\mathrm{d}s\,e^{\kappa(\omega_-+p)s}\int_{\intervalleoo{-\infty}{0}}\Pi(\mathrm{d}y)(1-e^y)^{\omega_-},
\end{align*}
which is finite since $\kappa(\omega_-+p)<0$ and by definition of $\omega_-$ and $\Pi$. This shows that $\mathcal{E}_x(\mathcal{M}^2_K)<\infty$ for any $x>0$ by self-similarity.
\end{proof}
Thanks to Lemma \ref{Lemma 2nd moment truncated intrinsic area}, Lemma \ref{Lemma absolute continuity for random metric space} applies with $\mathbf{P}=\check{\mathcal{P}}_1^{(K)}$ and $\nabla_{K}:=\zeta_{\sigma_K}-\zeta_{\sigma_K'}$, where $\sigma_K$ and $\sigma_K'$ are conditionally independent random leaves in $B_K$ with conditional law $\mathcal{A}_{K}(\cdot)/\mathcal{M}_K$ given $(\chi_u)_{u\in\mathbb{U}}$. We claim that when $K$ is large, $B_K=\partial\mathbb{U}$, that is
\begin{align*}
\lim_{K\to\infty}\mathcal{P}_1\left(\mathcal{A}=\mathcal{A}_K\right)=1.
\end{align*}
Indeed, as a direct consequence of Corollary 4 in \cite{B17}, we have that $\sup_{u\in\mathbb{U}}\chi_u^*<\infty$ $\mathcal{P}$-a.s., where $\chi_u^*:=\sup_{t\geq 0}\chi_u(t)$. By Lemma \ref{Lemma absolute continuity for random metric space}, it is hence enough to show that $\nabla_K$ has a bounded density in a neighbourhood of 0 to prove the Theorem.
Similarly to Lemma \ref{Lemma density nabla integrable case}, we have
\begin{lem}\label{Lemma density nabla truncated}
Under $\check{\mathcal{P}}_1^{(K)}$, $\nabla_K$ has a density $h:\mathbb{R}\to\mathbb{R}_+\cup\left\{\infty\right\}$ given by
\begin{align*}
h:x\mapsto&C\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}\mathds{1}_{\left\{\sup_{t\leq s}Y(t)\leq K\right\}}|\Delta_-Y(s)|^{\omega_-+\alpha}Y(s)^\alpha\\
&\hspace{4cm}\times \int_0^\infty \mathrm{d}u g_{2,s}(u|\Delta_-Y(s)|^\alpha)g_{1,s}((u-x)Y(s)^\alpha)\Bigg),
\end{align*}
where $C$ is known and comes from the change of measure, and $g_{1,s},g_{2,s}$ are non-negative random functions, measurable with respect to the natural filtration of $Y$ at time $s$ for all $s\geq 0$, that are all pointwise bounded by $k$.
\end{lem}
The proof being very similar to that of Lemma \ref{Lemma density nabla integrable case}, we do not provide all the steps, but only those where new arguments are needed.
\begin{proof}
We have the following analogue of \eqref{Equation second spine rescaled}:
\begin{align*}
\check{\mathcal{E}}_1^{(K)}\left(f(\nabla_K)\right)
&=C\widehat{\mathcal{E}}_1\Bigg(\sum_{s>0}\mathds{1}_{\left\{\sigma^*\leq K\right\}}|\Delta_-\sigma(s)|^{\omega_-}\\
&\hspace{1cm}\times\widehat{\mathcal{E}}_{1}\Big(\mathds{1}_{\left\{\widehat{\sigma}^*\leq K|\Delta_-\sigma(s)|^{-\omega_-}\right\}}f\left(\zeta_\sigma-s-|\Delta_-\sigma(s)|^{-\alpha}\zeta_{\widehat{\sigma}}\right)\Big|(\sigma(t))_{t\geq 0}\Big)\Bigg)\\
&=C\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}\mathds{1}_{\left\{Y_1^*\leq K\right\}}|\Delta_-Y_1(s)|^{\omega_-}\\
&\hspace{1cm}\times\widehat{\mathbb{E}}_{1}\left(\left.\mathds{1}_{\left\{Y_2^*\leq K|\Delta_-Y_1(s)|^{-\omega_-}\right\}}f\left(I_1-s-|\Delta_-Y_1(s)|^{-\alpha}I_2\right)\right|Y_1\right)\Bigg),\\
\end{align*}
by Lemma \ref{Lemma spine distributed as Y}, where $I_1$ and $I_2$ are the respective absorption times at 0 of two independent positive self-similar Markov processes $Y_1,Y_2$ with same distribution $\widehat{\mathbb{P}}_1$.
Up to the constant $C$, the right-hand side above then becomes
\begin{align*}
\widehat{\mathbb{E}}_1\left(\sum_{s>0}\mathds{1}_{\left\{Y_1^*\leq K\right\}}|\Delta_-Y_1(s)|^{\omega_-}\int_0^\infty \mathrm{d}x g_{2,s}(x)f\left(I_1-s-|\Delta_-Y_1(s)|^{-\alpha}x\right)\right),
\end{align*}
where for any $s>0$ such that $\Delta_-Y_1(s)<0$, the random function
\begin{align*}
g_{2,s}:x\mapsto k(x)\widehat{\mathbb{P}}\left(\left.Y_2^*\leq K|\Delta_-Y_1(s)|^\alpha\right|I_2=x\right)
\end{align*}
is measurable with respect to the natural filtration of $Y_1$ at time $s$. Clearly, $g_{2,s}(x)\leq k(x)$ for all $x>0$. As before, applying \cite{DM2} Theorem 57, we obtain that
\begin{align*}
\check{\mathcal{E}}_1^{(K)}\left(f(\nabla_{K})\right)
&=C\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}\mathds{1}_{\left\{\sup_{t\leq s}Y_1(t)\leq K\right\}}|\Delta_-Y_1(s)|^{\omega_-}\int_0^\infty \mathrm{d}x g_{2,s}(x)\int_0^\infty\mathrm{d}yg_{1,s}(y)\\
&\hspace{6cm}\times f\left(Y_1(s)^{-\alpha}y-|\Delta_-Y_1(s)|^{-\alpha}x\right)\Bigg),
\end{align*}
where $g_{1,s}$ is defined in the same way as $g_{2,s}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{absolute continuity}, the case $\alpha>-\omega_-$ and $\omega_+/\omega_-\leq 2$.]
As previously, we only need to show that $h$ given in Lemma \ref{Lemma density nabla truncated} is bounded to conclude by Lemma \ref{Lemma absolute continuity for random metric space}. Recall that the (random) functions $g_{1,s},g_{2,s}$, $s\geq 0$ in the definition of $h$ are bounded by $k$. For any $x\in\mathbb{R}$, similarly to \eqref{Equation upper bound h abs cont}, we thus have that
\begin{align*}
h(x)
&\leq C\widehat{\mathbb{E}}_1\left(\sum_{s>0}\mathds{1}_{\left\{\sup_{t\leq s}Y(t)\leq K\right\}}|\Delta_-Y(s)|^{\omega_-}(Y(s)\vee|\Delta_-Y(s)|)^{\alpha}\right)||k||_\infty ||k||_1.
\end{align*}
As before, we can consider the simpler homogeneous case to show that $h$ is bounded, without loss of generality. Let $\eta(s) = \log(Y(s))$, $s>0$. We rewrite the expectation above as
\begin{align*}
&\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}\mathds{1}_{\left\{\sup_{t\leq s}\eta(t)\leq \log K\right\}}e^{(\omega_-+\alpha)\eta(s-)}(1-e^{\Delta_-\eta(s)})^{\omega_-}(e^{\Delta_-\eta(s)}\vee(1-e^{\Delta_-\eta(s)}))^\alpha\Bigg)\\
&\hspace{1cm}\leq 2^{-\alpha}\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}\mathds{1}_{\left\{\sup_{t\leq s}\eta(t)\leq \log K\right\}}e^{(\omega_-+\alpha)\eta(s-)}(1-e^{\Delta_-\eta(s)})^{\omega_-}\Bigg)\\
&\hspace{1cm}\leq 2^{-\alpha}K^{\omega_-+\alpha-p}\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}e^{p\eta(s-)}(1-e^{\Delta_-\eta(s)})^{\omega_-}\Bigg),
\end{align*}
for any fixed $p\in\intervalleoo{0}{(\omega_+-\omega_-)\wedge(\omega_-+\alpha)}$. The compensation formula then shows that the last expectation is equal to
\begin{align*}
\int_0^\infty\mathrm{d}s\,e^{\kappa(\omega_-+p)s}\int_{\intervalleoo{-\infty}{0}}\Lambda(\mathrm{d}y)e^{\omega_-y}(1-e^{y})^{\omega_-},
\end{align*}
which is clearly finite from the choice of $p$ and the definition of $\omega_-$. We have thus proved that $h$ is bounded. By Lemma \ref{Lemma absolute continuity for random metric space}, this shows that $t\mapsto\mathcal{A}(\left\{\ell\in B_K:\zeta_\ell\leq t\right\})$ is absolutely continuous with respect to the Lebesgue measure $\check{\mathcal{P}}_1^{(K)}$-a.s., and therefore $\mathcal{P}_1$-a.s., and we conclude using that $\lim_{K\to\infty}\mathcal{P}_1\left(\mathcal{A}_K=\mathcal{A}\right)=1$ as stated earlier.
\end{proof}
\subsubsection{The singular case, $\alpha\leq -\omega_-$.}\label{subsubsection singular case}
We finish the proof of Theorem \ref{absolute continuity}, that is, we show that when $\alpha\leq-\omega_-$, $\mathrm{d}A$ is almost surely singular with respect to the Lebesgue measure.
Since $t\mapsto A(t)$ is non-decreasing, $A'(t)$ exists for almost every $t$, almost surely; by Fubini's Theorem, for almost every fixed $t$, the derivative $A'(t)$ thus exists almost surely. For such $t$, applying \eqref{equation branching + self-similarity of A} and using the same notation, we obtain:
\begin{align*}
A(t+\epsilon)-A(t)
&\stackrel{d}{=}\sum_{i\geq 1}X_i^{\omega_-}(t)A_i(\epsilon X_i^\alpha(t)).
\end{align*}
Suppose that there are infinitely many fragments at time $t$. Then for all $n\geq 1$, set $\epsilon_n:=X_n^{-\alpha}(t)$ and divide the last expression by $\epsilon_n$ to get
\begin{align*}
\epsilon_n^{-1}(A(t+\epsilon_n)-A(t))
&=\sum_{i\geq 1}X_n^\alpha(t)X_i^{\omega_-}(t)A_i(X_n^{-\alpha}(t)X_i^\alpha(t))\\
&\geq X_n^{\alpha+\omega_-}(t)A_n(1).
\end{align*}
Since the $A_n$'s are i.i.d. copies of $A$, almost surely infinitely many of the $A_n(1)$'s exceed any given constant. But if $\alpha\leq -\omega_-$, this yields $\limsup_{n\to\infty}\epsilon_n^{-1}(A(t+\epsilon_n)-A(t))=\infty$, which contradicts the fact that $A$ admits a derivative at $t$. This implies that there is only a finite number of fragments at time $t$, say $N\in\mathbb{N}$; we can therefore exchange the sum and the limit and obtain:
\begin{align*}
\lim_{\epsilon\to 0}\epsilon^{-1}(A(t+\epsilon)-A(t))
&=\sum_{i\leq N}X_i^{\alpha+\omega_-}(t)A_i'(0),
\end{align*}
where the derivatives are well defined and equal to 0 by Lemma \ref{Lemma A(epsilon)}. Hence $A'(t)=0$ for almost every $t$ and $\mathrm{d}A$ is singular.
\subsection{Number of fragments}\label{Subsection number of fragments}
As we just saw in the proof of the singular case of Theorem \ref{absolute continuity}, there is a link between the number of fragments in the growth-fragmentation and the regularity of $A$. Even though we shall not use this relation later on, we state it in a corollary as it might be of independent interest.
\begin{cor}\label{Corollary number of fragments}
Suppose that $\alpha>-\omega_-$, then almost surely the number of fragments with positive mass is infinite for every $t$ such that $a(t)>0$.
Conversely if $\alpha\leq-\omega_-$, then almost surely the number of fragments with positive mass is finite for almost every $t\geq 0$.
\end{cor}
\begin{proof}
The second statement has been established in Subsection \ref{subsubsection singular case}.
If $\alpha>-\omega_-$, then by Theorem \ref{absolute continuity} we have $\mathrm{d}A(t)=a(t)\mathrm{d}t$.
Fix $s,t>0$ such that $t<\zeta$. By \eqref{equation branching + self-similarity of A} we can write
\begin{align}
A(t+s)-A(t)\stackrel{d}{=}\sum_{i=1}^{N_t}X_i(t)^{\omega_-} A_i(s X_i(t)^\alpha),
\end{align}
where $N_t$ is the number of fragments (possibly infinite) with positive mass at time $t$ and $\left\{A_i;\ i=1,\ldots,N_t\right\}$ are i.i.d. copies of $A$. Then, using the above identity, we have
\begin{align*}
\frac{1}{\epsilon}\sum_{i=1}^{N_t}X_i(t)^{\omega_-} A_i(\epsilon X_i(t)^\alpha)
&\xrightarrow[\epsilon\to 0]{\mathrm{a.s.}}a(t).
\end{align*}
Suppose moreover that $N_t<\infty$, then Lemma \ref{Lemma A(epsilon)} implies that $a(t)=0$.
\end{proof}
\section{Approximation of the profile}\label{Section Approximation of the density}
Throughout this section, we assume that $\mathrm{d}A(t)=a(t)\mathrm{d}t$, or equivalently $\alpha>-\omega_-$, by Theorem \ref{absolute continuity}.
Our main goal in what follows is to adapt the arguments of Haas \cite{H04} Section 5 to the growth-fragmentation case. We aim to show that $a$ can be approximated by both the small fragments and the relatively big ones. More precisely, define the processes $M,N$ for all $\epsilon>0,t\geq 0$ by
\begin{align*}
M(t,\epsilon):=&\sum_{i\geq 1}X_i^{\omega_-}(t)\mathds{1}_{\left\{X_i(t)\leq\epsilon\right\}},\\
N(t,\epsilon):=&\sum_{i\geq 1}\mathds{1}_{\left\{X_i(t)>\epsilon\right\}}.
\end{align*}
\begin{thm}\label{Theorem fragments approx density a}
Suppose that $\alpha>-\omega_-$. Then for almost every $t\geq 0$, we have that
\begin{align*}
\epsilon^\alpha M(t,\epsilon)&\xrightarrow[\epsilon\to 0]{\mathrm{a.s.}}\frac{a(t)}{\alpha\kappa'(\omega_-)},\\
\epsilon^{\omega_-+\alpha}N(t,\epsilon)&\xrightarrow[\epsilon\to 0]{\mathrm{a.s.}}\frac{a(t)}{(\omega_-+\alpha)|\kappa'(\omega_-)|}.
\end{align*}
\end{thm}
Note that $\kappa'(\omega_-)\in\intervalleoo{-\infty}{0}$ by \eqref{Cramer hypothesis}.
Theorem \ref{Theorem fragments approx density a} is the analogue of Theorem 7 in \cite{H04}, which deals with self-similar fragmentations (see also \cite{D07} Proposition 4.2 addressing the profile of L\'evy trees associated with fragmentations that are not necessarily self-similar).
In order to prove this result, following Haas we shall focus on the small fragments, since the behaviour of $N(t,\epsilon)$ as $\epsilon\to 0$ can be deduced from that of $M(t,\epsilon)$ by applying Tauberian theorems (as discussed at the end of this section).
\begin{lem}[Analogue of Lemma 8 in \cite{H04}]\label{Adaptation of Lemma 8 of Haas}
Let $I$ be a random variable with density $k$, independent of $\mathbf{X}$. If $\alpha>-\omega_-$, then for almost every $t>0$,
\begin{align*}
\lim_{\epsilon\to 0}\epsilon^\alpha\mathcal{E}\left(M(t,\epsilon I^{\frac{1}{\alpha}})\Big|\mathbf{X}\right)\stackrel{a.s.}{=}a(t).
\end{align*}
\end{lem}
Provided that $\mathbf{X}$ is a Feller process, the proof is almost identical to that of Haas, with the difference that one has to work on an event having a probability arbitrarily close to 1 and ensuring that $\int_0^\infty a^2(t)\,\mathrm{d}t$ has finite expectation (this event can be $H_K:=\{\sup_{u\in\mathbb{U}}\chi_u^*\leq K\}$ where $\chi_u^*:=\sup_{t\geq 0}\chi_u(t)$).
We skip the details of the proof of Lemma \ref{Adaptation of Lemma 8 of Haas}; the Feller property and its proof are given in the Appendix.
The following lemma restates the first convergence in Theorem \ref{Theorem fragments approx density a}:
\begin{lem}\label{Lemma small fragments}
When $\alpha>-\omega_-$, we have for almost every $t\in\mathbb{R}_+$ that
\begin{align*}
\epsilon^\alpha M(t,\epsilon)&\xrightarrow[\epsilon\to 0]{\mathrm{a.s.}}\ a(t)/\alpha\kappa'(\omega_-).
\end{align*}
\end{lem}
\begin{proof}
This proof is again very similar to that of Haas; we thus only focus on verifying that the hypotheses of the Wiener-Pitt theorem (Theorem 4.8.0 of \cite{BGT87}) are satisfied, and refer to the proof of \cite{H04} Theorem 7 to see how it applies to show the convergence of small fragments.
What has to be shown is that the Mellin transform of $I$, defined as $\mathscr{M}_I(ix):=\widehat{\mathbb{E}}_1(I^{ix-1})$, exists and is non-zero for all $x\in\mathbb{R}$.
We already know by Lemma \ref{properties of k} that $\widehat{\mathbb{E}}_1(I^{-1})=\alpha\kappa'(\omega_-)\in\intervalleoo{0}{\infty}$. Let $\Psi$ be the characteristic exponent of $\eta$ defined as $\Psi:\theta\mapsto-\log\widehat{\mathbb{E}}(e^{i\theta\eta(1)})$.
Theorem 2.7(1) in \cite{PS16} shows that
\begin{align*}
\mathscr{M}_I(ix)
=\frac{\Psi(-\alpha x)}{ix}\mathscr{M}_I(1+ix).
\end{align*}
It is not hard to check that $\Psi(-\alpha x)\neq 0$ for $x\neq 0$ (the case $x=0$ being already settled by $\mathscr{M}_I(0)=\widehat{\mathbb{E}}_1(I^{-1})\neq 0$ above).
Moreover, it is also stated in the same theorem that
\begin{align*}
\mathscr{M}_I(1+ix)=\Phi_+(0)\frac{\Gamma(1+ix)}{W_{\Phi_-}(1+ix)}W_{\Phi_+}(-ix),
\end{align*}
where $\Phi_+$ (respectively $\Phi_-$) is the characteristic exponent of the ascending (respectively descending) ladder height process of $\eta$ and $W_{\Phi_+}$ (respectively $W_{\Phi_-}$) is the generalized Weierstrass product of $\Phi_+$ (respectively $\Phi_-$) as in \cite{PS16} (see Kyprianou \cite{K14} or Bertoin \cite{B96} for definitions and details on ladder height processes). In particular, it is well known that since $\eta$ drifts to $-\infty$, we have $\Phi_+(0)>0$. Therefore $W_{\Phi_+}(z)\neq 0$ for all $z\in\mathbb{C}$ with $\mathrm{Re}(z)\geq 0$ by Theorem 3.2 of \cite{PS16}. Furthermore, the same theorem ensures that $W_{\Phi_-}$ is holomorphic on $\left\{z\in\mathbb{C}:\mathrm{Re}(z)>0\right\}$, therefore $|W_{\Phi_-}(1+ix)|<\infty$. Since $\Gamma(1+ix)$ is non-zero, we see that $\mathscr{M}_I(1+ix)\neq 0$; we can then apply the Wiener-Pitt theorem, as planned, giving the claim.
\end{proof}
To conclude the proof of Theorem \ref{Theorem fragments approx density a}, it remains to show that for almost every $t\geq 0$,
\begin{align}\label{equation M sim N}
\frac{-\alpha}{\omega_-}M(t,\epsilon)\underset{\epsilon\to 0}{\sim}\frac{\omega_-+\alpha}{\omega_-}\epsilon^{\omega_-}N(t,\epsilon).
\end{align}
Let $\mu:=\sum_{i\geq 1}\delta_{X_i(t)^{\omega_-}}$ and let $\overline{\mu}(x):=\mu(\intervalleoo{x}{\infty})$. Define $\mathrm{d}f(y)=y\mu(\mathrm{d}y)$. Equation \eqref{equation M sim N} can be shown using Tauberian theorems; we refer to the proof of equation (4) in \cite{B04} to see that $f$ is regularly varying at 0 with index $1-\beta\in\intervalleoo{0}{1}$ if and only if $\overline{\mu}$ is regularly varying at 0 with index $-\beta$. In that case, it holds that
\begin{align*}
\beta\epsilon\overline{\mu}(\epsilon)\underset{\epsilon\to 0}{\sim}(1-\beta)f(\epsilon).
\end{align*}
Now remark that $\overline{\mu}(\epsilon)=N(t,\epsilon^{1/\omega_-})$ and $f(\epsilon)=M(t,\epsilon^{1/\omega_-})$ which implies with Lemma \ref{Lemma small fragments} that $1-\beta=-\alpha/\omega_-\in\intervalleoo{0}{1}$. This proves that \eqref{equation M sim N} holds.
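As a quick consistency check of the constants (a model computation of our own, not needed for the proof), take $\overline{\mu}(x)=x^{-\beta}$ on a neighbourhood of $0$ with $\beta\in\intervalleoo{0}{1}$. Then
\begin{align*}
\mu(\mathrm{d}y)=\beta y^{-\beta-1}\,\mathrm{d}y
\qquad\text{and}\qquad
f(\epsilon)=\int_0^\epsilon y\,\mu(\mathrm{d}y)=\frac{\beta}{1-\beta}\,\epsilon^{1-\beta},
\end{align*}
so that $\beta\epsilon\overline{\mu}(\epsilon)=\beta\epsilon^{1-\beta}=(1-\beta)f(\epsilon)$, in agreement with the display above.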
\section{Hausdorff dimension}\label{Section Hausdorff dimension}
We now study the case $\alpha\leq-\omega_-$ so that $\mathrm{d}A$ is singular with respect to the Lebesgue measure almost surely, by Theorem \ref{absolute continuity}. We describe the set on which $\mathrm{d}A$ is concentrated through its Hausdorff dimension, see \cite{F85} for background.
Recall by Lemma \ref{Lemma A(epsilon)} that $A$ is strictly increasing on $\intervalleoo{0}{\zeta}$ so the support of $\mathrm{d}A$ is exactly $\intervalleff{0}{\zeta}$. However $\mathrm{dim}_H(\mathrm{d}A)$ is not necessarily 1, as shown in the following theorem:
\begin{thm}\label{Theorem Hausdorff dimension}
Suppose $-\omega_+<\alpha\leq -\omega_-$, then it holds that:
\begin{align*}
\mathrm{dim}_H(\mathrm{d}A)=\frac{\omega_-}{-\alpha},\qquad \mathcal{P}_1\text{-a.s}.
\end{align*}
Furthermore, $\mathrm{dim}_H(\mathrm{d}A)\geq -\omega_-/\alpha$ holds for any value of $\alpha\leq-\omega_-$.
\end{thm}
\begin{rem}[Hölder continuity]
Copying the argument of Haas \cite{H04} Proposition 12(i), Theorem \ref{Theorem Hausdorff dimension} directly implies that if $-\omega_+<\alpha\leq -\omega_-$, then $A$ is $\gamma$-Hölder continuous for every $\gamma<-\omega_-/(2\alpha)$.
\end{rem}
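For instance, plugging the pure-fragmentation value $\omega_-=1$ (see the comment after Theorem \ref{absolute continuity}) into Theorem \ref{Theorem Hausdorff dimension}, a direct substitution gives $\mathrm{dim}_H(\mathrm{d}A)=1/|\alpha|$ for $-\omega_+<\alpha\leq -1$.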
\subsection{The lower bound}
Frostman's Lemma (see e.g. \cite{F85} Corollary 6.6(a)), that we now recall, is the key to the lower bound.
\begin{lem}[Frostman's Lemma]\label{Lemma Frostman Lemma}
Let $b\in\intervalleof{0}{1}$ and let $\mu$ be a finite measure on $\mathbb{R}$. If
\begin{align*}
\mathcal{I}_b(\mu):=\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{\mathrm{d}\mu(u)\mathrm{d}\mu(v)}{|u-v|^b}<\infty,
\end{align*}
then $\dim_H(\mu)\geq b$.
\end{lem}
\begin{proof}[\textbf{Proof of Theorem \ref{Theorem Hausdorff dimension}: the lower bound}]
In the light of Lemma \ref{Lemma Frostman Lemma}, it is sufficient to show that
\begin{align}\label{equation b-energie}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))<\infty,
\end{align}
for all $b<\frac{\omega_-}{-\alpha}$ sufficiently close to $\frac{\omega_-}{-\alpha}$.
We write
\begin{align*}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))
&=\mathcal{E}_1\left(\int_0^\infty\int_0^\infty\frac{\mathrm{d}A(u)\mathrm{d}A(v)}{|u-v|^b}\right).\\
\shortintertext{As previously, we sample a first spine applying Lemma \ref{Lemma A and random leaf} and we get}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))
&=\widehat{\mathcal{E}}_1\left(\mathcal{E}_1\left(\int_0^\infty\frac{\mathrm{d}A(v)}{|\zeta_{\sigma}-v|^b}\Big|(\sigma(t))_{t\leq \zeta_{\sigma}}\right)\right)\\
&=\widehat{\mathbb{E}}_1\left(\mathcal{E}_1\left(\int_0^\infty\frac{\mathrm{d}A(v)}{|I_1-v|^b}\Big|(Y(t))_{t\leq I_1}\right)\right),
\end{align*}
where $I_1$ denotes the lifetime of $Y$. We then decompose $A$ as in the proof of Theorem \ref{absolute continuity} and write
\begin{align*}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))
&=\widehat{\mathbb{E}}_1\left(\sum_{s<I_1}\mathcal{E}_{|\Delta_- Y(s)|}\left(\int_s^\infty\frac{\mathrm{d}A_s(v)}{|I_1-v|^b}\Big|(Y(t))_{t\leq I_1}\right)\right),
\end{align*}
where $A_s$ is the intrinsic area function associated with the restriction of $\mathcal{A}$ to $\partial\mathbb{U}_s$, the leaves of the subtree generated by the cell born at time $s$. In particular, $A_s$ has the same conditional distribution as $A$ under $\mathcal{P}_{|\Delta_- Y(s)|}$ shifted by $s$ (see \cite{BBCK16} Theorem 4.7). We rewrite the right-hand side above, denoting $A_s^*:=A_s(s+\cdot)$, and get
\begin{align*}
&\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))\\
&\hspace{1cm}=\widehat{\mathbb{E}}_1\left(\sum_{s<I_1}\mathcal{E}_{|\Delta_- Y(s)|}\left(\int_0^\infty\frac{\mathrm{d}A_s^*(v)}{|I_1-s-v|^b}\Big|(Y(t))_{t\leq I_1}\right)\right)\\
&\hspace{1cm}=\widehat{\mathbb{E}}_1\left(\sum_{s<I_1}|\Delta_- Y(s)|^{\omega_-}\mathcal{E}_1\left(\int_0^\infty\frac{\mathrm{d}A_s^*(v)}{|I_1-s-|\Delta_- Y(s)|^{-\alpha}v|^b}\Big|(Y(t))_{t\leq I_1}\right)\right),
\end{align*}
where we applied the self-similarity of $A_s^*$ for the last equality. Using Lemmas \ref{Lemma A and random leaf} and \ref{Lemma spine distributed as Y}, let $I_2$ be a random variable with density $k$ independent of $Y$ and write
\begin{align*}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))
&=\widehat{\mathbb{E}}_1\left(\sum_{s<I_1}|\Delta_- Y(s)|^{\omega_-}\widehat{\mathbb{E}}_1\left(\big|I_1-s-|\Delta_- Y(s)|^{-\alpha}I_2\big|^{-b}\Big|(Y(t))_{t\leq I_1}\right)\right)\\
&=\widehat{\mathbb{E}}_1\left(\sum_{s<I_1}|\Delta_- Y(s)|^{\omega_-}\big|I_1-s-|\Delta_- Y(s)|^{-\alpha}I_2\big|^{-b}\right).
\end{align*}
We use Theorem 57 of \cite{DM2}, justified again by positivity: the optional projection of $s\mapsto|I_1-s-|\Delta_-Y(s)|^{-\alpha}I_2|^{-b}$ is given by
\begin{align*}
s\mapsto \int_0^\infty\mathrm{d}u\int_0^\infty\mathrm{d}v\frac{k(u)k(v)}{|Y(s)^{-\alpha}u-|\Delta_-Y(s)|^{-\alpha}v|^b}.
\end{align*}
We hence obtain
\begin{align*}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))
&=\widehat{\mathbb{E}}_1\left(\sum_{s<I_1}|\Delta_-Y(s)|^{\omega_-}\int_0^\infty\mathrm{d}u\int_0^\infty\mathrm{d}v\frac{k(u)k(v)}{|Y(s)^{-\alpha}u-|\Delta_-Y(s)|^{-\alpha}v|^b}\right).
\end{align*}
Assuming without loss of generality that $Y$ is homogeneous, we see that the latter is equal to
\begin{align*}
&\widehat{\mathbb{E}}_1\Bigg(\sum_{s>0}e^{(\omega_-+\alpha b)\eta(s-)}(1-e^{\Delta_-\eta(s)})^{\omega_-}\\
&\qquad\qquad\qquad\qquad\times\int_0^\infty\mathrm{d}u\int_0^\infty\mathrm{d}v\frac{k(u)k(v)}{|e^{-\alpha\Delta_-\eta(s)}u-(1-e^{\Delta_-\eta(s)})^{-\alpha}v|^b}\Bigg).
\end{align*}
Lemma \ref{Lemma technical inequalities} in Appendix finally yields that
\begin{align*}
\mathcal{E}_1(\mathcal{I}_b(\mathrm{d}A))
&\leq C\widehat{\mathbb{E}}_1\left(\sum_{s>0}e^{(\omega_-+\alpha b)\eta(s-)}(1-e^{\Delta_-\eta(s)})^{\omega_-}\right),
\end{align*}
where $C$ is a deterministic finite constant. The compensation formula then shows that the right-hand side above is equal to
\begin{align*}
C\int_0^\infty e^{s\kappa(2\omega_-+\alpha b)}\mathrm{d}s\int_{\mathbb{R}_-}(1-e^y)^{\omega_-}e^{\omega_-y}(\Lambda+\widetilde{\Lambda})(\mathrm{d}y).
\end{align*}
The last integral being finite by definition of $\omega_-$, we see that this expression is finite whenever $\kappa(2\omega_-+\alpha b)<0$, which is the case if and only if $b\in\intervalleoo{\frac{2\omega_--\omega_+}{-\alpha}}{\frac{\omega_-}{-\alpha}}$ (this interval is never empty since we assume that $\omega_-<\omega_+$). We thus have shown that
\begin{align*}
b\in\intervalleoo{\frac{2\omega_--\omega_+}{-\alpha}}{\frac{\omega_-}{-\alpha}}\quad\Rightarrow\quad\eqref{equation b-energie}\quad\Rightarrow\quad\mathcal{I}_b(\mathrm{d}A)<\infty\quad\text{a.s.},
\end{align*}
which by Lemma \ref{Lemma Frostman Lemma} gives the lower bound for any $\alpha\leq-\omega_-$.
\end{proof}
\subsection{The upper bound}
In the pure fragmentation setting, the analogue of $A$ is the function $M$ of the loss of mass. The upper bound of $\mathrm{dim}_H(\mathrm{d}M)$ has been obtained by Haas and Miermont in \cite{HM04} by constructing the CRT induced by the fragmentation. They first investigated the Hausdorff dimension of the leaves of the tree, then they deduced the upper bound for $\mathrm{dim}_H(\mathrm{d}M)$ using the fact that the image of a set by any surjective Lipschitz mapping (in their case the cumulative height profile) has Hausdorff dimension at most equal to that of the original set (this is a direct consequence of Lemma 1.8 in \cite{F85}). Since Rembart and Winkel \cite{RW16} already provided the Hausdorff dimension of the leaves $\mathcal{L}(\mathcal{T})$ of the CRT, we can use the same argument as Haas and Miermont to obtain the upper bound. For this reason we now work on $(\mathcal{T},\mathcal{A}_{\mathcal{T}})$ instead of $(\overline{\mathbb{U}},\mathcal{A})$.
It is not hard to see from its definition in Section \ref{Section preliminaries} that the height function $\mathrm{ht}$ is Lipschitz with respect to the metric $d$.
\begin{proof}[\textbf{Proof of the upper bound}]
Recall that $\mathcal{A}_{\mathcal{T}}$ is supported on $\mathcal{L}(\mathcal{T})$ (more precisely the subset of $\mathcal{L}(\mathcal{T})$ corresponding to leaves in $\partial\mathbb{U}$).
By definition of $A$ \eqref{equation def A}, $\mathrm{d}A(\mathrm{ht}(\mathcal{L}(\mathcal{T})))$ is equal to its total mass $\mathcal{M}$. Therefore,
\begin{align*}
\mathrm{dim}_H(\mathrm{d}A)\leq\mathrm{dim}_H(\mathrm{ht}(\mathcal{L}(\mathcal{T})))\leq\mathrm{dim}_H(\mathcal{L}(\mathcal{T}))
\end{align*}
since $\mathrm{ht}$ is Lipschitz. By Theorem 4.5 in \cite{RW16}, $\mathrm{dim}_H(\mathcal{L}(\mathcal{T}))=-\omega_-/\alpha$, which gives the claim.
\end{proof}
\section{Application to Boltzmann random planar maps}
In \cite{BBCK16}, the authors showed that by cutting particular Boltzmann random maps at heights, one obtains a collection of cycles whose lengths are described in the scaling limit by a specific family of growth-fragmentations with cumulant function of the form
\begin{align*}
\kappa_\theta(q):=\frac{\cos(\pi(q-\theta))}{\sin(\pi(q-2\theta))}\cdot\frac{\Gamma(q-\theta)}{\Gamma(q-2\theta)},\qquad q\in\intervalleoo{\theta}{2\theta+1},
\end{align*}
with self-similarity index $\alpha=1-\theta$, for some parameter $\theta\in\intervalleof{1}{3/2}$. (The case $\theta=3/2$ corresponds to the Brownian map.)
The Cram\'er hypothesis \eqref{Cramer hypothesis} holds with $\omega_-=\theta+1/2$ and $\omega_+=\theta+3/2$, so that the intrinsic area of the ball of radius $r$ is an absolutely continuous function of $r$, by Theorem \ref{absolute continuity}. The small cycle lengths in the random maps are related to $a$ by Theorem \ref{Theorem fragments approx density a} in this paper and Theorem 6.8 in \cite{BBCK16}.
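Let us sketch the direct verification of these values (a simple check, at least for $\theta<3/2$). Both $\omega_-=\theta+1/2$ and $\omega_+=\theta+3/2$ lie in $\intervalleoo{\theta}{2\theta+1}$, and
\begin{align*}
\cos\left(\pi(\omega_--\theta)\right)=\cos(\pi/2)=0,
\qquad
\cos\left(\pi(\omega_+-\theta)\right)=\cos(3\pi/2)=0,
\end{align*}
while at these two points the factors $\sin(\pi(q-2\theta))$ and $\Gamma(q-\theta)/\Gamma(q-2\theta)$ are finite and non-zero; hence $\kappa_\theta(\omega_-)=\kappa_\theta(\omega_+)=0$. (For $\theta=3/2$, the ratio $\cos(\pi(q-\theta))/\sin(\pi(q-2\theta))$ is of the form $0/0$ at $q=\omega_\pm$ and has to be interpreted as a limit.)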
\section*{Acknowledgement}
I would like to thank Jean Bertoin for his constant support, thorough supervision and genuine kindness. I am also grateful to Bastien Mallein for numerous helpful discussions.
\section*{Appendix}
We state and prove here a technical result on the density $k$ that we have used in the proof of the lower bound in Theorem \ref{Theorem Hausdorff dimension}, as well as the Feller property of growth-fragmentations mentioned in Section \ref{Section Approximation of the density}.
\begin{lem}\label{Lemma technical inequalities}
Let $b,c\in\intervalleoo{0}{1}$. We have that
\begin{align*}
\int_0^{\infty}\mathrm{d}u\int_0^{\infty}\mathrm{d}v\frac{k(u)k(v)}{|uc^{-\alpha}-v(1-c)^{-\alpha}|^b}
&\leq C,
\end{align*}
where $C$ is a finite constant not depending on $c$.
\end{lem}
\begin{proof}
We have
\begin{align*}
\int_0^{\infty}\mathrm{d}u\int_0^{\infty}\mathrm{d}v&\frac{k(u)k(v)}{|uc^{-\alpha}-v(1-c)^{-\alpha}|^b}\\
&\qquad\qquad=c^\alpha(1-c)^\alpha\int_0^\infty k(uc^\alpha)\mathrm{d}u\int_0^\infty \mathrm{d}v\frac{k(v(1-c)^\alpha)}{|u-v|^b}.
\end{align*}
Consider the last integral, we have that
\begin{align*}
\int_{0}^{\infty}\frac{k(v(1-c)^\alpha)}{|u-v|^b}\mathrm{d}v
&\leq\int_{u-1}^{u+1}\frac{||k||_\infty}{|u-v|^b}\mathrm{d}v+\int_0^\infty k(v(1-c)^\alpha)\mathrm{d}v\\
&\leq\int_{u-1}^{u+1}\frac{||k||_\infty}{|u-v|^b}\mathrm{d}v+\int_0^\infty k(v)\mathrm{d}v\\
&=C,
\end{align*}
where $C$ denotes a constant not depending on $c$.
This yields that
\begin{align*}
\int_0^{\infty}\mathrm{d}u\int_0^{\infty}\mathrm{d}v\frac{k(u)k(v)}{|uc^{-\alpha}-v(1-c)^{-\alpha}|^b}
&\leq c^\alpha(1-c)^\alpha\int_0^\infty Ck(uc^\alpha)\mathrm{d}u\\
&=(1-c)^\alpha C.
\end{align*}
Notice that the same arguments apply when the roles of $c$ and $(1-c)$ are exchanged, which entails that the upper bound that we just obtained holds with $(c\vee(1-c))^\alpha$ instead of $(1-c)^\alpha$. One remarks that $c\vee(1-c)\geq 1/2$ and the claim follows.
\end{proof}
For $q>0$, we define $\ell^{q\downarrow}$ as the subset of $\ell^q$ consisting of non-increasing sequences converging to $0$, and denote by $||\cdot||_q$ the $q$-norm.
\begin{lem}[Feller's Property]\label{Lemma Feller}
The law of the growth-fragmentation $\mathbf{X}$ satisfies the following Feller property: let $\underline{x}_n$, $n\in\mathbb{N}$, and $\underline{x}$ be elements of $\ell^{\omega_-\downarrow}$ such that $(\underline{x}_n)_{n\geq 1}$ converges in $\ell^{\omega_-\downarrow}$ to $\underline{x}$. Then it holds that
\begin{align*}
\mathcal{P}_{\underline{x}_n}\Rightarrow\mathcal{P}_{\underline{x}},\quad\text{as }n\to\infty,
\end{align*}
where $\Rightarrow$ means weak convergence in the sense of finite-dimensional distributions in $\ell^{\omega_-\downarrow}$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{Lemma Feller}]
We denote $\underline{x}_n:=(x_{n,1},x_{n,2},\cdots)$ and $\underline{x}:=(x_1,x_2,\cdots)$.
Let $\mathbf{X}^{(n)}$ (respectively $\mathbf{Y}$) be a self-similar growth-fragmentation process with distribution $\mathcal{P}_{\underline{x}_n}$ (respectively $\mathcal{P}_{\underline{x}}$). We shall show that the Wasserstein distance between $\mathcal{P}_{\underline{x}_n}$ and $\mathcal{P}_{\underline{x}}$ converges to zero, which will entail the claim.
Let $t>0$ and write
\begin{align}\label{equation decomposition GF's}
\mathcal{E}\left(||\mathbf{X}^{(n)}(t)-\mathbf{Y}(t)||_{\omega_-}^{\omega_-}\right)
&\leq\mathcal{E}\left(\sum_{k\geq 1}||\mathbf{X}^{(n)}_k(t)-\mathbf{Y}_k(t)||_{\omega_-}^{\omega_-}\right),
\end{align}
where $\mathbf{X}^{(n)}_k$ and $\mathbf{Y}_k$ are growth-fragmentations with respective distributions $\mathcal{P}_{x_{n,k}}$, $\mathcal{P}_{x_k}$. In the same vein as in the proof of Proposition 2 in \cite{B17} (viz the branching property for growth-fragmentations), we fix $\epsilon>0$ and define $\mathbf{X}^{(n)}_{k,\epsilon}$ (respectively $\mathbf{Y}_{k,\epsilon}$) as the growth-fragmentation obtained from $\mathbf{X}^{(n)}_k$ (respectively $\mathbf{Y}_k$) by killing every fragment, together with those it generates in the future, as soon as it reaches a size smaller than $\epsilon$. The triangle inequality on the right-hand side of \eqref{equation decomposition GF's} entails that
\begin{align}\label{equation middle term GF's}
\mathcal{E}\left(||\mathbf{X}^{(n)}(t)-\mathbf{Y}(t)||_{\omega_-}^{\omega_-}\right)
\leq 3^{\omega_-}\left(A_n+B_n+C\right),
\end{align}
where
\begin{align*}
A_n:=&\mathcal{E}\left(\sum_{k\geq 1}||\mathbf{X}^{(n)}_k(t)-\mathbf{X}^{(n)}_{k,\epsilon}(t)||_{\omega_-}^{\omega_-}\right)\\
B_n:=&\mathcal{E}\left(\sum_{k\geq 1}||\mathbf{X}^{(n)}_{k,\epsilon}(t)-\mathbf{Y}_{k,\epsilon}(t)||_{\omega_-}^{\omega_-}\right)\\
C:=&\mathcal{E}\left(\sum_{k\geq 1}||\mathbf{Y}_{k,\epsilon}(t)-\mathbf{Y}_k(t)||_{\omega_-}^{\omega_-}\right).
\end{align*}
Recall that $X_j(t)$ is the size of the $j$th largest fragment in a growth-fragmentation at time $t$. We define $X_j^*(t)$ as the infimum of the sizes of the ancestors of $X_j(t)$ before time $t$. In particular, if $X_j(t)>0$, then $X_j^*(t)>0$.
Fix $a>\epsilon$ and write
\begin{align*}
A_n
=\sum_{k\geq 1}\mathds{1}_{\left\{x_{n,k}\leq\epsilon\right\}}&\mathcal{E}_{x_{n,k}}\left(||\mathbf{X}_k^{(n)}(t)||_{\omega_-}^{\omega_-}\right)\\
+\mathds{1}_{\left\{x_{n,k}>\epsilon\right\}}&\mathcal{E}_{x_{n,k}}\left(\sum_{j\geq 1}X_j(t)^{\omega_-}\mathds{1}_{\left\{0<X_j^*(t)\leq\epsilon,\ X_j(t)\leq a\right\}}\right)\\
+\mathds{1}_{\left\{x_{n,k}>\epsilon\right\}}&\mathcal{E}_{x_{n,k}}\left(\sum_{j\geq 1}X_j(t)^{\omega_-}\mathds{1}_{\left\{0<X_j^*(t)\leq\epsilon,\ X_j(t)>a\right\}}\right).
\end{align*}
The first part is smaller than $\sum_{k\geq 1}x_{n,k}^{\omega_-}\mathds{1}_{\left\{x_{n,k}\leq\epsilon\right\}}$ by Theorem 2 in \cite{B17}. Applying Proposition 4.6 and Theorem 4.7 in \cite{BBCK16}, we bound the second part by
\begin{align*}
&\sum_{k\geq 1}\mathds{1}_{\left\{x_{n,k}>\epsilon\right\}}\mathcal{E}_{x_{n,k}}\left(\sum_{j\geq 1}X_j(t)^{\omega_-}\mathds{1}_{\left\{X_j(t)\leq a\right\}}\right)\\
&\hspace{3cm}\leq\sum_{k\geq 1}\mathds{1}_{\left\{x_{n,k}>\epsilon\right\}}x_{n,k}^{\omega_-}\widehat{\mathbb{P}}_{x_{n,k}}\left(0<Y(t)\leq a\right).
\end{align*}
Hence, since $Y$ is a Feller process, we have that
\begin{align}\label{equation limsup A_n}
\limsup_{n\to\infty}A_n
&\leq\sum_{k\geq 1}\mathds{1}_{\left\{x_k\leq\epsilon\right\}}x_k^{\omega_-}+\mathds{1}_{\left\{x_k\geq\epsilon\right\}}x_k^{\omega_-}\widehat{\mathbb{P}}_{x_k}\left(0<Y(t)\leq a\right)\nonumber\\
&\hspace{2.5cm}+\mathds{1}_{\left\{x_k\geq\epsilon\right\}}\mathcal{E}_{x_k}\left(\sum_{j\geq 1}X_j(t)^{\omega_-}\mathds{1}_{\left\{0<X_j^*(t)\leq\epsilon,\ X_j(t)\geq a\right\}}\right)
\end{align}
(the latter expectations follow from self-similarity, Fatou's Lemma and stochastic continuity).
The sum of the terms with $x_k\leq\epsilon$ can be made arbitrarily small by taking $\epsilon$ small, and so can the second part of the sum by taking $a$ small, by dominated convergence.
To see that the last terms of \eqref{equation limsup A_n} can also be made as small as one wishes by suitable choices of $\epsilon$ and $a$, we refer to the proof of Proposition 2 in \cite{B17}. Therefore, it holds that $\lim_{n\to\infty}A_n=0$. Similar arguments can be used to deal with $C$.
It remains to show that $\lim_{n\to\infty}B_n=0$ and the claim will follow from \eqref{equation middle term GF's}. Using the self-similarity, we have that
\begin{align*}
B_n
&=\mathcal{E}\left(\sum_{k\geq 1}||\mathds{1}_{\left\{x_{n,k}>\epsilon\right\}}x_{n,k}\mathbf{X}^{(n)}_{k,\epsilon/x_{n,k}}(tx_{n,k}^\alpha)-\mathds{1}_{\left\{x_k>\epsilon\right\}}x_k\mathbf{Y}_{k,\epsilon/x_k}(tx_k^\alpha)||_{\omega_-}^{\omega_-}\right).
\end{align*}
Now each growth-fragmentation has distribution $\mathcal{P}_1$, and since the Wasserstein metric is given by the infimum over the set of joint distributions, we can assume that $\mathbf{Y}_k=\mathbf{X}^{(n)}_k$. We drop the indicator functions, letting $\mathbf{Y}_{k,\delta}$ be the null sequence whenever $\delta\geq 1$. Since $\underline{x}_n\xrightarrow{\ell^{\omega_-}}\underline{x}$, we just need to show the following convergence:
\begin{align}\label{equation B_n to 0}
\sum_{k\geq 1}x_k^{\omega_-}\mathcal{E}\left(||\mathbf{Y}_{k,\epsilon/x_{n,k}}(tx_{n,k}^\alpha)-\mathbf{Y}_{k,\epsilon/x_k}(tx_k^\alpha)||_{\omega_-}^{\omega_-}\right)\underset{n\to\infty}{\longrightarrow}0.
\end{align}
The left-hand side is bounded by
\begin{align*}
&\sum_{k\geq 1}x_k^{\omega_-}\mathcal{E}\Big(||\mathbf{Y}_{k,\epsilon/x_{n,k}}(tx_{n,k}^{\alpha})-\mathbf{Y}_{k,\epsilon/x_k}(tx_{n,k}^{\alpha})||_{\omega_-}^{\omega_-}\\
&\hspace{6cm}+||\mathbf{Y}_{k,\epsilon/x_k}(tx_{n,k}^{\alpha})-\mathbf{Y}_{k,\epsilon/x_k}(tx_k^{\alpha})||_{\omega_-}^{\omega_-}\Big).
\end{align*}
The second part converges to 0 as $n\to\infty$ by stochastic continuity. The first part contains only fragments whose ancestors have minimum size between $\epsilon/(x_{n,k}\vee x_k)$ and $\epsilon/(x_{n,k}\wedge x_k)$. The dominated convergence theorem yields the claim, that is \eqref{equation B_n to 0} holds, which concludes the proof.
\end{proof}
\bibliographystyle{plain}
\section{Examples for local parameterization functions}
\label{sec:app1}
\begin{example}
Taylor polynomials of degree $k$ at $t=t_n$,
\begin{equation}
\label{eq:taylor}
\widetilde{\gamma}_n(\tau; \boldsymbol{\Gamma}_n)=\sum_{j=0}^k \widehat{\gamma}^n_j\, \tau^j
\end{equation}
where $\widehat{\gamma}^n_j=\frac{\gamma^{(j)}(t_n)}{j!}$, for $j=0, 1, \ldots, k$.
\end{example}
\begin{example}
Interpolating polynomials at $k+1$ equally spaced points in $[t_n, t_{n+1}]$
\begin{equation}
\label{eq:interp}
\widetilde{\gamma}_n(\tau;\boldsymbol{\Gamma}_n)=\sum_{j=0}^k \widehat{\gamma}_j^n l_j(\tau),
\end{equation}
where
\begin{equation*}
l_j(\tau)=\prod_{\substack{i=0,\\ i\neq j}}^k \,\frac{k\tau-i\delta_n}{(j-i)\delta_n}, \quad j=0, 1, \ldots, k,
\end{equation*}
are the Lagrange basis polynomials. The parameters are the point values of $\gamma$ at equally spaced points in $[t_n, t_{n+1}]$, i.e., $\widehat{\gamma}_j^n=\gamma(t_n+j\delta_n/k)$, for $j=0, 1, 2, \ldots, k$.
\end{example}
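For concreteness, the following short sketch (our own illustration, not code from any reference; all names are ours) evaluates the parameterization \eqref{eq:interp} from the samples $\widehat{\gamma}_j^n$; the final lines check that it reproduces the samples at the interpolation nodes.
\begin{verbatim}
import numpy as np

# Sketch (illustration only): evaluate the local interpolating-polynomial
# parameterization gamma_tilde_n(tau) on [0, delta_n] from the k+1 samples
# gamma_hat[j] = gamma(t_n + j*delta_n/k), using the Lagrange basis l_j.
def interp_param(gamma_hat, delta_n, tau):
    k = len(gamma_hat) - 1
    value = 0.0
    for j in range(k + 1):
        l_j = 1.0
        for i in range(k + 1):
            if i != j:
                l_j *= (k * tau - i * delta_n) / ((j - i) * delta_n)
        value += gamma_hat[j] * l_j
    return value

# Sanity check with a toy input gamma(t) = sin(2 pi t) on [0.3, 0.4]:
gamma = lambda t: np.sin(2 * np.pi * t)
t_n, delta_n, k = 0.3, 0.1, 3
gamma_hat = [gamma(t_n + j * delta_n / k) for j in range(k + 1)]
print(interp_param(gamma_hat, delta_n, delta_n / k), gamma_hat[1])
\end{verbatim}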
\begin{example}
The $L^2$ projection into the polynomial space $\mathbb{P}^k([t_n ,t_{n+1}])$.
\begin{equation}
\widetilde{\gamma}_n(\tau;\boldsymbol{\Gamma}_n)=\sum_{j=0}^k \widehat{\gamma}_j^n \widehat{\phi}_j(\tau)
\end{equation}
where $$\widehat{\phi}_j(\tau)=\sqrt{\frac{2j+1}{\delta_n}} p_j\left(\frac{2}{\delta_n}\tau-1\right), \quad \tau\in [0, \delta_n]$$
and $\{p_j\}_{j=0}^k$ are the Legendre polynomials on the reference interval $[-1, 1]$ defined by the following recursive relationship:
\begin{equation}
(j+1)p_{j+1}(x)=(2j+1)\, x\, p_j(x)-j\,p_{j-1}(x), \quad p_0(x)=1, \quad p_1(x)=x,
\end{equation}
for $x\in [-1, 1]$. The parameters $\{\widehat{\gamma}^n_j\}_{j=0}^k$ are defined by
\begin{equation}
\widehat{\gamma}_j^n=\int_{t_{n}}^{t_{n+1}} \gamma(t)\widehat{\phi}_j(t-t_n)\,dt
=\sqrt{\frac{(2j+1)\delta_n}{4}} \int_{-1}^1 \gamma\left( \frac{\delta_n}{2} (s+1)+t_n\right) p_j(s)\,ds.
\end{equation}
\end{example}
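Numerically, these coefficients can be approximated by Gauss-Legendre quadrature applied to the rescaled integral in the last display; a minimal sketch (our own illustration, with illustrative names) follows.
\begin{verbatim}
import numpy as np

# Sketch (illustration only): compute the L^2-projection coefficients
# gamma_j^n on [t_n, t_n + delta_n] by Gauss-Legendre quadrature, using
# the rescaled integral over [-1, 1] in the last display.
def legendre_coeffs(gamma, t_n, delta_n, k, n_quad=16):
    s, w = np.polynomial.legendre.leggauss(n_quad)         # nodes, weights
    vals = gamma(0.5 * delta_n * (s + 1.0) + t_n)          # gamma at mapped nodes
    coeffs = []
    for j in range(k + 1):
        p_j = np.polynomial.legendre.Legendre.basis(j)(s)  # Legendre p_j(s)
        coeffs.append(np.sqrt((2 * j + 1) * delta_n / 4.0)
                      * np.dot(w, vals * p_j))
    return np.array(coeffs)

print(legendre_coeffs(lambda t: np.sin(2 * np.pi * t), 0.3, 0.1, k=3))
\end{verbatim}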
\section{A Proof of Proposition \ref{prop:NN_error}}
\label{sec:app2}
\begin{proof}
Let us define $\widetilde{\boldsymbol{\Psi}}=\widehat{\mathbf{I}}+\widetilde{\pphi}$ as the exact evolution operator \eqref{evo_mod} for the modified system, where $\widehat{\mathbf{I}}$ is defined in \eqref{hatI}. Then for any $\mathbf{z}, \mathbf{y}\in {\mathbb R}^d$, and $(\boldsymbol{\Lambda}, \delta)\in {\mathbb R}^{n_b}\times {\mathbb R}$ we have
\begin{align}
\label{eq: B1}
\nonumber
\abs{\widehat{\N}(\mathbf{z}, \boldsymbol{\Lambda}, \delta; \Theta^*)-{\widetilde{\boldsymbol{\Psi}}}(\mathbf{y}, \boldsymbol{\Lambda}, \delta)}
&\leq \abs{\widehat{\N}(\mathbf{z}, \boldsymbol{\Lambda}, \delta; \Theta^*)-\widetilde{\boldsymbol{\Psi}}(\mathbf{z}, \boldsymbol{\Lambda}, \delta)}+\abs{\widetilde{\boldsymbol{\Psi}}(\mathbf{z}, \boldsymbol{\Lambda}, \delta)-{\widetilde{\boldsymbol{\Psi}}}(\mathbf{y}, \boldsymbol{\Lambda}, \delta)}\\\nonumber
&=\abs{\mathbf{N}(\mathbf{z}, \boldsymbol{\Lambda}, \delta; \Theta^*)-\widetilde{\pphi}(\mathbf{z}, \boldsymbol{\Lambda}, \delta)}+\abs{\widetilde{\boldsymbol{\Psi}}(\mathbf{z}, \boldsymbol{\Lambda}, \delta)-{\widetilde{\boldsymbol{\Psi}}}(\mathbf{y}, \boldsymbol{\Lambda}, \delta)}\\
&\leq \mathcal{E}+e^{L_1\delta}|\mathbf{z}-\mathbf{y}|
\end{align}
where in the last step we have used \eqref{eq:err_NN} and the classical result on the continuity of dynamical systems with respect to the initial data; see \cite[p. 109]{stuart1998dynamical}.
To proceed, let us further set $\widetilde{\boldsymbol{\Psi}}_k(\cdot)=\widetilde{\boldsymbol{\Psi}}(\cdot, \boldsymbol{\Gamma}_k, \delta_k)$ and $\widehat{\boldsymbol{\Psi}}_k(\cdot)=\widehat{\N}(\cdot, \boldsymbol{\Gamma}_k, \delta_k;\Theta^*)$.
Then for $t=\sum_{k=0}^{n-1} \delta_k$, by \eqref{evo_mod} and \eqref{eq:prediction}, the solution and the approximation at time $t$ can be represented as $n$ compositions of one-step evolution operators as below
\begin{equation*}
\widetilde{\mathbf{x}}(t)=\widetilde{\boldsymbol{\Psi}}_{n-1}\circ \widetilde{\boldsymbol{\Psi}}_{n-2} \circ \cdots \circ \widetilde{\boldsymbol{\Psi}}_0(\mathbf{x}_0),\quad
\widehat{\mathbf{x}}(t)=\widehat{\boldsymbol{\Psi}}_{n-1}\circ \widehat{\boldsymbol{\Psi}}_{n-2} \circ \cdots \circ \widehat{\boldsymbol{\Psi}}_0(\mathbf{x}_0).
\end{equation*}
Then by applying \eqref{eq: B1} recursively, we have
\begin{align*}
&|\wh{\mathbf{x}}(t)-\widetilde{\mathbf{x}}(t)|\\
= & \abs{\widehat{\boldsymbol{\Psi}}_{n-1}\circ\widehat{\boldsymbol{\Psi}}_{n-2}\circ \cdots \circ\widehat{\boldsymbol{\Psi}}_{0}(\mathbf{x}_0)-{\widetilde{\boldsymbol{\Psi}}}_{n-1}\circ {\widetilde{\boldsymbol{\Psi}}}_{{n-2}}\circ \cdots \circ {\widetilde{\boldsymbol{\Psi}}}_{0}(\mathbf{x}_0)}\\
\leq & \mathcal{E}+e^{L_1\delta_{n-1}}\abs{\wh{\boldsymbol{\Psi}}_{n-2}\circ \cdots \circ\wh{\boldsymbol{\Psi}}_{0}(\mathbf{x}_0)- {\widetilde{\boldsymbol{\Psi}}}_{{n-2}}\circ \cdots \circ {\widetilde{\boldsymbol{\Psi}}}_{0}(\mathbf{x}_0)}\\
\leq & \ldots\\
\leq & \mathcal{E}\left(1+e^{L_1\delta_{n-1}}+e^{L_1(\delta_{n-1}+\delta_{n-2})}+\ldots+e^{L_1\sum_{i=1}^{n-1}\delta_i}\right)\\
\leq & \mathcal{E}\left(1+e^{L_1\Delta}+e^{2L_1\Delta}+\ldots+e^{(n-1)L_1\Delta}\right)\\
= & \frac{e^{nL_1\Delta}-1}{e^{L_1\Delta}-1}\,\mathcal{E}
\end{align*}
where $\Delta:=\max_{0\leq k\leq n-1}\delta_k$, which implies the result \eqref{eq:approx_err}.
\end{proof}
\section{Conclusion} \label{sec:conclusions}
In this paper we presented a numerical approach for learning unknown non-autonomous dynamical systems using observations of system states. To circumvent the difficulty posed by the non-autonomous nature of the system, the system states are expressed as piecewise integrations over
time. The piecewise integrals are then transformed into parametric form, via a local parameterization procedure of the external time-dependent inputs. We then designed a deep neural network (DNN) structure to model the parametric piecewise integrals. Once trained on sufficient data, the DNN model can be used recursively over time to conduct system predictions for other external inputs. The numerical examples in the paper suggest that the methodology holds promise for more complex applications.
\section{Numerical Examples} \label{sec:examples}
In this section, we present numerical examples to verify the properties
of the proposed methods. Since our purpose is to validate
the proposed deep learning method, we employ synthetic data generated
from known dynamical systems with known time-dependent inputs. The
training data are generated by solving the known system with a
high-resolution numerical scheme, e.g., fourth-order Runge--Kutta with
sufficiently small time steps. Our proposed learning method is then
applied to the training data set. Once the learned model is
constructed, we conduct system prediction using the model with new
initial conditions and new external inputs. The prediction results are
then compared with the reference solution obtained by solving the
exact system with the same new inputs. Also, to clearly examine the
numerical errors, we only present the tests where the training
data do not contain noise.
In all the examples, we generate the training
data set \eqref{data_set} with $K^{(i)}\equiv 2$, $\forall i$, i.e., each trajectory
only contains two data points. For the $i$-th entry in the
data set,
the first data point is
randomly sampled from a domain $I_\mathbf{x}$ using a uniform distribution.
The second data point is produced by
solving the underlying reference dynamical system with a time step
$\delta^{(i)}\in I_{\Delta}=[0.05, 0.15]$ and subject to a
parameterized external input in the form of \eqref{local_param},
whose parameters \eqref{Gamma} are uniformly sampled from a domain
$I_{\boldsymbol{\Gamma}}$. The sampling domains $I_\mathbf{x}$ and $I_{\boldsymbol{\Gamma}}$ are
problem specific and listed separately for each example.
The DNNs in all the examples use activation function
$\sigma(x)=\tanh(x)$ and are trained by minimizing the mean squared
loss function in \eqref{eq:loss}. The network training is conducted
by using Adam algorithm \cite{kingma2014adam} with the open-source
Tensorflow library \cite{tensorflow2015}.
Upon satisfactory training, the learned models are used to conduct
system prediction, in the form of \eqref{eq:prediction}, with a
constant step size $\delta_n=0.1$.
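To make the data-generation procedure concrete, the following Python sketch shows one possible implementation of the two-point trajectory generation described above. It assumes a right-hand-side function \texttt{f(x, tau, Gamma)} that already consumes the locally parameterized input; the helper names and the sub-step count are illustrative choices, not values prescribed by the method.
\begin{verbatim}
import numpy as np

def rk4_step(f, x, tau, dt, Gamma):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, tau, Gamma)
    k2 = f(x + 0.5 * dt * k1, tau + 0.5 * dt, Gamma)
    k3 = f(x + 0.5 * dt * k2, tau + 0.5 * dt, Gamma)
    k4 = f(x + dt * k3, tau + dt, Gamma)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def generate_pair(f, x0, Gamma, delta, n_sub=100):
    """Two-point training trajectory: integrate over one step of size
    delta with n_sub fine RK4 sub-steps and return (x0, x(delta))."""
    x, tau, dt = np.asarray(x0, dtype=float), 0.0, delta / n_sub
    for _ in range(n_sub):
        x = rk4_step(f, x, tau, dt, Gamma)
        tau += dt
    return x0, x
\end{verbatim}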
\subsection{Linear Scalar Equation with Source}
Let us first consider the following scalar equation
\begin{equation}
\label{exmp:scalar}
\frac{dx}{dt}=-\alpha(t)x+\beta(t),
\end{equation}
where the time-dependent inputs $\alpha(t)$ and $\beta(t)$ are locally
parameterized with polynomials of degree $2$, resulting in the local
parameter set \eqref{Gamma} $\boldsymbol{\Gamma}_n\in{\mathbb R}^{n_b}$ with $n_b=3+3=6$. We build a neural
network model consisting of $3$ hidden layers with $80$ nodes per
layer. The model is trained with
$20,000$ data trajectories randomly sampled, with uniform
distribution, in the state variable domain $I_{\mathbf{x}}=[-2,2]$ and the
local parameter domain $I_{\boldsymbol{\Gamma}}=[-5,
5]^6$.
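For this example, the locally parameterized right-hand side used to generate the synthetic data can be written as below. The coefficient ordering inside $\boldsymbol{\Gamma}_n$ is our own convention for illustration; any fixed ordering works, and the function can be passed directly to the \texttt{generate\_pair} sketch shown earlier.
\begin{verbatim}
def scalar_rhs(x, tau, Gamma):
    """dx/dtau = -alpha(tau) x + beta(tau), with alpha and beta local
    quadratics; Gamma = (a0, a1, a2, b0, b1, b2), so n_b = 6."""
    alpha = Gamma[0] + Gamma[1] * tau + Gamma[2] * tau ** 2
    beta = Gamma[3] + Gamma[4] * tau + Gamma[5] * tau ** 2
    return -alpha * x + beta
\end{verbatim}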
After the network model is trained, we use it to conduct system
prediction. In \figref{fig:ex1_scalar}, the prediction result with a
new initial condition $x_0=2$ and new external inputs
$\alpha(t)=\sin(4t)+1$ and
$\beta(t)=\cos(t^2/1000)$ is shown, for time up to $T=100$.
The reference solution is also shown for comparison. It can be seen
that
the network model produces accurate prediction for this relatively
long-term integration.
\begin{figure}[!htb]
%
\begin{center}
\includegraphics[width=\linewidth]{./Figures/Scalar_x_4_interpolate.pdf}
\caption{DNN model prediction of \eqref{exmp:scalar} with
external inputs $\alpha(t)=\sin(4t)+1$ and
$\beta(t)=\cos(t^2/1000)$ and an initial condition $x_0 = 2$.
Comparison of long-term neural network model prediction (labelled ``NN'')
with the reference solution.}
\label{fig:ex1_scalar}
\end{center}
\end{figure}
For this relatively simple and low-dimensional system, its learning can be
effectively conducted by other standard approximation methods, as
discussed in Remark \ref{remark1}. With the same quadratic polynomial
for local parameterization as in the DNN modeling, which results in $\boldsymbol{\Gamma}_n\in
[-5,5]^6$, we employ tensor Legendre orthogonal polynomials in total
degree space, which is a standard multi-dimensional approximation
technique, for the approximation of the one-step evolution operator
in \eqref{evo_mod}. In \figref{fig:ex1_polynomial_approx}, the
prediction results by the polynomial learning model are shown, for a case with external inputs
$\alpha(t)=\sin(t/10)+1$ and $\beta(t)=\cos(t)$.
In \figref{fig:ex1_polynomial_approx}(a), the prediction result
obtained by 2nd-degree polynomial learning model is shown. We observe
good agreement with the reference solution. In
\figref{fig:ex1_polynomial_approx}(b), the numerical errors at $T=100$
are shown for the polynomial learning model with varying degrees. We observe
that the errors decay exponentially fast when the degree of the polynomial
is increased. Such exponential error convergence is expected
when approximating smooth problems such as this example.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=0.99\linewidth]{./Figures/example-1-compare-solutions}
\caption{System prediction.}
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=0.99\linewidth]{./Figures/example-1-increasing-degree}
\caption{Errors vs. polynomial degree.}
\end{center}
\end{subfigure}
\caption{Polynomial learning model for \eqref{exmp:scalar} with
$\alpha(t)=\sin(t/10)+1$ and $\beta(t)=\cos(t)$. (a) Comparison
of the model prediction with reference solution. (b)
Relative error in prediction at $T=100$
for increasing polynomial degree in the polynomial learning
model. In all models piecewise quadratic polynomials are
used for local parameterization.}
\label{fig:ex1_polynomial_approx}
\end{figure}
\subsection{Predator-prey Model with Control}
We now consider the following Lotka-Volterra Predator-Prey model with
a time-dependent input $u(t)$:
\begin{equation} \label{exmp:pred}
\begin{split}
\dfrac{d x_1}{dt}&= x_1- x_1 x_2+u(t),\\
\dfrac{d x_2}{dt}&=- x_2+ x_1 x_2.
\end{split}
\end{equation}
The local parameterization for the external input is conducted using
quadratic polynomials, resulting in $\boldsymbol{\Gamma}_n\in{\mathbb R}^3$. More
specifically, we set $I_{\boldsymbol{\Gamma}}=[0, 5]^3$ and the state variable
space $I_{\mathbf{x}}=[0, 5]^2$.
The DNN learning model consists of $3$ hidden layers, each of which
with $80$ nodes. The network training is conducted using $20,000$ data
trajectories randomly sampled from $I_{\mathbf{x}}\times I_{\boldsymbol{\Gamma}}$.
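A hedged sketch of the corresponding right-hand side, with the control $u$ represented locally by a quadratic whose coefficients form $\boldsymbol{\Gamma}_n$ (the ordering is again our illustrative convention), reads:
\begin{verbatim}
import numpy as np

def predator_prey_rhs(x, tau, Gamma):
    """Controlled Lotka-Volterra right-hand side; u(t_n + tau) is the
    local quadratic Gamma[0] + Gamma[1]*tau + Gamma[2]*tau**2."""
    u = Gamma[0] + Gamma[1] * tau + Gamma[2] * tau ** 2
    x1, x2 = x
    return np.array([x1 - x1 * x2 + u, -x2 + x1 * x2])
\end{verbatim}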
In \figref{fig:pred_1}, we plot its prediction result for a case with
$u(t)=\sin(t/3)+\cos(t)+2$, for time up to $T=100$, along with the
reference solution. It can be seen that the DNN model prediction
agrees very well with the reference solution. The numerical error
fluctuates at the level of $O(10^{-3})$, for this relatively long-term
prediction.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\begin{center}
\includegraphics[width=\linewidth]{./Figures/Pred_u1_2_projection.pdf}
\caption{System prediction of $x_1$.}
\label{fig:pred_1}
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{./Figures/Predator_err_x1_2_interpolation.pdf}
\caption{Error in prediction for $x_1$}
\end{center}
\end{subfigure}
\caption{DNN learning model for \eqref{exmp:pred}.
Comparison of its prediction result for $x_1$ with
$u(t)=\sin(t/3)+\cos(t)+2$ against reference solution.
Results for $x_2$ are very similar and not shown.}
\end{figure}
\subsection{Forced Oscillator}
We now consider a forced oscillator
\begin{equation}
\label{exmp:pend}
\begin{split}
\frac{dx_1}{dt}&=x_2,\\
\frac{dx_2}{dt}&=-\nu(t)\,x_1-k\, x_2+f(t),
\end{split}
\end{equation}
where the restoring coefficient $\nu(t)$ and the forcing $f(t)$ are
time-dependent processes.
Local parameterization for the inputs is conducted using quadratic
polynomials. More specifically, the training data are generated
randomly by sampling from state variable space
$I_\mathbf{x}=[-3, 3]^2$ and local parameterization space
$I_{\boldsymbol{\Gamma}}=[-3,3]^6$. Similar to other examples, the DNN
contains $3$ hidden layers with $80$ nodes in
each hidden layer. System prediction using the trained network model
is shown in \figref{fig:pend_3}, for rather arbitrarily chosen
external inputs $\nu(t)=\cos(t)$
and $f(t)=t/50$. Once again, we observe very good agreement with the
reference solution for relatively long-term simulation up to $T=100$.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{./Figures/Pendulum_x1_3_interpolate.pdf}
\caption{$x_1(t)$}
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Figures/Pendulum_x2_3_interpolate.pdf}
\caption{$x_2(t)$}
\end{subfigure}
\caption{DNN model prediction of \eqref{exmp:pend} with inputs
$\nu(t)=\cos(t)$ and $f(t)=t/50$.
}
\label{fig:pend_3}
\end{figure}
\subsection{PDE: Heat Equation with Source}
We now consider a partial differential equation (PDE). In particular,
the following heat equation with a source term,
\begin{equation} \label{exmp:heat}
\begin{split}
&u_t = u_{xx}+q(t,x), \quad x\in [0, 1],\\
&u(0, x)=u_0(x), \\
& u(t,0) = u(t,1) = 0,
\end{split}
\end{equation}
where $q(t, x)$ is the source term varying in both space and time. We
set the source term to be
\begin{equation*}
q(t, x)= \alpha(t) e^{-\frac{(x-\mu)^2}{\sigma^2}},
\end{equation*}
where $\alpha(t)$ is its time-varying amplitude and the parameters
$\mu$ and $\sigma$ determine its spatial profile.
The learning of \eqref{exmp:heat} is conducted in a discrete
space. Specifically, we employ $n=22$ equally spaced grid points in
the domain $[0,1]$,
$$
x_j = (j-1)/(n-1), \qquad j=1,\dots, n.
$$
Let
$$
\mathbf{u}(t) = \left[u(t, x_2), \cdots, u(t,x_{n-1})\right]^\dagger,
$$
we then seek to construct a DNN model to discover
the dynamical behavior of the solution vector $\mathbf{u}(t)$. Note that the
boundary values $u(x_1) = u(x_n)=0$ are fixed in the problem setting
and need not be included in the learning model.
Upon transferring the learning of the PDE \eqref{exmp:heat} into
learning of a finite dimensional dynamical system of $\mathbf{u}\in{\mathbb R}^d$, where
$d=n-2 = 20$, the DNN learning method discussed in this paper can be
readily applied. Training data are synthetic data generated by solving
the system \eqref{exmp:heat} numerically. In particular, we employ a
second-order central difference scheme using the same grid points
$\{x_j\}$. The trajectory data are generated by randomly sampling
$\mathbf{u}\in{\mathbb R}^{20}$ in a specific domain $I_\mathbf{u} = [0,2]^{20}$. Quadratic
polynomial interpolation is used in the local parameterization of the
time-dependent source term, resulting in a 3-dimensional local representation
for the time-dependent coefficient $\alpha(t)$. Random sampling in
domain $I_\alpha = [-2,2]^3$, $I_\mu = [0,3]$, $I_\sigma = [0.05,
0.5]$ is then used to generate the synthetic training data set, for
the parameters
$\alpha$, $\mu$, and $\sigma$, respectively.
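A minimal sketch of the semi-discretization used to generate the synthetic data, assuming the grid and source defined above (the helper name and argument layout are ours), is given below; \texttt{alpha} is a callable returning the source amplitude at time $t$.
\begin{verbatim}
import numpy as np

n = 22
x_grid = np.linspace(0.0, 1.0, n)
dx = x_grid[1] - x_grid[0]

def heat_rhs(u_inner, t, alpha, mu=1.0, sigma=0.5):
    """Second-order central-difference RHS for the 20 interior nodes,
    homogeneous Dirichlet boundaries and Gaussian source q(t, x)."""
    u = np.concatenate(([0.0], u_inner, [0.0]))       # u = 0 at both ends
    u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx ** 2
    q = alpha(t) * np.exp(-(x_grid[1:-1] - mu) ** 2 / sigma ** 2)
    return u_xx + q
\end{verbatim}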
The DNN model thus consists of a total of $25$
inputs. Because of the
curse of dimensionality, constructing an accurate approximation in a 25-dimensional
space is computationally expensive via traditional methods
such as polynomials, radial basis functions, etc. For DNNs, however, 25 dimensions
are considered low, and an accurate network model can be readily
trained. Here we employ a DNN with $3$ hidden layers, each of which
with $80$ nodes.
Upon successful training of the DNN model, we conduct system
prediction for a new source term (not in training data set), where
$\alpha(t)=t-\lfloor t \rfloor$ is a saw-tooth discontinuous function,
$\mu=1$, and $\sigma=0.5$.
The system prediction results are shown in \figref{fig:heat_3}, along
with the reference solution solved from the underlying PDE. We observe
excellent agreement between the DNN model prediction and the reference
solution. It is worth noting that the DNN model, once trained, can be readily used to
predict system behavior for other time dependent inputs.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{./Figures/Heat_u2_4_interpolate.pdf}
\caption{Solution evolution at $x=0.5$}
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{./Figures/Heat_snapshot_4.pdf}
\caption{Solution profile at $t=2$}
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{./Figures/Heat_contour_predict_30_4.pdf}
\caption{Reference solution contours
over time}
\end{center}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\begin{center}
\includegraphics[width=1.0\linewidth]{./Figures/Heat_contour_exact_30_4.pdf}
\caption{DNN prediction contours
over time}
\end{center}
\end{subfigure}
\caption{System prediction of \eqref{exmp:heat} with
$\alpha(t)=t- \lfloor t \rfloor$, $\mu=1$, and
$\sigma=0.5$. Comparison between the predictions by the
DNN model and the reference solution.}
\label{fig:heat_3}
\end{figure}
\section{Introduction} \label{sec:intro}
There has been growing research interest in designing machine learning methods to learn unknown physical models from observation data. The fast development of modern machine learning algorithms and the availability of vast amounts of data have further promoted this line of research.
A number of numerical methods have been developed to learn dynamical systems. These include
sparse identification of nonlinear dynamical systems (SINDy) \cite{brunton2016discovering}, operator inference \cite{peherstorfer2016data}, model selection approach \cite{Mangan20170009}, polynomial expansions \cite{wu2019numerical, wu2019structure},
equation-free multiscale methods \cite{kevrekidis2003equation, theodoropoulos2000coarse},
Gaussian process regression \cite{raissi2017machine}, and deep neural networks \cite{rico1993continuous, raissi2018deep,raissi2018multistep, long2018pde, long2019pde, rudy2019deep}.
Most of these methods treat the unknown governing equations as functions mapping state variables to their time derivatives.
Although effective in many cases, the requirement for
time derivatives poses a challenge when these data are not directly available, as numerical approximation of derivatives can be highly sensitive to noise.
Learning methods that do not require time derivatives have also been developed,
in conjunction with, for example,
dynamic mode decomposition (DMD) \cite{schmid2010dynamic}, Koopman operator theory \cite{mezic2005spectral, mezic2013analysis}, hidden Markov models \cite{Galioto2020}, and more
recently, deep neural network (DNN) \cite{qin2019data}.
The work of \cite{qin2019data} also established a newer framework, which, instead of directly
approximating the underlying governing equations like in most other methods, seeks to approximate
the flow map of the unknown system. The approach produces exact time integrators for system prediction and is particularly suitable with residual network (ResNet) (\cite{he2016deep}).
The approach was recently extended to learning dynamical systems with uncertainty \cite{qin2019UQ},
reduced system \cite{FuChangXiu_JMLMC20}, model correction \cite{chen2020generalized}, and partial differential equations (PDEs) \cite{wu2020data}.
Most of the aforementioned methods are applicable only to autonomous dynamical systems, whose
time-invariant property is key in the mathematical formulation of the methods. For non-autonomous systems with time-dependent inputs, the solution at any given time
depends on the entire history of the time-dependent inputs.
This renders most of the existing methods non-applicable.
A few approaches have been explored for non-autonomous systems in the context of system control \cite{proctor2016dynamic, brunton2016sparse, proctor2018generalizing}. They are, however, not applicable
to general non-autonomous system learning.
The focus of this paper is on data-driven learning methods for non-autonomous systems. In particular,
we present a novel numerical approach suitable for learning general non-autonomous systems with
time-dependent inputs. The key ingredient of the method is in the decomposition of the system learning
into piecewise local learning over a set of discrete time instances. Inside each of the time intervals
defined by the discrete time instances, we seek to locally parameterize the external
time-dependent inputs using a local basis over time. This transforms the original non-autonomous system into a superposition of piecewise local parametric systems over each time interval. We then design a neural
network structure, which extends the idea of ResNet learning for autonomous system (\cite{qin2019data})
and parametric system (\cite{qin2019UQ}),
to the local parametric system learning by using observation data.
Once the local network model is successfully trained and constructed, it can be iteratively used over
discrete time instances, much like the way standard numerical integrators are used, to provide system predictions
for different initial conditions and time-dependent external inputs, provided that the new inputs can be properly
parameterized by the local basis used during system learning. In addition to the description of the
algorithm, we also provide theoretical estimate on the approximation error bound of the learned
model. The proposed method is applicable to very general non-autonomous systems, as it requires
only mild assumptions, such as Lipschitz continuity, on the original unknown system.
A set of numerical examples, including linear and nonlinear dynamical systems as well as
a partial differential equation (PDE), are provided. The numerical results demonstrate that the proposed
method can be quite flexible and effective. More in-depth examination of the method shall follow
in future studies.
\section{Deep Neural Network Approximation} \label{sec:method}
The ODE \eqref{eq:ode} defines a map from the initial condition to the solution at $t=T$ as below.
\begin{equation*}
\mathbf{x}(T)=\boldsymbol{\Phi}_\mathbf{f}(\mathbf{x}_0, \boldsymbol{\alpha}(\cdot, \omega)|_{[0, T]}, T)
\end{equation*}
Suppose $\boldsymbol{\alpha}(t, \omega)=\boldsymbol{\alpha}_n$ for $t\in [t_n, t_{n+1}]$ and let $\Delta_n=t_{n+1}-t_n$. By the mean value theorem for integrals, there exists $\tau\in [t_n, t_{n+1}]$ such that
\begin{align*}
\mathbf{x}_{n+1}&=\mathbf{x}_n+\int_{t_n}^{t_{n+1}} \mathbf{f}(\mathbf{x}(s), \boldsymbol{\alpha}(s, \omega))\, ds\\
&=\mathbf{x}_n+\Delta_n\,\mathbf{f}(\mathbf{x}(\tau), \boldsymbol{\alpha}(\tau, \omega))\\
&=\mathbf{x}_n+\Delta_n\,\mathbf{f}(\mathbf{x}(\tau), \boldsymbol{\alpha}_n)\\
&=\mathbf{x}_n+\Delta_n\,\mathbf{f}(\boldsymbol{\Phi}_\mathbf{f}(\mathbf{x}_n, \boldsymbol{\alpha}_n, \tau-t_n), \boldsymbol{\alpha}_n).
\end{align*}
Then if we define the effective increment $\boldsymbol{\phi}_\mathbf{f}(\mathbf{x}_n, \boldsymbol{\alpha}_n, \Delta_n)=\Delta_n\,\mathbf{f}(\boldsymbol{\Phi}_\mathbf{f}(\mathbf{x}_n, \boldsymbol{\alpha}_n, \tau-t_n), \boldsymbol{\alpha}_n)$, we have the following discrete flow map.
\begin{equation*}
\mathbf{x}_{n+1}=\mathbf{x}_n+\boldsymbol{\phi}_\mathbf{f}(\mathbf{x}_n, \boldsymbol{\alpha}_n, \Delta_n).
\end{equation*}
We want to use the neural network to approximate the effective increment $\boldsymbol{\phi}_\mathbf{f}(\mathbf{x}_n, \boldsymbol{\alpha}_n, \Delta_n)$.
\subsection{Single Parameters}
\subsection{Multiple Parameters}
\section{Uncertainty Quantification}
Epistemic uncertainty quantification...
\section{Method Description} \label{sec:method}
In this section we present the details of our method for deep
learning of non-autonomous systems \eqref{eq:ode}. The key ingredients
of the method include: (1) parameterizing
the external input $\gamma(t)$ locally (in time); (2) decomposing the
dynamical system into a modified system comprising a sequence of
local systems; and (3) deep learning of the local systems.
\subsection{Local Parameterization}
The analytical solution of the unknown system \eqref{eq:ode} satisfies
$$
\mathbf{x}(t) = \mathbf{x}_0 + \int_0^t \mathbf{f}(\mathbf{x}(s), \gamma(s)) ds.
$$
Our learning method aims at providing accurate approximation to the true solution
at a prescribed set of discrete time instances,
\begin{equation} \label{tline}
0= t_0 < t_1 <\cdots < t_n <\cdots <t_N = T,
\end{equation}
where $T>0$. Let
$$
\delta_n = t_{n+1} - t_n, \qquad n=0,\dots, N-1,
$$
be the time steps; the exact solution
satisfies, for $n=0,\dots, N-1$,
\begin{equation} \label{evo}
\begin{split}
\mathbf{x}(t_{n+1}) &= \mathbf{x}(t_n) + \int_{t_n}^{t_{n+1}} \mathbf{f}(\mathbf{x}(s), \gamma(s)) ds
\\
& = \mathbf{x}(t_n) + \int_{0}^{\delta_{n}} \mathbf{f}(\mathbf{x}(t_n+\tau),
\gamma(t_n+\tau)) d\tau.
\end{split}
\end{equation}
For each time interval $[t_n, t_{n+1}]$, $n=0,\dots, N-1$, we first
seek a local parameterization for the external input function
$\gamma(t)$, in the following form,
\begin{equation}
\label{local_param}
\widetilde{\gamma}_n(\tau;\boldsymbol{\Gamma}_n) := \sum_{j=1}^{n_b} \widehat{\gamma}_n^j
b_j(\tau)\approx \gamma(t_n+\tau), \qquad \tau\in [0, \delta_{n}],
\end{equation}
where $\{b_j(\tau), j=1,\dots, n_b\}$ is a set of prescribed analytical basis functions and
\begin{equation} \label{Gamma}
\boldsymbol{\Gamma}_n=(\widehat{\gamma}_n^1, \dots, \widehat{\gamma}_n^{n_b})\in \mathbb{R}^{n_b}
\end{equation}
are the basis coefficients parameterizing the local input $\gamma(t)$ in $[t_n, t_{n+1}]$.
Note that in many practical applications, the external input/control
process $\gamma(t)$ is already prescribed in a parameterized
form. In this case, the local parameterization \eqref{local_param}
becomes exact, i.e., $\gamma(t_n+\tau) = \widetilde{\gamma}_n(\tau;\boldsymbol{\Gamma}_n)$.
In other applications when the external input
$\gamma(t)$ is only known/measured at certain time instances, a
numerical procedure is required to create the parameterized form
\eqref{local_param}. This can be typically accomplished via a numerical
approximation method, for example, Taylor expansion, polynomial
interpolation, least squares regression etc.
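When $\gamma(t)$ is only available at discrete times, a least-squares polynomial fit is one simple way to obtain \eqref{local_param}. The short sketch below uses NumPy; the sample times, values, and the choice $\delta_n=0.1$ are placeholders for this illustration.
\begin{verbatim}
import numpy as np

# Hypothetical samples of gamma inside [t_n, t_{n+1}], in local time tau.
tau = np.array([0.0, 0.05, 0.10])          # assumes delta_n = 0.1
gamma_vals = np.array([1.30, 1.12, 0.91])  # illustrative measurements

# Degree-2 least-squares fit; reversing gives coefficients of
# (1, tau, tau^2), which serve as the local parameter set Gamma_n.
Gamma_n = np.polyfit(tau, gamma_vals, deg=2)[::-1]
\end{verbatim}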
\subsection{Modified System}
With the local parameterization \eqref{local_param} constructed for
each time interval $[t_n, t_{n+1}]$, we proceed to define a global
parameterized input
\begin{equation} \label{global_gamma}
\widetilde{\gamma}(t; \boldsymbol{\Gamma})=\sum_{n=0}^{N-1}
\widetilde{\gamma}_n(t-t_n;\boldsymbol{\Gamma}_n)\mathbb{I}_{[t_n, t_{n+1}]}(t),
\end{equation}
where
\begin{equation} \label{global_para}
\boldsymbol{\Gamma} =
\{\boldsymbol{\Gamma}_n\}_{n=0}^{N-1}\in\mathbb{R}^{N\times n_b}
\end{equation}
is the global parameter set for $\widetilde{\gamma}(t)$, and
$\mathbb{I}_A$ is the indicator function satisfying, for a set $A$,
$\mathbb{I}_A(x) =1$ if $x\in A$ and 0 otherwise.
We now define a modified system, corresponding
to the true (unknown) system \eqref{eq:ode}, as follows,
\begin{equation}
\label{eq:ode_mod}
\left\{
\begin{split}
&\frac{d}{dt}\widetilde{\mathbf{x}} (t)=\mathbf{f}(\widetilde{\mathbf{x}}, \widetilde{\gamma}(t; \boldsymbol{\Gamma})),\\
&\widetilde{\mathbf{x}}(0)=\mathbf{x}_0,
\end{split}
\right.
\end{equation}
where $\widetilde{\gamma}(t; \boldsymbol{\Gamma})$ is the globally parameterized input defined in \eqref{global_gamma}.
Note
that when the system input $\gamma(t)$ is already known or given in a parametric
form, i.e. $\widetilde{\gamma}(t) = \gamma(t)$, the modified system
\eqref{eq:ode_mod} is equivalent to the original system \eqref{eq:ode}.
When the parameterized process $\widetilde{\gamma}(t)$
needs to be numerically constructed, the modified system \eqref{eq:ode_mod}
becomes an approximation to the true system \eqref{eq:ode}. The
approximation accuracy obviously depends on the accuracy in
$\widetilde{\gamma}(t)\approx \gamma(t)$.
For the modified system, the following result holds.
\begin{lemma}
\label{thm:theorem1}
Consider system \eqref{eq:ode_mod} over the discrete set of time
instances \eqref{tline}.
There exists a function $\widetilde{\pphi}: {\mathbb R}^d\times
{\mathbb R}^{n_b}\times {\mathbb R}\rightarrow \mathbb{R}^d$, which depends on $\mathbf{f}$, such that
for any time interval $[t_n, t_{n+1}]$, the solution of
\eqref{eq:ode_mod} satisfies
\begin{equation}
\label{evo_mod}
\widetilde{\mathbf{x}}(t_{n+1})=\widetilde{\mathbf{x}}(t_n)+ \widetilde{\pphi}(\widetilde{\mathbf{x}}(t_n), \boldsymbol{\Gamma}_n, \delta_n),
\qquad n= 0, \dots, N-1,
\end{equation}
where $\delta_n=t_{n+1}-t_n$ and $\boldsymbol{\Gamma}_n$ is the local
parameter set \eqref{Gamma} for the locally parameterized input $\widetilde{\gamma}_n(t)$ \eqref{local_param}.
\end{lemma}
\begin{proof}
Let $\widetilde{\mathbf{x}}_n(t) $ denote $\widetilde{\mathbf{x}}(t)$ in the time interval $[t_n,
t_{n+1}]$, i.e.,
$$
\widetilde{\mathbf{x}}(t) = \sum_{n=0}^{N-1}
\widetilde{\mathbf{x}}_n(t)\mathbb{I}_{[t_n, t_{n+1}]}(t).
$$
With the global input $\widetilde{\gamma}(t)$ defined in the piecewise manner
in \eqref{global_gamma}, the system \eqref{eq:ode_mod} can be
written equivalently as,
for each interval $[t_n, t_{n+1}]$, $n=0,\dots, N-1$,
\begin{equation*}
\left\{
\begin{split}
&\frac{d}{dt}\widetilde{\mathbf{x}}_n (t)=\mathbf{f}(\widetilde{\mathbf{x}}_n, \widetilde{\gamma}_n(t-t_n; \boldsymbol{\Gamma}_n)), \qquad
t\in (t_n, t_{n+1}], \\
& \widetilde{\mathbf{x}}_n(t_n) = \widetilde{\mathbf{x}}(t_n).
\end{split}
\right.
\end{equation*}
Let $\boldsymbol{\Phi}_n: ({\mathbb R}^d\times{\mathbb R})\times{\mathbb R}\to {\mathbb R}^d$ be its (time dependent)
flow map such that
$$
\widetilde{\mathbf{x}}_n(r) = \boldsymbol{\Phi}_n((\widetilde{\mathbf{x}}_n(s),s),r-s), \qquad t_n\leq s\leq r\leq t_{n+1}.
$$
We then have
\begin{equation} \label{tx}
\widetilde{\mathbf{x}}_n(t_n+\tau) = \boldsymbol{\Phi}_n((\widetilde{\mathbf{x}}(t_n),0), \tau), \qquad \tau\in
[0,\delta_n],
\end{equation}
where the initial condition $\widetilde{\mathbf{x}}_n(t_n) = \widetilde{\mathbf{x}}(t_n)$ has been used.
The solution of \eqref{eq:ode_mod} from $t_n$ to $t_{n+1}$ satisfies
\begin{align*}
\widetilde{\mathbf{x}}(t_{n+1})&=\widetilde{\mathbf{x}}(t_n)+\int_{t_n}^{t_{n+1}} \mathbf{f}(\widetilde{\mathbf{x}}(t),
\widetilde{\gamma}(t; \boldsymbol{\Gamma})) dt\\
&=\widetilde{\mathbf{x}}(t_n)+\int_{0}^{\delta_{n}} \mathbf{f}(\widetilde{\mathbf{x}}_n(t_n+\tau),
\widetilde{\gamma}_n(\tau; \boldsymbol{\Gamma}_n))d\tau\\
&=\widetilde{\mathbf{x}}(t_n)+\int_{0}^{\delta_{n}} \mathbf{f}(\boldsymbol{\Phi}_n((\widetilde{\mathbf{x}}(t_n),0), \tau), \widetilde{\gamma}_n(\tau; \boldsymbol{\Gamma}_n))d\tau,
\end{align*}
where \eqref{global_gamma} and \eqref{tx} have been
applied. Let
\begin{equation*}
\widetilde{\pphi}(\widetilde{\mathbf{x}}(t_n), \boldsymbol{\Gamma}_n, \delta_n) :=\int_{0}^{\delta_{n}}
\mathbf{f}(\boldsymbol{\Phi}_n((\widetilde{\mathbf{x}}(t_n),0), \tau), \widetilde{\gamma}_n(\tau;
\boldsymbol{\Gamma}_n))d\tau
\end{equation*}
and the proof is complete.
\end{proof}
\subsection{Learning of Modified Systems}
The function $\widetilde{\pphi}$ in \eqref{evo_mod} governs the evolution of the
solution of the modified system \eqref{eq:ode_mod} and
is the target function for our proposed deep learning method. Note
that in each time interval $[t_n, t_{n+1}]$ over the prediction time
domain \eqref{tline}, the solution at $t_{n+1}$ is determined by its
state at $t_n$, the local parameter set $\boldsymbol{\Gamma}_n$ for the local
input $\widetilde{\gamma}_n$, the step size $\delta_n = t_{n+1}-t_n$, and
obviously, the
form of the original equation $\mathbf{f}$. Our learning algorithm thus seeks
to establish and train a deep neural network with input $\widetilde{\mathbf{x}}(t_n)$,
$\boldsymbol{\Gamma}_n$, $\delta_n$ and
output $\widetilde{\mathbf{x}}(t_{n+1})$. The internal feed-forward network connecting the
input and output thus serves as a model of the unknown dynamical
system \eqref{eq:ode}.
\subsubsection{Training Data Set}
To construct the training data set, we first re-organize the original data
set \eqref{data_set}. Let us assume the length of each trajectory data in
\eqref{data_set} is at least 2, i.e., $K^{(i)}\geq 2$, $\forall i$. We
then re-organize the data into pairs of two adjacent time instances,
\begin{equation} \label{data_pair}
\left\{ \mathbf{x}\left(t^{(i)}_k\right), \mathbf{x}\left(t^{(i)}_{k+1}\right); \gamma^{(i)}\right\}, \qquad k=1,\dots, K^{(i)}-1,
\quad i=1,\dots, N_T,
\end{equation}
where $N_T$ is the total number of data trajectories. Note that for
each $i=1,\dots, N_T$, its trajectory is driven by a known external input
$\gamma^{(i)}$, as shown in \eqref{data_set}. We then seek, for
the time interval $[t_k^{(i)}, t_{k+1}^{(i)}]$ with $\delta_k^{(i)}
=t_{k+1}^{(i)}- t_{k}^{(i)}$, its local
parameterized form $\widetilde{\gamma}_k^{(i)}(\tau; \boldsymbol{\Gamma}_k^{(i)})$, where
$\tau\in [0, \delta_k^{(i)}]$ and $\boldsymbol{\Gamma}_k^{(i)}$ is the parameter
set for the local parameterization of the input, in the form of
\eqref{local_param}. Again, if the external
input is already known in an analytical parametric form, this step is
trivial; if not this step usually requires a standard
regression/approximation procedure and is not discussed in detail here
for the brevity of the paper.
For each data pair \eqref{data_pair}, we now have its associated time
step $\delta_k^{(i)}$ and local parameter set $\boldsymbol{\Gamma}_k^{(i)}$ for
the external input. The total number of such pairings is
$K_{tot}= K^{(1)}+K^{(2)}+\cdots+K^{(N_T)}-N_T$.
We then proceed to select $J\leq K_{tot}$ number of such pairings to
construct the training data set for the neural network model. Upon
re-ordering using a single index, the training data set takes the
following form
\begin{equation} \label{training_set}
\mathcal{S} = \left\{(\mathbf{x}^{(j)}_k, \mathbf{x}^{(j)}_{k+1}); \boldsymbol{\Gamma}^{(j)}_k,
\delta^{(j)}_k\right\}, \qquad j=1,\dots, J,
\end{equation}
where the superscript $j$ denotes the $j$-th data entry, which belongs
to a certain $i$-th trajectory in the original data pairings
\eqref{data_pair}. The re-ordering can be readily enforced to be
one-to-one, with the trajectory information implicitly embedded. Note
that one can naturally select all the data pairs in \eqref{data_pair} into the training
data set \eqref{training_set}, i.e., $J=K_{tot}$. In practice, one may
also choose a selective subset of \eqref{data_pair} to construct the
training set \eqref{training_set}, i.e., $J<K_{tot}$, depending on the
property and quality of the original data.
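The re-organization of trajectories into one-step training pairs can be summarized by the following sketch; the per-trajectory data layout (a dictionary with keys \texttt{t}, \texttt{x}, and \texttt{Gamma}) is an assumption of this illustration, not a requirement of the method.
\begin{verbatim}
def build_training_pairs(trajectories):
    """Turn trajectory records into one-step pairs
    (x_k, Gamma_k, delta_k, x_{k+1})."""
    pairs = []
    for traj in trajectories:
        t, x, Gamma = traj["t"], traj["x"], traj["Gamma"]
        for k in range(len(t) - 1):
            pairs.append((x[k], Gamma[k], t[k + 1] - t[k], x[k + 1]))
    return pairs
\end{verbatim}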
\subsubsection{Network Structure and Training}
With the training data set \eqref{training_set} available, we proceed
to define and train our neural network model. The network model seeks
to learn the one-step evolution of the modified system, in the form
of \eqref{evo_mod}. Our proposed network model defines a
mapping $\widehat{\N}: {\mathbb R}^{d+n_b+1}\to{\mathbb R}^d$, such that
\begin{equation}
\mathbf{X}_{out} = \widehat{\N}(\mathbf{X}_{in}; \Theta), \qquad \mathbf{X}_{in} \in {\mathbb R}^{d+n_b+1},
\quad \mathbf{X}_{out} \in {\mathbb R}^d,
\end{equation}
where $\Theta$ are the network parameters that need to be trained.
The network structure is
illustrated in Fig.~\ref{fig:Net}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{./Figures/DNN.pdf}
\caption{Illustration of the proposed neural network.}
\label{fig:Net}
\end{figure}
Inside the network, $\mathbf{N}:{\mathbb R}^{d+n_b+1}\to {\mathbb R}^{d}$ denotes the operator
associated with a feed-forward neural network with $(d+n_b+1)$ input
nodes and $d$ output nodes. The input is multiplied by $\widehat{\mathbf{I}}$ and
then re-introduced before the final output. The
operator $\widehat{\mathbf{I}}\in {\mathbb R}^{d\times (d+n_b+1)}$ is a matrix of size ${d\times
(d+n_b+1)}$. It takes the form
\begin{equation} \label{hatI}
\widehat{\mathbf{I}} = [ \mathbb I_d, \mathbf{0} ],
\end{equation}
where $\mathbb I_d$ is the identity matrix of size $d\times d$ and $\mathbf{0}$
is a zero matrix of size $d\times (n_b+1)$. Therefore, the network
effectively defines a mapping
\begin{equation} \label{net_model}
\mathbf{X}_{out} = \widehat{\N}(\mathbf{X}_{in}; \Theta) = [\widehat{\mathbf{I}} + \mathbf{N}(\cdot; \Theta)](\mathbf{X}_{in}).
\end{equation}
Training of the network is accomplished by using the training data set
\eqref{training_set}. For each of the $j$-th data entry, $j=1,\dots,
J$, we set
\begin{equation} \label{X_in}
\mathbf{X}_{in}^{(j)} \leftarrow [\mathbf{x}_k^{(j)}; \boldsymbol{\Gamma}_k^{(j)}; \delta_k^{(j)}] \in
{\mathbb R}^{d+n_b+1}.
\end{equation}
The network training is then conducted by
minimizing the mean squared loss between the network output $\mathbf{X}_{out}^{(j)}$
and the data $\mathbf{x}_{k+1}^{(j)}$, i.e.,
\begin{equation}
\label{eq:loss}
\Theta^*=\operatornamewithlimits{argmin}_{\Theta} \frac{1}{J}\sum_{j=1}^J \left\|\widehat{\N}(\mathbf{X}_{in}^{(j)}; \Theta)-\mathbf{x}^{(j)}_{k+1}\right\|^2.
\end{equation}
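The network \eqref{net_model} and the loss \eqref{eq:loss} can be realized, for instance, with the TensorFlow library used later in the numerical examples. The sketch below is one possible realization, assuming $d=2$ and $n_b=6$ for concreteness; the layer sizes and activation mirror the examples in this paper but are otherwise free choices.
\begin{verbatim}
import tensorflow as tf

d, n_b = 2, 6                                  # assumed dimensions
inp = tf.keras.Input(shape=(d + n_b + 1,))     # [x_k; Gamma_k; delta_k]
h = inp
for _ in range(3):                             # 3 hidden layers, 80 nodes
    h = tf.keras.layers.Dense(80, activation="tanh")(h)
residual = tf.keras.layers.Dense(d)(h)         # N(X_in; Theta)
out = inp[:, :d] + residual                    # hat-I picks the state part
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")    # mean squared loss
\end{verbatim}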
\subsubsection{Learned Model and System Prediction}
Upon satisfactory training of the network parameter using
\eqref{eq:loss}, we obtain a trained network model for the unknown
modified system \eqref{eq:ode_mod}
\begin{equation} \label{trained}
\mathbf{X}_{out} = \widehat{\N}(\mathbf{X}_{in}; \Theta^*) = [\widehat{\mathbf{I}} + \mathbf{N}(\cdot; \Theta^*)](\mathbf{X}_{in}),
\end{equation}
where $\widehat{\mathbf{I}}$ is defined in
\eqref{hatI} and $\mathbf{N}$ is the operator of the FNN,
as illustrated in the
previous section and in Fig.~\ref{fig:Net}.
For system prediction with a given external input function
$\gamma(t)$, which is usually not in the training data set, let us consider the time instances
\eqref{tline}. Let
$$
\mathbf{X}_{in} = [\mathbf{x}(t_n); \boldsymbol{\Gamma}_n; \delta_n]
$$
be a concatenated vector consisting of
the state
variable at $t_n$, the parameter vector for the local parameterization of
the external input between $[t_n, t_{n+1}]$, and $\delta_n =
t_{n+1}-t_n$. Then, the trained model produces a one-step evolution of
the solution
\begin{equation}
\label{evo_mod_new}
\widehat{\mathbf{x}}(t_{n+1})= \mathbf{x}(t_n) + \mathbf{N}(\mathbf{x}(t_n), \boldsymbol{\Gamma}_n, \delta_n; \Theta^*).
\end{equation}
Upon applying \eqref{evo_mod_new} recursively, we obtain a network model for predicting the system states of the
unknown non-autonomous system \eqref{eq:ode}. For a given initial
condition $\mathbf{x}_0$ and external input $\gamma(t)$,
\begin{equation}
\label{eq:prediction}
\left\{
\begin{split}
&\wh{\mathbf{x}}(t_0) = \mathbf{x}_0, \\
&\wh{\mathbf{x}}(t_{n+1}) = \wh{\mathbf{x}}(t_{n}) + {\mathbf{N}}(\wh{\mathbf{x}}(t_n), \boldsymbol{\Gamma}_n, \delta_n; \Theta^*), \\
&t_{n+1} =
t_n + \delta_n, \qquad n=0,\dots, N-1,
\end{split}
\right.
\end{equation}
where $\boldsymbol{\Gamma}_n$ are the parameters in the local parameterization of
$\gamma(t)$ in the time interval $[t_n, t_{n+1}]$.
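Operationally, \eqref{eq:prediction} is a simple recursion. A sketch is given below, with \texttt{model} denoting the trained network \eqref{trained} applied to a batch of inputs; the container types and argument layout are illustrative assumptions.
\begin{verbatim}
import numpy as np

def predict(model, x0, Gammas, deltas):
    """Recursive system prediction with the trained one-step model."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for Gamma_n, delta_n in zip(Gammas, deltas):
        X_in = np.concatenate([x, np.atleast_1d(Gamma_n), [delta_n]])
        x = np.asarray(model(X_in[None, :]))[0]   # one network evaluation
        traj.append(x.copy())
    return np.array(traj)
\end{verbatim}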
It is obvious that the network predicting model \eqref{evo_mod_new}
is an approximation to the one-step evolution \eqref{evo_mod} of the modified
system \eqref{eq:ode_mod}, which in turn is an approximation of the original
unknown dynamical system \eqref{eq:ode}. Therefore,
\eqref{eq:prediction} generates an approximation to the solution of
the unknown system \eqref{eq:ode} at the discrete time instances
$\{t_n\}$ \eqref{tline}.
\section{Problem Setup} \label{sec:setup}
We are interested in studying the following differential system
\begin{equation}
\label{eq:ode}
\frac{d}{dt} \mathbf{x} (t, \boldsymbol{\alpha})=\mathbf{f}(\mathbf{x}, \boldsymbol{\alpha},g(t)), \quad \mathbf{x}(t_0)=\mathbf{x}_0,
\end{equation}
where $\mathbf{x}=(x_1, \ldots, x_d)\in {\mathbb R}^d$ are state variables, $\boldsymbol{\alpha}=(\alpha_1,\ldots, \alpha_l)\in {\mathbb R}^l$ denote measurable quantities which affect the evolution of the state variables, $g(t)$ are time-dependent inputs, and $t_0$ is the initial time.
This dynamical system defines a mapping from the initial condition at $t=t_0$ to the solution at $t=s$ as below
\begin{equation*}
\mathbf{x}(s; \mathbf{x}_0, \boldsymbol{\alpha}, g(t))=\boldsymbol{\Phi}_{s-t_0}(\mathbf{x}_0; \boldsymbol{\alpha}, g(t)), \quad s>t_0.
\end{equation*}
We call this mapping the \textit{flow map} for the ODE \eqref{eq:ode}.
Now assume that we can represent the time-dependent inputs via
$$g(t)=\sum_{i=1}^n g_i\phi_i(t).$$
Then we can represent the flow map via
$$
\mathbf{x}(s; \mathbf{x}_0, \boldsymbol{\alpha},\V{g}, t_0)=\boldsymbol{\Phi}_{s-t_0}(\mathbf{x}_0; \boldsymbol{\alpha}, \V{g}, t_0), \quad s>t_0.
$$
where $\V{g}=(g_1,\ldots,g_n)\in{\mathbb R}^n$. This expansion allows the specification of a large class of time-dependent inputs. For example, the expansion may be the truncation of a stochastic process such as a Wiener process, which describes Brownian motion.
\begin{remark}Consider the recovery of the numerical solution of a discretized ODE. Let $x(t; \mathbf{x}, \boldsymbol{\alpha}, g(t))$ be the output of the ODE solver at time $t=t_0+\Delta$ produced using $k$ timesteps. The flow map above will be exact for any forcing $g$ if we set $n=k$ and the basis functions $\phi$ to be piecewise polynomial basis functions centered at each of the $k$ timesteps in the interval $\Delta$. For example, if using explicit forward Euler, the $\phi$ will be piecewise linear.
\end{remark}
In this paper, we assume the form of the governing equations $\mathbf{f}(t,\mathbf{x}, \boldsymbol{\alpha}, g(t)):{\mathbb R}^d\rightarrow{\mathbb R}^d$ is unknown. Our goal is to create an accurate model for the governing equation using data of the solution trajectories $\mathbf{x}$ and the measurable quantities $\boldsymbol{\alpha}$. To approximate the governing equations we collect data in the form of pairs, each of which corresponds to the solution states along one trajectory at two different time instances with time lag $\Delta$. That is, we consider the data pairs
\begin{equation*}
\mathbf{z}^{(1)}_j=\left(\mathbf{x}_j+\epsilon_j^{(1)},\boldsymbol{\alpha}_j, \V{g}, t_0\right), \quad \mathbf{z}^{(2)}_j=\boldsymbol{\Phi}_{\Delta_j}(\mathbf{x}_j; \boldsymbol{\alpha}_j, \V{g}_j, t_{0,j})+\epsilon_j^{(2)}, \quad j=1,\ldots,J
\end{equation*}
where $J$ is the total number of data pairs, and $\epsilon_j^{(1)}$ and $\epsilon_j^{(2)}$ are measurement/simulation errors. For notational convenience, we assume $\Delta_j=\Delta$ to be a constant for all $j$ throughout this paper. Consequently, the data set becomes input-output measurements of the $\Delta$-lag flow map,
\eq{\mathbf{x}\rightarrow\Phi_\Delta(\mathbf{x},\boldsymbol{\alpha},\V{g}, t_0).}
\section{Analysis}
Now we consider the consequences of not being able to approximate the input $g$ with a finite number of terms $n$, i.e. we have
$$\lVert g(t)-\sum_{i=1}^n g_i\phi_i(t)\rVert\le\epsilon,\forall t$$
\section{Numerical Examples}
\subsection*{Example 1: Linear Scalar ODE}
Let us first consider the following linear ODE with a single random parameter
\begin{equation}
\label{eq:example_1}
\frac{dx}{dt}=-\alpha\, x + g_1\sin(t)+g_2\cos\left(\frac{t}{2}\right), \quad x(t_0)=x_0,
\end{equation}
where $\alpha$ is a random coefficient. We will consider two cases: when $\V{g}$ is fixed for all time and when $\V{g}$ can take a set of permissible values. The former case simulates laboratory conditions where the forcing can be controlled, and the latter the case where the exact nature of the forcing is unknown. In both examples we take $I_\alpha=[0,1]$.
\subsubsection*{Case 1: Known forcing}
Set $I_x=[-0.2,1]$, $g_1=0.05$, and $g_2=0.0$, and make predictions with the initial condition $x_0$ randomly chosen from a uniform distribution on $I_x$. For the case plotted, $x_0=0.30042641$. The approximation is built using a sparse grid.
\onefig{../poly_learning/non-autonomous-example-case-I}{Linear Scalar ODE Case I}
\subsubsection*{Case 2: Unknown forcing}
Set $I_x=[-0.2,1]$, $g_1\in[-0.05,0.05]$, and $g_2\in[-0.05,0.05]$, and make predictions with the initial condition $x_0$ randomly chosen from a uniform distribution on $I_x$. For the case plotted, $x_0=0.32319388$. The approximation is built using a sparse grid.
\onefig{../poly_learning/non-autonomous-example-case-II}{Linear Scalar ODE Case II}
\end{document}
\section{Problem Setup} \label{sec:setup}
We are interested in studying the following differential equation with stochastic parameters.
\begin{equation}
\label{eq:ode}
\frac{d}{dt}\mathbf{x}(t, \omega)=\mathbf{f}(\mathbf{x}, \boldsymbol{\alpha}(t, \omega)), \quad \mathbf{x}(0)=\mathbf{x}_0,
\end{equation}
where $\mathbf{x} \in {\mathbb R}^d$ are state variables with coordinates
$(x_1, x_2, \ldots, x_d)$ and $\boldsymbol{\alpha}(\omega)\in {\mathbb R}^l$ are time-dependent random parameters with coordinates $(\alpha_1(t, \omega), \alpha_2(t, \omega),\ldots, \alpha_l(t, \omega))$.
We further assume that each state variable $x_i(t, \omega)\in [a_i, b_i]$ and each random parameter is contained in $[p_i, q_i]$ for any $t$ and $\omega$. For the random parameters, we assume they are piecewise constant in time and each parameter $\alpha_i(t, \omega)$ can be parameterized by a sequence of random vectors $(\Delta_i, a_i)$, where $\Delta_i\sim U(0, \Delta)$ and $a_i\sim U(0, a)$. If we define
\begin{equation*}
t_n=\sum_{i=1}^n \Delta_i
\end{equation*}
then the random parameter $\alpha_i(t,\omega)=a_n$ if $t_n\leq t< t_{n+1}$.
We are interested in quantifying the uncertainty in the solution $\mathbf{x}(T, \Delta_1, a_1, \Delta_2, a_2, \ldots, \Delta_N, a_N)$ for $T>0$.
\section{Setup and Preliminary} \label{sec:setup}
Let us consider a general non-autonomous dynamical system:
\begin{equation}
\label{eq:ode}
\left\{
\begin{split}
&\frac{d}{dt} \mathbf{x} (t)=\mathbf{f}(\mathbf{x}, \gamma(t)), \\
& \mathbf{x}(0)=\mathbf{x}_0,
\end{split}
\right.
\end{equation}
where $\mathbf{x} \in {\mathbb R}^d$ are state variables and $\gamma(t)$ is a known time-dependent input.
For notational convenience, we shall write $\gamma(t)$ as a scalar
function throughout this paper. The method and analysis discussed in
this paper can easily be applied to vector-valued time-dependent
inputs in a component-by-component manner.
\subsection{Problem Statement}
Our goal is to construct a numerical model of the unknown dynamical
system \eqref{eq:ode} using
measurement data of the system state.
We assume that observations of the system state are available as a collection of
trajectories of varying length,
\begin{equation} \label{data_set}
\mathbf{X}^{(i)} = \left\{ \mathbf{x}\left(t^{(i)}_k\right); \gamma^{(i)}\right\}, \qquad k=1,\dots, K^{(i)},
\quad i=1,\dots, N_T,
\end{equation}
where $N_T$ is the number of trajectories, $K^{(i)}$ is the length of
the $i$-th trajectory measurement, and $\gamma^{(i)}$ is the corresponding external
input process. In practice, $\gamma^{(i)}$ may be known either
analytically over $t$ or discretely at the time instances
$\{t_k^{(i)}\}$.
The state variable data may contain measurement noise, which is usually
modeled via random variables.
Note that each trajectory may
occupy a different span over the time axis and originate from
different (and unknown) initial conditions.
Given the trajectory data \eqref{data_set}, our goal is to construct a numerical model
to predict the dynamical behavior of the system \eqref{eq:ode}. More
specifically, for an arbitrary initial condition $\mathbf{x}_0$ and a
given external input process $\gamma(t)$, we seek a
numerical model that provides an accurate prediction $\widehat{\mathbf{x}}$ of the
true state $\mathbf{x}$ such that
$$
\widehat{\mathbf{x}}(t_i; \mathbf{x}_0, \gamma) \approx \mathbf{x}(t_i; \mathbf{x}_0, \gamma), \qquad i=1,\dots, N,
$$
where
$$0 = t_0 < \cdots < t_N = T$$
is a sequence of time instances with a finite horizon $T>0$.
\subsection{Learning Autonomous Systems}
For autonomous systems, several data driven
learning methods have been developed. Here we briefly review the
method from \cite{qin2019data}, as it is related to our proposed
method for the non-autonomous system \eqref{eq:ode}.
In the absence of $\gamma(t)$, the system \eqref{eq:ode} becomes
autonomous and the time variable can be arbitrarily shifted. It defines a flow map
$\boldsymbol{\Phi}:\mathbb{R}^{d}\to\mathbb{R}^{d}$ such that
\begin{equation}
\label{eq:flow-map}
\mathbf{x}(s_1)=\boldsymbol{\Phi}_{s_1-s_2}\left(\mathbf{x}(s_2)\right),
\end{equation}
for any $s_1, s_2\geq 0$.
For any $\delta>0$, we have
\begin{equation} \label{flowmap}
\mathbf{x}(\delta) =\mathbf{x}(0)+\int_{0}^{\delta} \mathbf{f}(\mathbf{x}(s)) ds
=\left[\mathbf{I}_d + \boldsymbol{\psi}(\cdot,
\delta)\right](\mathbf{x}(0)),
\end{equation}
where $\mathbf{I}_d$ is the identity matrix of size $d\times d$, and
for any $\mathbf{z}\in\mathbb{R}^d$,
$$
\boldsymbol{\psi}(\cdot, \delta)[\mathbf{z}] = \boldsymbol{\psi}(\mathbf{z}, \delta) =\int_{0}^{\delta}
\mathbf{f}(\boldsymbol{\Phi}_s(\mathbf{z})) ds
$$
is the effective increment along the trajectory from $\mathbf{z}$ over the time
lag $\delta$. This suggests that
given sufficient data of $\mathbf{x}(0)$ and $\mathbf{x}(\delta)$, one can build an accurate approximation
\begin{equation}\label{app_psi}
\hat{\boldsymbol{\psi}}\left(\mathbf{z}, \delta\right)\approx\boldsymbol{\psi}\left(\mathbf{z}, \delta\right).
\end{equation}
This in turn can be used in \eqref{flowmap} iteratively to conduct
system prediction. Apart from the error in constructing the approximation
for the effective increment in \eqref{app_psi}, there is no temporal
error explicitly associated with the
time step $\delta$ when system prediction is conducted using the
learned model (\cite{qin2019data}).
\subsection{Deep Neural Network}
While the approximation \eqref{app_psi} can be accomplished by a
variety of approximation methods, e.g., polynomial regression, we focus on using deep neural
networks (DNNs), as DNNs
are more effective and flexible for high-dimensional problems.
The DNN utilized here takes the form of standard
feed-forward neural network (FNN), which defines nonlinear map between
input and output.
More specifically, let $\mathbf{N}:{\mathbb R}^m\rightarrow{\mathbb R}^n$ be the operator
associated with a FNN with $L\geq 1$ hidden layers. The relation
between its input $\mathbf{y}^{in}\in{\mathbb R}^m$ and output $\mathbf{y}^{out}\in{\mathbb R}^n$ can be written as
\begin{equation}
\label{eq:fnn}
\mathbf{y}^{out}=\mathbf{N}(\mathbf{y}^{in};\Theta)=\mathbf{W}_{L+1}\circ(\sigma_L\circ \mathbf{W}_{L})\circ\cdots\circ (\sigma_1\circ \mathbf{W}_1) (\mathbf{y}^{in}),
\end{equation}
where $\mathbf{W}_j$ is the weight matrix between the $j$-th layer and the
$(j+1)$-th layer, $\sigma_j:{\mathbb R}\rightarrow {\mathbb R}$ is the activation
function, and $\circ$ stands for the composition operator. Following the
standard notation, we have augmented network biases into the weight matrices,
and applied the activation function in component-wise manner.
We shall use $\Theta$ to represent all the parameters associated with
the network.
One particular variation of FNN is residual network (ResNet), which was
first proposed in \cite{he2016deep} for image analysis and has since seen wide applications in
practice. In ResNet, instead of direct mapping between the
input and output as in \eqref{eq:fnn}, one maps the residue between the
output and input by the FNN. This is achieved by introducing an identity
operator into the network such that
\begin{equation} \label{resnet}
\mathbf{y}^{out}=[\mathbb I+\mathbf{N}(\cdot;\Theta)](\mathbf{y}^{in})= \mathbf{y}^{in} + \mathbf{N}(\mathbf{y}^{in}; \Theta).
\end{equation}
ResNet is particularly useful for learning unknown
dynamical systems (\cite{qin2019data}). Upon comparing
\eqref{flowmap} with \eqref{resnet}, it is straightforward
to see that the FNN operator $\mathbf{N}$ becomes an approximation for the effective
increment $\boldsymbol{\psi}$.
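In plain NumPy, the ResNet mapping \eqref{resnet} is nothing more than the following; the weight and bias containers are placeholders standing in for trained parameters $\Theta$.
\begin{verbatim}
import numpy as np

def fnn(y, weights, biases, sigma=np.tanh):
    """Feed-forward operator N(y; Theta): hidden layers with activation
    sigma, followed by a linear output layer."""
    h = y
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigma(W @ h + b)
    return weights[-1] @ h + biases[-1]

def resnet_map(y, weights, biases):
    """ResNet mapping y_out = y + N(y; Theta)."""
    return y + fnn(y, weights, biases)
\end{verbatim}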
\subsection{Theoretical Properties} \label{sec:theory}
We now present certain theoretical analysis for the proposed learning
algorithm.
The following result provides a bound between the solution of the
modified system \eqref{eq:ode_mod} and the original system
\eqref{eq:ode}. The difference between the two systems is due to the
use of the parameterized external input $\widetilde{\gamma}(t)$ \eqref{global_gamma} in
the modified system \eqref{eq:ode_mod}, as opposed to the original external input
$\gamma(t)$ in the original system \eqref{eq:ode}. Again, we emphasize that in many
practical situations when the external input is already known in a
parametric form, the modified system \eqref{eq:ode_mod} is equivalent
to the original system \eqref{eq:ode}.
\begin{proposition}
\label{thm:error_mod}
Consider the original system \eqref{eq:ode} with input $\gamma(t)$
and the modified system \eqref{eq:ode_mod} with input $\widetilde{\gamma}(t)$
\eqref{global_gamma}, and
assume the function $\mathbf{f}(\mathbf{x},\gamma)$ is Lipschitz
continuous with respect to both $\mathbf{x}$ and $\gamma$, with Lipschitz
constants $L_1$ and $L_2$, respectively.
If the difference in the inputs is bounded by
$$\|\gamma(t)-\widetilde{\gamma}(t)\|_{L^\infty([0, T])}\leq \eta, $$
where $T>0$ is a finite time horizon, then
\begin{equation*}
|\mathbf{x}(t)-\widetilde{\mathbf{x}}(t)|\leq L_2\, \eta\, t\, e^{L_1 t}, \quad \forall t\in [0, T].
\end{equation*}
\end{proposition}
\begin{proof}
For any $t\in [0, T]$,
\begin{align*}
\mathbf{x}(t)&=\mathbf{x}(0)+\int_0^t \mathbf{f}(\mathbf{x}(s), \gamma(s))\,ds,\\
\widetilde{\mathbf{x}}(t)&=\mathbf{x}(0)+\int_0^t \mathbf{f}(\widetilde{\mathbf{x}}(s), \widetilde{\gamma}(s))\, ds.
\end{align*}
We then have
\begin{align*}
|\mathbf{x}(t)-\widetilde{\mathbf{x}}(t)|&\leq \int^t_0 |\mathbf{f}(\mathbf{x}(s), \gamma(s))-\mathbf{f}(\widetilde{\mathbf{x}}(s), \widetilde{\gamma}(s))| \, ds\\
&\leq \int_0^t \abs{\mathbf{f}(\mathbf{x}(s), \gamma(s))-\mathbf{f}(\mathbf{x}(s), \widetilde{\gamma}(s))}\,ds+\int_0^t \abs{\mathbf{f}(\mathbf{x}(s), \widetilde{\gamma}(s))-\mathbf{f}(\widetilde{\mathbf{x}}(s), \widetilde{\gamma}(s))}\,ds\\
&\leq L_2\int_0^t \abs{\gamma(s)-\widetilde{\gamma}(s)}\,ds + L_1\int_0^t \abs{\mathbf{x}(s)-\widetilde{\mathbf{x}}(s)}\,ds\\
&\leq L_2\,\eta\,t +L_1 \int_0^t \abs{\mathbf{x}(s)-\widetilde{\mathbf{x}}(s)}\,ds.
\end{align*}
By using Gronwall's inequality, we obtain
$$
|\mathbf{x}(t)-\widetilde{\mathbf{x}}(t)| \leq L_2\,\eta\, t\, e^{L_1 t}.
$$
\end{proof}
We now recall the celebrated universal approximation property of
neural networks.
\begin{proposition}[\cite{pinkus1999}]
For any function $F\in C(\mathbb{R}^n)$ and a positive real
number $\varepsilon>0$, there exists a single-hidden-layer neural
network $N(\cdot\,; \Theta)$ with parameter $\Theta$ such that
\begin{equation*}
\max_{\mathbf{y}\in D} |F(\mathbf{y})-N(\mathbf{y}\,;\Theta)| \leq \varepsilon,
\end{equation*}
for any compact set $D\subset \mathbb{R}^n$, if and only if the activation function is continuous and is not a polynomial.
\end{proposition}
Relying on this result, we assume the trained neural network model
\eqref{trained} has sufficient accuracy, which is equivalent
to assuming accuracy of the trained FNN operator $\mathbf{N}$ in
\eqref{evo_mod_new} relative to the
one-step evolution operator $\widetilde{\pphi}$ in \eqref{evo_mod}.
More specifically, let $\mathcal{D}$ be the convex hull of the
training data set $\mathcal{S}$, defined in \eqref{training_set}. We then assume
\begin{equation}
\label{eq:err_NN}
\left\|\mathbf{N}(\cdot; \Theta^*)-\widetilde{\pphi}(\cdot)\right\|_{L^\infty(\mathcal{D})}<\mathcal{E},
\end{equation}
where $\mathcal{E}\geq 0$ is a sufficiently small real number.
\begin{proposition}
\label{prop:NN_error}
Consider the modified system \eqref{evo_mod} and the trained network
model \eqref{eq:prediction} over the time instances
\eqref{tline}. Assume the exact evolution operator \eqref{evo_mod}
is Lipschitz continuous with respect to $\mathbf{x}$, with Lipschitz
constant $L_\phi$. If the network training is sufficiently
accurate such that \eqref{eq:err_NN} holds, then
\begin{equation}
\label{eq:approx_err}
\|\widehat{\mathbf{x}} (t_n)-\widetilde{\mathbf{x}}(t_n)\| \leq \frac{1-L_\phi^n}{1-L_\phi}
\mathcal{E}, \qquad n=0,\dots, N.
\end{equation}
\end{proposition}
\begin{proof}
Let $\boldsymbol{\Phi} = \widehat{\mathbf{I}} + \widetilde{\pphi}$, where $\widehat{\mathbf{I}}$ is defined in
\eqref{hatI}, we can rewrite the one-step evolution
\eqref{evo_mod} as
$$
\widetilde{\mathbf{x}}(t_{n+1})=[\boldsymbol{\Phi}(\cdot, \boldsymbol{\Gamma}_n, \delta_n)](\widetilde{\mathbf{x}}(t_n)).
$$
Meanwhile, the learned model \eqref{eq:prediction} satisfies, by
using \eqref{trained},
$$
\widehat{\mathbf{x}}(t_{n+1}) = [\widehat{\N}(\cdot; \Theta^*)](\widehat{\mathbf{x}}(t_n)).
$$
Let $e_n = \|\widehat{\mathbf{x}}(t_n) - \widetilde{\mathbf{x}}(t_n)\|$; we then have
\begin{equation*}
\begin{split}
e_n = & \left\|[\widehat{\N}(\cdot; \Theta^*)](\widehat{\mathbf{x}}(t_{n-1})) -[\boldsymbol{\Phi}(\cdot, \boldsymbol{\Gamma}_{n-1}, \delta_{n-1})](\widetilde{\mathbf{x}}(t_{n-1}))\right\|
\\
\leq &\left\|[\widehat{\N}(\cdot; \Theta^*) - \boldsymbol{\Phi}(\cdot, \boldsymbol{\Gamma}_{n-1},
\delta_{n-1}) ](\widehat{\mathbf{x}}(t_{n-1}))\right\| + \\
&
\left\|\left[\boldsymbol{\Phi}(\widehat{\mathbf{x}}(t_{n-1}), \boldsymbol{\Gamma}_{n-1}, \delta_{n-1})\right] -
\left[\boldsymbol{\Phi}(\widetilde{\mathbf{x}}(t_{n-1}), \boldsymbol{\Gamma}_{n-1}, \delta_{n-1})\right]\right\|\\
\leq &~\mathcal{E} + L_\phi \left\|\widehat{\mathbf{x}}(t_{n-1}) - \widetilde{\mathbf{x}}(t_{n-1})\right\|.
\end{split}
\end{equation*}
This gives
$$
e_n \leq \mathcal{E} + L_\phi e_{n-1}.
$$
Repeated application of this relation, together with $e_0=0$, immediately gives the conclusion, since $\sum_{k=0}^{n-1} L_\phi^k=(1-L_\phi^n)/(1-L_\phi)$.
\end{proof}
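The final step is easy to verify numerically. The following minimal
sketch (with arbitrary illustrative values of $\mathcal{E}$ and
$L_\phi$) iterates $e_n=\mathcal{E}+L_\phi\, e_{n-1}$ from $e_0=0$ and
compares the result with the closed-form geometric sum in
\eqref{eq:approx_err}.
\begin{verbatim}
E, Lphi = 1e-3, 0.9        # illustrative tolerance and Lipschitz constant
e, n_steps = 0.0, 50
for _ in range(n_steps):
    e = E + Lphi * e       # one application of the error recursion
closed_form = E * (1.0 - Lphi**n_steps) / (1.0 - Lphi)
assert abs(e - closed_form) < 1e-12
\end{verbatim}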
Note that the assumption of Lipschitz continuity on the evolution
operator in \eqref{evo_mod} is equivalent to assuming Lipschitz
continuity on the right-hand-side of the original system
\eqref{eq:ode}. This is a very mild condition, commonly assumed for the well-posedness
of the original problem \eqref{eq:ode}.
Upon combining the results from above and using the triangle inequality, we immediately obtain the following.
\begin{theorem}
Under the assumptions of Propositions \ref{thm:error_mod} and
\ref{prop:NN_error}, the solution of the trained network
model \eqref{eq:prediction} and the true solution of the original
system \eqref{eq:ode} over the time instances \eqref{tline} satisfy
\begin{equation}
\label{eq:err_est}
\left\|\widehat{\mathbf{x}}(t_n)-\mathbf{x}(t_n)\right\| \leq L_2\,\eta\, t_n\, e^{L_1 t_n}
+ \frac{1-L_\phi^n}{1-L_\phi}
\mathcal{E}, \qquad n=0,\dots, N.
\end{equation}
\end{theorem}
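The bound \eqref{eq:err_est} is straightforward to evaluate. The
following minimal sketch assumes, for concreteness, uniform time steps
$t_n=n\delta$ and purely illustrative values of the constants; it also
covers the border case $L_\phi=1$, where the geometric sum degenerates
to $n\mathcal{E}$.
\begin{verbatim}
import math

def total_error_bound(n, delta, L1, L2, eta, Lphi, E):
    """Right-hand side of the combined estimate at t_n = n*delta:
    modeling error of the modified system plus the accumulated
    network approximation error."""
    t_n = n * delta
    modeling = L2 * eta * t_n * math.exp(L1 * t_n)
    network = E * n if Lphi == 1 else E * (1 - Lphi**n) / (1 - Lphi)
    return modeling + network

# illustrative numbers only
print(total_error_bound(n=100, delta=0.01, L1=1.0, L2=1.0,
                        eta=1e-3, Lphi=1.05, E=1e-4))
\end{verbatim}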
\begin{remark} \label{remark1}
It is worth noting that the DNN structure is employed here to
accomplish the approximation \eqref{eq:err_NN}. Such an
approximation can be conducted by any other proper approximation
technique using, for example, (orthogonal) polynomials,
Gaussian processes, radial basis functions, etc. The target function is
the one-step evolution operator $\widetilde{\pphi}$ in \eqref{evo_mod}.
Since for many problems of practical interest the map
$\widetilde{\pphi}:{\mathbb R}^{d+n_b+1}\to{\mathbb R}^d$ is high-dimensional and
highly nonlinear, a DNN represents a more flexible and practical choice
and is therefore the focus of this paper.
\end{remark}

\section{Experimental set-up}\label{sec:Exp}
These works motivate us to study and utilize the quantum properties of SPPs in more advanced quantum information protocols. Quantum teleportation uses entanglement as a resource to faithfully transfer unknown quantum states between distant nodes. Ever since it was first introduced by C.~H.~Bennett \textit{et al.}~\cite{Bennett1993PRL} and experimentally realized using photonic qubits~\cite{Bouwmeester1997, Boschi1998PRL}, quantum teleportation has become the essential protocol for establishing worldwide quantum networks~\citep{Cirac1997PRL.78.3221, Wehner2018Science.362.9288}. The teleportation distance has increased significantly over the last two decades~\cite{Marcikic2003Nature.421.509, Ursin2004Nature.430.849, Jin2010NP.4.376, Yin2012Nature.488.185, Ma2012Nature.489.269} and has recently been successfully extended to more than a thousand kilometres from the ground to a satellite~\cite{Ren2017Nature.549.70}. To build a quantum network with more functionalities, various physical systems are required with individual advantages in terms of transferring and processing the quantum state.
\vspace*{0.5cm}
\noindent\textbf{Results}
\noindent\sectionuser{The conceptual scheme of SPP mediated quantum teleportation}\qquad
Here, we experimentally realize the quantum state teleportation from a single photon to a single SPP, which is a single qubit consisting of collective electronic excitations typically involving $\sim\!10^{6}$ electrons~\cite{Tame2013QP}. Our scheme is based on three qubits; it was first proposed by S.~Popescu~\cite{Popescu1995} and realized in experiment by D.~Boschi \textit{et al.}~\cite{Boschi1998PRL}. The conceptual framework of our experiment with the three-qubit scheme is shown in Fig.~\ref{fig1:ExpSet}\textbf{a}. The entanglement between qubits 1 (Q1) and 2 (Q2), serving as the quantum channel, is generated from the entangled photon-pair source and distributed to Alice and Bob. An input state of qubit 0 (Q0) is sent to Alice. Alice performs a Bell-state measurement (BSM)~\cite{Boschi1998PRL}, projecting Q0 and Q1 randomly into one of the four Bell states, each with a probability of 25\%. Then, the outcomes of the BSM are sent to Bob through a classical communication (CC) channel. Q2 is sent to a subwavelength hole array sample patterned on a gold film at Bob's site to facilitate the photon-SPP-photon conversion~\cite{Ebbesen1998Nature.391.667}. There, the quantum state of Q2 is transferred to qubit 3 (Q3), carried by a single SPP. This SPP propagates along the surface of the sample and subsequently couples to an optical photon (Q4), which radiates towards detectors in the far field. According to the outcomes of the BSM, the corresponding unitary transformations (UTs) are applied to Q4. Finally, we perform quantum state tomography (QST)~\cite{Leonhardt1995PRL.74.4101, James2001PRA.64.052312} on Q4 and verify whether the quantum state teleportation from a single photon to a single SPP is successful by evaluating the quantum state fidelities of Q4 relative to Q0 and the quantum process fidelity of the whole procedure.
\vspace*{0.3cm}
\noindent\sectionuser{Subwavelength hole array and its characterization}\qquad
Figure~\ref{fig1:ExpSet}\textbf{b} shows a scanning electron microscopy (SEM) image of the subwavelength hole array used in our experiment. The gold film is perforated over a square area of 189$\times$189 $\mu$m$^2$ with periodic hole arrays by using a focused ion beam. The hole diameter and the period are 200 nm and 700 nm, respectively. Although the hole array reduces the direct photon transmission, it allows resonant excitation of the SPP~\cite{Ebbesen1998Nature.391.667}. The transmission spectrum of our sample is shown in Fig.~\ref{fig1:ExpSet}\textbf{c} and has a peak centred at approximately 809 nm with a full width at half maximum (FWHM) of $\sim$70 nm. The peak transmittance of the sample at 809 nm is approximately 0.8\%. The transmission curves for different light polarizations are similar, indicating that our sample is nearly polarization-independent. A numerical calculation based on the geometry of the array and the wavevector matching shows that this peak is associated with the ($\pm$1,$\pm$1) SPP modes at the glass-metal interface~\cite{Ghaemi1998PRB.58.6779}. These modes can excite the SPPs propagating along the four diagonal directions. We experimentally measure the SPP propagation with a laser and a charge-coupled device (CCD), as shown in Fig.~\ref{fig1:ExpSet}\textbf{d}. By fitting to the SPP propagation along the diagonal direction, we estimate the $1/e$ decay length of the plasmonic mode to be $\sim$4.48$\pm$0.50 $\mu$m. See the Supplementary Materials for details on the numerical simulation, sample fabrication and characterizations of this device.
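The decay-length estimate amounts to a least-squares fit of an
exponential intensity profile. The sketch below uses synthetic data
standing in for the measured CCD line cut; the model, noise level and
initial guesses are illustrative assumptions rather than the actual
analysis pipeline.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def spp_intensity(x, I0, L_spp, bg):
    """Exponentially decaying SPP intensity with a constant background."""
    return I0 * np.exp(-x / L_spp) + bg

x = np.linspace(0.0, 20.0, 200)                 # position (microns)
rng = np.random.default_rng(0)
y = spp_intensity(x, 1.0, 4.48, 0.02) + 0.01 * rng.normal(size=x.size)

popt, pcov = curve_fit(spp_intensity, x, y, p0=(1.0, 3.0, 0.0))
L_fit, L_err = popt[1], np.sqrt(pcov[1, 1])
print(f"1/e decay length: {L_fit:.2f} +/- {L_err:.2f} um")
\end{verbatim}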
\vspace*{0.3cm}
\noindent\sectionuser{Realizing quantum teleportation between photon and SPP}\qquad
Figure~\ref{fig1:ExpSet}\textbf{e} presents a layout of our experimental setup. The entangled photon pairs are generated from spontaneous parametric down conversion, which is realized by embedding a periodically poled KTiOPO$_4$ (PPKTP) crystal in a Sagnac interferometer~\cite{Kim2006PRA.73.012316, Fedrizzi2007OE.15.15377}. The quantum state of photons A and B is similar to the singlet state:
\begin{align}\label{eq1:SourceState}
\ket{\Psi^{-}}_{AB} = \frac{1}{\sqrt{2}}(\ket{H}_{A}\ket{V}_{B}-\ket{V}_{A}\ket{H}_{B}),
\end{align}
which has a fidelity of approximately 98$\%$. $\ket{H}_{A}$ ($\ket{V}_{A}$) denotes the horizontal (vertical) polarization state of photon A. The same notation is used for photon B. We obtain coincidence counts at a rate of approximately 100 kHz with a pump power of 20 mW.
\begin{figure}[!htbp]
\includegraphics[width=0.8\textwidth]{fig1ExpSetup.pdf}
\caption{\label{fig1:ExpSet}Experimental layout of the surface plasmon polariton (SPP) mediated quantum teleportation. (\textbf{a})~The conceptual framework of our experiment. At Alice's site, the input states are prepared using qubit 0 (Q0). An Einstein-Podolsky-Rosen (EPR) source generates two entangled qubits, Q1 and Q2. Q1 is sent to Alice for a Bell-state measurement (BSM)~\cite{Boschi1998PRL}. Q2 is sent to Bob to excite the SPP qubit, Q3. Through the photon-plasmon-photon conversion, the quantum states of the SPPs are transformed back to a photonic qubit, Q4. The outcomes of the BSM are sent to Bob using the classical communication (CC). Bob then applies a unitary transformation (UT) to Q4. As a result, the output state $\ket{\phi}^4_B$ is identical to $\ket{\phi}^0_A$; hence, teleportation is accomplished. (\textbf{b})~The SEM image of the subwavelength hole arrays with 200 nm diameter and 700 nm period. (\textbf{c})~Transmission spectrum of the hole arrays. The resonance at approximately 809 nm (dashed line) is the ($\pm$1,$\pm$1) mode, corresponding to the SPPs propagating along the diagonal direction. (\textbf{d})~The far-field image shows SPP propagation excited with classical laser light. (\textbf{e})~Sketch of the experimental setup. The polarization-entangled source uses a type-II down-conversion Sagnac interferometer, where a $\chi^{(2)}$ nonlinear crystal (periodically poled KTiOPO$_4$, PPKTP) is coherently pumped by 405 nm laser light from clockwise and counter-clockwise directions. The central wavelength of the entangled signal (A) and idler (B) photons is approximately 810 nm. Photon A is sent to Alice. The polarization degree of freedom (DOF) (Q0) of photon A is used for preparing the six input states. The four Bell states are constructed using the path (Q1) and polarization (Q0) DOF of photon A. Photon B is sent to Bob. The polarization of photon B (Q2) is used to excite the SPPs. After undergoing a photon-plasmon-photon conversion, the quantum state of the SPPs (Q3) is transferred back to the photon (Q4). The results of the BSM (00, 01, 10, 11) are sent to Bob by CC and subsequently used to trigger the electro-optic modulators (EOMs, $\sigma_x$, $\sigma_z$) to apply the corresponding UTs ($I, \sigma_z, \sigma_x, i\sigma_y$). The quantum state is finally analysed through quantum state tomography (QST). HWP: half-wave plate; QWP: quarter-wave plate; BD: beam displacer; DM: dichromatic mirror; d-PBS: dual-wavelength polarizing beam splitter.}
\end{figure}
We employ the two-photon three-qubit scheme to realize the SPP mediated quantum teleportation~\cite{Boschi1998PRL, Jin2010NP.4.376}. The two-photon three-qubit scheme has the advantage that it avoids the very low detection rates caused by the simultaneous detection of three photons and allows a 100\% Bell-state measurement~\cite{Popescu1995, Boschi1998PRL, Jin2010NP.4.376}. We note that the two-photon scheme has a limitation: it cannot be used to teleport the quantum state of an independent photon coming from outside the setup. In our experiment, photons A and B are sent to Alice and Bob through single-mode fibres (SMFs), respectively. We use photon A's polarization as Q0 and its path state as Q1. Photon B's polarization acts as Q2. First, we swap the entanglement between Q0 and Q2 (see Eq.~\eqref{eq1:SourceState}) to Q1 and Q2. We achieve this by sending photon A through a beam displacer (BD1 in Fig.~\ref{fig1:ExpSet}\textbf{e}), which laterally displaces the horizontally polarized component into the left path mode (denoted as $\ket{l}$) and transmits the vertically polarized component directly (denoted as $\ket{r}$). The two-photon (A and B) three-qubit (Q0, Q1 and Q2) state can be written as
\begin{align}\label{eq2:WholeStateBD1}
\ket{\Psi^{-}}^{012}_{AB} = \frac{1}{\sqrt{2}}(\ket{H}^{0}_{A}\ket{l}^{1}_{A}\ket{V}^{2}_{B}-\ket{V}^{0}_{A}\ket{r}^{1}_{A}\ket{H}^{2}_{B}).
\end{align}
Note that the superscripts are labelled for the qubit and the subscripts are labelled for the photon. Then, a 45$^\circ$-oriented HWP (HWP@45$^\circ$ in Fig.~\ref{fig1:ExpSet}\textbf{e}) rotates the horizontal component ($\ket{H}_A$) to the vertical polarization ($\ket{V}_A$) in the left path, $\ket{l}$. Along the right path, $\ket{r}$, a 90$^\circ$-oriented HWP (HWP@90$^\circ$ in Fig.~\ref{fig1:ExpSet}\textbf{e}) is used for phase compensation. After these two HWPs, the polarization state of photon A (qubit 0) is in $\ket{V}$ and is factorized out. The full state is as follows:
\begin{align}\label{eq2:WholeStateHWP}
\ket{\Psi^{-}}^{012}_{AB} = \frac{1}{\sqrt{2}}\ket{V}^{0}_{A}\otimes(\ket{l}^{1}_{A}\ket{V}^{2}_{B}-\ket{r}^{1}_{A}\ket{H}^{2}_{B}).
\end{align}
Consequently, the initial entanglement between the polarization states of photons A and B is swapped into the path state of photon A (qubit 1) and the polarization state of photon B (qubit 2)~\cite{Giacomini2002PRA.66.030302, Takeda2013Nature.500.315}.
The combination of HWP2 and QWP2 is then used to create the polarization state to be teleported (see Supplementary Section V), i.e.~$\ket{\phi}^{0}_{A}=\alpha\ket{H}^{0}_{A}+\beta\ket{V}^{0}_{A}$, where $\alpha$ and $\beta$ are two complex numbers satisfying $|\alpha|^2+|\beta|^2=1$. This process can be expressed as follows:
\begin{eqnarray}\label{eq3:InputState}
\ket{\Psi^{-}}^{012}_{AB} &=& \left(\alpha\ket{H}^{0}_{A}+\beta\ket{V}^{0}_{A}\right)\otimes\frac{1}{\sqrt{2}}\left(\ket{l}^{1}_{A}\ket{V}^{2}_{B}-\ket{r}^{1}_{A}\ket{H}^{2}_{B}\right) \nonumber\\
&=& \frac{1}{2}\left(i\sigma_y\ket{\phi}^{2}_{B}\ket{\Phi^{+}}^{01}_{A}+\sigma_x\ket{\phi}^{2}_{B}\ket{\Phi^{-}}^{01}_{A}-\sigma_z\ket{\phi}^{2}_{B}\ket{\Psi^{+}}^{01}_{A}+I\ket{\phi}^{2}_{B}\ket{\Psi^{-}}^{01}_{A}\right)\,.
\end{eqnarray}
Here the polarization (Q0) and path states (Q1) of photon A are used to construct the four Bell states: $\ket{\Psi^{\pm}}^{01}_A=\frac{1}{\sqrt{2}}(\ket{V}_{0}\ket{l}_{1}\pm\ket{H}_{0}\ket{r}_{1})$ and $\ket{\Phi^{\pm}}^{01}_A=\frac{1}{\sqrt{2}}(\ket{H}_{0}\ket{l}_{1}\pm\ket{V}_{0}\ket{r}_{1})$. Alice realizes a complete BSM using the polarization (Q0) and path (Q1) DOF of photon A with BD2 and BD3 (see Supplementary Materials V for details). The outcomes of the BSM are sent from Alice to Bob via coaxial cables.
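The decomposition in Eq.~\eqref{eq3:InputState} can be verified with a
few lines of linear algebra. The following minimal sketch (our own
illustration, with $\ket{0}\equiv\ket{H}$ or $\ket{l}$ and
$\ket{1}\equiv\ket{V}$ or $\ket{r}$) checks that projecting qubits 0
and 1 onto each Bell state leaves qubit 2 in the input state up to the
corresponding Pauli correction and a global phase, each outcome
occurring with probability 25\%.
\begin{verbatim}
import numpy as np
from numpy import kron

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ket(*bits):
    """Computational-basis product state |b1 b2 ...>."""
    v = np.array([1.0 + 0j])
    for b in bits:
        v = kron(v, I2[b])
    return v

def same_up_to_phase(u, v, tol=1e-12):
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return abs(abs(np.vdot(u, v)) - 1.0) < tol

rng = np.random.default_rng(1)
amp = rng.normal(size=2) + 1j * rng.normal(size=2)
amp /= np.linalg.norm(amp)
phi = amp[0] * ket(0) + amp[1] * ket(1)         # input state of Q0

# |psi> = |phi>^0 (x) (|l>^1 |V>^2 - |r>^1 |H>^2)/sqrt(2)
psi = (kron(phi, ket(0, 1)) - kron(phi, ket(1, 0))) / np.sqrt(2)
psi_mat = psi.reshape(4, 2)                     # rows: (Q0,Q1), cols: Q2

bells = {"Psi-": (ket(1, 0) - ket(0, 1)) / np.sqrt(2),
         "Psi+": (ket(1, 0) + ket(0, 1)) / np.sqrt(2),
         "Phi-": (ket(0, 0) - ket(1, 1)) / np.sqrt(2),
         "Phi+": (ket(0, 0) + ket(1, 1)) / np.sqrt(2)}
correction = {"Psi-": I2, "Psi+": Z, "Phi-": X, "Phi+": 1j * Y}

for name, bell in bells.items():
    cond = bell.conj() @ psi_mat                # unnormalized state of Q2
    assert abs(np.linalg.norm(cond)**2 - 0.25) < 1e-12
    assert same_up_to_phase(correction[name] @ cond, phi)
\end{verbatim}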
Photon B (Q2) is delayed by a 222-m-long (corresponding to a temporal delay of $\sim$1110 ns) SMF and then sent to Bob. At Bob's site, Q2 is focused on the subwavelength hole arrays and converted to a single surface plasmon (Q3). As a result, we coherently transmit the quantum state of Q2 to Q3, which is carried by the single-mode collective electronic excitations of the SPP. Then, the SPP propagates along the surface of the sample and subsequently couples out to an optical photon (Q4), radiating into the far field. After the BSM is performed by Alice, the quantum state of Q4 is projected into a pure state and equals the input state $\ket{\phi}^{0}_{A}$ up to a local UT according to the BSM result (see Eq.~\eqref{eq3:InputState}). The local UTs are realized with two EOMs, which perform the required $\sigma_x$ and $\sigma_z$ operations. Collectively, the EOMs perform the $i\sigma_y$ operation. After these local UTs, the output state of Q4 is: $\ket{\phi}^{4}_{B}=\alpha\ket{H}^{4}_{B}+\beta\ket{V}^{4}_{B}$. Finally, we collect the photons into an SMF and perform QST on Q4.
\vspace*{0.3cm}
\noindent\sectionuser{The results of quantum state and process tomography}\qquad
We prepare six input states of qubit 0: $\ket{H}$, $\ket{V}$, $\ket{D}$, $\ket{A}$, $\ket{R}$, and $\ket{L}$ (see Fig.~\ref{fig2:BlochDM}\textbf{a}). Here $\ket{D}=(\ket{H}+\ket{V})/\sqrt{2}$ and $\ket{A}=(\ket{H}-\ket{V})/\sqrt{2}$ denote the diagonal and anti-diagonal linearly polarized states, while $\ket{R}=(\ket{H}-i\ket{V})/\sqrt{2}$ and $\ket{L}=(\ket{H}+i\ket{V})/\sqrt{2}$ denote the right and left circularly polarized states of single photons, respectively.
\begin{figure}[!htbp]
\includegraphics[width=0.6\textwidth]{fig2TomoDM.pdf}
\caption{\label{fig2:BlochDM}Reconstructed density matrices of the six teleported states. (\textbf{a})~The initial prepared states are $\ket{H}$, $\ket{V}$, $\ket{D}$, $\ket{A}$, $\ket{R}$, and $\ket{L}$ and are indicated by coloured dots on the Bloch sphere. (\textbf{b},~\textbf{d},~\textbf{f},~\textbf{h},~\textbf{j},~\textbf{l})~Real parts of the reconstructed density matrices for the six states. (\textbf{c},~\textbf{e},~\textbf{g},~\textbf{i},~\textbf{k},~\textbf{m})~Imaginary parts of the reconstructed density matrices for the six states. The ideal density matrix is shown as the wire grid. The representative data here are for experiments with a $\ket{\Phi^{+}}$ Bell-state measurement outcome with SPP. The reconstructed density matrices of the six states for all four Bell-state measurement outcomes are provided in the Supplementary Material.}
\end{figure}
To characterize the quantum teleportation mediated by the SPP, we perform single-qubit QST measurements on the teleported quantum states. In Fig.~\ref{fig2:BlochDM}\textbf{b}-\textbf{m}, we show the real and imaginary parts of the reconstructed density matrices for different input states. With the reconstructed density matrices, we calculate the state fidelity $F=\prescript{}{ideal}{\left\langle\phi\right|}\rho\ket{\phi}_{ideal}$, where $\rho$ is the reconstructed density matrix and $\ket{\phi}_{ideal}$ is the ideal quantum state. The results of the quantum state fidelity after quantum teleportation are shown in Fig.~\ref{fig3:Fid}. For comparison, we present the state fidelities both without and with photon-SPP-photon conversion. We can see from Fig.~\ref{fig3:Fid} that all the fidelities are well above the limit of $2/3$ that can be achieved using a classical strategy without employing entanglement~\cite{Massar1995PRL}. By averaging the single-photon fidelities over all input states, we obtain an average fidelity of 92.67$\pm$0.32\% (without SPP) and 88.91$\pm$0.38\% (with SPP) for the retrieved initial states, including active feed-forward operations, which exceed the classical limit of 2/3 by more than 81 and 58 standard deviations, respectively~\cite{Massar1995PRL}. We note that the difference in the state fidelities between the cases without and with the SPP is mainly caused by the excited SPP distorting the beam pattern, which lowers the contrast of the phase flip of the two EOMs. A quantitative analysis of the reduction in the achievable fidelity can be found in the Supplementary Materials (Section VII).
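For reference, once a density matrix has been reconstructed, the state
fidelity $F=\langle\phi|\rho|\phi\rangle$ is a one-line computation.
The following minimal sketch uses a toy, partially dephased density
matrix rather than the experimental data.
\begin{verbatim}
import numpy as np

def state_fidelity(rho, phi):
    """F = <phi|rho|phi> for a pure target state |phi>."""
    phi = phi / np.linalg.norm(phi)
    return float(np.real(np.conj(phi) @ rho @ phi))

# toy example: target |D> = (|H> + |V>)/sqrt(2)
D = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.array([[0.5, 0.40], [0.40, 0.5]], dtype=complex)
print(state_fidelity(rho, D))   # 0.9, above the classical limit 2/3
\end{verbatim}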
\begin{figure}[!htbp]
\includegraphics[width=0.8\textwidth]{fig3Fidelity.pdf}
\caption{\label{fig3:Fid}Quantum state fidelities of quantum teleportation for the six different input states: $\ket{H}$, $\ket{V}$, $\ket{D}$, $\ket{A}$, $\ket{R}$ and $\ket{L}$ with four Bell-state measurement results: $\ket{\Psi^{-}}$, $\ket{\Psi^{+}}$, $\ket{\Phi^{-}}$ and $\ket{\Phi^{+}}$. The different BSM outcomes are denoted with different colours. (\textbf{a})~The fidelities measured without the SPP involved. We perform this measurement by moving the subwavelength hole array out from the setup. (\textbf{b})~The fidelities measured with the SPP involved. All the fidelities exceed the classical limit of 2/3 (dashed line). The error bars are calculated using a Monte Carlo routine assuming Poissonian statistics.}
\end{figure}
\begin{figure}[!htbp]
\includegraphics[width=0.6\textwidth]{fig4ProcessTomo.pdf}
\caption{\label{fig4:ProTomo}Results of quantum process tomography for the teleportation procedure. (\textbf{a})~The real part of the reconstructed process matrix $\chi$ without the SPP (W.O. SPP). The ideal process matrix has only one nonzero component $(\chi_{ideal})_{00}$=1, and we obtain a process fidelity of $F_{proc}=\text{Tr}(\chi_{ideal}\chi)=(89.80\pm0.45)\%$. (\textbf{b})~The real part of the reconstructed process matrix $\chi$ with the SPP (With SPP). The process fidelity is $F_{proc}=\text{Tr}(\chi_{ideal}\chi)=(82.01\pm0.50)\%$. (\textbf{c}, \textbf{d})~Bloch sphere representations of the process without (W.O.) (\textbf{c}) and with (\textbf{d}) the SPP involved. The plot shows how the input states lying on the surface of the initial Bloch sphere (meshed surface) are transformed by our teleportation protocol, with the output states lying on the solid surface.}
\end{figure}
Since quantum teleportation is a quantum process, it is natural to describe it quantitatively with quantum process tomography~\cite{Nielsen2010}; the reconstructed density matrices of the teleported quantum states allow us to fully characterize the teleportation procedure. We choose four input states ($\rho_{in} = \ket{H}\bra{H}, \ket{V}\bra{V}, \ket{D}\bra{D}, \ket{L}\bra{L}$) and their corresponding output states $\rho_{out}$ to benchmark the process of quantum teleportation. The effect of teleportation on $\rho_{in}$ is determined by the process matrix $\chi$, which is defined by $\rho_{out} = \sum_{l,k = 0}^3 \chi_{lk} \sigma_l \rho_{in} \sigma_k$, where $\sigma_{i}$ are the Pauli matrices with $\sigma_{0}$ being the identity operator. A perfect process matrix of quantum teleportation has only one nonzero component, $\chi_{00} = 1$, indicating that the input state is faithfully teleported without a reduction in the state fidelity. The real parts of the process matrix $\chi$ for the two situations are shown in Fig.~\ref{fig4:ProTomo}\textbf{a} (without SPP) and Fig.~\ref{fig4:ProTomo}\textbf{b} (with SPP), respectively. The quantum process fidelities, i.e.~$\mathcal{F}_{proc}=\text{Tr}(\chi_{ideal}\chi)$, for our experiment without and with the SPP are 0.898$\pm$0.005 and 0.820$\pm$0.005, respectively. These fidelities correspond to 80-$\sigma$ and 64-$\sigma$ violations of the classical bound of 0.5~\cite{Nielsen2002PLA, Ma2012Nature.489.269}. A single-qubit quantum process, including quantum teleportation, can be represented graphically by a deformation of the Bloch sphere subjected to the quantum process~\cite{Nielsen2010}. As shown in Fig.~\ref{fig4:ProTomo}\textbf{c} (without SPP) and Fig.~\ref{fig4:ProTomo}\textbf{d} (with SPP), the ideal input states of Q0 are denoted as the states lying on the meshed surface of the Bloch sphere. After the photon-to-SPP quantum teleportation, the initial Bloch spheres are deformed into anisotropic ellipsoids, shown in the solid blue-yellow colour, corresponding to the final output states.
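The action of the process matrix and the evaluation of
$\mathcal{F}_{proc}=\text{Tr}(\chi_{ideal}\chi)$ can be illustrated
compactly. In the sketch below the imperfect $\chi$ is a toy
identity-plus-depolarizing mixture whose weight is merely chosen to
reproduce a process fidelity of $0.82$; it is not the reconstructed
experimental matrix.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [I2, X, Y, Z]

def apply_process(chi, rho):
    """rho_out = sum_{l,k} chi[l,k] sigma_l rho sigma_k."""
    return sum(chi[l, k] * sigma[l] @ rho @ sigma[k]
               for l in range(4) for k in range(4))

chi_ideal = np.zeros((4, 4), dtype=complex)
chi_ideal[0, 0] = 1.0                       # perfect teleportation

p = 0.18                                    # toy depolarizing weight
chi = (1 - p) * chi_ideal + (p / 3) * np.diag([0, 1, 1, 1]).astype(complex)

print(np.real(np.trace(chi_ideal @ chi)))   # process fidelity: 0.82

rho_in = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |D><D|
print(np.real(np.trace(apply_process(chi, rho_in))))       # trace = 1.0
\end{verbatim}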
\vspace*{0.5cm}
\noindent\textbf{Summary and Discussion}
\noindent In conclusion, we demonstrate faithful teleportation of quantum states from a qubit encoded in a single photon to a qubit carried by an SPP. The photon-to-SPP quantum teleportation is completely characterized by quantum state and process tomography. Both the quantum state fidelities of the six teleported states and the quantum process fidelities exceed the corresponding classical limits by tens of standard deviations. These results conclusively confirm the quantum nature of the teleportation of arbitrary unknown quantum states from a single photon to a single SPP. Our work is a further step towards exploring the fascinating quantum behaviours of SPPs. The comprehensive utilization of the quantum properties of SPPs in more advanced protocols will promote the rapid development of future quantum information processing with quantum plasmonic devices.
\vspace*{0.5cm}
\noindent\textbf{References}

\section{Introduction}
Non-equilibrium evolution of isolated quantum many-body systems has
recently come to the center of attention \cite{2011RvMP...83..863P,2007PhRvL..98r0601K,2008Natur.452..854R,2008PhRvL.100j0601B,2009PhRvL.103j0403R,2010NJPh...12e5020C,2010PhRvE..81c6206S,2010NJPh...12e5017B,2011PhRvL.106e0405B,2011PhRvL.106v7203C,2012JSMTE..07..016C,2013PhRvL.110j0406S,2013PhRvL.111s7203M,2013PhRvA..87e3628I}
due to spectacular recent advances in experiments with ultra-cold atoms
\cite{2006Natur.440..900K,2012Natur.481..484C,2012NatPh...8..325T,2013Natur.502...76F}.
Whether isolated quantum systems reach an equilibrium in some appropriate
sense, and, if the answer is yes, the nature of the steady state reached,
are long-standing and fundamental problems in theoretical physics.
For generic systems it is expected that provided driving forces are
absent, after a sufficiently long time they reach a steady state in
which the expectation values of some class of relevant observables
are described by a thermal Gibbs ensemble \cite{2008Natur.452..854R,2011RvMP...83..863P}.
The choice of the class of observables generally follows the idea
that they are supported on subsystems which in the thermodynamic limit
are infinitely smaller than the rest of the system. The rest of the
system can then act as a heat bath, leading to thermalization. Such
classes are given by local observables (e.g. short range correlators)
on a chain with local Hamiltonian, or observables involving (sums
over) few-body operators in a many-body system.
Thermalization, however, is only expected to hold for systems with
generic, i.e. non-integrable dynamics. Dynamics of integrable systems
is constrained by the conservation of extra charges which prevents
relaxation to a thermal state. It was suggested in \cite{2007PhRvL..98e0405R}
that in the integrable case the long-time asymptotic stationary state
is described by a statistical ensemble involving all the relevant
conserved charges $\{\hat{Q}_{i}\}$, the Generalized Gibbs Ensemble
(GGE). When considering local quantities as relevant observables,
it is intuitively clear that the relevant charges to include are the
local ones. In the case of integrable systems, the generating function
of such charges is a commuting family of transfer matrices as a function
of the so-called spectral parameter \cite{9780511628832}.
The GGE can be derived by applying the maximum entropy principle under
the constraint provided by the charges $\{\hat{Q}_{i}\}$, therefore
the idea is very natural in the framework of statistical mechanics.
However, it is quite difficult to construct the ensemble for strongly
correlated genuinely interacting quantum systems. Therefore most initial
studies of GGE were carried out in theories equivalent to free fermions
\cite{2006PhRvL..97o6403C,2009PhRvL.102l7204R,2012JSMTE..07..022C,2012PhRvE..85a1133C,2013JSMTE..02..014G,2013PhRvB..87x5107F,2014JSMTE..03..016F,2013PhRvL.110x5301C,2014PhRvA..89a3609K,2014JSMTE..07..024S}
or by numerical studies of relatively small systems \cite{2011PhRvL.106n0405C,2014PhRvL.113e0601W}.
More recently it became possible to examine genuinely interacting
integrable systems such as the 1D Bose gas \cite{2012PhRvL.109q5301C,2013PhRvB..88t5131K,2014PhRvA..89c3601D},
the XXZ Heisenberg spin chain \cite{2013JSMTE..07..003P,2013JSMTE..07..012F,2014PhRvB..89l5101F}
or field theories \cite{2010NJPh...12e5015F,2013PhRvL.111j0401M,2014PhLB..734...52S}.
Even so, it took quite some time until the first precision numerical
test of predictions of the GGE against real time dynamics was performed
\cite{2014PhRvB..89l5101F}.
Surprisingly, the validity of the GGE for genuinely interacting theories
has been called into question by a series of recent studies. A crucial
step in this direction was the development of the quench action approach
\cite{Caux2013}, which provided an alternative way to study the time
evolution using the overlaps of the initial state with the eigenstates
of the post-quench Hamiltonian. In particular, it allows one to derive
overlap thermodynamic Bethe Ansatz (oTBA) equations for the steady
states, provided the exact overlaps are known. Using the results of
\cite{2012JSMTE..05..021K}, a determinant formula for overlaps with
the Néel state was first computed in \cite{Pozsgay2014a}. It was
substantially improved in \cite{2014JPhA...47n5003B,Brockmann2014c},
allowing the evaluation of the thermodynamic limit. In addition, the
dimer state overlaps were also expressed in terms of the Néel ones
in \cite{Pozsgay2014a}. The oTBA equations for the Néel state were
first obtained in \cite{Wouters2014}, and it was also shown that
the GGE and the oTBA give different results for the Bethe root densities,
and also for the nearest neighbour spin-spin correlators for the case
of the Néel initial state; the difference, however, is very small.
The oTBA equations were also derived independently for the Néel and
dimer states in \cite{Pozsgay2014}, where we compared the GGE and
oTBA results to numerical real time evolution computed using the infinite-volume
Time-Evolving Block Decimation (iTEBD) method \cite{2004PhRvL..93d0502V,2007PhRvL..98g0201V}.
It turned out that while the precision of the iTEBD is not enough
to resolve the difference between the GGE and the oTBA for the Néel
state, in the dimer case the issue can be unambiguously decided: the
GGE built upon the local conserved charges fails to describe the local
correlators in the steady state, while the oTBA results agree perfectly
with the numerics. Three elements proved to be necessary to arrive
at a definite conclusion:
\begin{enumerate}
\item A nontrivial and novel conjecture, published and thoroughly tested
in \cite{Mestyan2014}, which enabled the construction of local correlators
from the TBA solution (independently of whether it was derived for
a thermal or GGE ensemble, or from the quench action approach).
\item The exact overlaps of the dimer state, computed using the results
in \cite{Pozsgay2014a}.
\item Explicit numerical evaluation of the time evolution using the infinite-size
Time Evolving Block Decimation (iTEBD) algorithm developed in \cite{2004PhRvL..93d0502V,2007PhRvL..98g0201V}.
\end{enumerate}
In a subsequent version of \cite{Wouters2014}, using the results
for correlators derived in \cite{Mestyan2014} it was also shown that
the oTBA reproduces the diagonal ensemble, while the GGE differs from
it.
In the present paper we present previously unpublished background
material behind the work \cite{Pozsgay2014}, such as the derivation
of the dimer overlaps, the details of the numerical time evolution,
and the results for the $xx$ spin correlators. We also extend our
results to a q-deformed version of the dimer state: we give the exact
overlaps, construct the oTBA and the GGE predictions, and compare
them to the iTEBD results. It turns out that the oTBA and the GGE give
different predictions, but again the difference is too small to be
resolved by the numerics; however, it provides an important consistency
check for our framework. We also show that the exact solution proposed
for the TBA equations in \cite{Wouters2014} can be extended to the
q-dimer case as well, and give some more intuitive background for
it by relating it to the Loschmidt echo studied in \cite{Pozsgay2013}.
In addition, we derive a partially decoupled version of the formulas
for the correlation functions.
A further motivation for this paper is that the original results have
been widely discussed since their publication in \cite{Pozsgay2014},
and that some very important follow-up works appeared which clarified some
interesting aspects of the failure of the GGE \cite{Goldstein2014,2014JSMTE..09..026P,Pozsgay2014c}.
We give an exposition and discussion of the issues and arguments in
our conclusions.
The outline of the paper is as follows. Section \ref{sec:Overview-of-the-XXZ-TBA}
gives a brief overview of the XXZ Bethe Ansatz, where we collect the
necessary facts and set up our notations. Section \ref{sec:Bethe-Ansatz-for-QQ-in-XXZ}
describes the application of the Bethe Ansatz to quantum quenches
in the XXZ chain. We briefly describe the GGE and how to compute thermodynamics
in the GGE using TBA formalism, and give a summary of the quench action
approach. In Section \ref{sec:steadyStateDensities} we go through
the exact overlaps and give their construction for the dimer and q-dimer
states. Then we turn to the oTBA, discuss its exact solution for the
Néel and extend it to the q-dimer case. Section \ref{sec:Computing-correlation-functions}
summarizes the methods necessary to compute the correlation functions
from the oTBA solutions, which are then compared to the iTEBD in Section
\ref{sec:Discussion}.
Some longer derivations and technical details are relegated to appendices.
Appendix \ref{sec:appendixCorrelators} contains the derivation of
the decoupled correlator formulas, while Appendix \ref{sec:appendixTEBD}
describes the details of the iTEBD, including issues of reliability
and error estimation.
\section{Overview of the Bethe Ansatz for the \emph{XXZ }spin chain\label{sec:Overview-of-the-XXZ-TBA}}
The Hamiltonian of the XXZ spin chain is
\begin{equation}
H_{\textit{XXZ}}(\Delta)=\sum_{j=1}^{L}(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y}\sigma_{j+1}^{y}+\Delta(\sigma_{j}^{z}\sigma_{j+1}^{z}-1))\,,\label{eq:XXZHamiltonian}
\end{equation}
where $\Delta$ is the anisotropy parameter, and we impose periodic
boundary conditions $\sigma_{j+L}\equiv\sigma_{j}$. The Hamiltonian
can be diagonalized using the Bethe Ansatz \cite{Bethe1931,Orbach1958},
which we briefly summarize in the following in order to set down our
notations. Since the Hamiltonian conserves the total value of the
$z$ component of the spin
\begin{equation}
[H,S^{z}]=0\qquad S^{z}=\sum_{j=1}^{L}\sigma_{j}^{z}\;,
\end{equation}
the eigenstates are of the form
\begin{equation}
\begin{aligned}|\{\lambda_{j}\}_{j=1}^{M}\rangle & =\sum_{n_{1}<n_{2}<...<n_{M}}\Psi(\{\lambda_{j}\}_{j=1}^{M}|n_{1},n_{2},...,n_{M})|n_{1},n_{2},...,n_{M}\rangle\\
|n_{1},n_{2},...,n_{M}\rangle & =\sigma_{n_{1}}^{-}\sigma_{n_{2}}^{-}...\sigma_{n_{M}}^{-}|\uparrow\uparrow...\uparrow\rangle\,,
\end{aligned}
\label{eq:basisExpansion}
\end{equation}
parametrized by rapidities $\{\lambda_{j}\}_{j=1}^{M}$, and having
$S^{z}=L/2-M$. The corresponding wave function is built from $M$
plane waves with factorized scattering amplitudes:
\begin{equation}
\Psi(\{\lambda_{j}\}_{j=1}^{M}|n_{1},n_{2},...,n_{M})=\sum_{\pi\in S_{M}}I(\pi)\prod_{j=1}^{M}\left(\frac{\sin(\lambda_{j}+i\eta/2)}{\sin(\lambda_{j}-i\eta/2)}\right)^{n_{\pi(j)}}\left(\prod_{j<k}\frac{\sin(\lambda_{\pi(j)}-\lambda_{\pi(k)}+i\eta)}{\sin(\lambda_{\pi(j)}-\lambda_{\pi(k)}-i\eta)}\right)\,,\label{eq:BetheWaveFunction}
\end{equation}
in which $\pi$ is a permutation of $1,\dots,M$ with parity $I(\pi)$,
and $\eta$ is defined by $\cosh\eta=\Delta$. The Bethe equations
follow from imposing the periodic boundary condition:
\begin{equation}
\left(\frac{\sin(\lambda_{j}+i\eta/2)}{\sin(\lambda_{j}-i\eta/2)}\right)^{L}=\prod_{k\neq j}\frac{\sin(\lambda_{j}-\lambda_{k}+i\eta)}{\sin(\lambda_{j}-\lambda_{k}-i\eta)}\,.\label{eq:BetheEquations}
\end{equation}
The thermodynamics of the chain can be described on the basis of the
string hypothesis; the exposition below follows the work \cite{9780511524332}.
The thermodynamic limit is defined by
\begin{equation}
L,\, M\rightarrow\infty\qquad\mbox{ such that}\qquad M/L=\mbox{const.}\,,
\end{equation}
and shall be abbreviated by $\TDL$. For large but finite values of
the chain length $L$, the rapidities $\{\lambda_{j}\}_{j=1}^{M}$
parametrizing the wave function are organized into configurations
of approximate strings. The $j$-th rapidity in the $\alpha$-th approximate
$n$-string is given by
\begin{equation}
\lambda_{\alpha,j}^{n}=\lambda_{\alpha}^{n}+\frac{i\eta}{2}(n+1-2j)+\delta_{\alpha,j}^{n}\,.
\end{equation}
The string hypothesis states that the deviations $\delta_{\alpha,j}^{n}$
are $\mathcal{O}(1/L)$, therefore the thermodynamic limit of the
wave function contains $n$-strings that are fully defined by their
real part $\lambda_{\alpha}^{n}$. Due to the above definition of
the thermodynamic limit, the strings will have a finite density in
rapidity space: the number of $n$-strings in a rapidity interval
$(\lambda,\lambda+d\lambda)$ is $L\rho_{n}(\lambda)d\lambda$, where
$\rho_{n}(\lambda)$ is the density of $n$-strings.
For any given set of string densities $\{\rho_{n}(\lambda)\}$, there
are usually many Bethe Ansatz eigenstates which scale to $\{\rho_{n}(\lambda)\}$.
However, we assume that the expectation values of relevant observables
are entirely determined by $\{\rho_{n}(\lambda)\}$ in the thermodynamic
limit; one can consider this as a selection condition for observables
to which the thermodynamic limit applies.
It is useful to define the entropy per site
\begin{equation}
s[\{\rho_{n}(\lambda)\}]=\frac{1}{L}\ln\,\mathcal{N}[\{\rho_{n}(\lambda)\}]\,,\label{eq:entropyDefinition}
\end{equation}
where $\mathcal{N}[\{\rho_{n}(\lambda)\}]$ is the number of Bethe
Ansatz eigenstates scaling to $\{\rho_{n}(\lambda)\}$. Since the
observables depend only on the densities by assumption, the generic
Bethe Ansatz eigenstate scaling to $\{\rho_{n}(\lambda)\}$ will be
denoted $|\{\rho_{n}(\lambda)\}\rangle$, omitting any further microscopic
labels.
The $n$-holes are defined as positions satisfying the $n$-string
quantization relation following from the Bethe equations, but absent
from the wave function. In the thermodynamic limit the density of
$n$-holes $\rho_{n}^{\text{h}}(\lambda)$ can be defined analogously
to $\rho_{n}(\lambda)$. As a consequence of the Bethe equations,
the densities $\rho_{n}(\lambda)$ and $\rho_{n}^{\text{h}}(\lambda)$
are constrained by the Bethe–Takahashi equations \cite{9780511524332}:
\begin{equation}
a_{n}(\lambda)=\rho_{n}(\lambda)+\rho_{n}^{h}(\lambda)+\sum_{m=1}^{\infty}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}d\lambda'T_{nm}(\lambda-\lambda')\rho_{m}(\lambda')\,,\label{eq:BetheTakahashiEquations}
\end{equation}
where
\begin{equation}
a_{n}(\lambda)=\frac{1}{2\pi}i\frac{d}{d\lambda}\log\left(\frac{\sin(\lambda+in\eta/2)}{\sin(\lambda-in\eta/2)}\right)=\frac{1}{\pi}\frac{\sinh(n\eta)}{\cosh(n\eta)-\cos(2\lambda)}\qquad n\ge1\,,\label{eq:a_nDefinition}
\end{equation}
and
\begin{equation}
T_{nm}(\lambda)=\begin{cases}
a_{|n-m|}(\lambda)+2a_{|n-m|+2}(\lambda)+...+2a_{n+m-2}(\lambda)+a_{n+m}(\lambda)\,, & \textrm{if }m\ne n\\
2a_{2}(\lambda)+2a_{4}(\lambda)+...+2a_{2n-2}(\lambda)+a_{2n}(\lambda)\,, & \textrm{if }m=n\,.
\end{cases}
\end{equation}
The equations (\ref{eq:BetheTakahashiEquations}) can be easily transformed
into a partially decoupled form \cite{9780511524332}:
\begin{equation}
\rho_{n}(\lambda)=\frac{1}{1+\eta_{n}(\lambda)}\left(s(\lambda)\delta_{n,1}+\left[s\star\left(\eta_{n-1}\rho_{n-1}+\eta_{n+1}\rho_{n+1}\right)\right](\lambda)\right)\,,\label{eq:BetheTakahashiDecoupled}
\end{equation}
where
\begin{equation}
\eta_{n}(\lambda)=\frac{\rho_{n}^{\textrm{h}}(\lambda)}{\rho_{n}(\lambda)}\,.\label{eq:etadef}
\end{equation}
In \eqref{eq:BetheTakahashiDecoupled} the convention $\eta_{0}\rho_{0}\equiv0$
is understood. Note that (\ref{eq:BetheTakahashiEquations}) determine
$\{\rho_{n}^{\text{h}}(\lambda)\}$ uniquely if $\{\rho_{n}(\lambda)\}$
is given; therefore we will write functionals of
$\{\rho_{n}^{\text{h}}(\lambda)\}$ as functionals of
$\{\rho_{n}(\lambda)\}$ for the sake of brevity.
The density of particles is given by
\begin{equation}
\mathcal{M}[\{\rho_{n}(\lambda)\}]=\sum_{n=1}^{\infty}\int_{-\pi/2}^{\pi/2}d\lambda n\rho_{n}(\lambda)=\frac{M}{L}\,.\label{eq:densityOfParticles}
\end{equation}
This functional, when restricted to Bethe Ansatz string densities
satisfying (\ref{eq:BetheTakahashiEquations}), can take values from
the interval $[0,\frac{1}{2}]$ and is simply related to the magnetization
per site, with zero magnetization corresponding to $\mathcal{M}=1/2$.
The \emph{XXZ }chain has infinitely many local conserved charges which
are the logarithmic derivatives of the \emph{XXZ} transfer matrix
\cite{9780511628832}:
\begin{equation}
Q_{k}=\left(i\frac{d}{d\lambda}\right)^{k-1}\log T^{\mathit{XXZ}}(\lambda)\big|_{\lambda=0}\,,\label{eq:XXZcharges}
\end{equation}
with $Q_{2}$ proportional to the Hamiltonian. The locality of these
charges means that they are all given as sums over the chain of terms
containing a limited number of spin operators acting on adjacent sites
\cite{1994MPLA....9.2197G}. Their expectation values in Bethe Ansatz
eigenstates can be constructed using the algebraic Bethe Ansatz \cite{9780511628832}.
In the thermodynamic limit, the expectation values of conserved charges
per site in a particular state $|\{\rho_{n}(\lambda)\}\rangle$ are
obtained as a sum of integrals of the string densities with appropriate
kernel functions $q_{n}^{(j)}(\lambda)$:
\begin{equation}
\langle\{\rho_{n}(\lambda)\}|Q_{k}|\{\rho_{n}(\lambda)\}\rangle=\sum_{n=1}^{\infty}\int_{-\pi/2}^{\pi/2}d\lambda\rho_{n}(\lambda)q_{n}^{(k)}(\lambda)\,,\label{eq:chargesIntegralFormula}
\end{equation}
with
\begin{equation}
\begin{aligned}q_{n}^{(k)}(\lambda) & =-2\pi\left(i\frac{d}{d\lambda}\right)^{k-2}a_{n}(\lambda)\qquad k\ge2\,.\end{aligned}
\end{equation}
It is important to note that a one-to-one correspondence between $\rho_{1}^{\text{h}}(\lambda)$
and the expectation values of conserved charges was derived in \cite{Wouters2014,Brockmann2014b}.
In our conventions, the relation reads
\begin{equation}
\begin{aligned}\langle\{\rho_{n}(\lambda)\}|Q_{j}|\{\rho_{n}(\lambda)\}\rangle & =\sum_{m=-\infty}^{\infty}\frac{\tilde{\rho}_{1}^{\text{h}}(m)-e^{-|m|\eta}}{2\cosh(m\eta)}\,(2mi)^{j-2}\,,\\
\tilde{\rho}_{n}(m) & =\int_{-\pi/2}^{\pi/2}d\lambda\,\rho_{n}(\lambda)e^{2im\lambda}\,,\\
\rho_{n}(\lambda) & =\frac{1}{\pi}\sum_{m=-\infty}^{\infty}\tilde{\rho}_{n}(m)e^{-2im\lambda}\,.
\end{aligned}
\end{equation}
Following \cite{Fagotti2013}, it is useful to define a generating
function as
\begin{equation}
G(\lambda)=\sum_{j=0}^{\infty}\frac{\lambda^{j}}{j!}\langle\{\rho_{n}(\lambda)\}|Q_{j+1}|\{\rho_{n}(\lambda)\}\rangle\,,\label{eq:ChargeGenFunDef}
\end{equation}
which is in a one-to-one relationship with $\rho_{1}^{\text{h}}(\lambda)$
\cite{Wouters2014,Brockmann2014b}:
\begin{equation}
G(\lambda)=[s\star(\rho_{1}^{\text{h}}-a_{1})](\lambda)\,,\label{eq:generatingFunctionRho1h}
\end{equation}
with
\begin{equation}
s(\lambda)=\frac{1}{2\pi}\left(1+2\sum_{k=1}^{\infty}\frac{\cos2k\lambda}{\cosh k\eta}\right)\,.\label{eq:sFunctionDefinition}
\end{equation}
\section{Bethe Ansatz for quantum quenches in the XXZ spin chain\label{sec:Bethe-Ansatz-for-QQ-in-XXZ}}
In general, quantum quenches are described by the following protocol:
\begin{enumerate}
\item The system is initially prepared in a ground state $|\Psi_{0}\rangle$
of a local Hamiltonian $H_{0}$.
\item At $t=0$ the Hamiltonian is suddenly changed, and from then on, the
system evolves in time according to the new or \emph{post-quench}
Hamiltonian $H$.
\item After a suitably long time, the system is expected to relax into a
steady state.
\end{enumerate}
We only consider translation invariant (global) quantum quenches when
both $H_{0}$ and $H$ are translationally invariant and the quench
is realized by changing one or more coupling constants in the Hamiltonian
at $t=0$. The post-quench Hamiltonian $H$ is the \emph{XXZ} Hamiltonian
(\ref{eq:XXZHamiltonian}), while the initial states are certain translationally
invariant product states $|\Psi_{0}^{\text{\ensuremath{\gamma}}}\rangle$
of the form
\begin{equation}
\begin{aligned}|\Psi_{0}^{\text{\ensuremath{\gamma}}}\rangle & =\frac{1+\hat{T}}{\sqrt{2}}|\psi_{0}^{\gamma}\rangle\\
|\psi_{0}^{\gamma}\rangle & =\left[\otimes^{L/2}\left(\frac{|\uparrow\downarrow\rangle-\gamma|\downarrow\uparrow\rangle}{\sqrt{1+\gamma^{2}}}\right)\right]\,,
\end{aligned}
\label{eq:initialStates}
\end{equation}
where $\hat{T}$ is the one site translation operator and $\gamma$
is a constant determining the specific initial state. In this study,
three different values of $\gamma$ are considered:
\begin{enumerate}
\item $\gamma=0$ is the translationally invariant Néel state, which is
a ground state of the $\Delta\rightarrow\infty$ limit of the\emph{
XXZ }Hamiltonian, and will also be denoted by $|\Psi_{0}^{\text{N}}\rangle=|\Psi_{0}^{0}\rangle$.
\item $\gamma=1$ is the translationally invariant Majumdar-Ghosh dimer product
state, which is a ground state of the Majumdar-Ghosh Hamiltonian \cite{1969JMP....10.1388M},
and will also be denoted by $|\Psi_{0}^{\text{D}}\rangle=|\Psi_{0}^{1}\rangle$.
\item $\gamma=q$, where $q+1/q=\Delta$, is the translationally invariant
$q$-deformed dimer product state, a ground state of the $q$-deformed
Majumdar-Ghosh Hamiltonian \cite{BATCHELOR1994}, which will be alternatively
denoted by $|\Psi_{0}^{q\text{D}}\rangle=|\Psi_{0}^{q}\rangle$.
\end{enumerate}
Although the quenches start from the translationally invariant states
defined above, for later convenience their non-translationally invariant
counterparts $|\psi_{0}^{0}\rangle$, $|\psi_{0}^{1}\rangle$ and
$|\psi_{0}^{q}\rangle$ will be denoted by $|\text{N}\rangle$, $|\text{D}\rangle$
and $|q\text{D}\rangle$, respectively.
\subsection{The diagonal ensemble\label{sub:The-diagonal-ensemble}}
The goal is to compute the infinite time average of the expectation
value of local observables $\mathcal{O}$ in the thermodynamic limit
\begin{equation}
\langle\mathcal{O}\rangle=\lim_{T\rightarrow\infty}\frac{1}{T}\int\limits _{0}^{T}dt\langle\Psi_{0}|e^{iHt}\mathcal{O}e^{-iHt}|\Psi_{0}\rangle\qquad(\TDL)\,.\label{eq:timeAverage}
\end{equation}
For well-behaved initial states, the average (\ref{eq:timeAverage})
can be computed as an ensemble average of the so-called \emph{diagonal
ensemble}. Expanding the expectation value $\langle\Psi_{0}|e^{iHt}\mathcal{O}e^{-iHt}|\Psi_{0}\rangle$
over the eigenstates of the post-quench Hamiltonian as
\begin{equation}
\langle\Psi_{0}|e^{iHt}\mathcal{O}e^{-iHt}|\Psi_{0}\rangle=\sum_{\alpha'}\sum_{\alpha}e^{-i(E_{\alpha}-E_{\alpha'})t}\langle\Psi_{0}|\alpha'\rangle\langle\alpha'|\mathcal{O}|\alpha\rangle\langle\alpha|\Psi_{0}\rangle\,,\label{eq:doubleSum}
\end{equation}
where $E_{\alpha}$ is the energy of the Hamiltonian eigenstate $|\alpha\rangle$,
and substituting (\ref{eq:doubleSum}) into (\ref{eq:timeAverage}):
\begin{equation}
\langle\mathcal{O}\rangle=\lim_{T\rightarrow\infty}\frac{1}{T}\int\limits _{0}^{T}dt\sum_{\alpha'}\sum_{\alpha}e^{-i(E_{\alpha}-E_{\alpha'})t}\langle\Psi_{0}|\alpha'\rangle\langle\alpha'|\mathcal{O}|\alpha\rangle\langle\alpha|\Psi_{0}\rangle\qquad(\TDL)\,.\label{eq:doubleSumTimeAverage}
\end{equation}
For non-degenerate systems and/or suitably generic starting states,
this expression simplifies to a single sum when taking the limits
in the given order, since the off-diagonal terms contain rapidly oscillating
exponentials that cancel out. The remaining terms give an average
over the so-called \emph{diagonal ensemble}:
\begin{equation}
\langle\mathcal{O}\rangle=\sum_{\alpha}|\langle\Psi_{0}|\alpha\rangle|^{2}\langle\alpha|\mathcal{O}|\alpha\rangle\qquad(\TDL)\,.\label{eq:diagonalEnsemble}
\end{equation}
The validity of the diagonal ensemble (\ref{eq:diagonalEnsemble})
is the underlying assumption of both the generalized Gibbs ensemble
hypothesis and the quench action formalism, which are introduced in
the remainder of this section.
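The dephasing mechanism behind (\ref{eq:diagonalEnsemble}) is easy to
visualize numerically on a small system. The sketch below uses a
random non-degenerate $6\times6$ Hamiltonian (a toy stand-in, not the
\emph{XXZ} chain) and compares the long-time average of
$\langle\mathcal{O}(t)\rangle$ with the diagonal-ensemble value.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d = 6
A = rng.normal(size=(d, d))
H = (A + A.T) / 2                       # random symmetric Hamiltonian
E, V = np.linalg.eigh(H)                # columns of V: eigenstates
psi0 = rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)
O = np.diag(np.arange(d, dtype=float))  # a simple observable

c = V.T @ psi0                          # overlaps <alpha|Psi_0>
diag_avg = np.sum(np.abs(c)**2 * np.diag(V.T @ O @ V))

ts = np.linspace(0.0, 2000.0, 4001)
vals = [np.real(np.vdot(V @ (np.exp(-1j*E*t) * c),
                        O @ (V @ (np.exp(-1j*E*t) * c)))) for t in ts]
print(np.mean(vals), diag_avg)          # the two values nearly coincide
\end{verbatim}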
\subsection{Remarks on the role of translational invariance}
The translational invariance of the chosen initial states $|\Psi_{0}^{\gamma}\rangle$
is important because by (\ref{eq:HamiltonianTranslationCommutator})
it assures the translational invariance of the steady state and thus
the validity of the diagonal ensemble (\ref{eq:diagonalEnsemble}).
It is clear that if the post-quench steady values are not translationally
invariant, then they cannot be described by the diagonal ensemble
(\ref{eq:diagonalEnsemble}).
Our initial states have the form (\ref{eq:initialStates})
\[
|\Psi_{0}^{\text{\ensuremath{\gamma}}}\rangle=\frac{1+\hat{T}}{\sqrt{2}}|\psi_{0}^{\gamma}\rangle
\]
where the states $|\psi_{0}^{\gamma}\rangle$ are invariant under
$\hat{T}^{2}$ and
\begin{equation}
[H_{\mathrm{XXZ}}(\Delta),\hat{T}]=0\,.\label{eq:HamiltonianTranslationCommutator}
\end{equation}
The states $|\psi_{0}^{\gamma}\rangle$ have non-zero overlaps only
with Hamiltonian eigenstates $|\alpha\rangle$ that satisfy
\begin{equation}
\hat{T}|\alpha\rangle=\pm|\alpha\rangle\,,\label{eq:eigenstateTranslationEigenvalue}
\end{equation}
which ensures that the diagonal terms of the double sum in (\ref{eq:doubleSumTimeAverage})
are translationally invariant because of (\ref{eq:HamiltonianTranslationCommutator}-\ref{eq:eigenstateTranslationEigenvalue}).
Therefore whenever translational invariance is broken, the off-diagonal
terms cannot cancel. In addition, if the diagonal ensemble is not
valid, then neither the GGE nor the quench action method can describe
the steady state, as they both assume the validity of the diagonal
ensemble.
On the other hand, the general validity of the diagonal ensemble is
not clear for translational invariance breaking initial states; in
particular, it is an open problem whether translational invariance
of observables is restored after quenches from the state $|\text{D}\rangle$
\cite{Fagotti2014,Fagotti2014a}. Nevertheless, for translationally
invariant initial states $|\Psi_{0}^{\gamma}\rangle$ the post-quench
steady state will be translationally invariant, since the post-quench
Hamiltonian preserves translational invariance.
\subsection{GGE and GTBA}
\subsubsection{The generalized Gibbs ensemble}
The idea of the generalized Gibbs ensemble \cite{Rigol2007} is to
include all the relevant conserved charges $Q_{j}$ in the statistical
operator with appropriate Lagrange multipliers $\beta_{j}$
\begin{equation}
\rho_{\text{GGE}}=\frac{1}{Z_{\text{GGE}}}\exp\!\left(\sum\limits _{k=1}^{\infty}\beta_{k}Q_{k}\right)\qquad Z_{\text{GGE}}=\Tr\,\exp\!\left(\sum\limits _{k=1}^{\infty}\beta_{k}Q_{k}\right)\,,\label{eq:GGE}
\end{equation}
in order to set the ensemble averages of the conserved charges to
their initial state expectation value
\[
\langle\Psi_{0}|Q_{k}|\Psi_{0}\rangle=\Tr\,\rho_{\text{GGE}}Q_{k}\,.
\]
According to the hypothesis corresponding to the generalized Gibbs
ensemble, the expectation value of observables in the post-quench
relaxed state may be expressed using $\rho_{\text{GGE}}$:
\[
\langle\mathcal{O}\rangle_{\text{GGE}}=\Tr\,\rho_{\text{GGE}}\mathcal{O}\,.
\]
The GGE is the ensemble which follows from constrained entropy maximization
while keeping the expectation values of the charges fixed to their
pre-quench values.
What are the relevant charges to include in the GGE statistical operator?
It is clear that for quenches starting from a pure state the full
quantum state of the system remains a pure state for all times. As
a result, the GGE can only be valid for some specific class of observables,
which we call \emph{relevant}. Due to the fact that both the pre-quench
and post-quench Hamiltonians are local in the spatial sense, here we choose
these observables as local correlations of spins of the form $\sigma_{j}\sigma_{j+l}$,
where the distance $l$ remains finite while $L\rightarrow\infty$
in the thermodynamic limit. The relevant conserved charges are then
expected to be the local charges $Q_{k}$, since the $k$th charge
can be expressed as a sum over terms containing products of at most
$k$ adjacent spin operators. This reasoning leads to the definition
of the GGE by including only local conserved charges and restricting
the class of relevant observables to local ones, as emphasized e.g.
in \cite{Fagotti2014}. We shall return to the role of locality in
the discussion.
\subsubsection{GGE and GTBA for the XXZ chain \label{sub:GTBA}}
For the XXZ chain the GGE predictions can be computed using two different
methods: the quantum transfer matrix (QTM) method \cite{1993ZPhyB..91..507K,Klumper2004}
and a generalized thermodynamic Bethe Ansatz (GTBA) \cite{2012JPhA...45y5001M}.
The QTM method for obtaining the mean values of local correlators
$\sigma_{j}\sigma_{j+l}$ for a truncated GGE with the statistical
operator
\begin{equation}
\rho_{\text{TGGE}}=\frac{1}{Z_{\text{TGGE}}}\exp\!\left(\sum\limits _{k=1}^{k_{max}}\beta_{k}Q_{k}\right)\hspace{1em}Z_{\text{TGGE}}=\Tr\,\exp\!\left(\sum\limits _{k=1}^{k_{max}}\beta_{k}Q_{k}\right),
\end{equation}
was initially developed in \cite{Pozsgay2013b}, and involved only
the first $k_{max}=12$ conserved charges. The reason why this truncation
works is that the charges with the lowest indices are also the most
local ones, and correlations over a distance $l$ are considered to
be insensitive to the value of charges with $k>l$. Shortly thereafter
the QTM formalism for the full GGE (\ref{eq:GGE}) was constructed
in a large $\Delta$ limit \cite{Fagotti2013}, and then for arbitrary
$\Delta>0$ \cite{Fagotti2014}. The numerical results of \cite{Pozsgay2013b}
obtained through the truncated GGE approximate the full GGE results
quite well, so the GGE is truncatable in the sense that keeping the
first few local charges gives a good approximation of the full result,
which is systematically improved by increasing $k_{max}$.
Another possibility of computing the GGE predictions for correlations
is using the TBA method for the GGE which was derived in \cite{Wouters2014}.
Using the entropy maximization principle with the condition
\begin{equation}
\langle\{\rho_{n}(\lambda)\}|Q_{k}|\{\rho_{n}(\lambda)\}\rangle=\langle\Psi_{0}|Q_{k}|\Psi_{0}\rangle\,,\label{eq:charges_fixed}
\end{equation}
a set of generalized TBA (GTBA) equations can be derived, which determine
the Bethe Ansatz string densities $\{\rho_{n}(\lambda)\}$ that maximize
the entropy (\ref{eq:entropyDefinition}) under the conditions (\ref{eq:charges_fixed}).
The GTBA equations \cite{Wouters2014,Brockmann2014b}
\begin{equation}
\ln\eta_{n}(\lambda)=-\delta_{n,1}\sum_{k=0}^{\infty}\beta_{k+2}\left(\frac{d}{d\lambda}\right)^{k}s(\lambda)+\left[s\star\left(\ln(1+\eta_{n-1})+\ln(1+\eta_{n+1})\right)\right](\lambda)\label{eq:GGEGaudinTakahashiEquation}
\end{equation}
are to be solved for the functions $\eta_{n}$ defined in (\ref{eq:etadef}),
after which the Bethe-Takahashi equations (\ref{eq:BetheTakahashiDecoupled})
can be used to obtain the string densities.
However, (\ref{eq:GGEGaudinTakahashiEquation}) contains infinitely
many unknown Lagrange multipliers that must be fixed using (\ref{eq:charges_fixed}).
It was found in \cite{Wouters2014} that the system (\ref{eq:GGEGaudinTakahashiEquation})
can be solved by using the relation (\ref{eq:generatingFunctionRho1h})
between the generating function of the charges and $\rho_{1}^{\textrm{h}}(\lambda)$,
thus avoiding the determination of Lagrange multipliers. Provided
$\rho_{1}^{\textrm{h}}(\lambda)$ is known, the equations for
$\eta_{n}(\lambda)$ and $\rho_{n}(\lambda)$ can be recast into the following form \cite{2014JSMTE..09..026P,Brockmann2014b}:
\begin{equation}
\begin{aligned}\ln\eta_{2}(\lambda) & =\left[s\star\left(\ln\left(\frac{s+s\star\eta_{2}\rho_{2}}{s+s\star\eta_{2}\rho_{2}-\rho_{1}^{\textrm{h}}}\right)+\ln\left(1+\eta_{3}\right)\right)\right](\lambda)\\
\rho_{2}(\lambda) & =\frac{1}{1+\eta_{2}(\lambda)}\left[s\star\left(\rho_{1}^{\textrm{h}}+\eta_{3}\rho_{3}\right)\right](\lambda)\\
\ln\eta_{n}(\lambda) & =\left[s\star\left(\ln(1+\eta_{n-1})+\ln(1+\eta_{n+1})\right)\right](\lambda)\qquad n\ge3\\
\rho_{n}(\lambda) & =\frac{1}{1+\eta_{n}(\lambda)}\left[s\star\left(\eta_{n-1}\rho_{n-1}+\eta_{n+1}\rho_{n+1}\right)\right](\lambda)\qquad n\ge3\:,
\end{aligned}
\label{eq:GGETBAEquations}
\end{equation}
for $\eta_{n}(\lambda)$ and $\rho_{n}(\lambda)$ with $n\ge2$; $\rho_{1}(\lambda)$ is then obtained
using (\ref{eq:GGEGaudinTakahashiEquation}) and (\ref{eq:BetheTakahashiDecoupled}) with $n=1$.
In the thermodynamic limit, these string densities determine the expectation
value of every relevant observable; in particular, the results of
\cite{Mestyan2014} allow one to compute arbitrary short-range correlations
from $\{\rho_{n}(\lambda)\}$.
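To make the structure of such systems concrete, the following sketch
implements a plain fixed-point iteration for equations of the generic
form $\ln\eta_{n}=d_{n}+s\star[\ln(1+\eta_{n-1})+\ln(1+\eta_{n+1})]$
with $\eta_{0}\equiv0$, discretized on a rapidity grid. The driving
term, anisotropy, truncation level and quadrature are illustrative
assumptions, and the plain iteration is not guaranteed to converge in
general (damping can be added if needed).
\begin{verbatim}
import numpy as np

DELTA = 2.0
ETA = np.arccosh(DELTA)
NMAX, NGRID = 12, 201                      # string truncation, grid size
lam = np.linspace(-np.pi/2, np.pi/2, NGRID)
dlam = lam[1] - lam[0]

def s_kernel(x, kmax=400):
    """Truncated Fourier series for the kernel s(lambda)."""
    acc = np.ones_like(x)
    for k in range(1, kmax + 1):
        acc += 2.0 * np.cos(2*k*x) / np.cosh(k*ETA)
    return acc / (2.0 * np.pi)

SMAT = s_kernel(lam[:, None] - lam[None, :]) * dlam  # s-convolution matrix

def solve_eta(driving, n_iter=300):
    eta = [np.zeros(NGRID)] + [np.ones(NGRID) for _ in range(NMAX + 1)]
    for _ in range(n_iter):
        eta[NMAX + 1] = eta[NMAX].copy()             # crude cutoff
        for n in range(1, NMAX + 1):
            rhs = driving[n] + SMAT @ (np.log1p(eta[n-1])
                                       + np.log1p(eta[n+1]))
            eta[n] = np.exp(rhs)
    return eta[1:NMAX + 1]

driving = [np.zeros(NGRID) for _ in range(NMAX + 1)]
driving[1] = -8.0 * np.pi * s_kernel(lam)            # toy source term
etas = solve_eta(driving)
print(etas[0].min(), etas[-1].max())
\end{verbatim}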
For the particular initial states considered in this work, the generating
functions of conserved charges are
\begin{eqnarray}
\mbox{Néel} & : & G_{N}(\lambda)=-\frac{\sinh2\eta}{\cosh2\eta+1-2\cos2\lambda}\nonumber \\
\mbox{Dimer} & : & G_{D}(\lambda)=-\sinh\eta\frac{4\cos2\lambda(\sinh^{2}\eta-\cosh\eta)+\cosh\eta+2\cosh2\eta+3\cosh3\eta-2}{4\left(\cosh2\eta-\cos2\lambda\right)^{2}}\nonumber \\
\mbox{q-dimer} & : & G_{qD}(\lambda)=\tanh\eta\frac{2\cos2\lambda-\cosh2\eta-\cosh4\eta}{2\left(\cosh2\eta-\cos2\lambda\right)^{2}}\label{eq:ChargeGenFunc}
\end{eqnarray}
The first two results were explicitly computed in \cite{Fagotti2014},
while the third one can be obtained straightforwardly using the formalism
developed there.
\subsection{The quench action approach}
The quench action approach was introduced as a way of computing long-time
expectation values of local observables after quenches to Bethe Ansatz
solvable systems \cite{2013PhRvL.110y7203C}. In this study we only
consider infinite time expectation values, and summarize the idea
of the quench action with the corresponding simplifications. The first
step is to replace the sums in (\ref{eq:diagonalEnsemble}) by a functional
integral over Bethe root densities:
\[
\sum_{\alpha}\rightarrow\int\prod_{n=1}^{\infty}D\rho_{n}(\lambda)e^{Ls[\{\rho_{n}(\lambda)\}]}\,,
\]
where the exponential of the entropy $Ls[\{\rho_{n}(\lambda)\}]$
is the number of Bethe states scaling to the set of densities $\{\rho_{n}(\lambda)\}$.
The expression (\ref{eq:diagonalEnsemble}) then takes the form
\begin{equation}
\langle\mathcal{O}\rangle=\int\prod_{n=1}^{\infty}D\rho_{n}(\lambda)e^{-L\left(-\frac{2}{L}\mbox{Re}\ln\langle\Psi_{0}|\{\rho_{n}(\lambda)\}\rangle-s[\{\rho_{n}(\lambda)\}]\right)}\langle\{\rho_{n}(\lambda)\}|\mathcal{O}|\{\rho_{n}(\lambda)\}\rangle\,.\label{eq:diagonalEnsembleBA}
\end{equation}
In the thermodynamic limit the functional integral can be evaluated
exactly using saddle point analysis. The saddle point string densities
$\{\rho_{n}^{*}(\lambda)\}$ minimize the quench action functional
\begin{equation}
\mathcal{S}[\{\rho_{n}(\lambda)\}]=-\frac{2}{L}\mbox{Re}\ln\langle\Psi_{0}|\{\rho_{n}(\lambda)\}\rangle-s[\{\rho_{n}(\lambda)\}]\,,\label{eq:quenchAction}
\end{equation}
with the condition that the Bethe-Takahashi equations (\ref{eq:BetheTakahashiEquations})
hold. The quench action is analogous to the free energy functional
appearing in the thermal thermodynamic Bethe Ansatz \cite{9780511524332}.
The first term, which parallels the energy in the context of a thermal
ensemble, competes with the second entropic term. When evaluated at
the saddle point the quench action gives the norm of the initial state:
\begin{equation}
L\mathcal{S}[\{\rho_{n}^{*}(\lambda)\}]=-\ln\langle\Psi_{0}|\Psi_{0}\rangle=0\:,\label{eq:OverlapSumRule}
\end{equation}
which is a sum rule that can be used to check whether the relevant
saddle point was found.
In terms of the saddle point string densities that minimize the quench
action, the diagonal ensemble average (\ref{eq:diagonalEnsembleBA})
can be expressed as
\begin{equation}
\langle\mathcal{O}\rangle=\langle\{\rho_{n}^{*}(\lambda)\}|\mathcal{O}|\{\rho_{n}^{*}(\lambda)\}\rangle.\label{eq:saddlePointExpValue}
\end{equation}
The following two sections discuss the computation of (\ref{eq:saddlePointExpValue})
for certain quenches of the \emph{XXZ} spin chain. The variational
analysis yielding the steady state $\{\rho_{n}^{*}(\lambda)\}$ is
treated in Section \ref{sec:steadyStateDensities}, and Section \ref{sec:Computing-correlation-functions}
deals with the calculation of expectation values in the Bethe Ansatz
eigenstate characterized by $\{\rho_{n}^{*}(\lambda)\}$.
\section{Computing the steady state of \emph{XXZ} using the quench action\label{sec:steadyStateDensities}}
\subsection{Overlaps of the initial states with Bethe Ansatz eigenstates}
Since $H_{\textit{XXZ}}$ commutes with the $z$ component of the total
spin, the overlap of the above initial states is nonzero only with
Bethe states that have $\mathcal{M}[\{\rho_{n}(\lambda)\}]=\frac{1}{2}$.
As shown below, for such states the first term of (\ref{eq:quenchAction})
can be written in the following integral form in the thermodynamic limit
\begin{equation}
\begin{aligned}-2\,\mbox{Re}\ln\langle\Psi_{0}^{\gamma}|\{\rho_{n}(\lambda)\}\rangle & =\sum_{n=1}^{\infty}\int_{-\pi/2}^{\pi/2}d\lambda\rho_{n}(\lambda)g_{n}^{(\gamma)}(\lambda)\\
g_{n}^{(\gamma)}(\lambda) & =\sum_{j=1}^{n}g_{1}^{(\gamma)}\left(\lambda+\frac{i\eta}{2}(n+1-2j)\right)\,,
\end{aligned}
\label{eq:overlapIntegralForm}
\end{equation}
where the one-string kernel function $g_{1}^{(\gamma)}(\lambda)$
corresponds to the particular initial state chosen. This integral
form is very convenient in the sense that the variational equations
for minimizing $\mathcal{S}[\{\rho_{n}(\lambda)\}]$ are analogous
to the variational equations for minimizing the free energy in the
context of the thermodynamic Bethe Ansatz \cite{9780511524332}.
However, the integral formula (\ref{eq:overlapIntegralForm}) yields
finite values also for Bethe states with nonzero overall magnetization,
thus its naive use in the variational problem leads to spurious results.
The Bethe states with $\mathcal{M}[\{\rho_{n}(\lambda)\}]\neq\frac{1}{2}$
can be excluded explicitly from the set of possible solutions by varying
\begin{equation}
\mathcal{\tilde{\mathcal{S}}}[\{\rho_{n}(\lambda)\}]=\sum_{n=1}^{\infty}\int_{-\pi/2}^{\pi/2}d\lambda\rho_{n}(\lambda)g_{n}^{(\gamma)}(\lambda)-\mu\mathcal{M}[\{\rho_{n}(\lambda)\}]-s[\{\rho_{n}(\lambda)\}]\,,\label{eq:quenchActionLagrange}
\end{equation}
where $\mu$ is a Lagrange multiplier used to set $\mathcal{M}[\{\rho_{n}^{*}(\lambda)\}]=\frac{1}{2}$
\cite{Pozsgay2014,Wouters2014}. We remark here that taking $\mu\rightarrow\infty$
limits the possible solutions to the shell $\mathcal{M}[\{\rho_{n}^{*}(\lambda)\}]=\frac{1}{2}$,
since any thermodynamic Bethe Ansatz state has $\mathcal{M}[\{\rho_{n}(\lambda)\}]\leq\frac{1}{2}$.
In \cite{Brockmann2014b}, it was proved that $\mu$ \emph{has to
be} infinity by using the asymptotics of the variational equations
for $n\rightarrow\infty$. For a numerical solution of the variational
problem, it suffices to choose $\mu$ large enough to impose $\mathcal{M}[\{\rho_{n}(\lambda)\}]=\frac{1}{2}$
within the accuracy of the numerical solution.
Before moving on to the variational equations for minimizing $\mathcal{\tilde{\mathcal{S}}}[\{\rho_{n}(\lambda)\}]$,
we summarize the derivation of the one-string kernels $g_{1}^{(\gamma)}(\lambda)$
for different initial states.
\subsubsection{The Néel state}
It was shown in \cite{Brockmann2014} that only the eigenstates $|\{\pm\lambda_{j}\}_{j=1}^{M/2}\rangle$
containing pairs of rapidities have non-zero overlaps with $|\Psi_{0}^{\text{N}}\rangle$.
The logarithmic overlap of the finite volume translationally invariant
Néel state with Bethe Ansatz eigenstates of the form
\begin{equation}
|\{\pm\lambda_{j}\}_{j=1}^{M/2}\rangle\quad M=L/2\,,
\end{equation}
i.e. paired states having zero total magnetization, was derived in
Refs. \cite{2014JPhA...47n5003B,Brockmann2014c}. The result is
\begin{equation}
\ln\frac{\langle\Psi_{0}^{\text{N}}|\{\pm\lambda_{j}\}_{j=1}^{M/2}\rangle}{\sqrt{\langle\{\pm\lambda_{j}\}_{j=1}^{M/2}|\{\pm\lambda_{j}\}_{j=1}^{M/2}\rangle}}=\sum_{j=1}^{M/2}\ln\left(\frac{\sqrt{\tan(\lambda_{j}+i\eta/2)\tan(\lambda_{j}-i\eta/2)}}{2\sin(2\lambda_{j})}\right)+\frac{1}{2}\ln\frac{2\det_{L/4}G_{jk}^{+}}{\det_{L/4}G_{jk}^{-}}\,,\label{eq:NeelOverlapFiniteVolume}
\end{equation}
where
\begin{eqnarray}
G_{jk}^{\pm} & = & \delta_{jk}\left(LK_{\eta/2}(\lambda_{j})-\sum_{l=1}^{L/4}K_{\eta}^{\pm}(\lambda_{j},\lambda_{l})\right)+K_{\eta}^{\pm}(\lambda_{j},\lambda_{k})\nonumber \\
K_{\alpha}^{\pm}(\lambda,\mu) & = & K_{\alpha}(\lambda-\mu)\pm K_{\alpha}(\lambda+\mu)\nonumber \\
K_{\alpha}(\lambda) & = & \frac{\sinh(2\alpha)}{\sinh(\lambda+i\alpha)\sinh(\lambda-i\alpha)}\,.
\end{eqnarray}
The second term of (\ref{eq:NeelOverlapFiniteVolume}) scales as $\mathcal{O}(1)$
and therefore it is negligible in the thermodynamic limit \cite{Brockmann2014b},
while the thermodynamic limit of the first term in (\ref{eq:NeelOverlapFiniteVolume})
yields the one-string kernel of (\ref{eq:overlapIntegralForm}) corresponding
to the initial state $|\Psi_{0}^{\text{N}}\rangle$:
\begin{equation}
g_{1}^{\text{N}}(\lambda)=-\ln\left(\frac{\tan(\lambda+i\eta/2)\tan(\lambda-i\eta/2)}{4\sin^{2}(2\lambda)}\right)\,.\label{eq:NeelOverlapKernel}
\end{equation}
To check the validity of the formulas (\ref{eq:overlapIntegralForm})
together with the kernel (\ref{eq:NeelOverlapKernel}), we computed
the quantity $2\,\mbox{Re}\ln\langle\Psi_{0}^{\text{N}}|\Psi_{\Delta}^{\text{GS}}\rangle$
for different values of $\Delta>1$, where $|\Psi_{\Delta}^{\text{GS}}\rangle$
is the ground state of $H_{XXZ}(\Delta)$. The state $|\Psi_{\Delta}^{\text{GS}}\rangle$
consists of 1-strings only, and its root density is given by an inverse
Fourier transform formula \cite{9780511524332}:
\begin{equation}
\rho_{1,\Delta}^{\text{GS}}(\lambda)=\frac{1}{\pi}\sum_{k=-\infty}^{\infty}\frac{1}{2\cosh k\eta}e^{i2k\lambda},\hspace{1em}\rho_{n,\Delta}^{\text{GS}}(\lambda)=0\hspace{1em}(n>1)\,.\label{eq:GS_rootdensity}
\end{equation}
The $\TDL$ values of
\[
2\,\mbox{Re}\ln\langle\Psi_{0}^{\text{N}}|\Psi_{\Delta}^{\text{GS}}\rangle\,,
\]
obtained numerically from (\ref{eq:overlapIntegralForm},\ref{eq:GS_rootdensity})
match exactly the values presented in \cite{Pozsgay2013}, which were
computed independently by taking the $t\rightarrow i\infty$ limit
of the Loschmidt echo defined by
\[
\langle\Psi_{0}^{\text{N}}|e^{-iH_{\textit{XXZ}}(\Delta)t}|\Psi_{0}^{\text{N}}\rangle\,.
\]
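This check is straightforward to reproduce: since the ground state contains only
1-strings, the extensive part of the log-overlap reduces to the single integral
$-\int_{-\pi/2}^{\pi/2}d\lambda\,\rho_{1,\Delta}^{\text{GS}}(\lambda)g_{1}^{\text{N}}(\lambda)$.
A minimal Python sketch (an independent illustration with arbitrarily chosen
$\Delta$ and grid size, not the code behind the quoted comparison; the rectangle
rule gives only modest accuracy near the integrable logarithmic singularities)
reads:
\begin{verbatim}
import numpy as np

Delta = 4.0
eta = np.arccosh(Delta)
N = 4000
# offset grid avoids the integrable log-singularities at lambda = 0, +-pi/2
lam = -np.pi/2 + (np.arange(N) + 0.5) * np.pi / N

# ground state 1-string density, eq. (GS_rootdensity)
ks = np.arange(-100, 101)
rho1 = (np.exp(2j * np.outer(lam, ks))
        / (2.0 * np.cosh(ks * eta))).sum(1).real / np.pi

# Neel one-string kernel, eq. (NeelOverlapKernel); for real lambda the
# product of tangents equals |tan(lambda + i eta/2)|^2 >= 0
tt = (np.tan(lam + 0.5j * eta) * np.tan(lam - 0.5j * eta)).real
g1 = -np.log(tt / (4.0 * np.sin(2.0 * lam) ** 2))

# 2 Re ln <Neel|GS> = - int rho_1 g_1  (a negative number)
print(-np.sum(rho1 * g1) * np.pi / N)
\end{verbatim}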
\subsubsection{\label{sub:The-dimer-state} The dimer state }
The overlaps of the dimer state can be easily computed provided that
the overlaps of the Néel state are known. The following relation holds
between the overlaps of the dimer state and the overlaps of the Néel
state with general zero magnetization ($M=L/2$) Bethe states:
\begin{equation}
\langle\text{D}|\{\lambda_{j}\}_{j=1}^{M}\rangle=\langle\text{N}|\{\lambda_{j}\}_{j=1}^{M}\rangle\prod_{j=1}^{M}\frac{1}{\sqrt{2}}\left(1-\frac{\sin(\lambda_{j}-i\eta/2)}{\sin(\lambda_{j}+i\eta/2)}\right)\,.\label{eq:NeelDimerRelation}
\end{equation}
This relation, first published without derivation in \cite{Pozsgay2014a},
follows from the formula
\begin{equation}
|\text{D}\rangle=\prod_{k=1}^{M}\left(\frac{1-S_{2k-1}^{-}S_{2k}^{+}}{\sqrt{2}}\right)|\text{N}\rangle\,,
\end{equation}
and the form (\ref{eq:basisExpansion}-\ref{eq:BetheWaveFunction})
of the Bethe Ansatz wave functions:
\begin{equation}
\begin{aligned}\langle\text{D}|\{\lambda_{j}\}_{j=1}^{M}\rangle & =\langle\text{N}|\prod_{k=1}^{M}\left(\frac{1-S_{2k-1}^{+}S_{2k}^{-}}{\sqrt{2}}\right)|\{\lambda_{j}\}_{j=1}^{M}\rangle\\
& =\langle\text{N}|\text{N}\rangle\sum_{\pi\in S_{M}}I(\pi)\prod_{k=1}^{M}\frac{1}{\sqrt{2}}\left(1-\frac{\sin(\lambda_{\pi(k)}+i\eta/2)}{\sin(\lambda_{\pi(k)}-i\eta/2)}\right)\\
& \qquad\times\prod_{j=1}^{M}\left(\frac{\sin(\lambda_{j}+i\eta/2)}{\sin(\lambda_{j}-i\eta/2)}\right)^{n_{\pi(j)}}\left(\prod_{j<k}...\right)\\
& =\left[\langle\text{N}|\text{N}\rangle\sum_{\pi\in S_{M}}I(\pi)\prod_{j=1}^{M}\left(\frac{\sin(\lambda_{j}+i\eta/2)}{\sin(\lambda_{j}-i\eta/2)}\right)^{n_{\pi(j)}}\left(\prod_{j<k}...\right)\right]\\
& \qquad\times\prod_{k=1}^{M}\frac{1}{\sqrt{2}}\left(1-\left(\frac{\sin(\lambda_{k}+i\eta/2)}{\sin(\lambda_{k}-i\eta/2)}\right)\right)\\
& =\langle\text{N}|\{\lambda_{j}\}_{j=1}^{M}\rangle\prod_{k=1}^{M}\frac{1}{\sqrt{2}}\left(1-\frac{\sin(\lambda_{k}+i\eta/2)}{\sin(\lambda_{k}-i\eta/2)}\right)\,.
\end{aligned}
\end{equation}
We stress that (\ref{eq:NeelDimerRelation}) is only valid for $M=L/2$.
A formula similar to (\ref{eq:NeelDimerRelation}) is true for the
translationally invariant states $|\Psi_{0}^{\text{D}}\rangle$ and
$|\Psi_{0}^{\text{N}}\rangle$ since the Bethe states are eigenstates
of the one site translation operator $\hat{T}$:
\begin{equation}
\begin{aligned}\langle\Psi_{0}^{\text{D}}|\{\lambda_{j}\}_{j=1}^{M}\rangle & =\langle\text{D}|\frac{1+\hat{T}}{\sqrt{2}}|\{\lambda_{j}\}_{j=1}^{M}\rangle=\frac{1}{\sqrt{2}}\left(1+\prod_{j=1}^{M}\left(\frac{\sin(\lambda_{j}+i\eta/2)}{\sin(\lambda_{j}-i\eta/2)}\right)\right)\langle\text{D}|\{\lambda_{j}\}_{j=1}^{M}\rangle\\
& =\frac{1}{\sqrt{2}}\left(1+\prod_{j=1}^{M}\left(\frac{\sin(\lambda_{j}+i\eta/2)}{\sin(\lambda_{j}-i\eta/2)}\right)\right)\langle\text{N}|\{\lambda_{j}\}_{j=1}^{M}\rangle\\
& \qquad\times\prod_{j=1}^{M}\frac{1}{\sqrt{2}}\left(1-\frac{\sin(\lambda_{j}-i\eta/2)}{\sin(\lambda_{j}+i\eta/2)}\right)\\
& =\langle\text{N}|\frac{1+\hat{T}}{\sqrt{2}}|\{\lambda_{j}\}_{j=1}^{M}\rangle\prod_{j=1}^{M}\frac{1}{\sqrt{2}}\left(1-\frac{\sin(\lambda_{j}-i\eta/2)}{\sin(\lambda_{j}+i\eta/2)}\right)\\
& =\langle\Psi_{0}^{\text{N}}|\{\lambda_{j}\}_{j=1}^{M}\rangle\prod_{j=1}^{M}\frac{1}{\sqrt{2}}\left(1-\frac{\sin(\lambda_{j}-i\eta/2)}{\sin(\lambda_{j}+i\eta/2)}\right)\,.
\end{aligned}
\label{eq:translationalInvariantDimerRelation}
\end{equation}
The logarithmic overlap of the translationally invariant dimer state
is therefore
\begin{equation}
\ln\frac{\langle\Psi_{0}^{\text{D}}|\{\pm\lambda_{j}\}_{j=1}^{M/2}\rangle}{\sqrt{\langle\{\pm\lambda_{j}\}_{j=1}^{M/2}|\{\pm\lambda_{j}\}_{j=1}^{M/2}\rangle}}=\sum_{j=1}^{M/2}\ln\left(\frac{\sinh^{2}(\eta/2)\cot\lambda_{j}}{\sqrt{\sin(2\lambda_{j}+i\eta)\sin(2\lambda_{j}-i\eta)}}\right)+\mathcal{O}(1)\,,
\end{equation}
from which it follows that the logarithmic overlap of $|\Psi_{0}^{\text{D}}\rangle$
with Bethe states $|\{\rho_{n}(\lambda)\}\rangle$ also has the form
(\ref{eq:overlapIntegralForm}) in the thermodynamic limit with the
one-string kernel function being
\begin{equation}
g_{1}^{\text{D}}(\lambda)=-\ln\left(\frac{\sinh^{4}(\eta/2)\cot^{2}(\lambda)}{\sin(2\lambda+i\eta)\sin(2\lambda-i\eta)}\right)\,.
\end{equation}
\subsubsection{The $q$-deformed dimer state}
For the overlaps of the $q$-deformed dimer state a relation similar
to (\ref{eq:NeelDimerRelation}) holds:
\begin{equation}
\langle q\text{D}|\{\lambda_{j}\}_{j=1}^{L/2}\rangle=\langle\text{N}|\{\lambda_{j}\}_{j=1}^{L/2}\rangle\prod_{j=1}^{L/2}\frac{1}{\sqrt{q+1/q}}\left(q^{-1/2}-q^{1/2}\frac{\sin(\lambda_{j}-i\eta/2)}{\sin(\lambda_{j}+i\eta/2)}\right)\,,\label{eq:NeelqDimerRelation}
\end{equation}
which can be derived similarly to (\ref{eq:NeelDimerRelation}). The
one-string kernel function corresponding to the $q$-deformed dimer
state is thus
\begin{equation}
\begin{aligned}g_{1}^{q\text{D}}(\lambda) & =-\ln\frac{\sinh^{2}\eta}{4\cosh\eta\sin(2\lambda+i\eta)\sin^{2}(2\lambda)\sin(2\lambda-i\eta)}\,.\end{aligned}
\end{equation}
\subsection{Overlap thermodynamic Bethe Ansatz equations}
Using the explicit formulae for the overlaps, it is now possible to
turn the quench action principle into a system of equations for the
string densities characterizing the asymptotic steady state. For the
Néel state, this was obtained prior to our results \cite{Wouters2014,Brockmann2014b};
using our results for the overlaps of the dimer and $q$-dimer state,
we obtain the equations for these initial states as well.
To achieve this, we must consider the variational problem of finding
the string densities $\{\rho_{n}^{*}(\lambda)\}$ that minimize (\ref{eq:quenchActionLagrange})
for the considered quenches of the \emph{XXZ} chain. The integral
formula (\ref{eq:overlapIntegralForm}) can be substituted into the first
term of (\ref{eq:quenchActionLagrange}), while the entropy term $s[\{\rho_{n}(\lambda)\}]$
of (\ref{eq:quenchActionLagrange}) is half of the usual Yang-Yang
entropy,
\begin{equation}
s[\{\rho_{n}(\lambda)\}]=\frac{1}{2}\sum_{n=1}^{\infty}\int_{-\pi/2}^{\pi/2}d\lambda\left[\rho_{n}(\lambda)\ln\left(1+\frac{\rho_{n}^{\text{h}}(\lambda)}{\rho_{n}(\lambda)}\right)+\rho_{n}^{\text{h}}(\lambda)\ln\left(1+\frac{\rho_{n}(\lambda)}{\rho_{n}^{\text{h}}(\lambda)}\right)\right]\,.\label{eq:YangYangEntropyHalf}
\end{equation}
The factor $\frac{1}{2}$ takes into account that only the Bethe Ansatz
eigenstates consisting of rapidity pairs $(\lambda,-\lambda)$ contribute.
Collecting all terms, the functional (\ref{eq:quenchActionLagrange})
takes the form
\begin{equation}
\mathcal{\tilde{S}}_{\textit{XXZ}}[\{\rho_{n}(\lambda)\}]=\sum_{n=1}^{\infty}\int_{-\pi/2}^{\pi/2}d\lambda\rho_{n}(\lambda)\left(g_{n}^{\Psi_{0}}(\lambda)-\mu n\right)-s_{\textit{XXZ}}[\{\rho_{n}(\lambda)\}]\,.\label{eq:quenchActionXXZSpecific}
\end{equation}
To find the Bethe Ansatz root densities that describe the steady state
after the quench, the string densities $\{\rho_{n}^{*}(\lambda)\}$
minimizing $\mathcal{\tilde{S}}_{\textit{XXZ}}[\{\rho_{n}(\lambda)\}]$
have to be computed. Since (\ref{eq:quenchActionXXZSpecific}) has
the same structure as the thermal free energy functional of the \emph{XXZ}
chain in a magnetic field, the variational equations have the same
structure as those of the \emph{XXZ} thermodynamic Bethe Ansatz \cite{9780511524332}.
Introducing $\eta_{n}$ as in (\ref{eq:etadef}), the variational
equations read
\begin{equation}
\log\eta_{n}(\lambda)=g_{n}^{\Psi_{0}}(\lambda)-\mu n+\sum_{m=1}^{\infty}\left(T_{nm}\star\log(1+\eta_{m}^{-1})\right)(\lambda)\,,\qquad n=1,2,\dots\,,\label{eq:oTbaCoupled}
\end{equation}
where the limit $\mu\rightarrow\infty$ must be taken.
The system (\ref{eq:oTbaCoupled}) can be partially decoupled using
the properties of the functions $T_{nm}(\lambda)$ \cite{9780511524332},
as performed in Refs. \cite{Wouters2014,Brockmann2014} for quenches
starting from $|\Psi_{0}^{\text{N}}\rangle$. Since the functions
$T_{nm}(\lambda)$ are independent of the initial state, (\ref{eq:oTbaCoupled})
can be decoupled using the same method for other initial states as
well. Setting $\eta_{0}=1$, the decoupled equations take the form
\begin{equation}
\begin{aligned}\log\eta_{n}(\lambda) & =d_{n}^{\Psi_{0}}(\lambda)+[s\star\log((1+\eta_{n+1})(1+\eta_{n-1}))](\lambda)\,,\qquad n=1,2,\dots\end{aligned}
\label{eq:oTbaDecoupled}
\end{equation}
with $s(\lambda)$ defined as in (\ref{eq:sFunctionDefinition}),
and the source terms are given by
\begin{align}
d_{n}^{\Psi_{0}}(\lambda) & =g_{n}^{\Psi_{0}}(\lambda)-[s\star(g_{n-1}^{\Psi_{0}}+g_{n+1}^{\Psi_{0}})](\lambda)\,,\qquad g_{0}^{\Psi_{0}}(\lambda)=0\,.\label{eq:oTbaDecoupledSource}
\end{align}
Explicit expressions for $d_{n}^{\Psi_{0}}(\lambda)$ can be derived
for all initial states following the steps applied for the Néel state
in \cite{Wouters2014,Brockmann2014b}:
\begin{equation}
\begin{aligned}d_{n}^{\Psi_{0}}(\lambda) & =\xi_{1,n}^{\Psi_{0}}\log\frac{\theta_{4}^{2}(\lambda)}{\theta_{1}^{2}(\lambda)}+\xi_{2,n}^{\Psi_{0}}\log\frac{\theta_{2}^{2}(\lambda)}{\theta_{3}^{2}(\lambda)}\,,\end{aligned}
\label{eq:oTbaDecoupledSourceExplicit}
\end{equation}
where $\theta_{n}(x)$ is the $n$-th Jacobi $\theta$-function with
nome $e^{-2\eta}$, and $\xi_{1,n}^{\Psi_{0}}$ and $\xi_{2,n}^{\Psi_{0}}$
are signs depending on the particular initial state and $n$ given
in Table \ref{tab:decoupledSourceSigns}. Note that in the $q$-dimer
case the even and odd equations have the same source terms. The system
of equations (\ref{eq:oTbaDecoupled}) with the source terms (\ref{eq:oTbaDecoupledSourceExplicit})
can be solved numerically or analytically for $\{\eta_{n}(\lambda)\}$, after which
the densities $\left\{ \rho_{n}^{*}(\lambda)\right\} $ are obtained
by solving a decoupled version of the Bethe-Takahashi equations (\ref{eq:BetheTakahashiEquations}).
The numerical solution of (\ref{eq:oTbaDecoupled}) involves the truncation
of equations in $n$ by retaining only the first $n_{\mathrm{eq}}$
equations. The truncation needs to take into account the behavior
of $\eta_{n}(\lambda)$ for $n\rightarrow\infty$ which takes the
form \cite{Brockmann2014b}
\begin{eqnarray*}
\lim_{\substack{n\rightarrow\infty\\
n\;\mathrm{even}
}
}\eta_{n}(\lambda) & = & \eta_{\mathrm{even}}^{\Psi_{0}}\\
\lim_{\substack{n\rightarrow\infty\\
n\;\mathrm{odd}
}
}\eta_{n}(\lambda) & = & \eta_{\mathrm{odd}}^{\Psi_{0}}\;,
\end{eqnarray*}
and in the $q$-dimer case $\eta_{\mathrm{even}}^{q\mathrm{D}}=\eta_{\mathrm{odd}}^{q\mathrm{D}}$
holds. This asymptotics can be implemented in eqns. (\ref{eq:oTbaDecoupled})
by imposing $\eta_{n_{\mathrm{eq}}-2}(\lambda)=\eta_{n_{\mathrm{eq}}}(\lambda).$
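As a concrete illustration of this truncation scheme, the following minimal
Python sketch (an independent toy implementation, not the production code used
for the results below; the values of $\Delta$, $n_{\mathrm{eq}}$ and the grid
size are arbitrary) iterates the decoupled equations (\ref{eq:oTbaDecoupled})
with the Néel source terms (\ref{eq:oTbaDecoupledSourceExplicit}), using the
convention $\eta_{0}=1$ stated above and the asymptotic closure at the top of
the hierarchy:
\begin{verbatim}
import numpy as np

Delta, n_eq, N = 4.0, 16, 128
eta = np.arccosh(Delta)
q = np.exp(-2.0 * eta)                 # nome of the Jacobi theta functions
# offset grid avoids the log-singularity of the source terms at lambda = 0
lam = -np.pi/2 + (np.arange(N) + 0.5) * np.pi / N

# s-kernel convolution matrix (periodic rectangle rule)
ks = np.arange(1, 101)
x = lam[:, None] - lam[None, :]
S = ((1 + 2*(np.cos(2*x[..., None]*ks)/np.cosh(ks*eta)).sum(-1))
     / (2*np.pi)) * (np.pi/N)

# Jacobi theta functions with nome q (rapidly convergent series)
n_ = np.arange(30)[None, :]
th1 = 2*((-1)**n_ * q**((n_+0.5)**2) * np.sin((2*n_+1)*lam[:, None])).sum(1)
th2 = 2*(q**((n_+0.5)**2) * np.cos((2*n_+1)*lam[:, None])).sum(1)
m_ = np.arange(1, 30)[None, :]
th3 = 1 + 2*(q**(m_**2) * np.cos(2*m_*lam[:, None])).sum(1)
th4 = 1 + 2*((-1)**m_ * q**(m_**2) * np.cos(2*m_*lam[:, None])).sum(1)

# Neel source terms d_n: signs xi_1 = (-1)^n, xi_2 = +1 from Table 1
lt14, lt23 = np.log(th4**2 / th1**2), np.log(th2**2 / th3**2)
d = [(-1)**n * lt14 + lt23 for n in range(1, n_eq + 1)]

log_eta = [np.zeros(N) for _ in range(n_eq)]   # log_eta[i] stores ln eta_{i+1}
for sweep in range(2000):
    new = []
    for i in range(n_eq):
        # eta_0 = 1, following the convention in the text
        lo = np.full(N, np.log(2.0)) if i == 0 else np.logaddexp(0, log_eta[i-1])
        # top of the hierarchy: impose eta_{n_eq + 1} = eta_{n_eq - 1}
        up = (np.logaddexp(0, log_eta[i+1]) if i < n_eq - 1
              else np.logaddexp(0, log_eta[i-1]))
        new.append(d[i] + S @ (lo + up))
    err = max(np.abs(a - b).max() for a, b in zip(new, log_eta))
    log_eta = new
    if err < 1e-12:
        break
print('converged after', sweep, 'sweeps, residual', err)
\end{verbatim}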
The validity of the saddle-point solution $\{\rho_{n}^{*}(\lambda)\}$
is checked by
\begin{enumerate}
\item computing the expectation values of the conserved charges in the state
described by $\{\rho_{n}^{*}(\lambda)\}$ using the formula (\ref{eq:chargesIntegralFormula}).
These values should be equal to their previously known values in the
initial state which can be computed from the generating functions
(\ref{eq:ChargeGenFunc});
\item evaluating the overlap sum rule (\ref{eq:OverlapSumRule}), which
states that the quench action (\ref{eq:quenchAction}) should be zero
at the saddle point $\{\rho_{n}^{*}(\lambda)\}$.
\end{enumerate}
For all the cases we considered, these tests were satisfied up to
the available numerical precision: the overlap sum rule gave a result
of order $10^{-8}$, while the values of the charges were reproduced
to $8-10$ digits precision.
\begin{table}
\begin{centering}
\begin{tabular}{|c||c|c|}
\hline
$|\Psi_{0}\rangle$ & $\xi_{1,n}^{\Psi_{0}}$ & $\xi_{2,n}^{\Psi_{0}}$\tabularnewline
\hline
\hline
$|\Psi_{0}^{\text{N}}\rangle$ & $(-1)^{n}$ & $+1$\tabularnewline
\hline
$|\Psi_{0}^{\text{D}}\rangle$ & $-1$ & $(-1)^{n}$\tabularnewline
\hline
$|\Psi_{0}^{q\text{D}}\rangle$ & $-1$ & $+1$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\protect\caption{\label{tab:decoupledSourceSigns}The signs $\xi_{1,n}^{\Psi_{0}}$
and $\xi_{2,n}^{\Psi_{0}}$ that appear in the source terms $d_{n}^{\Psi_{0}}(\lambda)$
of the decoupled form of the GTBA equations for quenches starting
from different initial states.}
\end{table}
\subsection{Exact solution to the overlap thermodynamic Bethe Ansatz equations}
In \cite{Brockmann2014b}, an exact solution was given for the Néel
oTBA equations using a functional relationship between $\eta_{1}(\lambda)$
and a function $\mathfrak{a}(\lambda)$, which is an auxiliary function
of the T-system corresponding to the Y-system of equations (\ref{eq:oTbaDecoupled}).
The relationship reads
\begin{equation}
(1+\eta_{1}(\lambda))=(1+\mathfrak{a}(\lambda+i\eta/2))(1+\mathfrak{a}^{-1}(\lambda-i\eta/2))\,.\label{eq:etaARelation}
\end{equation}
As noted in \cite{Brockmann2014b}, $\mathfrak{a}^{(\textrm{N})}(\lambda)$
can be interpreted as the auxiliary function corresponding to the
quantum transfer matrix \cite{1993ZPhyB..91..507K,Klumper2004}. The
relation (\ref{eq:etaARelation}) is quite general: the same holds
between the corresponding auxiliary functions of the thermal T-system
and Y-system \cite{kuniba1998continued}, where the T-system is the
generalization of the thermal quantum transfer matrix \cite{9780511628832},
and the Y-system is the system of the standard thermal TBA equations
\cite{9780511524332}.
Guessing $\mathfrak{a}(\lambda)$ using the analytical structure of
(\ref{eq:oTbaDecoupled}), relation (\ref{eq:etaARelation}) gives
$\eta_{1}(\lambda)$, which is enough to solve (\ref{eq:oTbaDecoupled})
for every $\eta_{n}(\lambda)$. In the Néel case, $\mathfrak{a}(\lambda)$
was found to be \cite{Brockmann2014b}
\begin{equation}
\mathfrak{a}^{(\textrm{N})}(\lambda)=\frac{\sin(\lambda+i\eta)\sin(2\lambda-i\eta)}{\sin(\lambda-i\eta)\sin(2\lambda+i\eta)}\,.\label{eq:a_function_Neel}
\end{equation}
We note that the exact solution (\ref{eq:a_function_Neel}) can be
obtained more intuitively. The expression for $\mathfrak{a}(\lambda)$
can be obtained using the boundary QTM formalism \cite{Pozsgay2013}
for the dynamical free energy density
\[
g(s)=-\frac{1}{L}\log\langle\Psi_{0}|e^{-sH_{XXZ}(\Delta)}|\Psi_{0}\rangle\hspace{1em}(\TDL)\,.
\]
In the limit $s\rightarrow0$, $g(s)$ tends to the quench action
(\ref{eq:OverlapSumRule}), and the corresponding T-system of the
boundary QTM becomes the T-system of the quench action formalism.
Therefore the auxiliary function $\mathfrak{a}(\lambda)$ of the boundary
QTM also becomes the $\mathfrak{a}(\lambda)$ function of \cite{Brockmann2014b}.
For $|\Psi_{0}^{\textrm{N}}\rangle$, we found that in the $s\rightarrow0$
limit of the dynamical free energy, $\mathfrak{a}(\lambda)$ is the
function $K(u)$ of \cite{Pozsgay2013} evaluated at $u=-i\lambda$,
which is precisely (\ref{eq:a_function_Neel}).
Following this line of thought, it is also possible to give an exact
solution of (\ref{eq:oTbaDecoupled}) for the q-dimer state $|\Psi_{0}^{q\mathrm{D}}\rangle$.
In this case the auxiliary function of the boundary QTM in the $s\rightarrow0$
limit is
\[
\mathfrak{a}^{(q\mathrm{D})}(\lambda)=\frac{\sin(2\lambda-i\eta)}{\sin(2\lambda+i\eta)}\:,
\]
which is obtained using the function $K(u)$ of \cite{Pozsgay2013}
corresponding to $|\Psi_{0}^{q\mathrm{D}}\rangle$. The corresponding
auxiliary function of the oTBA is, by (\ref{eq:etaARelation}):
\[
\eta_{1}^{(q\mathrm{D})}(\lambda)=\frac{\sin(2\lambda)}{\sin(2\lambda+2i\eta)}+\frac{\sin(2\lambda)}{\sin(2\lambda-2i\eta)}+\frac{\sin^{2}(2\lambda)}{\sin(2\lambda+2i\eta)\sin(2\lambda-2i\eta)}\:.
\]
This formula matches the numerical result for $\eta_{1}^{(q\mathrm{D})}$
within the accuracy of the iterative solution.
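This agreement can also be spot-checked in a few lines: the relation
(\ref{eq:etaARelation}) with $\mathfrak{a}^{(q\mathrm{D})}$ inserted can be
compared to the explicit expression at randomly chosen real rapidities,
e.g.\ as follows (an independent numerical check with an arbitrary value of
$\eta$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
eta = np.arccosh(4.0)                    # arbitrary Delta > 1
lam = rng.uniform(-np.pi/2, np.pi/2, 5)  # random real rapidities

a = lambda l: np.sin(2*l - 1j*eta) / np.sin(2*l + 1j*eta)     # a^(qD)
lhs = (1 + a(lam + 0.5j*eta)) * (1 + 1.0/a(lam - 0.5j*eta))   # 1 + eta_1

s2 = np.sin(2*lam)
rhs = (1 + s2/np.sin(2*lam + 2j*eta) + s2/np.sin(2*lam - 2j*eta)
         + s2**2/(np.sin(2*lam + 2j*eta)*np.sin(2*lam - 2j*eta)))
print(np.max(np.abs(lhs - rhs)))         # zero up to rounding errors
\end{verbatim}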
We note that the above argument breaks down for the dimer state $|\Psi_{0}^{\mathrm{D}}\rangle$:
the correct form of the nonlinear integral equation in the boundary
QTM formalism \cite{Pozsgay2013} is not known. It is most likely
that the problem is related to the analytic structure of the boundary
reflection factors and of the auxiliary function $\mathfrak{a}$ entering
the boundary QTM, which for the dimer state substantially differs
from the Néel and q-dimer cases.
\section{\label{sec:Computing-correlation-functions}Computing correlation
functions}
Now we turn to the evaluation of correlation functions, based on the
conjectures published in \cite{Mestyan2014,2014JSMTE..09..026P}.
These lead to the following recipe for the steady state expectation
value of short-range correlators:
\begin{enumerate}
\item Solve the equations (\ref{eq:oTbaDecoupled}) or, alternatively, (\ref{eq:GGETBAEquations})
in the context of the GGE, for the $\eta_{j}$.
\item With the $\eta_{j}$ thus obtained, solve the following equations
for the auxiliary functions $\rho_{n}^{(j)}(\lambda)$ and $\sigma_{n}^{(j)}(\lambda)$:
\begin{eqnarray}
\rho_{n}^{(j)}(\lambda) & = & \delta_{n,1}s^{(j)}(\lambda)+\left[s\star\left(\frac{\rho_{n-1}^{(j)}}{1+1/\eta_{n-1}}+\frac{\rho_{n+1}^{(j)}}{1+1/\eta_{n+1}}\right)\right](\lambda)\label{eq:correlationRhoEquation}\\
\sigma_{n}^{(j)}(\lambda) & = & \delta_{n,1}t^{(j)}(\lambda)+\left[t\star\left(\frac{\rho_{n-1}^{(j)}}{1+1/\eta_{n-1}}+\frac{\rho_{n+1}^{(j)}}{1+1/\eta_{n+1}}\right)\right](\lambda)+\label{eq:correlationSigmaEquation}\\
& + & \left[s\star\left(\frac{\sigma_{n-1}^{(j)}}{1+1/\eta_{n-1}}+\frac{\sigma_{n+1}^{(j)}}{1+1/\eta_{n+1}}\right)\right](\lambda),\nonumber
\end{eqnarray}
where
\begin{eqnarray*}
s(\lambda) & = & \frac{1}{2\pi}\left(1+2\sum_{k=1}^{\infty}\frac{\cos2k\lambda}{\cosh k\eta}\right)=s^{(0)}(\lambda),\hspace{1em}s^{(j)}(\lambda)=\left(\frac{d}{d\lambda}\right)^{j}s^{(0)}(\lambda)\\
t(\lambda) & = & \frac{1}{2\pi}\sum_{k=1}^{\infty}\frac{\sinh(k\eta)}{\cosh^{2}(k\eta)}\sin(2k\lambda)=t^{(0)}(\lambda),\hspace{1em}t^{(j)}(\lambda)=\left(\frac{d}{d\lambda}\right)^{j}t^{(0)}(\lambda),
\end{eqnarray*}
and $\rho_{0}^{(j)}$ is defined to be $0$. The equations (\ref{eq:correlationRhoEquation})
for $\rho_{n}^{(0)}(\lambda)$ are equivalent to (\ref{eq:BetheTakahashiEquations}),
and therefore $\rho_{n}^{(0)}(\lambda)$ can be identified as the total
root and hole density $\rho_{n}(\lambda)+\rho_{n}^{\mathrm{h}}(\lambda)$.
The system (\ref{eq:correlationRhoEquation},\ref{eq:correlationSigmaEquation})
is partially decoupled in the sense that the $n$th equation
depends only on $\rho_{n}^{(j)}(\lambda)$'s and $\sigma_{n}^{(j)}(\lambda)$'s
with three consecutive lower indices $n-1$, $n$ and $n+1$. \\
The above decoupled form of the equations appeared first in \cite{2014JSMTE..09..026P}
without derivation. In Appendix \ref{sec:appendixCorrelators}, we
show that the decoupled form is indeed generally valid by giving
a rigorous derivation of these equations.
\item Using $\rho_{n}^{(j)}(\lambda)$ and $\sigma_{n}^{(j)}(\lambda)$,
compute the quantities
\begin{equation}
\begin{aligned}\Omega_{j,l} & =4\pi\left[(-1)^{l}G_{j+l}+\int_{-\pi/2}^{\pi/2}d\lambda s^{(l)}(\lambda)\frac{\rho_{1}^{(j)}(\lambda)}{1+1/\eta_{1}(\lambda)}\right]\\
\Gamma_{j,l} & =4\pi\bigg[(-1)^{l}H_{j+l}-\int_{-\pi/2}^{\pi/2}d\lambda t^{(l)}(\lambda)\frac{\rho_{1}^{(j)}(\lambda)}{1+1/\eta_{1}(\lambda)}+\\
 & \hspace{2em}+\int_{-\pi/2}^{\pi/2}d\lambda s^{(l)}(\lambda)\frac{\sigma_{1}^{(j)}(\lambda)}{1+1/\eta_{1}(\lambda)}\bigg],
\end{aligned}
\label{eq:correlationIntegrals}
\end{equation}
where
\begin{eqnarray*}
G_{j} & = & -\frac{1}{\pi}\sum_{k=-\infty}^{\infty}\frac{(2ik)^{j}}{1+e^{2\eta|k|}}\\
H_{j} & = & -\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\frac{|k|(2ik)^{j-1}}{\cosh^{2}\eta k}.
\end{eqnarray*}
\item Compute the quantities
\begin{eqnarray*}
\omega_{a,b} & = & -(-1)^{(a+b)/2}\Omega_{a,b}-(-1)^{b}\frac{1}{2}\left(\frac{\partial}{\partial u}\right)^{a+b}\mathcal{K}(u)\big|_{u=0}\\
W_{a,b} & = & (-1)^{(a+b-1)/2}\Gamma_{a,b}+(-1)^{b}\frac{1}{2}\left(\frac{\partial}{\partial u}\right)^{a+b}\tilde{\mathcal{K}}(u)\big|_{u=0},
\end{eqnarray*}
with
\begin{eqnarray*}
\mathcal{K}(u) & = & \frac{\sinh2\eta}{\sinh\left(u+\eta\right)\sinh\left(u-\eta\right)}\\
\tilde{\mathcal{K}}(u) & = & \frac{\sinh2u}{\sinh\left(u+\eta\right)\sinh\left(u-\eta\right)}.
\end{eqnarray*}
\item Substitute $\omega_{a,b}$ and $W_{a,b}$ into the QTM formulas for
short range correlations $\sigma_{j}^{z}\sigma_{j+l}^{z}$ and $\sigma_{j}^{x}\sigma_{j+l}^{x}$
that are already available in the literature \cite{2007JPhA...4010699B,2010EPJB...73..253T}.
Here we only quote the QTM formulas for the nearest neighbor and next
nearest neighbor correlations:
\end{enumerate}
\begin{equation}
\begin{split}\langle\sigma_{1}^{z}\sigma_{2}^{z}\rangle & =\coth(\eta)\omega_{0,0}+W_{1,0}\\
\langle\sigma_{1}^{x}\sigma_{2}^{x}\rangle & =-\frac{\omega_{0,0}}{2\sinh(\eta)}-\frac{\cosh(\eta)}{2}W_{1,0}\\
\langle\sigma_{1}^{z}\sigma_{3}^{z}\rangle & =2\coth(2\eta)\omega_{0,0}+W_{1,0}+\tanh(\eta)\frac{\omega_{2,0}-2\omega_{1,1}}{4}-\frac{\sinh^{2}(\eta)}{4}W_{2,1}\\
\langle\sigma_{1}^{x}\sigma_{3}^{x}\rangle & =-\frac{1}{\sinh(2\eta)}\omega_{0,0}-\frac{\cosh(2\eta)}{2}W_{1,0}-\tanh(\eta)\cosh(2\eta)\frac{\omega_{2,0}-2\omega_{1,1}}{8}+\\
& \hspace{6cm}+\sinh^{2}(\eta)\frac{W_{2,1}}{8}.
\end{split}
\label{eq:correlationQTMFormulas}
\end{equation}
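The state-independent ingredients of steps 3 and 4 are inexpensive to evaluate:
$G_{j}$ and $H_{j}$ are exponentially convergent sums, and the derivatives of
$\mathcal{K}$ and $\tilde{\mathcal{K}}$ at $u=0$ can be generated symbolically.
The sketch below (an independent illustration; the input values of $\Omega_{0,0}$
and $\Gamma_{1,0}$ are purely hypothetical placeholders, which in an actual
computation come from the integrals (\ref{eq:correlationIntegrals})) assembles
$\langle\sigma_{1}^{z}\sigma_{2}^{z}\rangle$ from these pieces:
\begin{verbatim}
import numpy as np
import sympy as sp

eta_val = float(np.arccosh(4.0))       # illustrative Delta = 4

def G(order, kmax=50):                 # exponentially convergent sum
    k = np.arange(-kmax, kmax + 1, dtype=float)
    return ((-1/np.pi) * np.sum((2j*k)**order
                                / (1 + np.exp(2*eta_val*np.abs(k))))).real

def H(order, kmax=50):
    k = np.arange(-kmax, kmax + 1, dtype=float)
    return ((-1/(2*np.pi)) * np.sum(np.abs(k) * (2j*k)**(order - 1)
                                    / np.cosh(eta_val*k)**2)).real

print(G(0), H(1))                      # constants entering step 3

u, e = sp.symbols('u e')
K  = sp.sinh(2*e) / (sp.sinh(u + e) * sp.sinh(u - e))
Kt = sp.sinh(2*u) / (sp.sinh(u + e) * sp.sinh(u - e))
dK  = lambda m: float(sp.diff(K,  u, m).subs({u: 0, e: eta_val}))
dKt = lambda m: float(sp.diff(Kt, u, m).subs({u: 0, e: eta_val}))

# hypothetical placeholder inputs; in a real computation Omega_00 and
# Gamma_10 are obtained from the integrals of step 3
Omega_00, Gamma_10 = 0.1, 0.05
omega_00 = -Omega_00 - 0.5 * dK(0)     # omega_{a,b} at a = b = 0
W_10 = Gamma_10 + 0.5 * dKt(1)         # W_{a,b}  at a = 1, b = 0
print(omega_00 / np.tanh(eta_val) + W_10)   # <sigma^z_1 sigma^z_2>
\end{verbatim}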
It was conjectured in \cite{Mestyan2014,2014JSMTE..09..026P} that
using the steps above, one can compute short range correlations in
an arbitrary generalized thermodynamic Bethe Ansatz state characterized
by the functions $\eta_{j}(\lambda)$. The conjecture was proved using
the Hellmann-Feynman theorem for nearest neighbor correlators, i.e.
$\sigma_{j}^{z}\sigma_{j+1}^{z}$ and $\sigma_{j}^{x}\sigma_{j+1}^{x}$
\cite{Wouters2014,Mestyan2014}, and its validity for longer range
correlators was numerically checked in the context of the standard
thermal \cite{9780511524332} thermodynamic Bethe Ansatz \cite{Mestyan2014}.
\section{Numerical results for correlations\label{sec:Numerical-results-for-correlations}}
In this section the predictions of the quench-action-based oTBA and
the GGE-based GTBA are compared to real time numerical simulations
for quenches starting from $|\Psi_{0}^{\textrm{N}}\rangle,$ $|\Psi_{0}^{\textrm{D}}\rangle$
and $|\Psi_{0}^{q\textrm{D}}\rangle$. Details of the real time simulations
are described in Appendix \ref{sec:appendixTEBD}.
Results for the dimer initial state are presented in Figure \ref{fig:dimerplots}.
These data are essentially the same as in our previous paper \cite{Pozsgay2014},
with the exception that we plotted the $xx$ correlations as well,
and that the GGE prediction for all the values of $\Delta$ is computed
from the full GGE using the GTBA formalism described in Subsection
\ref{sub:GTBA}. Note that while the oTBA is fully consistent with
the results of the numerical simulation, the GGE significantly disagrees.
The only exceptions are the correlators $\sigma_{i}^{x}\sigma_{i+2}^{x}$
and $\sigma_{i}^{x}\sigma_{i+3}^{x}$ for the dimer case, where the
real time simulation shows a temporal drift for all accessible times
(cf. Appendix \ref{sec:appendixTEBD}). For the dimer $\sigma_{i}^{x}\sigma_{i+2}^{x}$
correlator the results are still clearly consistent with the oTBA
and disagree with the GGE, but for the dimer $\sigma_{i}^{x}\sigma_{i+3}^{x}$
the numerical uncertainty introduced by the residual drift is simply
too large.
For the q-dimer case the results are shown in Figure \ref{fig:qdimerplots}.
Here the GTBA and oTBA predictions still differ, but the difference
is too small to be resolved by the numerical simulation. We also computed
the correlators with the Néel initial state, but the data are similar
to the q-dimer case, and so we omit this case, which was already discussed
in \cite{Pozsgay2014}.
Looking at the full picture, it is clear that the conclusions of our
previous paper \cite{Pozsgay2014} still stand: the GGE clearly disagrees
with the real time evolution, while the oTBA is in agreement with
it, wherever the difference between the GGE and the oTBA is large
enough to be resolved by the numerics and the numerically accessible
iTEBD timeframe is sufficient for the steady state to be reached.
We stress that due to the fact that the oTBA and GGE results are numerically
very close in the Néel and q-dimer cases, the dimer case plays a very
important role in deciding the issue.
\begin{figure}
\includegraphics[width=0.45\textwidth]{Dimer_zzAll-crop}~~~~\includegraphics[width=0.45\textwidth]{Dimer_xxAll-crop}
\protect\caption{\label{fig:dimerplots}Steady state correlators for quenches starting
from the dimer initial state, as a function of the anisotropy parameter
$\Delta$.}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{qDimer_zzAll-crop}~~~~\includegraphics[width=0.45\textwidth]{qDimer_xxAll-crop}
\protect\caption{\label{fig:qdimerplots}Steady state correlators for quenches starting
from the q-dimer initial state, as a function of the anisotropy parameter
$\Delta$.}
\end{figure}
\section{Discussion\label{sec:Discussion}}
The conclusion of the present work is the same as that of the previous
paper \cite{Pozsgay2014}: the GGE fails as a general description
for the steady state after quenches in the XXZ spin chain. There are,
however, several aspects which need to be examined as possible explanations
for the failure. In particular it is necessary to understand how generic
the phenomenon is: maybe the failure is due to the particular choice
of initial states, or anomalously slow relaxation? Furthermore, is
it possible that some more general or different construction for the
GGE could be adequate?
\subsection{Steady state ensemble and the role of locality \label{sub:Steady-state-ensemble-and-locality}}
We recall that the generalized Gibbs ensemble (or, for that matter,
thermalization in the non-integrable case) is not expected to describe
arbitrary observables. Indeed, as the initial state is pure, the exact state
of the system given by
\begin{equation}
|\Psi(t)\rangle=e^{-iHt}|\Psi_{0}\rangle\label{eq:Htime_evolution}
\end{equation}
always remains pure. The relaxation towards a thermodynamic state
is always understood to work for observables defined on a particular
class of subsystems. Let us suppose that the Hilbert space of our
system can be written in the ``local'' form
\begin{equation}
\mathcal{H}=\bigotimes_{x\in S}\mathcal{V}_{x}\label{eq:local_Hilbert_space}
\end{equation}
where $x$ runs over the system $S$. For a spin chain $x$ labels
the spatial location along the chain, but this interpretation is not
strictly necessary: ``localization'' can be true in any other way,
e.g. $x$ may label particles composing a many-body system. The important
point is that the Hamiltonian governing the dynamics is supposed to
be ``local'', i.e.\ a sum of terms consisting of products of operators
acting on a few adjacent ``local'' spaces $\mathcal{V}_{x}$, where
adjacency is some relation specifying which $x$-s are close to each
other. We also suppose that the initial state is the ground state
of some other, equally ``local'' Hamiltonian. The relevant class
of observables is then chosen to be the ``local'' ones, i.e. ones
acting on finitely many adjacent $\mathcal{V}_{x}$, and the charges
expected to play a role in the steady state statistical operator are
the ones that are ``local'' in the same way as the Hamiltonian is.
For the XXZ chain, the notion of $x$-``locality'' is just the usual
spatial locality.
The generalized Gibbs ensemble is defined as a specific thermodynamic
and temporal limit, where the order of limits matters. Let us suppose
the observable $\mathcal{O}$ is localized in the above sense, i.e.
there is a finite subsystem $O$ such that
\begin{equation}
\mathcal{O}\in\bigotimes_{x\in O}L(\mathcal{V}_{x})\label{eq:localized_observable}
\end{equation}
where $L(\mathcal{V}_{x})$ are the linear operators on $\mathcal{V}_{x}$;
let us call the smallest such subsystem the support of $\mathcal{O}$.
Then the statistical operator $\rho_{\mbox{SS}}$ describes the steady
state if it is true that
\begin{equation}
\Tr\rho_{\mbox{SS}}\mathcal{O}=\lim_{t\rightarrow\infty}\TDL\,\langle\Psi(t)|\mathcal{O}|\Psi(t)\rangle\label{eq:steady_state_definition}
\end{equation}
where $\TDL$ is the thermodynamic limit in which the size of the
system $S$ goes to infinity. In fact it is possible to be slightly
more general and allow the support of $\mathcal{O}$ to grow in
the thermodynamic limit, provided the system size becomes infinitely
larger than the support of $\mathcal{O}$.
The GGE hypothesis can be stated as
\begin{eqnarray}
\Tr\rho_{\mbox{SS}}\mathcal{O} & = & \lim_{N\rightarrow\infty}\Tr\rho_{N}\mathcal{O}\nonumber \\
\rho_{N} & = & \frac{1}{Z_{N}}\exp\!\left(\sum\limits _{k=1}^{N}\beta_{k}Q_{k}\right)\qquad Z_{N}=\Tr\exp\!\left(\sum\limits _{k=1}^{N}\beta_{k}Q_{k}\right)\label{eq:GGE_from_TGGE}
\end{eqnarray}
Note that the GGE is defined above as the limit of the truncated GGE
introduced in \cite{2013PhRvB..87x5107F,Pozsgay2013b}. This is motivated
by the fact that the infinite sum in the exponent needs some proper
definition, especially since including terms up to $N=\infty$ is
not really local. It is further justified by the observation that
the truncated GGE approximates the full one well; in fact, as described
in Subsection \ref{sub:GTBA}, charges that are much ``larger''
than the support of $\mathcal{O}$ do not have much influence on the
GGE prediction for the expectation value of $\mathcal{O}$.
The above description generally captures the way the GGE was expected
to work \emph{for systems with local dynamics} such as the XXZ spin
chain \cite{Pozsgay2013b,Fagotti2014}. What we have shown is that
\emph{the GGE defined in the above way definitely fails to describe
the steady state of the system}.
\subsection{Failure of the Generalized Eigenstate Thermalization Hypothesis}
Soon after the results of the papers \cite{Wouters2014,Pozsgay2014}
were made public, work started to explore the mechanism responsible
for the failure of the GGE. In \cite{Goldstein2014} it was pointed
out that the expectation values of the initial charges did not specify
the string densities $\rho_{n}$ uniquely. The reason is the existence
of multiple types of strings, which means that the elementary magnetic
excitations of the XXZ chain, described by 1-strings, have bound states
corresponding to longer strings. While the GGE corresponds to maximizing
the entropy among configurations allowed by the particular values
of the charges specified by the initial state, this does not lead
to the same solution that follows from the oTBA, and generally gives
different correlations. Indeed this particular point is very interesting,
and was investigated further in \cite{2014JSMTE..09..026P}. The fact
that expectation values of local operators in the thermodynamic limit
are not specified by the values of the local charges implies the failure
of the Generalized Eigenstate Thermalization Hypothesis (GETH).
Let us recall how the usual Eigenstate Thermalization Hypothesis works
for generic (i.e.\ non-integrable) systems \cite{1991PhRvA..43.2046D,1994PhRvE..50..888S,2008Natur.452..854R}.
As discussed in Subsection \ref{sub:The-diagonal-ensemble}, in the
large time limit (neglecting degeneracies) one obtains the prediction
of the diagonal ensemble (\ref{eq:diagonalEnsemble}), where each
state is weighted by the squared norm of its overlap with the initial
state. If the system reaches a thermal equilibrium, then the expectation
values of relevant operators should be close to the canonical prediction
\begin{equation}
\left\langle \mathcal{O}\right\rangle _{T}=\frac{\sum_{n}e^{-E_{n}/T}\langle n|\mathcal{O}|n\rangle}{\sum_{n}e^{-E_{n}/T}}\label{micro}
\end{equation}
with a temperature $T$ that is fixed by the requirement
\begin{equation}
\left\langle H\right\rangle _{T}=\langle\Psi_{0}|H|\Psi_{0}\rangle\;.
\end{equation}
The diagonal ensemble (\ref{eq:diagonalEnsemble}) and the thermal
averages \eqref{micro} are expected to become equal in the $\TDL$.
The overlap coefficients entering (\ref{eq:diagonalEnsemble}) are
typically random and different from the Boltzmann weights. However,
in a large volume $L$ the only states with non-negligible overlap
are the ones that have the same energy density as the initial state
\cite{2008Natur.452..854R}:
\begin{equation}
\frac{E_{n}}{L}\approx\frac{\langle\Psi_{0}|H|\Psi_{0}\rangle}{L}\:,\label{e1}
\end{equation}
and the width of the distribution of the energy density goes to zero
in the $\TDL$:
\begin{equation}
\Delta\left(\frac{E}{L}\right)=\frac{1}{L}\sqrt{\langle\Psi_{0}|H^{2}|\Psi_{0}\rangle-(\langle\Psi_{0}|H|\Psi_{0}\rangle)^{2}}\sim\frac{1}{\sqrt{L}}\:,\label{width}
\end{equation}
for local Hamiltonians and initial states $\ket{\Psi_{0}}$ satisfying
the cluster decomposition principle.
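For the quenches considered here this scaling is easy to verify explicitly on
small chains. The sketch below (an independent check; it assumes the standard
spin-$1/2$ normalization $H=\sum_{j}\left[S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+1}^{y}+\Delta S_{j}^{z}S_{j+1}^{z}\right]$
with periodic boundaries, which may differ from the normalization of
$H_{\textit{XXZ}}$ used elsewhere by trivial factors) computes $\Delta(E/L)$
exactly in the Néel state, reproducing the $1/\sqrt{L}$ decay:
\begin{verbatim}
import numpy as np

def width(L, Delta):
    # Delta(E/L) in the Neel state of the periodic spin-1/2 XXZ chain
    dim = 1 << L
    neel = sum(1 << i for i in range(0, L, 2))      # ...010101 bit pattern
    psi = np.zeros(dim)
    psi[neel] = 1.0
    Hpsi = np.zeros(dim)
    for state in range(dim):
        a = psi[state]
        if a == 0.0:
            continue
        for j in range(L):
            k = (j + 1) % L
            if ((state >> j) & 1) == ((state >> k) & 1):
                Hpsi[state] += 0.25 * Delta * a     # aligned bond
            else:
                Hpsi[state] -= 0.25 * Delta * a     # anti-aligned bond
                flipped = state ^ (1 << j) ^ (1 << k)
                Hpsi[flipped] += 0.5 * a            # (S+S- + S-S+)/2 term
    E, E2 = psi @ Hpsi, Hpsi @ Hpsi                 # <H> and <H^2>
    return np.sqrt(E2 - E**2) / L

for L in (8, 12, 16):
    print(L, width(L, 4.0), 1.0 / (2.0 * np.sqrt(L)))  # columns 2 and 3 agree
\end{verbatim}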
The \emph{Eigenstate Thermalization Hypothesis} (ETH) \cite{1991PhRvA..43.2046D,1994PhRvE..50..888S,2008Natur.452..854R}
states that eigenstates on a given energy shell have almost the same
expectation values of physical observables
\begin{equation}
\sum_{n}|c_{n}|^{2}\langle n|\mathcal{O}|n\rangle\approx\left(\sum_{n}|c_{n}|^{2}\right)\langle n_{1}|\mathcal{O}|n_{1}\rangle=\langle n_{1}|\mathcal{O}|n_{1}\rangle\:,\label{ETH}
\end{equation}
where $n_{1}$ is a reference state satisfying condition \eqref{e1}
with $c_{n_{1}}\ne0$. Recent results \cite{2014arXiv1408.0535K} indicate
that \eqref{ETH} holds in the strict sense that even the largest
deviations from the ETH go to zero in the $\TDL$, albeit some local
operators may have anomalously slow relaxation rates \cite{2014arXiv1410.4186K}.
From the ETH it follows that the local observables indeed thermalize:
\begin{equation}
\lim_{t\to\infty}\langle\Psi(t)|\mathcal{O}|\Psi(t)\rangle\approx\left\langle \mathcal{O}\right\rangle _{T}\:,\label{thermalization}
\end{equation}
where the equality is expected to become exact in the $\TDL$.
For integrable systems the steady state was expected to be described
by the generalized Gibbs ensemble discussed in Subsection \ref{sub:GTBA}.
A possible mechanism for the relaxation to GGE is provided by the
\emph{Generalized Eigenstate Thermalization Hypothesis} (GETH) \cite{2011PhRvL.106n0405C},
which states that if all local conserved charges of two different
eigenstates are close to each other, then the mean values of local
operators are also close. Put differently, the values of the conserved
charges uniquely determine the correlations in the state; more precisely
this is expected to become exact in the $\TDL$. The GETH was checked
for a lattice model of hard-core bosons in \cite{2011PhRvL.106n0405C}.
However, in \cite{2014JSMTE..09..026P} it was shown that\emph{ the
GETH fails in the XXZ spin chain, and it was argued that this is the
general case for models with bound states}. In \cite{Pozsgay2014c}
it was shown that, on the other hand, the GGE holds in a system of
strongly interacting bosons with no bound states. As noted in that
work, even though the GGE hypothesis was confirmed, it was eventually not
used to describe the steady state. The crucial ingredient was the
assumption that the Diagonal Ensemble is valid, plus the fact that
there is a one-to-one correspondence between the charges and the root
densities, from which the validity of the GGE follows automatically.
The steady state correlators were evaluated from the Bethe root density
that was obtained directly from the charges using the analogue of
the relation (\ref{eq:generatingFunctionRho1h}) for the q-boson model,
with the GTBA equations described in Subsection \ref{sub:GTBA}
playing no role whatsoever. It seems that this behaviour might be
a generic feature of interacting integrable models: \emph{the GETH
only holds when there is a one-to-one correspondence between the charges
and the root densities}. In such a case, the GGE density matrix gives
the correct steady state correlators for operators whose expectation
values depend only on the root densities, since the solution of the
GTBA coincides with the unique allowed root densities for the given
initial value of the conserved charges.
On the other hand, if the GETH does not hold, then the GTBA analysis
of the GGE density matrix is expected to give wrong predictions for
a generic initial state. This shows that\emph{ the failure of the
GGE observed in \cite{Wouters2014,Pozsgay2014} and the present work
is not related to the selection of the initial states}. In addition,
the fact that the oTBA predictions agree with the numerical simulations
shows that relaxation to the diagonal ensemble has been achieved in
the time frame for which the iTEBD simulations are valid (with the
exception of some $\sigma^{x}\sigma^{x}$ correlators, cf. Appendix
\ref{sec:appendixTEBD}), and therefore \emph{the disagreement with
GGE cannot be due to some anomalously slow relaxation}.
We remark that our results (see Fig.~\ref{zzThermalization}) are
consistent with the observation made in \cite{Fagotti2014} that translational
invariance is not restored on the observable time scale for quenches
starting from the non-translationally invariant dimer state, which
affects correlators $\sigma_{i}\sigma_{i+l}$ with $l$ odd. It is
possible that translational invariance in these cases is restored
on some anomalously long time scale, as argued in \cite{Fagotti2014a}.
Another interesting observation made in \cite{2014JSMTE..09..026P} is
that the quench action (\ref{eq:quenchAction}) evaluated at the
GGE saddle point solution is infinite, and the spectral weight of
the GGE solution (and in fact of all states with thermal asymptotics)
decays faster than exponentially as a function of the volume. This means
that the sum rule (\ref{eq:OverlapSumRule}) is violated, and \emph{the
GGE solution is very far from the states selected by the quench action,
which actually determine the dynamics of the system}.
\subsection{Completeness of the charges and possible extensions of the GGE}
The discussion of the GETH shows that \emph{one way to construct the
correct ensemble is to find charges which fix all Bethe root densities
to their correct values}, thus making the GGE complete and correct.
In this work the GGE was constructed using the local charges obtained
as derivatives of the transfer matrix.
However, new quasi-local operators have been found recently for the
regime $\Delta<1$ \cite{2011PhRvL.106u7206P,2014NuPhB.886.1177P,2014JSMTE..09..037P}
and it is an interesting open question whether the addition of these
new operators to the GGE is enough to fix all root densities to their
correct values, thus obtaining an extended GGE which can describe
the steady state after the quantum quench. Unfortunately the construction
of \cite{2011PhRvL.106u7206P,2014NuPhB.886.1177P,2014JSMTE..09..037P}
does not produce quasi-local operators for $\Delta>1$, and it remains
to be seen whether new charges can be produced by other means in this
regime.
It is also important to keep in mind that consistency of a thermodynamic
description requires extensivity of the charges involved in the ensemble.
Locality is one way of ensuring extensivity, but more general (e.g.
quasi-local) charges could be suitable for inclusion into a thermodynamic
ensemble. In free systems, for example, the GGE is usually formulated
in terms of the mode occupation numbers, which are non-local quantities.
Recently the extension of the GGE by quasi-local charges was considered
for quantum field theories; however, the charges discussed in \cite{2014arXiv1411.5352E}
are already included in our set of local charges at the level of the
discrete chain. At present it is an open question whether there exists
a set of charges, possibly including quasi-local ones that would be
complete in the sense of fixing the root densities. In the light of
recent interest in the experimental observation of the GGE \cite{2014arXiv1411.7185L},
\emph{it would also be preferable if the extension preserved the truncatability
of the GGE} (cf. \cite{2013JSMTE..07..003P}, see also Subsections
\ref{sub:GTBA} and \ref{sub:Steady-state-ensemble-and-locality}),
namely that truncation to a small subset of charges would give a reasonable
approximation for the exact result, so that fitting the ensemble to
measurements would only require a small number of parameters.
\paragraph*{Acknowledgments}
We would like to thank M. Kormos and G. Zaránd for numerous discussions,
their valuable feedback, and F. Pollmann and P. Moca for the help
they gave us to set up the iTEBD calculations. We are grateful to
J.-S. Caux, J. De Nardis, and B. Wouters for useful discussions on
their work, and for sharing some of their numerical data with us.
M.A.W. acknowledges financial support from Hungarian Grants No. K105149
and CNK80991.
\bibliographystyle{bib/utphys}
\section{Introduction}
Lattice computations of the potential of a pair of static-light mesons (in the following also referred to as $B$ mesons) are of interest, because they constitute first principles determinations of a hadronic force. Until now interactions between static-light mesons have exclusively been studied in the quenched approximation \cite{Michael:1999nq,Detmold:2007wk}. Here I report on the status of an investigation with two flavors of dynamical Wilson twisted mass quarks. Forces are not only studied between the lightest static-light mesons (denoted by $S$), but also first excitations are taken into account (denoted by $P_-$). Note that there is another ongoing study of static-light meson interactions with dynamical quarks, which has also been reported during this conference \cite{BH2010}.
\section{Trial states and quantum numbers}
\subsection{\label{SEC001}Static-light mesons}
Here I consider static-light mesons, which are made from a static antiquark $\bar{Q}$ and a light quark $\psi \in \{ u \, , \, d \}$. Consequently, isospin $I = 1/2$ and $I_z \in \{ -1/2 \, , \, +1/2 \}$. Since there are no interactions involving the static quark spin, it is appropriate to classify static-light mesons by the angular momentum of their light degrees of freedom $j$. I do not consider non-trivial gluonic excitations, hence $j = 1/2$ and $j_z \in \{ -1/2 \, , \, +1/2 \}$, which is the spin of the light $u/d$ quark. Parity is also a quantum number, $\mathcal{P} \in \{ + \, , \, - \}$.
The lightest static-light meson has quantum numbers $j^\mathcal{P} = (1/2)^-$ (denoted by $S$). The first excitation, which is $\approx 400 \, \textrm{MeV}$ heavier, has quantum numbers $j^\mathcal{P} = (1/2)^+$ (denoted by $P_-$). Examples of corresponding static-light meson trial states are $\bar{Q} \gamma_5 \psi | \Omega \rangle$ and $\bar{Q} \gamma_j \psi | \Omega \rangle$ for $S$ mesons and $\bar{Q} \psi | \Omega \rangle$ and $\bar{Q} \gamma_j \gamma_5 \psi | \Omega \rangle$ for $P_-$ mesons respectively.
For a more detailed discussion of static-light mesons I refer to \cite{Jansen:2008si,:2010iv}.
\subsection{\label{SEC002}$B B$ systems}
The aim of this work is to determine the potential of a pair of $B$ mesons as a function of their separation $R$ (without loss of generality I choose the axis of separation to be the $z$ axis). To this end one has to compute the energy of eigenstates of the Hamiltonian containing two static antiquarks $\bar{Q}(\mathbf{r}_1)$ and $\bar{Q}(\mathbf{r}_2)$, $\mathbf{r}_1 = (0,0,-R/2)$ and $\mathbf{r}_2 = (0,0,+R/2)$, which define the positions of the two $B$ mesons, and which will be surrounded by light quarks and gluons.
These $B B$ states are characterized by several quantum numbers. Since there are two light $u/d$ valence quarks, isospin $I \in \{ 0 \, , \, 1 \}$ and $I_z \in \{ -1 \, , \, 0 \, , \, +1 \}$. Due to the separation of the static antiquarks along the $z$ axis, rotational symmetry is restricted to rotations around this axis. Consequently, states can be classified by the $z$ component of total angular momentum. However, as already mentioned in section~\ref{SEC001} there are no interactions involving the static quark spin. Therefore, it is appropriate to label $B B$ states by the $z$ component of the angular momentum of the light degrees of freedom $j_z \in \{ -1 \, , \, 0 \, , \, +1 \}$. Parity is also a symmetry and, therefore, a quantum number, $\mathcal{P} \in \{ + \, , \, - \}$. For states with $j_z = 0$ there is an additional symmetry, reflection along an axis perpendicular to the axis of separation (without loss of generality I choose the $x$ axis). The corresponding quantum number is $\mathcal{P}_x \in \{ + \, , \, - \}$. When using $|j_z|$ instead of $j_z$, $\mathcal{P}_x$ is a quantum number for all states. To summarize, $B B$ states can be characterized by the following five quantum numbers: $(I , I_z , |j_z| , \mathcal{P} , \mathcal{P}_x)$.
I use $B B$ trial states
\begin{eqnarray}
\label{EQN001} (\mathcal{C} \Gamma)_{A B} \Big(\bar{Q}_C(\mathbf{r}_1) \psi_A^{(1)}(\mathbf{r}_1)\Big) \Big(\bar{Q}_C(\mathbf{r}_2) \psi_B^{(2)}(\mathbf{r}_2)\Big) | \Omega \rangle ,
\end{eqnarray}
where the lower indices $A$, $B$ and $C$ denote spinor indices, $\mathcal{C} = \gamma_0 \gamma_2$ is the charge conjugation matrix and $\Gamma$ is a combination of $\gamma$ matrices. Note that it is essential to couple the light degrees of freedom of both mesons in spinor space, because these degrees of freedom determine the quantum number $|j_z|$. Proceeding in a naive way by coupling light and static degrees of freedom in both $B$ mesons separately will not result in a well defined angular momentum $|j_z|$ and, therefore, will mix different sectors. To obtain $I = 0$, the flavors of the light quarks have to be chosen according to $\psi^{(1)} \psi^{(2)} = u d - d u$, while for $I = 1$ three possibilities exist, $\psi^{(1)} \psi^{(2)} \in \{ u u \, , \, d d \, , \, ud + d u \}$. $B B$ trial states are collected in Table~\ref{TAB001} together with their quantum numbers.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c||c|c||c|c||c|c|}
\hline
\multicolumn{2}{|c||}{\vspace{-0.40cm}} & \multicolumn{2}{c||}{} & \multicolumn{2}{c||}{} & \multicolumn{2}{c|}{} \\
\multicolumn{2}{|c||}{} & \multicolumn{2}{c||}{$\psi^{(1)} \psi^{(2)} = u d - d u$} & \multicolumn{2}{c||}{$\psi^{(1)} \psi^{(2)} = u d + d u$} & \multicolumn{2}{c|}{$\psi^{(1)} \psi^{(2)} \in \{ u u \, , \, d d \}$} \\
\multicolumn{2}{|c||}{\vspace{-0.40cm}} & \multicolumn{2}{c||}{} & \multicolumn{2}{c||}{} & \multicolumn{2}{c|}{} \\
\hline
& & & & & & & \vspace{-0.40cm} \\
$\Gamma$ & $|j_z|$ & $\mathcal{P}$, $\mathcal{P}_x$ & result & $\mathcal{P}$, $\mathcal{P}_x$ & result & $\mathcal{P}$, $\mathcal{P}_x$ & result \\
& & & & & & & \vspace{-0.40cm} \\
\hline
& & & & & & & \vspace{-0.40cm} \\
$\gamma_5$ & $0$ & $-$, $+$ & A, SS & $+$, $+$ & R, SS & $+$, $+$ & R, SS \\
$\gamma_0 \gamma_5$ & $0$ & $-$, $+$ & A, SS & $+$, $+$ & R, SS & $+$, $+$ & R, SS \\
$1$ & $0$ & $+$, $-$ & A, SP & $-$, $-$ & R, SP & $-$, $-$ & R, SP \\
$\gamma_0$ & $0$ & $-$, $-$ & R, SP & $+$, $-$ & A, SP & $+$, $-$ & A, SP \\
$\gamma_3$ & $0$ & $+$, $-$ & R, SS & $-$, $-$ & A, SS & $-$, $-$ & A, SS \\
$\gamma_0 \gamma_3$ & $0$ & $+$, $-$ & R, SS & $-$, $-$ & A, SS & $-$, $-$ & A, SS \\
$\gamma_3 \gamma_5$ & $0$ & $+$, $+$ & A, SP & $-$, $+$ & R, SP & $-$, $+$ & R, SP \\
$\gamma_0 \gamma_3 \gamma_5$ & $0$ & $-$, $+$ & R, SP & $+$, $+$ & A, SP & $+$, $+$ & A, SP \\
& & & & & & & \vspace{-0.40cm} \\
\hline
& & & & & & & \vspace{-0.40cm} \\
$\gamma_{1/2}$ & $1$ & $+$, $\pm$ & R, SS & $-$, $\pm$ & A, SS & $-$, $\pm$ & A, SS \\
$\gamma_0 \gamma_{1/2}$ & $1$ & $+$, $\pm$ & R, SS & $-$, $\pm$ & A, SS & $-$, $\pm$ & A, SS \\
$\gamma_{1/2} \gamma_5$ & $1$ & $+$, $\mp$ & A, SP & $-$, $\mp$ & R, SP & $-$, $\mp$ & R, SP \\
$\gamma_0 \gamma_{1/2} \gamma_5$ & $1$ & $-$, $\mp$ & R, SP & $+$, $\mp$ & A, SP & $+$, $\mp$ & A, SP\vspace{-0.40cm} \\
& & & & & & & \\
\hline
\end{tabular}
\caption{\label{TAB001}quantum numbers of $B B$ trial states; due to explicit isospin breaking, $(I = 1 , I_z = 0)$ and $(I = 1 , I_z = \pm 1)$ states are not degenerate in twisted mass lattice QCD (cf.\ section~3) and, therefore, listed separately; ``result'' characterizes the shapes of the numerically computed $B B$ potentials (A: attractive potential; R: repulsive potential; SS: lower asymptotic value $2 m(S)$; SP: higher asymptotic value $m(S) + m(P_-)$; cf.\ section~4).}
\end{center}
\end{table}
\section{Lattice setup}
I use $24^3 \times 48$ gauge field configurations generated by the European Twisted Mass Collaboration (ETMC). The fermion action is $N_f = 2$ Wilson twisted mass,
\begin{eqnarray}
S_\mathrm{F}[\chi,\bar{\chi},U] \ = \ a^4 \sum_x \bar{\chi}(x) \Big(D_\mathrm{W} + i\mu_\mathrm{q}\gamma_5\tau_3\Big) \chi(x)
\end{eqnarray}
\cite{Frezzotti:2000nk,Frezzotti:2003ni}, where $D_\mathrm{W}$ is the standard Wilson Dirac operator and $\chi = (\chi^{(u)} , \chi^{(d)})$ is the light quark doublet in the so-called twisted basis. In the continuum the twisted basis is related to the physical basis by the twist rotation $\psi = e^{i \gamma_5 \tau_3 \omega / 2} \chi$, where $\omega$ is the twist angle. $\omega$ has been tuned to maximal twist, i.e.\ $\omega = \pi / 2$, where static-light mass differences are automatically $\mathcal{O}(a)$ improved. The gauge action is tree-level Symanzik improved \cite{Weisz:1982zw}. I use $\beta = 3.9$ and $\mu_\mathrm{q} = 0.0040$ corresponding to a lattice spacing $a = 0.079(3) \, \textrm{fm}$ and a pion mass $m_\mathrm{PS} = 340(13) \, \textrm{MeV}$ \cite{Baron:2009wt}. For details regarding these gauge field configurations I refer to \cite{Boucaud:2007uk,Boucaud:2008xu}.
In twisted mass lattice QCD at finite lattice spacing SU(2) isospin is explicitly broken to U(1), i.e.\ $I_z$ is still a quantum number, but $I$ is not. Moreover, parity $\mathcal{P}$ has to be replaced by twisted mass parity $\mathcal{P}^{(\textrm{\scriptsize tm})}$, which is parity combined with light flavor exchange. The consequence is that twisted mass $B B$ sectors are either labeled by $(I_z , |j_z| , \mathcal{P}^{(\textrm{\scriptsize tm})} \mathcal{P}_x^{(\textrm{\scriptsize tm})})$ for $I_z = \pm 1$ or by $(I_z , |j_z| , \mathcal{P}^{(\textrm{\scriptsize tm})} , \mathcal{P}_x^{(\textrm{\scriptsize tm})})$ for $I_z = 0$. A comparison with the set of quantum numbers discussed in section~\ref{SEC002} shows that in the twisted mass formalism there are only half as many $B B$ sectors as in QCD, i.e.\ QCD $B B$ sectors are pairwise combined. Nevertheless, it is possible to unambiguously interpret states obtained from twisted mass correlation functions in terms of QCD quantum numbers. The method has successfully been applied in the context of static-light mesons \cite{Blossier:2009vy} and is explained in detail for kaons and $D$ mesons in \cite{Baron:2010th}. For a detailed discussion of twisted mass symmetries in the context of $B B$ systems I refer to an upcoming publication \cite{MW2010}.
When computing correlation functions, I use several techniques to improve the signal quality including operator optimization by means of APE and Gaussian smearing and stochastic propagators combined with timeslice dilution. These techniques are very similar to those used in a recent study of the static-light meson spectrum \cite{Jansen:2008si,:2010iv} and will also be explained in detail in \cite{MW2010}.
In contrast to spectrum calculations for static-light mesons \cite{Jansen:2008si,:2010iv} and static-light baryons \cite{Wagner:2010hj}, where we have always used the HYP2 static action, I perform computations both with the HYP2 static action and with unsmeared links representing the world lines of the static antiquarks. In particular, for small $\bar{Q} \bar{Q}$ separations $R \raisebox{-0.5ex}{$\,\stackrel{<}{\scriptstyle\sim}\,$} 2 a$ ultraviolet fluctuations are important, which are, however, filtered out when using HYP smeared links. The effect of HYP smearing is shown in Figure~\ref{FIG002}. For all results presented in the following, potential values corresponding to $R \leq 2 a$ have been computed by means of unsmeared links, while for larger separations HYP smearing has been applied to improve the signal-to-noise ratio.
\begin{figure}[htb]
\begin{center}
\input{FIG002.tex}
\caption{\label{FIG002}the $B B$ potential corresponding to $\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = \gamma_3$ computed with unsmeared links and with the HYP2 static action.}
\end{center}
\end{figure}
\section{Numerical results}
The $B B$ potentials presented and discussed in the following have been obtained by fitting constants to effective mass plateaus obtained from temporal correlation functions of trial states (\ref{EQN001}). In twisted mass lattice QCD there are 24 independent $I_z = 0$ trial states (i.e.\ trial states not related by symmetries) and 12 independent $I_z = \pm 1$ trial states, i.e.\ 36 resulting potentials, which are not related by symmetries (cf.\ Table~\ref{TAB001}). Some of these potentials are quite similar, while others are not. In total there are four significantly different types of potentials: two of them are attractive, the other two are repulsive; two have asymptotic values for large separations $R$ which are larger by around $400 \, \textrm{MeV}$ than those of the other two (cf.\ the ``result'' columns of Table~\ref{TAB001}). For each of the four types an example is plotted in Figure~\ref{FIG001}.
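The plateau extraction itself is standard; a minimal sketch (Python, not the actual analysis code, assuming a measured correlator \texttt{C[t]} with statistical errors \texttt{err[t]} on the effective masses) reads:
\begin{verbatim}
import numpy as np

def effective_mass(C):
    # m_eff(t) = ln( C(t) / C(t+1) ) for a decaying correlator
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

def fit_plateau(m_eff, err, t_min, t_max):
    # error-weighted constant fit on the plateau range [t_min, t_max)
    m, e = m_eff[t_min:t_max], err[t_min:t_max]
    w = 1.0 / e**2
    return np.sum(w * m) / np.sum(w), 1.0 / np.sqrt(np.sum(w))
\end{verbatim}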
\begin{figure}[htb]
\begin{center}
\input{FIG001.tex}
\caption{\label{FIG001}examples of $B B$ potentials as functions of the separation $R$.}
\end{center}
\end{figure}
To understand the asymptotic behavior, it is convenient to express the $B B$ creation operators appearing in (\ref{EQN001}) in terms of static-light meson creation operators. For the potentials shown in Figure~\ref{FIG001} one finds after some linear algebra
\begin{eqnarray}
\nonumber & & \hspace{-0.7cm} (\mathcal{C} 1)_{A B} \Big(\bar{Q}_C(\mathbf{r}_1) u_A(\mathbf{r}_1)\Big) \Big(\bar{Q}_C(\mathbf{r}_2) u_B(\mathbf{r}_2)\Big) \ \ = \\
\label{EQN_g5} & & = \ \ -S_\uparrow(\mathbf{r}_1) P_{- \downarrow}(\mathbf{r}_2) + S_\downarrow(\mathbf{r}_1) P_{- \uparrow}(\mathbf{r}_2) - P_{- \uparrow}(\mathbf{r}_1) S_\downarrow(\mathbf{r}_2) + P_{- \downarrow}(\mathbf{r}_1) S_\uparrow(\mathbf{r}_2) \\
\nonumber & & \hspace{-0.7cm} (\mathcal{C} \gamma_0)_{A B} \Big(\bar{Q}_C(\mathbf{r}_1) u_A(\mathbf{r}_1)\Big) \Big(\bar{Q}_C(\mathbf{r}_2) u_B(\mathbf{r}_2)\Big) \ \ = \\
\label{EQN_g0} & & = \ \ -S_\uparrow(\mathbf{r}_1) P_{- \downarrow}(\mathbf{r}_2) + S_\downarrow(\mathbf{r}_1) P_{- \uparrow}(\mathbf{r}_2) + P_{- \uparrow}(\mathbf{r}_1) S_\downarrow(\mathbf{r}_2) - P_{- \downarrow}(\mathbf{r}_1) S_\uparrow(\mathbf{r}_2) \\
\nonumber & & \hspace{-0.7cm} (\mathcal{C} \gamma_5)_{A B} \Big(\bar{Q}_C(\mathbf{r}_1) u_A(\mathbf{r}_1)\Big) \Big(\bar{Q}_C(\mathbf{r}_2) u_B(\mathbf{r}_2)\Big) \ \ = \\
\label{EQN_1} & & = \ \ -S_\uparrow(\mathbf{r}_1) S_\downarrow(\mathbf{r}_2) + S_\downarrow(\mathbf{r}_1) S_\uparrow(\mathbf{r}_2) - P_{- \uparrow}(\mathbf{r}_1) P_{- \downarrow}(\mathbf{r}_2) + P_{- \downarrow}(\mathbf{r}_1) P_{- \uparrow}(\mathbf{r}_2) \\
\nonumber & & \hspace{-0.7cm} (\mathcal{C} \gamma_3)_{A B} \Big(\bar{Q}_C(\mathbf{r}_1) u_A(\mathbf{r}_1)\Big) \Big(\bar{Q}_C(\mathbf{r}_2) u_B(\mathbf{r}_2)\Big) \ \ = \\
\label{EQN_g3} & & = \ \ -i S_\uparrow(\mathbf{r}_1) S_\downarrow(\mathbf{r}_2) -i S_\downarrow(\mathbf{r}_1) S_\uparrow(\mathbf{r}_2) +i P_{- \uparrow}(\mathbf{r}_1) P_{- \downarrow}(\mathbf{r}_2) +i P_{- \downarrow}(\mathbf{r}_1) P_{- \uparrow}(\mathbf{r}_2) .
\end{eqnarray}
At large separations $R$ the $B B$ potentials are expected to approach the sum of the masses of the two individual $B$ mesons. When considering (\ref{EQN_g5}) to (\ref{EQN_g3}) and Figure~\ref{FIG001}, one can see that the two potentials with the lower asymptotic value ($\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = \gamma_5$ and $\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = \gamma_3$) contain $S S$ combinations. These are significantly lighter than the $P_- P_-$ combinations that are also present and should, therefore, dominate the correlation functions and effective masses at large temporal separations. The asymptotic value of the corresponding potentials should be around $2 m(S)$, which is the case. In contrast, the other two potentials with the higher asymptotic value ($\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = 1$ and $\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = \gamma_0$) exclusively contain $S P_-$ combinations. Their asymptotic value is expected at around $m(S) + m(P_-)$, which is also reflected by Figure~\ref{FIG001}.
This expansion of $B B$ creation operators in terms of static-light meson creation operators also explains why potentials computed with different operators, which nevertheless carry identical quantum numbers, are of different type. An example is given by $\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = \gamma_3$ and $\psi^{(1)} \psi^{(2)} = uu$, $\Gamma = 1$, both having quantum numbers $(I = 1, I_z = +1, |j_z| = 0, \mathcal{P} = -, \mathcal{P}_x = -)$. The $\Gamma = \gamma_3$ potential is attractive with an asymptotic value at around $2 m(S)$, while the $\Gamma = 1$ potential is repulsive with an asymptotic value at around $m(S) + m(P_-)$. From (\ref{EQN_g5}) and (\ref{EQN_g3}) one can read off that the static-light meson content is essentially ``orthogonal'': the $\Gamma = \gamma_3$ operator contains $S S$ and $P_- P_-$ combinations, whereas the $\Gamma = 1$ operator is exclusively made from $S P_-$ combinations. While the corresponding $\Gamma = \gamma_3$ correlator yields the ground state in the $(I = 1, I_z = +1, |j_z| = 0, \mathcal{P} = -, \mathcal{P}_x = -)$ sector, which closely resembles a pair of $S$ mesons, the $\Gamma = 1$ operator mainly excites the first excitation, which is similar to an $S P_-$ combination. The generated ground state overlap is, therefore, rather small and, consequently, very large temporal separations would be needed to extract the ground state potential. Presumably, the potential corresponding to the $\Gamma = 1$ operator has a small ground state contribution, which contaminates the first excited state potential. This is supported by the observation that the asymptotic value of the $\Gamma = 1$ potential is slightly lower than $m(S) + m(P_-)$. For a clean extraction of this first excited state an analysis of a $2 \times 2$ correlation matrix is needed.
From the 36 independent potentials one can also deduce a simple rule stating whether a $B B$ potential is attractive or repulsive. \\
\textbf{A }$B B$\textbf{ potential is attractive if the trial state is symmetric under meson exchange and repulsive if the trial state is antisymmetric under meson exchange.} \\
Here meson exchange means exchange of flavor, spin and parity. One can easily verify this rule for the examples discussed above: the operators (\ref{EQN_g0}) and (\ref{EQN_g3}) are symmetric under meson exchange and give rise to attractive potentials, while the operators (\ref{EQN_g5}) and (\ref{EQN_1}) are antisymmetric under meson exchange and yield repulsive potentials. This more general rule is in agreement with what has been observed in quenched $B B$ computations for $S S$ potentials \cite{Michael:1999nq,Detmold:2007wk}.
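The rule can also be checked mechanically. In the following sketch (Python; each operator is stored as its list of (meson at $\mathbf{r}_1$, meson at $\mathbf{r}_2$, coefficient) terms read off from (\ref{EQN_g5}) to (\ref{EQN_g3}), with an overall factor of $i$ dropped for $\Gamma = \gamma_3$) meson exchange swaps the two mesons in every term:
\begin{verbatim}
from collections import Counter

ops = {
 "Gamma=1"      : [("Su","Pd",-1),("Sd","Pu",+1),
                   ("Pu","Sd",-1),("Pd","Su",+1)],
 "Gamma=gamma_0": [("Su","Pd",-1),("Sd","Pu",+1),
                   ("Pu","Sd",+1),("Pd","Su",-1)],
 "Gamma=gamma_5": [("Su","Sd",-1),("Sd","Su",+1),
                   ("Pu","Pd",-1),("Pd","Pu",+1)],
 "Gamma=gamma_3": [("Su","Sd",-1),("Sd","Su",-1),
                   ("Pu","Pd",+1),("Pd","Pu",+1)],
}

def exchanged(terms, sign):
    # meson exchange: swap the two mesons, optionally flipping the sign
    return Counter((b, a, sign * c) for a, b, c in terms)

for name, terms in ops.items():
    if exchanged(terms, +1) == Counter(terms):
        print(name, "symmetric     -> attractive")
    elif exchanged(terms, -1) == Counter(terms):
        print(name, "antisymmetric -> repulsive")
\end{verbatim}
The output reproduces the pattern of the $u u$ column of Table~\ref{TAB001}.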
\section{Conclusions}
I have presented results of an ongoing computation of $B B$ potentials. Various channels characterized by the quantum numbers $(I , I_z , |j_z| , \mathcal{P} , \mathcal{P}_x)$ have been investigated. The computations have been performed with dynamical, rather light quark masses ($m_\mathrm{PS} \approx 340 \, \textrm{MeV}$). The results have been interpreted in terms of individual $S$ and $P_-$ mesons. A simple rule has been established stating whether a $B B$ potential is attractive or repulsive.
The statistical accuracy of the correlation functions needs to be improved. $B B$ systems are rather heavy and, hence, effective masses are quickly lost in noise. At the present level of statistics slight contamination from excited states cannot be excluded. To this end, further contractions are ongoing.
Future plans include studying the light quark mass dependence, the continuum limit and finite volume effects. Moreover, also $B B_s$ and $B_s B_s$ potentials could be computed. To treat the $s$ quark as a fully dynamical quark, such computations should be performed on $N_f = 2+1+1$ flavor gauge field configurations currently produced by ETMC \cite{Baron:2010bv}. It would also be interesting to supplement the lattice computation by a perturbative calculation of $B B$ potentials at small separations $R \raisebox{-0.5ex}{$\,\stackrel{<}{\scriptstyle\sim}\,$} 2 a$. Finally, one could use the obtained $B B$ potentials as input for phenomenological considerations to answer e.g.\ the question of whether two $B$ mesons are able to form a bound state.
\begin{acknowledgments}
I acknowledge useful discussions with Pedro Bicudo, William Detmold, Rudolf Faustov, \\ Roberto Frezzotti, Vladimir Galkin, Chris Michael and Attila Nagy. This work has been supported in part by the DFG Sonderforschungsbereich TR9 Computergest\"utzte The\-o\-re\-tische Teilchenphysik.
\end{acknowledgments}
\section{INTRODUCTION}
Our understanding of supersymmetric QCD (SQCD) has grown rapidly over
the last four years \cite{seiberg}. There are now
solid examples of four dimensional gauge
theories that confine by the dual Meissner effect, exhibit chiral
symmetry breaking and even theories that give rise to massless
composites. One natural question to ask is how these phenomena relate to
non-supersymmetric theories and in particular to QCD. Some progress has
been made in perturbing these theories with soft SUSY breaking interactions whilst
retaining the ``exactness'' of the supersymmetric results
\cite{soft1,lattice,soft2,soft3}. We review
some of those continuum results and their importance to lattice
simulations of pure glue SQCD. Finally we discuss non-supersymmetric
``brane'' configurations in string theory and their field theory
interpretation, identifying the corresponding soft SUSY breaking
operators.
\input epsf
\section{SOFT BREAKINGS IN N=1 SQCD}
We begin from the N=1 $SU(N_c)$ SQCD theories with $N_f$ flavors
described by the UV Lagrangian
\begin{eqnarray}
{\cal L}& =& K (Q^\dagger_i Q_i + \tilde{Q}^\dagger_i
\tilde{Q}_i)|_D + {1 \over 8 \pi}
Im \tau W^\alpha W^\alpha|_F \nonumber \\
&&+ 2 Re\, m_{ij} Q_i \tilde{Q}_j|_F
\end{eqnarray}
where $Q$ and $\tilde{Q}$ are the standard chiral matter superfields and
$W^\alpha$ the gauge superfield. The coupling $K$
determines the kinetic normalization of the matter fields. The
gauge coupling $\tau = \theta/2 \pi + i 4 \pi/g^2$ defines
a dynamical scale of SQCD:
$\Lambda^{b_0} = \mu^{b_0} \exp( 2\pi i \tau)$,
with $b_0 = 3 N_c - N_f$ the one loop coefficient of the SQCD
$\beta$-function.
And, finally, $m$ is a supersymmetric mass term for
the matter fields. We may raise these couplings to the status of spurion
chiral superfields which are then frozen with scalar
component vevs.
Soft supersymmetry breaking parameters may be introduced through
the F-component of the spurion coupling fields. A gaugino mass may be
generated from the gauge coupling $\tau$, $F_\tau = i
8 \pi m_\lambda$
\begin{equation}
{1 \over 8 \pi} Im \tau WW|_F \rightarrow Re m_\lambda \lambda \lambda
\end{equation}
Scalar masses and interactions may be introduced through the mass
spurion and kinetic normalization $K$. $F_m \neq 0$ gives
\begin{equation}
Re m \tilde{Q} Q|_F \rightarrow Re F_m A_{\tilde{Q}} A_Q
\end{equation}
and allowing a component of $K = m_Q^2 \theta^2 \bar{\theta}^2$
\begin{equation}
K Q^\dagger e^V Q|_D \rightarrow m_Q^2 |A_Q|^2
\end{equation}
It is particularly useful to write the soft breakings as the components
of the spurion fields because the symmetries of the SQCD model are left
unaltered even in the softly broken model.
The SQCD theory
without a mass term has the symmetries
\begin{equation}
\begin{tabular}{ccccc}
&$SU(N_f)$ & $SU(N_f)$ & $U(1)_B$ & $U(1)_R$\\
$Q$ & $N_f$ & 1 & 1 & ${N_f - N_c \over N_f}$\\
$\tilde{Q}$ &1& $\bar{N}_f$ & -1 & ${N_f - N_c \over N_f}$\\
$W^\alpha$ & 1 & 1 & 0 & 1\end{tabular}
\end{equation}
The mass term breaks the chiral symmetries to the vector symmetry. The
classical $U(1)_A$ symmetry on the matter fields is anomalous and,
if there is a massless quark, may be used to rotate away the $\theta$
angle. In the massive theory the flavor symmetries may be used to
rotate $m_{ij}$ to diagonal form and the anomalous $U(1)_R$ symmetry
under which the $Q$s have charge $+1$ may be used to rotate $\theta$ onto the massless gaugino. Including
the spurion fields the non-anomalous $U(1)_R$ symmetry charges are
\begin{equation}\label{sym}
\begin{tabular}{ccccc}
$W$ & $Q$ & $\tilde{Q}$ & $\tau$ & $m$ \\
1 & ${N_f - N_c \over N_f}$ & ${N_f - N_c \over N_f}$ & 0 & ${2N_c \over
N_f}$ \end{tabular}
\end{equation}
The anomalous symmetries may be restored to the status of symmetries of
the model if we also allow the spurions to transform. The appropriate
charges are
\begin{equation}
\begin{tabular}{cccccc}
&$W$ & $Q$ & $\tilde{Q}$ & $\Lambda^{b_0}$ & $m$\\
$U(1)_R$ & 1 & 0 & $ 0 $ & $2(N_c-N_f)$ & 2 \\
$U(1)_A$ & 0 & 1 & 1 & $2N_f$ & -2
\end{tabular}
\end{equation}
The $m_{ij}$ spurions also transform under the
chiral flavor group.
These symmetries and supersymmetry remain symmetries of the model no
matter which components of the spurions are non-zero and hence they may
be used to determine the low energy theory of the softly broken
models (there is an assumption that there is not a phase transition to
a totally different set of variables as soon as supersymmetry is
broken).
The use of these symmetries is completely analogous to the use of
chiral symmetry in QCD to find the mass dependence of the QCD chiral
Lagrangian.
For pure supersymmetric models the potential minima may be found from
the superpotential alone which is holomorphic in the fields and
spurions. The exact results for the far IR behaviour of the theories
result from the very limited number of possible terms compatible with
the symmetries. There is, however, an immediate problem in the softly
broken theories: scalar masses may be generated from non-holomorphic
Kahler terms. For example
\begin{equation}
\tau^\dagger \tau Q^\dagger Q|_D \rightarrow |F_\tau|^2 |A_Q|^2
\end{equation}
Thus, for example, if one begins from an SQCD theory with a moduli space
in the scalar vevs, the minima of the potential in the softly broken
model will depend on these unknown terms. In particular one does not
know the sign of these mass terms; a negative mass would indicate a
higgs mechanism and a mass gap, a positive mass would leave the scalar's
fermionic partner massless.
A solution to this problem \cite{soft1}
is to begin with an SQCD theory in which the
scalars have supersymmetric masses. For small soft breakings relative to
that mass the unknown Kahler terms may be neglected. As the simplest
example consider SQCD with a mass term for the matter fields. The
resulting theories
have a mass gap on the scale $m$ and the induced meson $M_{ij}= Q^i
\tilde{Q}_j$ vev is determined independently of $N_f$ by holomorphy
\begin{equation}\label{Slimit}
M_{ij} = \Lambda^{{3N_c - N_f \over N_c}} (\det m)^{1/N_c}\left( {1 \over
m} \right) _{ij} = |M_{ij}| e^{i\alpha}~~~
\end{equation}
The resulting supersymmetric theories have $N_c$ distinct vacua
corresponding to the $N_c$th roots of unity, $\alpha = 2n\pi/N_c$
(as predicted by the Witten
index). Note that for the theories
with magnetic duals putting masses in for all flavors breaks the dual
gauge group completely. For simplicity henceforth we shall take $m_{ij}$
to be proportional to the identity matrix; in this basis $\langle
M_{ij} \rangle$ is also proportional to the identity matrix.
These massive theories may be softly broken in a controlled fashion.
If the spurion generating the soft breaking enters
the superpotential linearly then we may obtain desirable results when that
spurion's F-component $F \ll m \ll \Lambda$. Any Kahler term contributions to
the scalar potential take the form $F_X^\dagger F_Y$ with $X$ and $Y$
standing for generic fields or spurions. In the supersymmetric limit all
F-components are zero and will grow as the vacuum expectation value of the
soft breaking spurion.
These Kahler terms are therefore higher order in
the soft breaking parameter than the linear term from the
superpotential. The unknown corrections to the squark masses in the
theory are subleading to the masses generated by the supersymmetric mass
term and hence we may determine the potential minima at lowest order.
As an example we introduce a gaugino mass through the spurion $\tau$.
In the IR theory $\tau$ enters through the strong interaction scale
$\Lambda$ which occurs linearly in the superpotential of the
theory. Taking $F_{\tau} \ll m \ll \Lambda$ we may determine
the vacuum structure. The IR superpotential terms compatible with the
symmetries of the theory involving $\Lambda$ are
\begin{equation}
Re[ m M_{ij} + ({\rm det} M_{ij})^ { 1 \over (N_f - N_c) } \Lambda^{(3N_c-N_f)
\over (N_c-N_f)}]
\end{equation}
where the final term results from non-perturbative effects in the broken gauge
group. At lowest order in perturbation theory the vev of $M_{ij}$ is
given by (\ref{Slimit}) which also contains $\Lambda$ and hence has a
non-zero F-component. Including $F_\tau$ and performing the superspace
integral we obtain up to a coefficient the following corrections
to the potential that
break the degeneracy between the $N_c$ SQCD vacua
\begin{eqnarray}
\label{gpot}
\Delta V & = & Re\left[ m^{N_f/N_c} 8 \pi m_\lambda \Lambda^{(3N_c-N_f)/
N_c}\right]\\
& = \nonumber & \left|m^{N_f/N_c} 8 \pi m_\lambda \Lambda^{(3N_c-N_f)\over
N_c}\right| \\
&& \left. \right. \hspace{0.7cm} \times
\cos[ ~ {\theta_{phys} \over N_c} ~+~ \alpha ~]
\end{eqnarray}
where $\alpha$ are the $N_c$th roots of unity and
$\theta_{phys}$ is the physical $\theta$ angle in which the
physics must be $2 \pi$ periodic
\begin{equation}
\theta_{phys} ~=~ \theta_0 ~+~ N_c ( \theta_{m_\lambda}+ N_f \theta_m )
\end{equation}
The gaugino mass has explicitly broken the anomalous U(1) symmetries of
the SQCD model and hence the $\theta$ angle may not be rotated away.
There is also an additional contribution to the vacuum energy
arising from the gaugino condensate. Using the Konishi
anomaly \cite{KA}, we see that it has the same form as
(\ref{gpot}).
The supersymmetry breaking contributions to the potential
break the degeneracy
between the $N_c$ supersymmetric vacua. The model has interesting phase
structure as the bare $\theta$ angle is changed. There are phase transitions
as $\theta_{phys}$ is varied, occurring
at $\theta_{phys} ~=~ $(odd)$\pi$.
This behavior can be compared
with the $\theta$ angle dependence of the QCD chiral Lagrangian \cite{chiral}
for which there are $N_f$ distinct vacua which interchange through first
order phase transitions at $\theta =$(odd)$\pi$.
Unfortunately if we wish to keep control of the low energy solution we
are forced to keep the soft breakings small and we cannot decouple the
superpartners to recover non-supersymmetric QCD. There is however
one conclusion for QCD that we can tentatively draw from this
analysis. In these models the form of the confined effective
theory changes smoothly with the $\theta$ angle and there is no sign of a
breakdown of confinement as suggested in \cite{schierholz}. This lends some
support to the assumption \cite{chiral} that the chiral Lagrangian remains
the correct description of QCD in the IR even at non-zero $\theta$.
\section{PURE GLUE SQCD AND LATTICE TESTS}
Although the techniques for solving the supersymmetric and softly broken
theories described above provide an extremely plausible
picture of the low-energy dynamics of these models, one may feel a
little discomfort at the absence of direct
non-perturbative tests of the results.
An obvious possibility is that these models could be simulated directly
on the lattice. Some initial work in these directions has already been
performed in \cite{Montvay}. Unfortunately, as is well known,
lattice regularization violates
supersymmetry \cite{CV}, and a special fine-tuning is required to
recover the SUSY limit (this is analogous to the case of chiral
symmetry in lattice QCD). Away from the SUSY point, the continuum
limit of the lattice theory is described by a model with explicit SUSY
violating
interactions. In some cases, these violations may correspond only
to soft breakings,
although this is not guaranteed in general.
Pure glue SQCD is a simple theory with only one parameter,
the gauge coupling. The only low-dimension (renormalizable)
SUSY violation allowed by gauge invariance is a gaugino mass,
which is a soft violation. Therefore, the continuum limit of the
lattice regularized version of supersymmetric Yang-Mills (SYM) theory is simply SYM with a massive gaugino.
The SUSY limit can be reached by fine-tuning the lattice parameter
corresponding to a bare gaugino mass.
In order to understand this limit as well as possible, we will study continuum
SYM with explicit gaugino mass \cite{lattice}, and derive some relations describing
the approach to the SUSY limit. Several non-trivial predictions can
be made regarding the vacuum energy and of the behavior of the
gaugino condensate. A less rigorously derived description of the
lightest bound states of SYM theory has also been proposed in the
literature \cite{VY} from which predictions for the masses
of the gluino-gluino and glue-gluino bound states and their splittings
away from the supersymmetric point may be obtained.
The bare Lagrangian of SYM with $SU(N_c)$ gauge group
is
\begin{equation}
\label{SYM}
{\cal L} ~=~ \frac{1}{g_0^2}\left[ \,-\frac{1}{4}
G_{\mu\nu}^a G_{\mu\nu}^a
~+~ i\lambda_{\dot\alpha}^\dagger D^{\dot\alpha\beta}\lambda_\beta
\right] ~.
\end{equation}
This model possesses a discrete global $Z_{2N_c}$ symmetry, a
residual non-anomalous subgroup of the anomalous chiral $U(1)$. The
theory is believed to generate a gaugino condensate and have a mass
gap.
In supersymmetric notation the Lagrangian (\ref{SYM}) can be written as
\begin{equation}
\label{SSYM}
{\cal L} ~=~ \int d^2 \theta~
\frac{1}{8\pi} Im\, \tau_0 W^{\alpha}W_{\alpha}~,
\end{equation}
where the gauge coupling is defined to be
$\tau_0 = \frac{4\pi i}{g^2_0} + \frac{\theta}{2\pi}$.
Note that $\Lambda^{b_0} = e^{2\pi i \tau_0} \mu^{b_0}~$ is
explicitly $2\pi$-periodic in the
$\theta$-angle.
To derive the low energy effective theory of SYM we note that there are
two anomalous symmetries of the theory, $U(1)_R$ and scale
invariance. In
fact their anomalies are related since their currents belong to the same
supermultiplet. These symmetries can be restored in an enlarged theory
provided we allow the spurion couplings to transform:
\begin{eqnarray}
U(1)_R: \hspace {1cm} &&
W(x,\theta) \rightarrow e^{i\zeta} W(x, \theta e^{i\zeta}) \nonumber \\
&& \Lambda \rightarrow \Lambda e^{i2 \zeta/3} \nonumber
\end{eqnarray}
\begin{eqnarray}
{\rm Scale} \hspace{0.1cm} {\rm Invariance}:
&& W(x,\theta) \rightarrow e^{3\xi/2} W(xe^\xi, \theta e^{\xi/2})
\nonumber \\
&& \Lambda \rightarrow \Lambda e^\xi ~~~~~.\nonumber
\end{eqnarray}
We may now determine the general form of the partition function
(assuming a mass gap) as a
function of $\tau$ subject to these symmetries. The only possible terms
are
\begin{equation}
\label{SZ}
Z[\tau] = \exp ~ iV \left[ {9 \over \alpha}
\Lambda^\dagger \Lambda|_D + ( \beta
\Lambda^3|_F +h.c.) \right]
\end{equation}
The numerical coefficients $\alpha$ and $\beta$ remain undetermined from
the above symmetry arguments. $\beta$ may be determined from the results
for SQCD with massive quarks where for $N_f = N_c -1$ the full gauge
symmetry may be higgsed and the coefficient of the superpotential term
calculated by perturbative instanton methods.
We find $\beta = N_c$ \cite{cordes}.
These strong arguments lead to two predictions for the
condensates of the SYM theory. The source $J$ for the
gaugino correlator $\lambda \lambda$ occurs in the same position as the
F-component of $\tau$ and is hence known.
There are two independent correlators
\begin{eqnarray}
\label{cond}
\langle \lambda \lambda \rangle & = & - 32 \pi^2 \Lambda^3\nonumber \\
\langle \bar{\lambda} \bar{\lambda} \lambda \lambda\rangle & = &
{- 1024 i \pi^4 \over \alpha N_c^2} |\Lambda|^2 / V~~~.
\end{eqnarray}
The IR theory has a gaugino condensate $\simeq \Lambda^3$, with phase
$2\pi i\tau /N_c$ and hence there are $N_c$ degenerate
vacua associated
with the $N_c$th roots of unity. Below, therefore, $\Lambda^3$
is an $N_c$ valued constant with phases $n 2\pi i /N_c$ where $n$ runs from
$0...$ $N_c-1$.
\subsection{Soft Supersymmetry Breaking}
We may induce a bare gaugino mass through a non-zero
F-component of the bare gauge coupling,
$\tau = \tau_0 + 8 \pi i m_\lambda \theta \theta$.
In the IR theory $\tau$ enters through the spurion
$\Lambda$ which occurs linearly in the superpotential of the
theory. Thus there will be a correction to the potential
of the form:
\begin{equation}
\label{correction}
\Delta V ~=~32 \pi^2 Re ( m_\lambda \Lambda^3) - {256 \pi^4
\over \alpha N_c^2} |m_\lambda \Lambda|^2
\end{equation}
Terms with superderivatives acting on the spurion field
can also give rise to contributions to the potential but these are
higher order in an expansion in $m_\lambda/\Lambda$. The shift in
the potential energy of the $N_c$ degenerate vacua of the SYM theory
at linear order in $m_\lambda$ is known and we may determine the vacuum
structure
\begin{equation}
\label{deltaV}
\Delta V = 32 \pi^2 |m_\lambda \Lambda^3|~ \cos \left[ {2 \pi n \over N_c} +
\theta_{m_\lambda} \right]
\end{equation}
For small soft
breakings, $m_\lambda \ll \Lambda$, where the linear term dominates,
the degeneracy between the SYM vacua is broken favoring one vacuum
dependent on
the phase of the gaugino mass.
The coefficient in the energy shift
is a test of the exact superpotential in (\ref{SZ}).
We may also determine the leading shift in the gaugino condensate
\begin{equation}
\label{Stau}
\langle \lambda \lambda \rangle
~=~ - 32 \pi^2 \Lambda^3 \,
~+~ \frac{512 \pi^4}{\alpha N^2_c} m_\lambda^* |\Lambda|^2~,
\end{equation}
which depends on the unknown parameter $\alpha$. Strictly speaking
there are also divergent contributions to this quantity which are
proportional to $m_\lambda$ times the cut-off squared.
Reinserting the bare $\theta_0$ angle into the expression for the shift in
vacuum energy we find
\begin{equation}
\Delta V = 32 \pi^2 |m_\lambda \Lambda^3|~ \cos \left[ {2 \pi n \over N_c} +
\theta_{m_\lambda} + {\theta_0 \over N_c} \right]
\end{equation}
As $\theta_0$ is changed, first order phase transitions occur at
$\theta_0 = ({\rm odd}) \pi$ where two of the $N_c$ SYM vacua
interchange as the minimum of the softly broken theory.
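This rearrangement is easy to visualize numerically. A toy scan (Python; energies in units of $32 \pi^2 |m_\lambda \Lambda^3|$, with $\theta_{m_\lambda} = 0$ and $N_c = 2$ chosen for simplicity) shows the selected vacuum flipping as $\theta_0$ crosses $\pi$:
\begin{verbatim}
import numpy as np

Nc, theta_ml = 2, 0.0
for theta0 in np.linspace(0.0, 2.0 * np.pi, 9):
    # energies of the n = 0 ... Nc-1 vacua, up to the overall prefactor
    E = [np.cos(2.0 * np.pi * n / Nc + theta_ml + theta0 / Nc)
         for n in range(Nc)]
    print(round(theta0 / np.pi, 2), int(np.argmin(E)))
\end{verbatim}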
\subsection{The Lightest Bound States}
An alternative description of the low energy behaviour of SYM theory has
been presented by Veneziano and Yankielowicz \cite{VY}, which attempts to
describe the lightest bound states of the theory. The form of their
effective action can be rigorously obtained from the discussion above.
Since the source $J$ for $WW$ occurs in the same places as the
coupling $\tau$ we also know the source dependence of $Z$. If we wish we
may Legendre transform $Z[\tau,J]$ to obtain the effective potential for
the classical field
\begin{equation}
S \equiv
-\frac{1}{32\pi^2}\,\mbox{Tr}\,\langle W^2 \rangle~~.
\end{equation}
We find
\begin{eqnarray}
\label{VY}
\Gamma[\tau,S] & = & {9 \over \alpha} \left( \bar S S
\right)^{1/3}\Big|_D\\
&& ~+~
N_c \left( S - S\ln (S/ \Lambda^3) \right)\Big|_F+ \, \mbox{h.c.}~ \nonumber
\end{eqnarray}
So derived, this effective action contains no more information than
(\ref{SZ}): it is simply a classical potential whose minimum determines
the vev of $S$, and we find, by construction, Eq.~(\ref{cond}).
A stronger interpretation can, though, also be given to the VY action. If
we assert that the lightest bound states of the theory are those that
interpolate in the perturbative regime to the field $WW$, and hence share
the same symmetry properties, then those symmetries again reproduce the
VY action for those lightest fields. To obtain the physical states
one performs
an appropriate rescaling of the $S$-field
\begin{equation}
\label{Sres}
\Phi~=~ \frac{3}{\sqrt{\alpha}} \,S^{1/3}
\end{equation}
in the Lagrangian (\ref{VY})
to make the kinetic term canonical
\begin{eqnarray}
\label{VYLR}
{\cal L} &~=~& \left( \bar \Phi \Phi \right)\Big|_D~+~
\frac{\alpha^{3/2}N_c}{9}\left(\frac{1}{3}\Phi^3 \right.
\nonumber \\
&& \left. - \Phi^3
\ln{(\frac{\alpha^{1/2}}{3}\frac{\Phi}{\Lambda} )}\right)\Big|_F
~+~ \,
\mbox{h.c.}~
\end{eqnarray}
In fact, as pointed out and corrected in \cite{Shifman}, this effective
Lagrangian is not complete since it does not possess the full $\rm Z_{2N_c}$
symmetry of the quantum theory. To restore that symmetry the extra term
\begin{equation}
\Delta {\cal L} = { 2 \pi i m \over 3} \left( S - \bar{S} \right)
\end{equation}
where $m$ is an integer-valued Lagrange multiplier, must be added. For
the $n=0$ vacuum with vanishing phase this extra term vanishes
and the VY model above is recovered. We shall concentrate on that vacuum.
One must worry about possible mixing between $\Phi$ and the next
massive state with the same quantum numbers but it seems reasonable that
this state may be significantly heavier and hence may be neglected. We
shall move on and use the VY action as a description of the lightest
states to make predictions about the masses of those states. A
lattice simulation will hopefully test these predictions and shed light
on whether the action is indeed the correct description.
The straightforward evaluation of bosonic ($\lambda \lambda $)
and fermionic ($g\lambda$) excitation masses
around
the minimum from Eq.~(\ref{VYLR}) gives
\begin{equation}
\label{susymass}
m_{\lambda \lambda}~=~m_{g\lambda}~=~N_c \alpha \Lambda~.
\end{equation}
It is important to stress that these masses are not the
physical masses of the bound states. Rather, they are zero-momentum
quantities, which are related to the physical ones by wave function
factors $Z(p^2 = m_{phys}^2)$. These wave function factors result from
higher-derivative Kahler terms in ${\cal L}$, and are unknown.
A soft breaking gaugino mass may again be introduced through the
F-component of the spurion $\Lambda$.
We can calculate the shifts in the masses of the bound
states. The two scalar fields and the fermionic field are split in
mass
\begin{eqnarray}
M_{\rm fermion} & = & N_c \alpha \Lambda - {16 \pi^2 m_\lambda
\over N_c} \nonumber \\
M_{\rm scalar} & = \nonumber & N_c \alpha \Lambda - {56 \pi^2 m_\lambda
\over 3 N_c}\\
M_{\rm p-scalar} & = & N_c \alpha \Lambda - {40 \pi^2 m_\lambda
\over 3 N_c}
\end{eqnarray}
The physical masses are again related to these quantities by
unknown wave function renormalizations $Z$ which arise from Kahler terms,
$$
M_{\rm physical}
~ \equiv ~ Z ~ M ~~.
$$
Fortunately, we know that
in the SUSY limit the wavefunction factors are common
within a given multiplet. This degeneracy holds even after the vev of
the field is shifted by the soft breakings since a shift in the vev
alone (without SUSY breaking) leaves the physical masses degenerate
within a multiplet.
We also know that the {\it relative change} in these Kahler terms is
${\cal O} (F_\tau^2)$,
and hence can be ignored at leading order in
the soft breakings. Therefore, we may
still obtain a prediction for the rate of change of
the ratios of the physical masses,
\begin{eqnarray}
\label{bm}
\bar{M} (m_\lambda) & ~\equiv ~ & { Z (m_\lambda) M(m_\lambda )- Z(0) M(0)
\over Z(0) M(0) }~~,
\nonumber\\
& \simeq & {\partial M \over \partial m_\lambda}
\left[ \frac{1}{M} +
\frac{1}{Z} {\partial Z \over \partial M } \right] m_\lambda
\end{eqnarray}
near the SUSY limit.
The factor in brackets is common within a given multiplet.
Since the quantity $Z(0)$ is unknown, we can only predict
the {\it ratios} of $\bar{M}$ at the SUSY point or equivalently the
ratios
of $\partial M / \partial m_\lambda$.
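Reading off the slopes from the splittings above, the prediction is fixed by elementary arithmetic; a two-line check (Python; the slopes $\partial M / \partial m_\lambda$ are written in units of $\pi^2 / N_c$, which cancel in the ratios) gives
\begin{verbatim}
from fractions import Fraction

# dM/dm_lambda for (fermion, scalar, pseudoscalar) in units of pi^2/N_c
slopes = [Fraction(-16), Fraction(-56, 3), Fraction(-40, 3)]
print([s / slopes[0] for s in slopes])   # [1, 7/6, 5/6]
\end{verbatim}
i.e.\ the rates of change of the fermion, scalar and pseudoscalar masses are predicted in the ratio $6 : 7 : 5$.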
Finally we note, as pointed out in \cite{Shifman},
that the VY model apparently has an extra SUSY vacuum corresponding to
$\langle \Phi \rangle = 0$. At this point $\langle S \rangle$
is singular and so it is not clear how to interpret this vacuum. Shifman
and Kovner have proposed that the vacuum is real and represents some
conformal, $Z_{2N_c}$ preserving
point of the theory. It would be interesting to look for this
vacuum in lattice simulations but unfortunately as can be seen from
(\ref{deltaV}) there is no value of the soft breaking mass for which such a
vacuum would be the global minimum. This will make it difficult to
observe in lattice simulations.
\section{FIELD THEORY DUALITY AND SOFT BREAKING FROM STRING THEORY}
The most recent progress in understanding SQCD has come from string
theory. In type IIA string theory D-brane constructions can be made
that realize 4D field theories in the world volume of one of the
D-branes \cite{elitzur,witten,brand}.
The standard construction is to suspend $N_c$ D4-branes
between two NS5-branes to generate an $SU(N_c)$ gauge symmetry in the
D4-branes' world volume. $N_f$ D6-branes intersecting the D4-branes contribute
vector matter multiplets transforming under the gauge symmetry and a
gauged flavor symmetry $SU(N_f)$. For example an N=2 configuration may
be described as follows \cite{witten}.
$\left. \right.$ \hspace{-0.15cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& $\#$ & $R^4$ & $x^4$ & $x^5$ & $x^6$ & $x^7$ & $x^8$ & $x^9$ \\
\hline
NS & 2 & $-$ & $-$ & $-$ & $\bullet$ & $\bullet$ & $\bullet$ & $\bullet$ \\
\hline
D4 & $N_c$ & $-$ & $\bullet$ & $\bullet$ & $[-]$ & $\bullet$ &
$\bullet$ & $\bullet$ \\
\hline
D6 & $N_f$ & $-$ & $\bullet$ & $\bullet$ & $\bullet$ & $-$ &
$-$ & $-$ \\
\hline
\end{tabular} \vspace{0.3cm}
$R^4$ is the space $x^0-x^3$ which will correspond to the 4D space in
which the $SU(N_c)$ gauge theory will live. A dash $-$ represents a
direction along a brane's world volume while a dot $\bullet$ is
transverse. For the special case of the D4-branes' $x^6$ direction,
where a world volume is a finite interval, we use the symbol $[-]$. On
scales much greater than the distance $L_6$ between the NS5s, the fourth
spacelike direction of the D4-branes generates the coupling of the
gauge theory in an effective 3+1D theory.
A rotation of the two NS5s relative to each other breaks the
supersymmetry of the configuration further. An N=1 SQCD theory results
from the following configuration.
$\left. \right.$ \hspace{-0.15cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& $\#$ & $R^4$ & $x^4$ & $x^5$ & $x^6$ & $x^7$ & $x^8$ & $x^9$ \\
\hline
NS & 1 & $-$ & $-$ & $-$ & $\bullet$ & $\bullet$ & $\bullet$ & $\bullet$ \\
\hline
NS & 1 & $-$ & $\bullet$ & $\bullet$ & $\bullet$ & $\bullet$ & $-$ & $-$ \\
\hline
D4 & $N_c$ & $-$ & $\bullet$ & $\bullet$ & $[-]$ & $\bullet$ &
$\bullet$ & $\bullet$ \\
\hline
D6 & $N_f$ & $-$ & $\bullet$ & $\bullet$ & $\bullet$ & $-$ &
$-$ & $-$ \\
\hline
\end{tabular} \vspace{0.3cm}
This configuration, first considered in \cite{elitzur}, was used to derive
the field theory duality for $N_f > N_c$. It can be drawn pictorially as
\begin{center}
\epsfxsize 7truecm\epsfbox{loom11.ps}
\end{center}
The duality was realized by
the motion of the two NS5s through each other in the $x^6$
direction. This motion corresponds to changing the coupling constant of
the classical gauge theory. In the quantum theory where there is an IR
fixed point such a change of the UV coupling leaves the IR physics
invariant and hence it is claimed that the configurations after these
motions continue to
describe the same quantum theory. Every time an NS5 passes through a
D6-brane an extra D4-brane must be extruded between them in order to
preserve the number of matter fields in the theory on the $N_c$
D4-branes. When the two NS5s pass through each other the string theory
and the field theory pass through a strong coupling regime. There is
again a conservation rule for the number of D4-branes connecting the two
NS5s. The resulting motion corresponds to the final configuration
\begin{center}
\epsfxsize 7truecm\epsfbox{loom44.ps}
\end{center}
This configuration has an $SU(N_f-N_c)$ gauge symmetry, $N_f$ quark
fields and $N_f^2$ ``mesons'' associated with the freedom of motion of
the connections between the $N_f$ D4 branes to the left and the
D6-branes in the $x^8$, $x^9$ directions.
More recently \cite{witten} it has been realized that, for the theories
without dualities, the dynamically generated superpotential may
be derived from the curves describing the brane configurations when they
are extended to include the M-theory compactified dimension.
Our interest here is in the possibility that these brane configurations
may be able to shed light on non-supersymmetric configurations. As
pointed out in \cite{brand} a different rotation of the N=2 configuration
leads to an N=0 configuration. We will only consider the pure glue case
in the hope of identifying the resulting field theories.
$\left. \right.$ \hspace{-0.15cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& $\#$ & $R^4$ & $x^4$ & $x^5$ & $x^6$ & $x^7$ & $x^8$ & $x^9$ \\
\hline
NS & 1 & $-$ & $-$ & $-$ & $\bullet$ & $\bullet$ & $\bullet$ & $\bullet$ \\
\hline
NS & 1 & $-$ & $-$ & $\bullet$ & $\bullet$ & $\bullet$ &
$\bullet$ & $-$ \\
\hline
D4 & $N_c$ & $-$ & $\bullet$ & $\bullet$ & $[-]$ & $\bullet$ &
$\bullet$ & $\bullet$ \\
\hline
\end{tabular}\vspace{0.3cm}
This configuration describes an $SU(N_c)$ gauge theory with a real
adjoint scalar field corresponding to the freedom to separate the D4-branes
in the $x^4$ direction. Can we identify the SUSY breaking terms
introduced into the N=2 theory that leave this N=0 theory? We expect
the N=2 SUSY in 10D string theory, when broken by the string dynamics, to
appear as spontaneous SUSY breaking in the low energy field theory
description. We also expect non-renormalizable operators to be
suppressed by the Planck scale. Thus any SUSY breakings will be precisely
those of the form of soft breakings that may be introduced through the
vevs of spurion fields in the theory. The N=2 theory has a single
spurion field $\tau$ which is a member of a full N=2 spurion
multiplet. That multiplet has three real auxiliary fields that may
acquire SUSY breaking vevs, the complex F-component of the matter field
and the real D component of the gauge field. These
spurion breakings have been investigated in \cite{soft2} and give rise to the
bare UV soft breakings
\begin{eqnarray}
&{1 \over 8 \pi} Im ( F^* \psi_A^\alpha \psi_A^\alpha + F
\lambda^\alpha \lambda^\alpha + D \psi_A^\alpha\lambda^\alpha ) \\
& - {|F|^2 + D \over 4 \pi Im \tau} (Im
a^\alpha)^2 & \nonumber
\end{eqnarray}
These breakings indeed leave a massless real adjoint scalar in the
theory. We identify the three components of the spurion field with
rotations into the $x^7$, $x^8$ and $x^9$ directions.
These softly broken theories have been studied \cite{soft2}
for small soft breakings
in SU(2) gauge theory with up to three flavors and the theories have been
seen to confine and have a mass gap on the scale of the soft breaking
with a dual weakly coupled IR description.
The N=0 theories with
matter fields appear to have the motions described above that move the
theory to a dual description. If the identification of the N=0 theories
is correct, though, there are no massless adjoint fermions; had the
electric and dual theories contained massless fermions, they would not
have matched anomalies. The duality presumably continues to hold in the spirit of
\cite{soft2} as a dual Meissner description of confinement.
Although the
holomorphic properties of supersymmetry are lost in these brane
configurations it remains to be seen whether they can shed further light
on softly broken theories.
\section{Introduction}\label{sec:intro}
Multiple previous works have proposed algorithms for data repair using Denial Constraints (DCs) \cite{DiscoveringChuIP13} or subsets thereof \cite{RekatsinasCIR17,VolkovsCSM14,ChuIP13,BohannonFGJK07}.
These approaches employ algorithms that use the constraints to detect and change values in a database table.
We propose {\em a system that provides explanations for data repairs by presenting the influence of each constraint and table cell.}
An explanation for such a repair may be useful both as means of understanding the repair process and algorithm, and as a tool for debugging the quality of the constraints for the repair of this specific data.
\sysName \footnote{Please refer to the video of the system at \textit{\color{blue}\url{https://youtu.be/xPVWzHPOuAk}}} is a novel system for data repair explanations based on {\em Shapley values} \cite{shapley1953value}.
The notion of Shapley values was originally suggested in the context of Game Theory as a measure of quantifying the contribution of each player in a cooperative game. It was later adopted by the Machine Learning (ML) community as a tool for evaluating the contribution of each feature in the model \cite{LundbergL17}.
Given a repaired cell, \sysName\ computes and presents the Shapley values of the DCs and table cells that have influenced this repair. Our approach evaluates the contribution of the input directly rather than the contribution of hidden features which are used by a specific algorithm. {\em This allows our solution to treat the repair algorithm as a black box and only query it to compute the Shapley values of DCs and cells.}
Explanations for the influence of DCs on the repair may assist users in correcting them and adapting them to the specific data and repair algorithm, while explanations about the influence of data cells can help in understanding the repair algorithm itself and changing specific cells to make the repair more accurate.
\begin{figure}[]
\begin{scriptsize}
\begin{lstlisting}[mathescape=true, basicstyle=\linespread{1.5}]
$\frac{1}{6}$: (C1) $\forall t_1, t_2.~ \neg (t_1[Team] = t_2[Team] \land t_1[City] \neq t_2[City])$
$\frac{1}{6}$: (C2) $\forall t_1, t_2.~ \neg (t_1[City] = t_2[City] \land t_1[Country] \neq t_2[Country])$
$\frac{2}{3}$: (C3) $\forall t_1, t_2.~ \neg (t_1[League] = t_2[League] \land$
$t_1[Country] \neq t_2[Country])$
0: (C4) $\forall t_1, t_2.~ \neg (t_1[Team] \neq t_2[Team] \land t_1[Year] = t_2[Year] \land$
$t_1[League] = t_2[League] \land t_1[Place] = t_2[Place])$
\end{lstlisting}
\end{scriptsize}
\caption{Denial constraints with their Shapley value}\label{fig:dcs}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{photos/laliga/dirtyTable.JPG}
\caption{Dirty table (red cells are dirty)}
\label{fig:dirtyTable}
\end{subfigure}\hfill%
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{photos/laliga/cleanTable.JPG}
\caption{Clean table (blue cells have been repaired)}
\label{fig:cleanTable}
\end{subfigure}
\caption{Input dirty table and output clean table for La Liga standings}
\label{fig:running}
\end{figure*}
\begin{algorithm}[t]
\begin{small}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\LinesNumbered
\Input{Set of constraints $\mathcal{C}$, a dirty database table $T^d$}
\BlankLine
\begin{enumerate}
\item
If tuple $t$ has a contradiction according to $C1$ then the $City$ attribute will be modified to the most common one, i.e., $\argmax_c \mathbb{P} \left[ City = c \right]$.
\item
If tuple $t$ has a contradiction according to $C2$ then the $Country$ attribute will be modified to the most probable one given $t \left[ City \right]$,
i.e., $\argmax_c \mathbb{P} \left[ Country = c \mid City = t \left[ City \right] \right]$.
\item
If tuple $t$ has a contradiction according to $C3$ then the $Country$ attribute will be modified to the most common one, i.e., $\argmax_c \mathbb{P} \left[ Country = c \right]$.
\item
If tuple $t$ has a contradiction according to $C4$ then the $Place$ attribute will be modified to the most probable one given $t \left[ Team \right]$, i.e., $\argmax_p \mathbb{P} \left[ Place = p \mid Team = t \left[ Team \right] \right]$.
\end{enumerate}
\caption{\small Simple Repair Algorithm}
\label{algo:repair}
\end{small}
\end{algorithm} \DecMargin{1em}
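A minimal Python sketch of Algorithm~\ref{algo:repair} could look as follows (illustrative only: attribute names follow Figure~\ref{fig:running}, the \texttt{mode()} calls stand in for the $\argmax$ over the empirical distribution, and this is not the {\tt HoloClean}\ repair model):
\begin{lstlisting}[language=Python]
import pandas as pd

def repair(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    for i in df.index:
        t = df.loc[i]
        # C1: tuples sharing Team must share City
        if ((df.Team == t.Team) & (df.City != t.City)).any():
            df.at[i, "City"] = df.City.mode()[0]
        # C2: tuples sharing City must share Country
        city = df.at[i, "City"]
        if ((df.City == city) & (df.Country != df.at[i, "Country"])).any():
            df.at[i, "Country"] = df[df.City == city].Country.mode()[0]
        # C3: tuples sharing League must share Country
        if ((df.League == t.League) & (df.Country != df.at[i, "Country"])).any():
            df.at[i, "Country"] = df.Country.mode()[0]
        # C4: distinct Teams cannot tie on (Year, League, Place)
        clash = ((df.Team != t.Team) & (df.Year == t.Year) &
                 (df.League == t.League) & (df.Place == t.Place))
        if clash.any():
            df.at[i, "Place"] = df[df.Team == t.Team].Place.mode()[0]
    return df
\end{lstlisting}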
\vspace{-1mm}
\begin{example}
Consider the table in Figure \ref{fig:dirtyTable} and the DCs in Figure \ref{fig:dcs}, with the Shapley value of each DC shown to its left.
C1 says that two tuples that share a team value must be in the same city, C2 says that if a pair of tuples share a city, they must have the same country, C3 says that two tuples that have the same league must have the same country, and C4 says that it is impossible for two different teams of the same league to finish in the same place in the same year.
Consider the cell $Country$ in the fifth row, denoted by $t_5[Country]$.
For simplicity, assume that we have Algorithm \ref{algo:repair} as a na\"ive repair algorithm\footnote{In practice, the repair algorithm may be more sophisticated; our solution is agnostic to the complexity of the repair algorithm.}.
\sysName\ computes the contribution of each DC and ranks them accordingly, where C3 is the most influential DC. It contributed the most as the $League$ value ``La Liga'' appears in 3 other tuples coupled with the value ``Spain'' in the attribute $Country$. C1 and C2 each contributed equally as C1 caused the change of ``Capital'' to ``Madrid'' first and then C2 caused the change of the value in the $Country$ cell. C4 is not involved in the repair so its contribution is $0$.
Next, we measure the influence of different data cells on this repair.
Given Algorithm \ref{algo:repair}, observe that the value of $t_1[Place]$ has no influence on the modification of $t_5[Country]$ -- as $t_1$ has no contradictions with $t_5$, and the attribute $Place$ does not affect $Country$ in Algorithm \ref{algo:repair}. However, how can we determine if $t_5[League]$ was more or less influential on the repair compared to $t_6[City]$?
Intuitively, $t_5[League]$ is more influential than $t_6[City]$.
This is because if $t_5[League]$ had a different value, then tuple $t_5$ would not have any contradictions according to $C3$. While if $t_6[City]$ had a different value, then according to $C1$ there would have been a contradiction between $t_3$ and $t_6$ (as both tuples would have $Team$ value of ``Real Madrid", and an inconsistent $City$) which would have been resolved by Algorithm \ref{algo:repair}.
As a result \sysName\ will assign higher contribution to $t_5[League]$ compared to $t_6[City]$.
\end{example}
\sysName\ takes as input the algorithm itself and its input which is a set of DCs and a dirty database table. Another input to the system is a specific table cell of interest whose repair requires explaining.
The system then ranks the influencing DCs and table cells based on their Shapley value for this cell of interest.
Generally, computing the Shapley value takes time exponential in the number of DCs/table cells, and thus \sysName\ employs different algorithms to compute the Shapley value for DCs and for table cells.
With DCs, the n\"aive approach is feasible as the number of DCs is usually small.
Conversely, the number of cells in a table can be very large, so \sysName\ uses a sampling algorithm based on \cite{StrumbeljK14}.
To compute the Shapley values, the system repeatedly changes the input of the repair algorithm and queries it, so it does not rely on the components or approach of a specific algorithm.
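As an illustration of this black-box interface, the following sketch (Python; \texttt{repaired} plays the role of the indicator $Alg|_{t^d[A]}$, and the toy oracle encodes the running example, where the cell is repaired iff $\{C1, C2\} \subseteq S$ or $C3 \in S$) enumerates all constraint subsets exactly:
\begin{lstlisting}[language=Python]
from itertools import combinations
from math import factorial

def shapley_dc(dcs, repaired, target):
    others = [c for c in dcs if c != target]
    n, value = len(dcs), 0.0
    for k in range(len(others) + 1):
        w = factorial(k) * factorial(n - k - 1) / factorial(n)
        for S in combinations(others, k):
            value += w * (repaired(set(S) | {target}) - repaired(set(S)))
    return value

oracle = lambda S: int({"C1", "C2"} <= S or "C3" in S)
for c in ["C1", "C2", "C3", "C4"]:
    print(c, shapley_dc(["C1", "C2", "C3", "C4"], oracle, c))
# prints 1/6, 1/6, 2/3 and 0, matching Figure 1
\end{lstlisting}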
\section{Demo Scenario}\label{sec:scenario}
Our demonstration will show that explaining repairs through Shapley values assists in understanding the repair process and debugging it.
We will use a soccer database, scraped from Wikipedia, similar to the one in Figure \ref{fig:dirtyTable}, and errors will be manually added to the table.
We will start with an initial set of DCs.
To get the repair, we will employ {\tt HoloClean}, which will output a clean table. Then, we will indicate a repaired cell of interest and show the most influential table cells and DCs involved in this repair, ranked according to their Shapley value.
We will show how removing or changing the highest ranked DCs improves the repair of the specified table cell.
We will use a similar scenario for table cells, where the DCs will be appropriate but some of the cells will cause a specific cell to be repaired in the wrong manner. After showing the obtained repair, we will invoke \sysName\ to rank the influencing table cells.
We will then allow users to change values in the initial table and the DCs and choose different cells of interest to them.
Users could then use \sysName\ to compute the Shapley value of the table cells and DCs that influenced the repair of their chosen cell and explore the system.
\section{System Overview}\label{sec:system}
\sysName\ is implemented in Python 3.6 and an underlying database engine in PostgreSQL 10.6. Its web-based GUI was built using JavaScript, CSS and HTML.
The three screens of the system are shown in Figure \ref{fig:sys_ui} and the general architecture of \sysName\ is shown in Figure \ref{fig:framework}.
Users first input a database table and a set of DCs to the {\tt HoloClean}\ system (Figure \ref{fig:sys_input} and the arrow 1 in Figure \ref{fig:framework}). {\tt HoloClean}\ \cite{RekatsinasCIR17} is a holistic data repair system, that supports DCs, among other forms of constraints, and repairs the input table based on a probabilistic model involving machine learning techniques.
After clicking the ``Repair'' button, users are presented with the repaired table, where repaired cells are highlighted (Figure \ref{fig:sys_repair}). Furthermore, when hovering over a repaired cell, the system shows its value before the repair.
Now, \sysName\ allows users to choose any cell, $t^d[A]$, from the original table, $T^d$, whose value was changed, and mark it as a cell of interest and click the ``Explain'' button.
The system then computes the Shapley values w.r.t. the chosen options by querying {\tt HoloClean}\ as part of the computation.
Once done, \sysName\ displays the DCs and table cells ranked from highest to lowest in terms of their Shapley value w.r.t. $t^d[A]$, where influencing DCs and cells are highlighted green and the darker the color, the more influencing the DC/cell is (Figure \ref{fig:sys_exp}). Again, when hovering over the DCs/cells users can also see their Shapley values.
The user can continue the process by changing the DCs or values in $T^d$, and inputting it again to {\tt HoloClean}\ to infer another repair, thus improving the repair iteratively.
\section{Technical Details}\label{sec:tech}
We give a short overview of the approach underlying \sysName.
\subsection{Database Repair}
$T$ will denote a database table with schema $(A_1, \ldots, A_m)$ where $A_i$ is the $i$th attribute of $T$. For a tuple $t\in T$, the notation $t[A_i] = v$ means that $t$ has the value $v$ in attribute $A_i$. We denote by $T^d$ and $T^c$ the database table prior to the repair and after it respectively. Extending this, $t^d[A]$ and $t^c[A]$ will also be used to denote a dirty and clean cell, respectively.
\begin{example}
Consider the dirty and clean tables shown in Figures \ref{fig:dirtyTable}, \ref{fig:cleanTable}, referred to as $T^c$ and $T^d$. If we consider $t_5$ in both tables, then the attribute $t_5^d[Country]$ in $T^d$ is changed in $T^c$ from the value ``Espa\~na'' to ``Spain''.
\end{example}
We denote the repair algorithm by $Alg$ and its input by (1) $\mathcal{C}$, a set of DCs and (2) $T^d$, a dirty table. Also, denote $Alg(\mathcal{C}, T^d) = T^c$ as the output table of $Alg$.
For our purposes, we will refer to $Alg$ as a binary function as follows.
Given a table cell $t^d[A]\in T^d$, the repair algorithm is a function $Alg|_{t^d[A]}:(\mathcal{C}, T^d) \to \{0,1\}$, where $1$ signals that the value in $t^d[A]$ is repaired to the value in $t^c[A]$, and $0$ otherwise.
\begin{example}
Consider the cell $t_5[City]$ in Figures \ref{fig:dirtyTable} and \ref{fig:cleanTable}. Without C1 it would not have changed from ``Capital'' to ``Madrid'', therefore: $Alg|_{t_5[C]}(\{C1, C2, C3\}, T^d) = 1$ while $Alg|_{t_5[C]}(\{C2, C3\}, T^d) = 0$.
\end{example}
\subsection{Shapley Value}
In Cooperative Game Theory, Shapley value \cite{shapley1953value} is a way to distribute the worth of all players, assuming they cooperate.
Let $N$ be a finite set of players and $v : 2^N \to \mathbb{R}$, $v(\emptyset) = 0$ be a function (called a characteristic function). $v$ maps sets of players to the joint worth they generate according to the game. The Shapley value of a player $a$ is then defined as:
\vspace{-1.5mm}
\begin{scriptsize}
\begin{equation*}
\begin{split}
Shap(N,v,a) = \sum_{S\subseteq N\setminus \{a\}} \frac{|S|!(|N|-|S|-1)!}{|N|!}\cdot (v(S\cup \{a\}) - v(S))
\end{split}
\end{equation*}
\end{scriptsize}
In our scenario, the model is a black box so the Shapley values are computed on the input itself, i.e., the constraints and the table.
For constraints, we adapt the definition so that it reflects the contribution of a specific constraint to the repair of a cell, as follows.
\vspace{-1.5mm}
\begin{scriptsize}
\begin{equation*}
\begin{split}
Shap(\mathcal{C},Alg|_{t^d[A]},C) = \sum_{\makebox[0pt]{$S\subseteq \mathcal{C}\setminus \{C\}$}} \frac{|S|!(|\mathcal{C}|-|S|-1)!}{|\mathcal{C}|!}\cdot (Alg|_{t^d[A]}(S \cup \{C\},T^d) -\\ Alg|_{t^d[A]}(S,T^d))
\end{split}
\end{equation*}
\end{scriptsize}
where $t^d[A]$ is a specific cell of interest and $C$ is a constraint whose contribution we want to determine. The ``set of players'' is the set of DCs, while the table $T^d$ remains constant.
\begin{example}\label{ex:ic}
Recall the tables in Figure \ref{fig:running} with the DCs in Figure \ref{fig:dcs} (Shapley values are on the left) and Algorithm \ref{algo:repair}.
We now compute the contribution of each DC to the repair of the cell $t_5[Country]$, denoted $t_5[C]$.
Algorithm \ref{algo:repair} will repair $t_5[C]$ only if the available DCs include $\{C1,C2\}$ or $\{C3\}$.
According to the definition, we can compute the contribution of $C1$ as follows: there are 8 subsets of $\{C2,C3,C4\}$, and only for $S = \{C2\}$ and $S = \{C2, C4\}$ do we have $Alg|_{t_5[C]}(S\cup\{C1\},T^d) = 1$ and $Alg|_{t_5[C]}(S,T^d) = 0$, so $Shap(\mathcal{C}, Alg|_{t_5[C]}, C1) = \frac{2}{12}$. The same computation applies to $C2$.
For $C3$ we have 6 out of 8 subsets $S$ of $\{C1,C2,C4\}$ that result in $Alg|_{t_5[C]}(S\cup\{C3\},T^d) = 1$ and $Alg|_{t_5[C]}(S,T^d) = 0$, including $S = \emptyset$. Thus, $Shap(\mathcal{C}, Alg|_{t_5[C]}, C3) = \frac{2}{3}$.
As for $C4$, its presence or absence does not change the value of $t_5[C]$, so $Shap(\mathcal{C}, Alg|_{t_5[C]}, C4) = 0$.
Let us explain the intuition for the value of $C3$ being double that of the pair $\{C1, C2\}$. Ignore for now $C4$ since its contribution is $0$. There are $5$ subsets of the DCs $\{C1, C2, C3\}$ for which we repair $t_5[C]$. These are $\{C3\}$, $\{C1, C2\}$, $\{C1, C3\}$, $\{C2, C3\}$, and $\{C1, C2, C3\}$.
Four of these sets contain $C3$ while only two contain the pair $\{C1,C2\}$ (for the subsets where one of these is present without its partner, the repair is due to $C3$),
thus, the contribution of $C1$ and $C2$, as a pair, is half that of $C3$.
\end{example}
Similarly, we adjust the definition for the Shapley value of a cell.
Given a repair of a cell $t^d[A]$, we define the Shapley value of a cell $t_{i}[B]$, intuitively its contribution to the repair of $t^d[A]$, as follows.
\vspace{-1.5mm}
\begin{scriptsize}
\begin{equation*}
\begin{split}
Shap(T^d,Alg|_{t^d[A]},t_{i}[B]) = \sum_{\makebox[0pt]{$S\subseteq T^d\setminus \{t_{i}[B]\}$}} \frac{|S|!(|T^d|-|S|-1)!}{|T^d|!}\cdot \\
{} {} (Alg|_{t^d[A]}(\mathcal{C},S\cup \{t_{i}[B]\}) - Alg|_{t^d[A]}(\mathcal{C},S))
\end{split}
\end{equation*}
\end{scriptsize}
where $S\subseteq T^d$ means $\forall t_j[C] \in T^d\setminus S.~ t_j[C] = null$.
Here, the ``set of players'' is the set of cells in the table $T^d$, while the set of constraints remains constant.
\begin{example}
Reconsider our example with the DCs from Figure \ref{fig:dcs}, Algorithm \ref{algo:repair}, and the tables in Figure \ref{fig:running}. Consider the cell $t_5[Country]$ whose value is changed from ``Espa\~na'' to ``Spain''.
Among all the cells, $t_5[League]$ has the highest Shapley value; next we explain why. Notice that, based on C3, the inclusion of $t_5[League]$ in any coalition that contains at least one of the pairs $\{t_i[Country], t_i[League]\}$ for some $i\in \{1, 2, 3, 6\}$ would result in the repair of $t_5[Country]$ to ``Spain''. Observe that there are $175 \cdot 2^{27}$ such coalitions (out of the $8$ relevant cells there are $2^8-3^4=175$ ways to choose a coalition containing at least one complete pair, and excluding those cells and $t_5[League]$ there are $27$ remaining cells that can each be included or excluded from the coalition).
Next, we will estimate the number of coalitions that are required for the fix based on C1 and C2. According to these DCs, a coalition that contains $\{t_3[Team], t_3[City], t_3[Country], t_5[Team]\}$ is required. There are $2^{32}$ such coalitions.
Since $175 \cdot 2^{27}$ is more than five times larger than $2^{32}$ we conclude that $t_5[League]$ has the highest influence on the repair of $t_5[Country]$ from ``Espa\~na'' to ``Spain''.
For simplicity we overlooked the coalition sizes, though they too play a role in the evaluation of Shapley values.
\end{example}
\subsection{Computing Shapley Values}
Shapley values can be computed directly from the definition, but the computation time is, in the worst case, exponential in the number of players.
For constraints, we can use the formula directly as their number is typically small.
However, the number of table cells can be huge.
Therefore, we use a novel algorithm based on probabilistic sampling \cite{StrumbeljK14} to approximate the contribution of a table cell.
\begin{example}
Reconsider the table in Figure \ref{fig:dirtyTable}. Suppose we are interested in the effect of the cell $t_5[City]$ on the repair of the cell $t_5[Country]$. We initialize a variable $\varphi = 0$.
We vectorize the table to get the vector $x_T = (t_1[Team], t_1[City], \ldots, t_2[Team], \ldots, t_6[Place])$.
To sample a cell coalition, we take a random permutation of $x_T$; the coalition is the set of all cells that precede $t_5[City]$ in the permutation.
Values of cells that are not part of the coalition will be replaced with a sample value from their column distribution.
Once the cell coalition is formed, we generate two instances of the vectorized table: one with the original value of $t_5[City]$, and one where the value of $t_5[City]$ is replaced with a random value.
We then compute the difference in the result of $Alg|_{t_5[Country]}$ for these two instances and add it to $\varphi$. We repeat this $m$ times and output $\frac{\varphi}{m}$.
\end{example}
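The sampling procedure of the example can be summarized by the following Python sketch (ours, simplified; the callables {\tt column\_sampler} and {\tt run\_repair} are assumptions, the latter standing in for the call to {\tt HoloClean}).
\begin{verbatim}
import random

def approx_cell_shapley(table, target_cell, column_sampler, run_repair,
                        m=1000):
    # Monte Carlo approximation of the contribution of target_cell to
    # the repair of a fixed cell of interest, via random permutations.
    # table: dict from cell ids, e.g. ("t5", "City"), to dirty values.
    phi = 0.0
    cells = [c for c in table if c != target_cell]
    for _ in range(m):
        perm = cells[:]
        random.shuffle(perm)
        # target_cell's position in a random permutation is uniform, so
        # a uniform cut gives the cells preceding it (the coalition).
        cut = random.randrange(len(perm) + 1)
        coalition = set(perm[:cut])
        base = {c: (table[c] if c in coalition else column_sampler(c))
                for c in cells}
        with_cell = dict(base)
        with_cell[target_cell] = table[target_cell]
        without_cell = dict(base)
        without_cell[target_cell] = column_sampler(target_cell)
        phi += run_repair(with_cell) - run_repair(without_cell)
    return phi / m
\end{verbatim}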
There are many interesting and widely used estimators of a functional with
finite semi-parametric variance bound that depend on the estimation, in a
first step, of nuisance functions, such as conditional expectations or
densities. Examples include estimators of the mean with data missing at
random, the average treatment effect, the expected conditional covariance,
partially linear models, and weighted average derivatives. Because the
nuisance functions can often be high dimensional it is desirable to minimize
the impact of estimating these functions. By using cross-fitting (i.e. sample
splitting) to estimate the nuisance functions we obtain novel estimators whose
second order remainders converge to zero as fast as known possible. In
particular, such estimators are often root-n consistent under minimal
smoothness conditions. Furthermore, such estimators may have higher order mean
square error that converges to zero as fast as known possible.
Bias reduction is key to constructing semiparametric estimators with fast
remainder rates. The rates at which the variance of remainders goes to zero
are quite similar for different semiparametric estimators but the bias rates
differ greatly. We use cross-fitting for bias reduction. We show how fast
remainder rates can be attained by using different parts of an i.i.d. sample
to estimate different components of an estimator.
In this paper we consider regression spline estimation of average linear
functionals of conditional expectations with a finite semiparametric variance
bound, as we have been able to obtain general, precise results for functionals
in this class. The class includes the five examples mentioned above.
We define a cross-fit (CF) plug-in estimator to be one where we estimate
the functional by simply replacing the unknown conditional expectation by a
nonparametric estimator from a separate part of the sample. Cross-fitting
eliminates an "own observation" bias term, thereby decreasing the size of the
remainder. Functionals in our class have doubly robust influence functions
that depend on two unknown functions. This implies there exists an estimator
depending on both unknown functions that has exact bias zero if the unknown
functions are replaced by fixed functions, at least one of which is equal to
the truth. Here we use double cross-fitting where the two unknown functions
are themselves estimated from separate subsamples, so that the final estimator
depends on three separate subsamples. Surprisingly, single cross fitting in
which both unknown functions are estimated from the same subsample has a
remainder that can converge even slower than CF plug-in estimators. In
contrast, doubly robust estimators with double cross fitting improve on
cross-fit plug-in estimators in the sense that remainder terms can converge at
faster rates. We also show how multiple cross-fitting could be used to reduce
bias for any semiparametric estimator that is a polynomial in first step
spline estimators of unknown functions.
We construct cross-fit (CF) plug-in and doubly cross-fit doubly robust (DCDR)
estimators that are semiparametrically efficient under minimal conditions when
the nuisance functions are in a Holder class of order less than or equal to
one. When a nuisance function is Holder of order exceeding one, we propose
DCDR estimators that have remainders that converge no slower and often faster
than the CF plug-in estimator. In the special case of the expected conditional
covariance functional, the DCDR estimator is always semiparametric efficient
under minimal conditions. For other functionals in our class the CF plug-in
and DCDR estimator are semiparametric efficient under minimal conditions,
provided the conditional expectation is Holder of order greater than or equal
to one-half the regressor dimension; furthermore, in this case, the remainder
goes to zero as fast as known possible for both CF plug-in and DCDR
estimators. When the conditional expectation is Holder of order less than or
equal to one-half the regressor dimension but greater than or equal to one,
the remainder for the DCDR has a remainder that converges faster than the CF
plug-in estimator.
In the case where the conditional expectation is Holder of order no less than
one but less than one-half the regressor dimension, we show semiparametric
efficiency under minimal conditions for the expected conditional covariance,
but not for other functionals. The higher order influence function (HOIF)
estimators of Robins et al. (2008, 2017) and Mukherjee, Newey, and Robins
(2017) will be semiparametric efficient under minimal conditions for these
other functionals, including the mean with data missing at random and the
average treatment effect.
CF plug-in estimators have been considered by Bickel (1982) in the context of
adaptive semiparametric efficient estimation, Powell, Stock, and Stoker (1989)
for density weighted average derivatives, and by many others. Kernel and
series CF plug-in estimators of the integrated squared density and certain
other functionals of a density have been shown to be semiparametric efficient
under minimal conditions by Bickel and Ritov (1988), Laurent (1996), Newey,
Hsieh, and Robins (2004), and Gine and Nickl (2008). Our DCDR estimator
appears to be novel as does the fact that a CF plug-in estimator can be
semiparametric efficient under minimal conditions. Ayyagari (2010), Robins et
al. (2013), Kandasamy et. al. (2015), Firpo and Rothe (2016), and Chernozhukov
et al.(2017) have considered doubly robust estimators that eliminate own
observation terms. Double cross-fitting in double robust estimation appears
not to have been analyzed before.
Our results for splines make use of the Rudelson (1999) law of large numbers
for matrices similarly to Belloni et al.(2015). The results for the CF plug-in
estimator for general splines extend those of Ichimura and Newey (2017) to
sample averages of functionals. The double robustness of the influence
function for the functionals we consider is shown in Chernozhukov et
al.(2016), where the doubly robust estimators of Scharfstein, Rotnitzky, and
Robins (1999), Robins, Rotnitzky, and van der Laan (2000), Robins et. al.
(2008), and Firpo and Rothe (2016) are extended to a wide class of average
linear functionals of expectations.
The DCDR estimator for the mean with missing data and average treatment effect
uses a spline approximation to the reciprocal of the propensity score rather
than the reciprocal of a propensity score estimator. The reciprocal of a
propensity score estimator has been used in much of the previous literature on
plug in and doubly robust estimation, including Robins and Rotnitzky (1995),
Rotnitzky and Robins (1995), Hahn (1998), and Hirano, Imbens, and Ridder
(2003). Estimators based on approximating the reciprocal of the propensity
score have been considered by Robins et al. (2007), Athey, Imbens, and Wager
(2017), and recently in independent work by Hirschberg and Wager (2017).
Other approaches to bias reduction for semiparametric estimators have been
proposed. Robins et al.(2008, 2017) and Mukherjee, Newey, and Robins (2017)
develop higher order influence function (HOIF) estimators with smaller bias.
In Section 2 we will discuss the relationship of this paper to HOIF. Cattaneo
and Jansson (2017) propose promising bootstrap confidence intervals for
plug-in kernel estimators that include bias corrections. Also, Cattaneo,
Jansson, and Ma (2017) show that the jackknife can be used to reduce bias of
plug-in series estimators. For the class of functionals in this paper
cross-fitting removes bias so that there is no need for bootstrap or jackknife
bias corrections in order to attain the fastest remainder rates.
In Section 2 we will describe the cross-fitting approach to bias reduction and
show how it relates to HOIF. Section 3 describes the linear functionals and
regression spline estimators we consider. Sections 4, 5, and 6 give results
for the CF plug-in estimator, the DCDR expected conditional covariance
estimator, and DCDR\ estimators of other linear functionals, respectively.
Before explaining the results of this paper it is helpful to be more specific
about our goal. We will consider i.i.d. data $z_{1},...,z_{n}$. We are
interested in an asymptotically linear semiparametric estimator $\hat{\beta}$
satisfying
\begin{equation}
\sqrt{n}\left( \hat{\beta}-\beta_{0}\right) =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi\left( z_{i}\right) +O_{p}\left( \Delta_{n}\right) ,\Delta_{n}\longrightarrow0, \label{exp}
\end{equation}
where $\psi\left( z\right) $ is the influence function of $\hat{\beta}$ and
$\Delta_{n}$ characterizes the size of the remainder. Our goal is to find
estimators where $\Delta_{n}$ converges to zero at the fastest known rate.
For the integrated squared density, Bickel and Ritov (1988) gave a kernel
based estimator where the rate for $\Delta_{n}$ is fast enough that
$\hat{\beta}$ is semiparametric efficient under minimal conditions.
To motivate our candidate for the optimal rate the remainder can converge to
zero for series estimators of an average linear functional of a conditional
expectation with positive information bound, we consider the series estimator
of the coefficients of a partially linear regression in Donald and Newey
(1994). The model there is $E[y_{i}|a_{i},x_{i}]=a_{i}^{T}\beta_{0}+\lambda_{0}(x_{i})$ where $\lambda_{0}(x_{i})$ is an unknown function of an
$r\times1$ vector $x_{i}$. Consider the estimator $\hat{\beta}$ obtained from
regressing $y_{i}$ on $a_{i}$ and a $\ K\times1$ vector $p(x_{i})$ of power
series or regression splines in an i.i.d. sample of size $n$. Assume that the
functions $\lambda_{0}(x)$ and $\alpha_{0}(x)=E[a_{i}|x_{i}=x]$ are each
members of a Holder class of order $s_{\lambda}$ and $s_{\alpha}$
respectively. Define
\[
\Delta_{n}^{\ast}=\sqrt{n}K^{-(s_{\gamma}+s_{\alpha})/r}+K^{-s_{\gamma}/r}+K^{-s_{\alpha}/r}+\sqrt{\frac{K}{n}}.
\]
Donald and Newey (1994) showed that under regularity conditions, including
$K/n\longrightarrow0$, equation (\ref{exp}) is satisfied with $\Delta
_{n}=\Delta_{n}^{\ast}.$ Here $\sqrt{n}K^{-(s_{\gamma}+s_{\alpha})/r}$ gives
the rate at which the bias of $\sqrt{n}(\hat{\beta}-\beta_{0})$ goes to zero.
Also, $K^{-s_{\gamma}/r}$ and $K^{-s_{\alpha}/r}$ are stochastic equicontinuity bias terms, and $\sqrt{K/n}$ accounts for stochastic equicontinuity and degenerate U-statistic variance terms. Furthermore, there
exists $K=K_{n}$ satisfying $K_{n}/n\longrightarrow0$ such that $\Delta
_{n}^{\ast}\longrightarrow0$ if and only if $s_{\gamma}+s_{\alpha}>r/2$.
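To see why, note (a sketch): balancing the first and last terms of $\Delta_{n}^{\ast}$ by setting $\sqrt{n}K^{-(s_{\gamma}+s_{\alpha})/r}=\sqrt{K/n}$ gives $K_{n}=n^{2r/[r+2(s_{\gamma}+s_{\alpha})]}$, for which $\sqrt{K_{n}/n}=n^{-[2(s_{\gamma}+s_{\alpha})-r]/[2r+4(s_{\gamma}+s_{\alpha})]}$, and both $K_{n}/n\longrightarrow0$ and $\Delta_{n}^{\ast}\longrightarrow0$ hold exactly when $s_{\gamma}+s_{\alpha}>r/2$.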
However the Donald and Newey (1994) result used the fact that the partially
linear model implies $y_{i}-a_{i}^{T}\beta_{0}$ is mean independent of $a_{i}$
given $x_{i}$ and thus is not a locally nonparametric model. A model is said
to be locally nonparametric if, at each law $P$ in the model, the tangent
space is all of $L_{2}\left( P\right) $. Henceforth in this paper, we shall
always assume a locally nonparametric model.
Robins et al. (2009) showed that the condition $s_{\gamma}+s_{\alpha}>r/2$ is
necessary and sufficient for the existence of a semiparametric efficient
estimator of
\[
\xi_{0}=E\left[ cov\left( a_{i},y_{i}|x_{i}\right) \right] /E\left[
var(a_{i}|x_{i})\right] ,
\]
Note $\xi_{0}$ is the probability limit of the Donald and Newey (1994)
estimator regardless of whether the partially linear model holds. That is,
$\xi_{0}$ is the coefficient $b$ in the population linear projection of
$y_{i}$ on all functions of the form $a_{i}b+\lambda(x_{i})$. Robins et al.
(2008) proved sufficiency using a higher order influence function estimator of
$\xi_{0}$, which is a U-statistic whose order increases as $\ln\left(
n\right) .$ In contrast, the aforementioned estimator of Donald and Newey
(1994), although much simpler, is not semiparametric efficient for $\xi_{0}$
in a locally nonparametric model under the minimal condition $s_{\gamma
}+s_{\alpha}>r/2.$ The current paper was thus motivated by the question
whether one could construct a simple efficient estimator of $\xi_{0}$ whose
remainder $\Delta_{n}$ will go to zero as fast as $\Delta_{n}^{\ast},$ the
fastest rate known to be possible. In summary, our goal is to construct
estimators that are much simpler than the HOIF estimators and yet satisfy
equation (\ref{exp}) with $\Delta_{n}=\Delta_{n}^{\ast}.$
\section{Cross-Fitting and Fast Remainder Rates}
To explain how cross-fitting can help achieve fast remainder rates we consider
estimation of the expected conditional covariance
\[
\beta_{0}=E[Cov(a_{i},y_{i}|x_{i})]=E[a_{i}\left\{ y_{i}-\gamma_{0}(x_{i})\right\} ],
\]
where $\gamma_{0}(x_{i})=E[y_{i}|x_{i}]$. This object is useful in the
estimation of weighted average treatment effects as further explained below.
We assume that the functions $\gamma_{0}(x)$ and $\alpha_{0}(x)=E[a_{i}|x_{i}=x]$ are each members of a Holder class of order $s_{\gamma}$ and
$s_{\alpha}$ respectively.
One way to construct an estimator of $\beta_{0}$ is the \textquotedblleft
plug-in\textquotedblright\ method where a nonparametric estimator $\hat
{\gamma}$ is substituted for $\gamma_{0}$ and a sample average for the
expectation to form
\[
\bar{\beta}=\frac{1}{n}\sum_{i=1}^{n}a_{i}\{y_{i}-\hat{\gamma}(x_{i})\}.
\]
This estimator generally suffers from an "own observation" bias that is of
order $K/\sqrt{n}$ when $\hat{\gamma}$ is a spline regression estimator, which
converges to zero slower than $\Delta_{n}^{\ast}$. This bias can be eliminated
by replacing $\hat{\gamma}(x)$ with an estimator $\hat{\gamma}_{-i}(x)$ that
does not use $z_{i}$ in its construction. The resulting estimator of
$\beta_{0}$ is
\[
\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}a_{i}\{y_{i}-\hat{\gamma}_{-i}(x_{i})\}.
\]
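For concreteness, here is a minimal Python sketch (ours, for illustration only) of this cross-fit plug-in estimator; {\tt fit\_regression} is a placeholder for any nonparametric regression, e.g. a spline regression.
\begin{verbatim}
import numpy as np

def cf_plugin_cond_cov(a, y, x, fit_regression, n_folds=2, seed=0):
    # Cross-fit plug-in estimator of E[Cov(a, y | x)]: gamma-hat for
    # each fold is fit on that fold's complement.
    n = len(y)
    folds = np.random.default_rng(seed).permutation(n) % n_folds
    beta = 0.0
    for ell in range(n_folds):
        eval_i = folds == ell
        gamma_hat = fit_regression(x[~eval_i], y[~eval_i])
        beta += np.sum(a[eval_i] * (y[eval_i] - gamma_hat(x[eval_i])))
    return beta / n
\end{verbatim}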
This estimator is a cross-fit (CF) plug-in estimator in the sense that
$\hat{\gamma}_{-i}$ uses a subsample that does not include $i$. The
cross-fitting eliminates the own observation bias. The remainder rate
$\Delta_{n}$ for $\hat{\beta}$ will often be faster than for $\bar{\beta}$,
sometimes as fast as $\Delta_{n}^{\ast}$ as explained below. This approach to
eliminating own observation bias when the first step is a density estimator
has been used by Bickel (1982), Bickel and Ritov (1988), Powell, Stock, and
Stoker (1989), Laurent (1996), and others. Here we obtain the novel result
that, for a spline regression first step, a CF plug-in estimator can have the
fastest rate $\Delta_{n}^{\ast}$ even when the usual plug-in estimator does not.
Doubly robust estimators have another source of bias that can also be
eliminated by double cross-fitting. To explain we consider a single cross-fit
doubly robust estimator of the expected conditional covariance. Let
$\hat{\gamma}_{-i}(x)$ and $\hat{\alpha}_{-i}(x)$ be nonparametric estimators
of $\gamma_{0}(x_{i})=E[y_{i}|x_{i}]$ and $\alpha_{0}(x_{i})=E[a_{i}|x_{i}]$
that do not depend on the $i^{th}$ observation. Consider the estimator
\[
\check{\beta}=\frac{1}{n}\sum_{i=1}^{n}[a_{i}-\hat{\alpha}_{-i}(x_{i})][y_{i}-\hat{\gamma}_{-i}(x_{i})].
\]
This estimator is doubly robust in the sense of Scharfstein, Rotnitzky, and
Robins (1999) and Robins, Rotnitzky, and van der Laan (2000), being consistent
if either $\hat{\alpha}_{-i}$ or $\hat{\gamma}_{-i}$ are consistent. It uses
cross-fitting to eliminate own observation bias. This estimator does have a nonlinearity bias since $\hat{\alpha}_{-i}(x_{i})$ and $\hat{\gamma}_{-i}(x_{i})$ are constructed from the same data in single cross-fitting. That
bias is of the same order $K/\sqrt{n}$ as the own observation bias for a
spline regression plug-in estimator. This bias can be thought of as arising
from nonlinearity of $\check{\beta}$ in the two nonparametric estimators
$\hat{\alpha}_{-i}(x_{i})$ and $\hat{\gamma}_{-i}(x_{i}).$
One can remove the nonlinearity bias in the doubly robust estimator by using
different parts of the data to construct the two nonparametric estimators. Let
$\hat{\gamma}_{-i}(x_{i})$ be constructed from a subset of the observations
that does not include observation $i$ and let $\tilde{\alpha}_{-i}(x_{i})$ be
constructed from a subset of the observations that does not include $i$ or any
observations used to form $\hat{\gamma}_{-i}$. A doubly cross-fit doubly
robust estimator (DCDR) is
\[
\tilde{\beta}=\frac{1}{n}\sum_{i=1}^{n}[a_{i}-\tilde{\alpha}_{-i}(x_{i})][y_{i}-\hat{\gamma}_{-i}(x_{i})].
\]
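The three-way sample splitting can be sketched in Python as follows (ours, illustrative; {\tt fit\_regression} is again a placeholder, and the roles of the three groups are rotated so every observation enters the average).
\begin{verbatim}
import numpy as np

def dcdr_cond_cov(a, y, x, fit_regression, seed=0):
    # DCDR estimator of E[Cov(a, y | x)]: disjoint subsamples are used
    # for gamma-hat, for the estimate of E[a|x], and for averaging.
    n = len(y)
    part = np.random.default_rng(seed).permutation(n) % 3
    beta = 0.0
    for ell in range(3):
        i_avg = part == ell             # averaging subsample
        i_gam = part == (ell + 1) % 3   # subsample for gamma-hat
        i_alp = part == (ell + 2) % 3   # subsample for E[a|x]
        gamma_hat = fit_regression(x[i_gam], y[i_gam])
        a_hat = fit_regression(x[i_alp], a[i_alp])
        beta += np.sum((a[i_avg] - a_hat(x[i_avg]))
                       * (y[i_avg] - gamma_hat(x[i_avg])))
    return beta / n
\end{verbatim}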
This estimator uses cross-fitting to remove both the own observation and the
nonlinearity biases. We will show that $\Delta_{n}^{\ast}=\Delta_{n}$ when
$\tilde{\alpha}_{-i}(x_{i})$ and $\hat{\gamma}_{-i}(x_{i})$ are spline
regression estimators for a $K\times1$ vector of multivariate splines of at
least order $\max\{s_{\gamma},s_{\alpha}\}-1$ with evenly spaced knots.
Consequently, this estimator will be root-n consistent and semiparametric
efficient when $s_{\gamma}+s_{\alpha}>r/2$ and $K$ is chosen appropriately,
which is the minimal condition of Robins et al. (2009).
Remarkably, the doubly robust estimator $\check{\beta}$ where $\hat{\alpha
}_{-i}(x_{i})$ and $\hat{\gamma}_{-i}(x_{i})$ use the same data may have a
slower remainder rate than the CF plug-in estimator $\hat{\beta}$. The use of
the same data for $\hat{\alpha}_{-i}(x_{i})$ and $\hat{\gamma}_{-i}(x_{i})$
introduces a bias term of size $K/\sqrt{n}$. Such a term is not present in the
CF plug-in estimator. The $K/\sqrt{n}$ term is eliminated for the doubly
robust estimator by forming $\tilde{\alpha}_{-i}(x_{i})$ and $\hat{\gamma
}_{-i}(x_{i})$ from different samples. We find that the DCDR estimator
$\tilde{\beta}$ improves on the CF plug-in estimator by increasing the rate at
which a certain part of $\Delta_{n}$ goes to zero. Specifics will be given below.
We note that the own observation bias can also be thought of as nonlinearity
bias. The parameter $\beta_{0}$ has the form
\[
\beta_{0}=\int a\{y-\gamma_{0}(x)\}F_{0}(dz),
\]
where $F_{0}$ denotes the distribution of $z=(y,a,x).$ This object is
quadratic in $\gamma_{0}$ and $F_{0}$ jointly. The own observation bias can be
thought of as a quadratic bias resulting from using all the data to
simultaneously estimate $\gamma_{0}$ and the distribution $F_{0}$ of a single
observation. The CF plug in estimator $\hat{\beta}$ eliminates this
nonlinearity bias. Also, the doubly robust estimator can be thought of as
estimating $\int[a-\alpha_{0}(x)][y-\gamma_{0}(x)]F_{0}(dz),$ which is cubic
in $\alpha_{0},$ $\gamma_{0}$, and $F_{0}$ jointly. The DCDR estimator can be
thought of as eliminating the cubic bias by estimating each of $\alpha
_{0}(x),$ $\gamma_{0}(x)$, and $F_{0}$ from distinct groups of observations.
One potential concern about DCDR estimators is that each of the nonparametric
components $\hat{\gamma}$ and $\tilde{\alpha}$ only use a fraction of the data
because they are each based on subsamples that the other does not use. For
example, they only use less than half the data if they are based on
approximately the same subsample size. This does not affect remainder rates
but could affect small sample efficiency. One might be able to improve small
sample efficiency by averaging over DCDR estimators that use different sample
splits to construct $\hat{\gamma}$ and $\tilde{\alpha}$, though that is beyond
the scope of this paper. Our concern in this paper is remainder rates for
asymptotically efficient estimation.
Cross-fitting can be applied to eliminate bias terms for any estimator that
depends on powers of nonparametric estimators. Such cross-fitting would
replace each power by a product of nonparametric estimators that are computed
from distinct subsamples of the data, analogously to the DCDR estimators above.
We now provide a more quantitative version of our results. Let $p(x)$ be a
vector of multivariate regression splines of dimension $K$ with evenly spaced
knots. We will always take $K=K_{n}$ to satisfy $K\ln\left( K\right)
/n\rightarrow0.$ Suppose that $\hat{\gamma}_{-i}(x)=p(x)^{T}[\Sigma
_{j\in\mathcal{I}_{\ell}}p(x_{j})p(x_{j})^{T}]^{-1}\Sigma_{j\in\mathcal{I
_{\ell}}p(x_{j})y_{j}$ is a series estimator from regressing $y_{j}$ on
$p(x_{j})$ in a subsample of observations indexed by $\mathcal{I}_{\ell}$,
where $\left\{ \mathcal{I}_{\ell}\right\} _{\ell=1}^{L}$ is a partition of
$\{1,...,n\},$ $i\notin\mathcal{I}_{\ell},$ $L$ is fixed and the number of
elements of each $\mathcal{I}_{\ell}$ is of order $n$. Suppose that for the
doubly robust estimator $\tilde{\alpha}(x_{i})$ is constructed analogously
from a separate subsample.
When $s_{\gamma}\leq1$ and $s_{\alpha}\leq1$ and $p(x)$ is a Haar basis of
dummy variables that are indicator functions of cubes partitioning the support
of $x_{i}$ we show that the CF plug-in estimator has $\Delta_{n}=\Delta
_{n}^{\ast}+\ln(n)K^{-s_{\gamma}/r}$ and the DCDR doubly robust estimator has
$\Delta_{n}=\Delta_{n}^{\ast}.$ Hence the DCDR estimator has the fast
remainder rate. Further the CF plug-in estimator has the fast remainder
$\Delta_{n}^{\ast},$ except at those laws where $K^{-s_{\gamma}/r}$ is the
dominating term in $\Delta_{n}^{\ast}$. At such laws, the DCDR estimator
improves on the CF\ plug-in but only by a factor of $\ln(n).$ We also show
that these results extend to the entire class of average linear functionals of
a conditional expectation with finite semiparametric variance bound.
When $s_{\gamma}$ and $s_{\alpha}$ are any positive numbers and $p(x)$ is a
spline basis of order at least $\max\{s_{\gamma},s_{\alpha}\}-1$ we show that
the CF plug-in estimator of the expected conditional covariance has
$\Delta_{n}=\Delta_{n}^{\ast}+\sqrt{K\ln(K)/n}K^{1/2-s_{\gamma}/r}$ and the
DCDR estimator has $\Delta_{n}=\Delta_{n}^{\ast}$. Here the plug-in estimator
has the fast remainder $\Delta_{n}=\Delta_{n}^{\ast}$ for $s_{\gamma}>r/2$ and
the doubly robust estimator has $\Delta_{n}=\Delta_{n}^{\ast}$ for all
$s_{\gamma}$. For other functionals in our class we show that the DCDR
estimator has $\Delta_{n}=\Delta_{n}^{\ast}+\sqrt{K^{3}\ln(K)^{2}/n^{3}}K^{1/2-s_{\gamma}/r},$ which has $\Delta_{n}=\Delta_{n}^{\ast}$ when $[K\ln(K)/n]K^{1/2-s_{\gamma}/r}\longrightarrow0.$ Thus the DCDR estimator has a remainder that can converge to zero at a faster rate than that of the CF plug-in estimator.
We note that the source of the term in $\Delta_{n}$ that is added to
$\Delta_{n}^{\ast}$ in each case can be attributed to estimators of the second
moment matrix $\Sigma=E[p(x_{i})p(x_{i})^{T}]$ of the regression splines. If
each $\hat{\Sigma}_{\ell}$ were replaced by $\Sigma$ in the estimators then
the resulting objects would all have $\Delta_{n}=\Delta_{n}^{\ast}.$
For brevity, we demonstrate this only for the plug-in estimator. Consider the plug-in object $\dot{\beta}$ having the same formula as $\hat{\beta}$ except that $\hat{\gamma}_{-i}(x)$ is replaced by $\dot{\gamma}_{-i}(x)=p(x)^{T}\Sigma^{-1}\sum_{j\in\mathcal{I}_{\ell}}p(x_{j})y_{j}/n_{\ell}.$ Let $\bar{\gamma}(x)=p(x)^{T}\Sigma^{-1}E[p(x_{i})\gamma_{0}(x_{i})]$ and $\bar{\alpha}(x)=p(x)^{T}\Sigma^{-1}E[p(x_{i})\alpha_{0}(x_{i})]$. Standard approximation properties of splines give the approximation rates $\{E[\{\gamma_{0}(x_{i})-\bar{\gamma}(x_{i})\}^{2}]\}^{1/2}=O(K^{-s_{\gamma}/r})$ and $\{E[\{\alpha_{0}(x_{i})-\bar{\alpha}(x_{i})\}^{2}]\}^{1/2}=O(K^{-s_{\alpha}/r}).$ By the Cauchy-Schwarz inequality,
\begin{align*}
\sqrt{n}E[\{\alpha_{0}(x_{i})-\bar{\alpha}(x_{i})\}\{\gamma_{0}(x_{i})-\bar{\gamma}(x_{i})\}] & \leq\sqrt{n}\{E[\{\alpha_{0}(x_{i})-\bar{\alpha}(x_{i})\}^{2}]\}^{1/2}\{E[\{\gamma_{0}(x_{i})-\bar{\gamma}(x_{i})\}^{2}]\}^{1/2}\\
& =O(\sqrt{n}K^{-(s_{\gamma}+s_{\alpha})/r}).
\end{align*}
Note also that $E[\dot{\gamma}_{-i}(x)]=\bar{\gamma}(x).$ Then the root-n normalized bias of $\dot{\beta}$ is
\begin{align}
E\left[ \sqrt{n}\left( \dot{\beta}-\beta_{0}\right) \right] & =\sqrt{n}\left( \int a\{y-E\left[ \dot{\gamma}_{-i}(x)\right] \}F_{0}\left( dz\right) -E[a_{i}\{y_{i}-\gamma_{0}(x_{i})\}]\right) \nonumber\\
& =\sqrt{n}E[a_{i}\{\gamma_{0}(x_{i})-\bar{\gamma}(x_{i})\}]=\sqrt{n}E[\alpha_{0}(x_{i})\{\gamma_{0}(x_{i})-\bar{\gamma}(x_{i})\}]\label{bias}\\
& =\sqrt{n}E[\{\alpha_{0}(x_{i})-\bar{\alpha}(x_{i})\}\{\gamma_{0}(x_{i})-\bar{\gamma}(x_{i})\}]=O(\sqrt{n}K^{-(s_{\gamma}+s_{\alpha})/r}),\nonumber
\end{align}
which has our desired $\Delta_{n}^{\ast}$ rate. Also, there will be stochastic
equicontinuity bias terms of order $K^{-s_{\gamma}/r}$ and $K^{-s_{\alpha}/r}$
and stochastic equicontinuity variance and degenerate U-statistic variance
terms of order $\sqrt{K/n}$. Overall the remainder for $\dot{\beta}$ will
satisfy $\Delta_{n}=\Delta_{n}^{\ast}$. Thus, a CF plug-in object $\dot{\beta
}$ where $\Sigma$ replaces each $\hat{\Sigma}_{\ell}$ will have the fast
remainder rate.
We note that the bias in equation (\ref{bias}) depends on the product
$K^{-(s_{\gamma}+s_{\alpha})/r}$ of the approximation rate $K^{-s_{\gamma}/r}$
for $\gamma_{0}(x)$ and the approximation rate $K^{-s_{\alpha}/r}$ for
$\alpha_{0}(x),$ rather than just the bias rate $K^{-s_{\gamma}/r}$ for the
nonparametric estimator being plugged-in. This product form results from the
fact that the parameter of interest $\beta_{0}$ has a finite semiparametric
variance bound. The product bias form in equation (\ref{bias}) for plug-in
series estimators was shown in Newey (1994).
It is interesting to compare our estimators with HOIF estimators. We continue
to focus on the average conditional covariance. The HOIF estimator of that
$\beta_{0}$ can depend on initial estimators $\hat{\gamma}(x)$ and
$\hat{\alpha}(x)$ of $\gamma_{0}(x)$ and $\alpha_{0}(x)$ obtained from a
training subsample. For a vector of spline regressors $p(x)$ let $\hat{\Sigma
}$ be the sample second moment matrix of $p(x)$ from the training sample. Let
$\hat{B}(x)=\hat{\Sigma}^{-1}[p(x)p(x)^{T}-\hat{\Sigma}]$ and
\begin{align*}
\hat{\beta}_{H} & =\frac{1}{n}\sum_{i=1}^{n}[a_{i}-\hat{\alpha}(x_{i})][y_{i}-\hat{\gamma}(x_{i})]-\frac{1}{n(n-1)}\sum_{i\neq j}[a_{i}-\hat{\alpha}(x_{i})]p(x_{i})^{T}\hat{\Sigma}^{-1}p(x_{j})[y_{j}-\hat{\gamma}(x_{j})]\\
& +\sum_{q=1}^{Q}\frac{(-1)^{q+1}(n-2-q)!}{n!}\sum_{i\neq j}[a_{i}-\hat{\alpha}(x_{i})]p(x_{i})^{T}\left[ \sum_{\ell_{1}\neq\cdots\neq\ell_{q}\neq i\neq j}\Pi_{r=1}^{q}\hat{B}(x_{\ell_{r}})\right] \hat{\Sigma}^{-1}p(x_{j})[y_{j}-\hat{\gamma}(x_{j})],
\end{align*}
where all the sums are over an estimation subsample that does not overlap with
the training sample. This $\hat{\beta}_{H}$ is the empirical HOIF estimator of
Mukherjee, Newey, and Robins (2017) of order $Q+2$. By Theorem 3 of Mukherjee,
Newey, and Robins (2017) the bias of $\sqrt{n}(\hat{\beta}_{H}-\beta_{0})$
conditional on the training sample has order
\[
\sqrt{n}\left\Vert \hat{\alpha}-\alpha_{0}\right\Vert _{2}\left\Vert
\hat{\gamma}-\gamma_{0}\right\Vert _{2}\left( \frac{K\ln(K)}{n}\right)
^{Q/2}=\left\Vert \hat{\alpha}-\alpha_{0}\right\Vert _{2}\left\Vert
\hat{\gamma}-\gamma_{0}\right\Vert _{2}K\ln(K)\left( \frac{K\ln(K)}{n}\right) ^{(Q-1)/2},
\]
where $\left\Vert \delta\right\Vert _{2}=\{E[\delta(x_{i})^{2}]\}^{1/2}.$ The
order of this bias will be smaller than $\sqrt{K/n}$ as long as $K$ grows no
faster than $n^{1-\varepsilon}$ for some $\varepsilon>0$, although that
is not needed for semiparametric efficiency. As shown in Mukherjee, Newey, and
Robins (2017), if $Q$ grows like $\sqrt{\ln(n)},$ $K$ like $n/\ln(n)^{3},$ and
other regularity conditions are satisfied then $\hat{\beta}_{H}$ will be
semiparametric efficient under the minimal condition $s_{\gamma}+s_{\alpha
}>r/2$ of Robins et al.(2009).
We can explain the different properties of HOIF and series estimators by
comparing the CF plug-in estimator with the HOIF when the training sample
estimators $\hat{\gamma}$ and $\hat{\alpha}$ are set equal to zero. In that
case the HOIF estimator is
\begin{align*}
\hat{\beta}_{H} & =\frac{1}{n}\sum_{i=1}^{n}a_{i}y_{i}-\frac{1}{n(n-1)}\sum_{i\neq j}a_{i}p(x_{i})^{T}\hat{\Sigma}^{-1}p(x_{j})y_{j}\\
& +\sum_{q=1}^{Q}\frac{(-1)^{q+1}(n-2-q)!}{n!}\sum_{i\neq j}a_{i}p(x_{i})^{T}\left[ \sum_{\ell_{1}\neq\cdots\neq\ell_{q}\neq i\neq j}\Pi_{r=1}^{q}\hat{B}(x_{\ell_{r}})\right] \hat{\Sigma}^{-1}p(x_{j})y_{j}.
\end{align*}
Consider $\check{\gamma}_{-i}(x)=p(x)^{T}\hat{\Sigma}^{-1}\sum_{j\neq
i}p(x_{j})y_{j}/(n-1).$ This is an estimator of $\gamma_{0}(x)$ that is like a
series estimator except the inverse second moment matrix $\hat{\Sigma}^{-1}$
comes from the training sample and the cross-moments $\sum_{j\neq i}p(x_{j})y_{j}/(n-1)$ from the estimation subsample. The first two terms of the HOIF estimator can then be written as
\[
\check{\beta}=\frac{1}{n}\sum_{i=1}^{n}a_{i}[y_{i}-\check{\gamma}_{-i}(x_{i})].
\]
Let $T$ denote the training sample. Then we have
\begin{align*}
E[\check{\beta}-\beta_{0}|T] & =E[\alpha_{0}(x_{i})\{\gamma_{0}(x_{i})-\check{\gamma}_{-i}(x_{i})\}]=E[\alpha_{0}(x_{i})\gamma_{0}(x_{i})]-E[\alpha_{0}(x_{i})p(x_{i})^{T}]\hat{\Sigma}^{-1}E[p(x_{i})\gamma_{0}(x_{i})]\\
& =E[\alpha_{0}(x_{i})\gamma_{0}(x_{i})-\bar{\alpha}(x_{i})\bar{\gamma}(x_{i})]+E[\alpha_{0}(x_{i})p(x_{i})^{T}](\Sigma^{-1}-\hat{\Sigma}^{-1})E[p(x_{i})\gamma_{0}(x_{i})]\\
& =O(K^{-(s_{\gamma}+s_{\alpha})/r})+\Lambda(\hat{\Sigma},\Sigma),\Lambda(\hat{\Sigma},\Sigma)=E[\alpha_{0}(x_{i})p(x_{i})^{T}](\Sigma^{-1}-\hat{\Sigma}^{-1})E[p(x_{i})\gamma_{0}(x_{i})].
\end{align*}
Thus the bias of $\check{\beta}$ is the sum of the approximation bias
$K^{-(s_{\gamma}+s_{\alpha})/r}$ and $\Lambda(\hat{\Sigma},\Sigma).$ The rest
of the HOIF estimator, i.e. $\hat{\beta}_{H}-\check{\beta}$, can be thought of
as a bias correction for $\Lambda(\hat{\Sigma},\Sigma).$ Note that
\[
E[\hat{\beta}_{H}-\check{\beta}|T]=\sum_{q=1}^{Q}\frac{(-1)^{q+1}(n-2-q)!}{n!}E[\alpha_{0}(x_{i})p(x_{i})]^{T}\left[ \hat{\Sigma}^{-1}(\Sigma-\hat{\Sigma})\right] ^{q}\hat{\Sigma}^{-1}E[p(x_{i})\gamma_{0}(x_{i})].
\]
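The sense in which this is an expansion can be made explicit. As a sketch, writing $\Sigma^{-1}=[\hat{\Sigma}-(\hat{\Sigma}-\Sigma)]^{-1}$ and expanding in a Neumann series (valid when $\left\Vert \hat{\Sigma}^{-1}(\hat{\Sigma}-\Sigma)\right\Vert _{op}<1$) gives
\[
\Sigma^{-1}-\hat{\Sigma}^{-1}=\sum_{q=1}^{\infty}\left[ \hat{\Sigma}^{-1}(\hat{\Sigma}-\Sigma)\right] ^{q}\hat{\Sigma}^{-1}=-\sum_{q=1}^{\infty}(-1)^{q+1}\left[ \hat{\Sigma}^{-1}(\Sigma-\hat{\Sigma})\right] ^{q}\hat{\Sigma}^{-1},
\]
so truncating $-\Lambda(\hat{\Sigma},\Sigma)$ at $q=Q$ matches the display above term by term; the factor $(n-2-q)!/n!$ together with the sums over distinct indices makes each $q$th term conditionally unbiased for the corresponding term of the expansion.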
Here we see that $E[\hat{\beta}_{H}-\check{\beta}|T]$ is the negative of a
Taylor expansion to order $Q$ of $\Lambda(\hat{\Sigma},\Sigma)$ in
$\hat{\Sigma}$ around $\Sigma.$ Therefore, it will follow that
\[
E[\hat{\beta}_{H}-\beta_{0}|T]=O(K^{-(s_{\gamma}+s_{\alpha})/r})+O(\left\Vert
\hat{\Sigma}-\Sigma\right\Vert _{op}^{Q})=O(K^{-(s_{\gamma}+s_{\alpha})/r})+O(\left( \frac{K\ln(K)}{n}\right) ^{Q/2}),
\]
where $\left\Vert \cdot\right\Vert _{op}$ is the operator norm for a matrix
and the second equality follows by the Rudelson (1999) matrix law of large
numbers. This equation is similar to the conclusion of Theorem 3 of Mukherjee,
Newey, and Robins (2017).
In comparison with the HOIF estimator the CF plug-in series estimator has a
remainder rate from estimating $\Sigma$ that is $\ln(n)K^{-s_{\gamma}/r}$ for
$s_{\gamma},s_{\alpha}\leq1$ and Haar splines and $\sqrt{K\ln(K)/n}K^{1/2-s_{\gamma}/r}$ more generally, without any higher order U-statistic
correction for the presence of $\hat{\Sigma}^{-1}.$ The DCDR estimator has
$\Delta_{n}=\Delta_{n}^{\ast},$ also without the need to rely on any
higher-order U-statistics. The key difference between the HOIF and these other
estimators is that the plug-in and doubly robust estimators use spline
regression in their construction and the HOIF estimator uses $\hat{\Sigma
}^{-1}$ from a training subsample.
Previously the HOIF estimator was the only known method of obtaining a semiparametric efficient estimator of the expected conditional covariance
under the minimal conditions of Robins et al.(2009). We find here that the CF
plug-in estimator with a Haar basis can do this for $s_{\gamma},s_{\alpha}\leq1$ and for a general spline basis with $s_{\gamma}\geq r/2.$ We also find
that the DCDR estimator can do this for all $s_{\gamma}$ and $s_{\alpha}$.
These estimators are simpler than the HOIF\ estimator in not requiring the
higher order U-statistic terms. It would be interesting to compare the size
of constants in respective remainder terms where HOIF could have an advantage
by virtue of its higher order influence function interpretation. That
comparison is beyond the scope of this paper.
The HOIF estimator remains the only known estimator that is semiparametric
efficient under the Robins et al.(2009) minimal conditions for the mean with
missing data over all $s_{\gamma}$ and $s_{\alpha}$. We expect that property
of HOIF to extend to all the linear average functionals we are considering in
this paper.
In summary, cross-fitting can be used to reduce bias of estimators and obtain
faster remainder rates. If cross fitting is not used for either the plug-in or
the doubly robust estimator there would be an additional $K/\sqrt{n}$ bias
term in the remainder. This extra term can increase the bias of the estimator
significantly for large $K.$ It is well known to be very important in some
settings, such as instrumental variables estimation as shown by Blomquist and
Dahlberg (1999) and Imbens, Angrist, and Krueger (1999). Also, its presence
prevents the plug-in estimator from attaining root-n consistency under minimal
conditions. Cross-fitting eliminates this large remainder for the linear
functionals we consider and results in plug-in and doubly robust estimators
with remainders that converge to zero as fast as known possible for
$s_{\gamma},s_{\alpha}\leq1,$ for $s_{\gamma}>r/2$, and for any $s_{\alpha}$
and $s_{\gamma}$ for a doubly robust estimator of the expected conditional covariance.
\section{Estimators of Average Linear Functionals}
We will analyze estimators of functionals of a conditional expectation
\[
\gamma_{0}(x)=E[y_{i}|x_{i}=x],
\]
where $y_{i}$ is a scalar component and $x_{i}$ a subvector of $z_{i}$. Let
$\gamma$ represent a possible conditional expectation function and
$m(z,\gamma)$ denote a function of $\gamma$ and a possible realization $z$ of
a data observation. We consider
\[
\beta_{0}=E\left[ m(z_{i},\gamma_{0})\right] ,
\]
where $m(z,\gamma)$ is an affine functional of $\gamma$ for every $z,$ meaning
$m(z,\gamma)-m(z,0)$ is linear in $\gamma$.
There are many important examples of such an object. One of these is the
expected conditional covariance we consider in Section 2. There $m(z,\gamma
)=a[y-\gamma(x)]$. This object shows up in different forms in the numerator
and denominator of
\[
\xi_{0}=\frac{E[Cov(a_{i},y_{i}|x_{i})]}{E[Var(a_{i}|x_{i})]}.
\]
Here $\xi_{0}$ is the coefficient of $a_{i}$ in the population least squares projection of $y_{i}$ on functions of the form $a_{i}\xi+g(x_{i}).$ Under an ignorability assumption this object $\xi_{0}$ can be interpreted
as a weighted average of conditional average treatment effects when $a_{i}$ is
a binary indicator for treatment and $x_{i}$ are covariates.
Another important example is the mean when data are missing at random. The
object of interest is $\beta_{0}=E[Y_{i}]$ where $Y_{i}$ is a latent variable
that is not always observed. Let $a_{i}$ be an observed binary indicator where
$a_{i}=1$ if $Y_{i}$ is observed. Suppose that there are observed covariates
$w_{i}$ such that $Y_{i}$ is mean independent of $a_{i}$ conditional on
$w_{i}$, i.e. $E[Y_{i}|a_{i}=1,w_{i}]=E[Y_{i}|w_{i}].$ Then for the observed
variable $y_{i}=a_{i}Y_{i}$ we have
\[
E[E[y_{i}|a_{i}=1,w_{i}]]=E[E[Y_{i}|a_{i}=1,w_{i}]]=E[E[Y_{i}|w_{i}]]=\beta_{0}.
\]
Let $x=(a,w)$ and $\gamma_{0}(x_{i})=E[y_{i}|x_{i}].$ Then for $m(z,\gamma
)=\gamma(1,w)$ we have $\beta_{0}=E[m(z_{i},\gamma_{0})]$.
A third example is a weighted average derivative, where the object of interest
is
\[
\beta_{0}=\int v(x)\left[ \partial\gamma_{0}(x)/\partial x_{1}\right] dx,
\]
for some weight function $v(x),$ with $x_{1}$ continuously distributed and
$\int v(x)dx=1$. This object is proportional to $\beta_{10}$ in a conditional
mean index model where $E[y_{i}|x_{i}]=\tau(x_{i}^{T}\beta_{0})$ for some
unknown function $\tau(\cdot),$ as in Stoker (1986). This object is included
in the framework of this paper for $m(z,\gamma)=\int v(x)\left[
\partial\gamma(x)/\partial x_{1}\right] dx.$ Assuming that $v(x)$ is zero at
the boundary, integration by parts gives
\[
m(z,\gamma)=m(\gamma)=\int\omega(x)\gamma(x)dx,\omega(x)=-\partial
v(x)/\partial x_{1}.
\]
Throughout we will focus on the case where estimators of $\beta_{0}$ have a
finite semiparametric variance bound and so should be root-n consistently
estimable under sufficient regularity conditions. As discussed in Newey
(1994), this corresponds to $E[m\left( z_{i},\gamma\right) ]$ being mean
square continuous as a function of $\gamma$, so that by the Riesz
representation theorem the following condition is satisfied:
\bigskip
\textsc{Assumption 1:} \textit{There is }$\alpha_{0}\left( x\right)
$\textit{ with }$E[\alpha_{0}(x_{i})^{2}]<\infty$\textit{ and for all
$\gamma$ with $E[\gamma(x_{i})^{2}]<\infty$
\begin{equation}
E\left[ m\left( z_{i},\gamma\right) -m(z_{i},0)\right] =E\left[
\alpha_{0}\left( x_{i}\right) \gamma\left( x_{i}\right) \right] .
\label{Riesz}
\end{equation}
\bigskip
The function $\alpha_{0}(x)$ has an important role in the asymptotic theory.
The bias in a series estimator of $\beta_{0}$ will depend on the expected
product of biases in approximating $\gamma_{0}(x)$ and $\alpha_{0}(x)$.
Consequently there will be a trade-off in conditions that can be imposed on
$\gamma_{0}(x)$ and $\alpha_{0}(x)$ so that the estimators of $\beta_{0}$ have
good properties.
To help explain this condition we give the form of $\alpha_{0}(x)$ in each of
the examples. In the expected conditional covariance example iterated
expectations give
\begin{align}
E\left[ m\left( z_{i},\gamma\right) -m(z_{i},0)\right] & =-E[a_{i}\gamma(x_{i})]=-E[E[a_{i}|x_{i}]\gamma(x_{i})]=E[\alpha_{0}(x_{i})\gamma(x_{i})],\label{ccriesz}\\
\alpha_{0}(x_{i}) & =-E[a_{i}|x_{i}].\nonumber
\end{align}
In the missing data example, for the propensity score $\Pr(a_{i}=1|w_{i})=\pi_{0}(w_{i})$, iterated expectations give
\begin{align}
E\left[ m\left( z_{i},\gamma\right) -m(z_{i},0)\right] & =E[\gamma(1,w_{i})]=E[\frac{\pi_{0}(w_{i})}{\pi_{0}(w_{i})}\gamma(1,w_{i})]=E[\frac{a_{i}}{\pi_{0}(w_{i})}\gamma(1,w_{i})]\label{mdriesz}\\
& =E[\frac{a_{i}}{\pi_{0}(w_{i})}\gamma(x_{i})]=E[\alpha_{0}(x_{i})\gamma(x_{i})],\alpha_{0}(x_{i})=\frac{a_{i}}{\pi_{0}(w_{i})}.\nonumber
\end{align}
In the average derivative example, multiplying and dividing by the pdf
$f_{0}(x)$ of $x_{i}$ gives
\begin{align}
E\left[ m\left( z_{i},\gamma\right) -m(z_{i},0)\right] & =\int\omega(x)\gamma(x)dx=\int\frac{\omega(x)}{f_{0}(x)}\gamma(x)f_{0}(x)dx=E[\frac{\omega(x_{i})}{f_{0}(x_{i})}\gamma(x_{i})]\label{adriesz}\\
& =E[\alpha_{0}(x_{i})\gamma(x_{i})],\alpha_{0}(x_{i})=\frac{\omega(x_{i})}{f_{0}(x_{i})}.\nonumber
\end{align}
Our estimators of $\beta_{0}$ will be based on a nonparametric estimator
$\hat{\gamma}$ of $\gamma_{0}$ and possibly on a nonparametric estimator
$\tilde{\alpha}$ of $\alpha_{0}.$ The CF plug-in estimator is given by
\[
\hat{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}m(z_{i},\hat{\gamma}_{\ell}),
\]
where $I_{\ell},(\ell=1,...,L)$ is a partition of the observation index set
$\{1,...,n\}$ into $L$ distinct subsets of about equal size and $\hat{\gamma
}_{\ell}$ only uses observations \textit{not} in $I_{\ell}.$ We will consider
a fixed number of groups $L$ in the asymptotics. It would be interesting to
consider results where the number of groups grows with the sample size, even
"leave one out" estimators where $I_{\ell}$ only includes one observation, but
theory for those estimators is more challenging and we leave it to future work.
The DCDR estimator makes use of $\tilde{\alpha}_{\ell}$ that may be
constructed from different observations than $\hat{\gamma}_{\ell}.$ The doubly
robust estimator is
\[
\tilde{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\left\{
m(z_{i},\hat{\gamma}_{\ell})+\tilde{\alpha}_{\ell}(x_{i})[y_{i}-\hat{\gamma
}_{\ell}(x_{i})]\right\} .
\]
This estimator has the form of a plug-in estimator plus the sample average of
$\tilde{\alpha}_{\ell}(x_{i})[y_{i}-\hat{\gamma}_{\ell}(x_{i})],$ which is an
estimator of the influence function of $\int m(z,\hat{\gamma}_{\ell})F_{0}(dz).$ The addition of $\tilde{\alpha}_{\ell}(x_{i})[y_{i}-\hat{\gamma
}_{\ell}(x_{i})]$ will mean that the nonparametric estimators $\hat{\gamma
}_{\ell}$ and $\tilde{\alpha}_{\ell}$ do not affect the asymptotic
distribution of $\tilde{\beta},$ i.e. the limit distribution would be the same
if $\hat{\gamma}_{\ell}$ and $\tilde{\alpha}_{\ell}$ were replaced by their
true values and $\Delta_{n}\longrightarrow0$. This estimator allows for full
cross-fitting where $\tilde{\alpha}$ and $\hat{\gamma}$ may be based on
distinct subsamples.
The cross-fit estimator $\tilde{\beta}$ is doubly robust in the sense that
$\tilde{\beta}$ will be consistent as long as either $\hat{\gamma}_{\ell}$ or
$\tilde{\alpha}_{\ell}$ is consistent, as shown by Chernozhukov et al.(2016)
for this general class of functionals. When $\hat{\gamma}(x)$ is a series
estimator like that described above the CF plug-in estimator $\hat{\beta}$ is
also doubly robust in a more limited sense. It will be consistent with fixed
$p(x)$ if either $\gamma_{0}(x)$ or $\alpha_{0}(x)$ is a linear combination of
$p(x)$, as shown for the mean with missing data in Robins et al.(2007) and in
Chernozhukov et al.(2016) for the general linear functional case we are considering.
Throughout the paper we assume that each data point $z_{i}$ is used for
estimation for some group $\ell$ and that the number of observations in group
$\ell$, the number used to form $\hat{\gamma}_{\ell}$, and the number used to
form $\tilde{\alpha}_{\ell}$ grow at the same rate as the sample size. To make
this condition precise let $\bar{n}_{\ell}$ be the number of elements in
$I_{\ell},$ $\hat{n}_{\ell}$ be the number used to form $\hat{\gamma}_{\ell},$
and $\tilde{n}_{\ell}$ be the number of observations used to form
$\tilde{\alpha}_{\ell}$. We will assume throughout that all the observations
are used for each $\ell$, i.e. that either $\bar{n}_{\ell}+\hat{n}_{\ell}=n$
or $\bar{n}_{\ell}+\hat{n}_{\ell}+\tilde{n}_{\ell}=n$ if different
observations are used for $\hat{\gamma}_{\ell}$ and $\tilde{\alpha}_{\ell}$.
\bigskip
\textsc{Assumption 2:} \textit{ There is a constant }$C>0$\textit{ such that
either} $\bar{n}_{\ell}+\hat{n}_{\ell}=n$\textit{ and }$\min_{\ell}\{\bar
{n}_{\ell},\hat{n}_{\ell}\}\geq Cn$\textit{ or }$\bar{n}_{\ell}+\hat{n}_{\ell
}+\tilde{n}_{\ell}=n$\textit{ and }$\min_{\ell}\{\bar{n}_{\ell},\hat{n}_{\ell
},\tilde{n}_{\ell}\}\geq Cn.$ \textit{For the plug-in estimator groups are as
close as possible to being of equal size.}
\bigskip
The assumption that the group sizes are as close to equal as possible for the
plug-in estimator is made for simplicity but could be relaxed.
We turn now to conditions for the regression spline estimators of $\gamma
_{0}(x)$ and $\alpha_{0}(x)$. We continue to consider regression spline first
steps where $p(x)$ is a $K\times1$ vector of regression splines. The
nonparametric estimator of $\gamma_{0}(x)$ will be a series regression
estimator where
\[
\hat{\gamma}_{\ell}(x)=p(x)^{T}\hat{\delta}_{\ell},\text{ }\hat{\delta}_{\ell}=\hat{\Sigma}_{\ell}^{-}\hat{h}_{\ell},\text{ }\hat{\Sigma}_{\ell}=\frac{1}{\hat{n}_{\ell}}\sum_{i\in\hat{I}_{\ell}}p(x_{i})p(x_{i})^{T},\text{ }\hat{h}_{\ell}=\frac{1}{\hat{n}_{\ell}}\sum_{i\in\hat{I}_{\ell}}p(x_{i})y_{i},
\]
where a $T$ superscript denotes the transpose, $\hat{I}_{\ell}$ is the index
set for observations used to construct $\hat{\gamma}_{\ell}(x)$, and $A^{-}$
denotes any generalized inverse of a positive semi-definite matrix $A$. Under
conditions given below $\hat{\Sigma}_{\ell}$ will be nonsingular with
probability approaching one so that $\hat{\Sigma}_{\ell}^{-}=\hat{\Sigma
}_{\ell}^{-1}$ for each $\ell.$
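For illustration (ours, not from the paper), the following Python sketch computes $\hat{\gamma}_{\ell}$ for a user-supplied spline basis {\tt p}, with the Moore--Penrose pseudo-inverse playing the role of the generalized inverse $A^{-}$.
\begin{verbatim}
import numpy as np

def series_regression(p, x_train, y_train):
    # Series (spline) regression: returns x -> p(x) @ delta_hat, where
    # delta_hat = Sigma-hat^- h-hat as in the display above.
    P = p(x_train)                  # (n_ell, K) design matrix
    n_ell = P.shape[0]
    Sigma_hat = P.T @ P / n_ell     # sample second moment matrix
    h_hat = P.T @ y_train / n_ell   # sample cross moments with y
    delta_hat = np.linalg.pinv(Sigma_hat) @ h_hat
    return lambda x: p(x) @ delta_hat
\end{verbatim}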
The DCDR estimator $\tilde{\beta}$ uses an estimator of $\alpha_{0}(x).$ The
function $\alpha_{0}(x)$ cannot generally be interpreted as a conditional
expectation and so cannot generally be estimated by a linear regression.
Instead we use Assumption 1 and equation (\ref{Riesz}) to construct an
estimator. Let $v(z)=(m(z,p_{1})-m(z,0),...,m(z,p_{K})-m(z,0))^{T}$. Then by
Assumption 1
\[
E[v(z_{i})]=E[p(x_{i})\alpha_{0}(x_{i})],
\]
so that $\tilde{h}_{\ell\alpha}=\sum_{i\in\tilde{I}_{\ell}}v(z_{i})/\tilde
{n}_{\ell}$ is an unbiased estimator of $E[p(x_{i})\alpha_{0}(x_{i})].$ A
series estimator of $\alpha_{0}(x)$ is then
\[
\tilde{\alpha}_{\ell}(x)=p(x)^{T}\tilde{\delta}_{\ell\alpha},\tilde{\delta}_{\ell\alpha}=\tilde{\Sigma}_{\ell}^{-}\tilde{h}_{\ell\alpha},\tilde{\Sigma}_{\ell}=\frac{1}{\tilde{n}_{\ell}}\sum_{i\in\tilde{I}_{\ell}}p(x_{i})p(x_{i})^{T}.
\]
Here $\tilde{\delta}_{\ell\alpha}$ is an estimator of the coefficients of the
population regression of $\alpha_{0}(x)$ on $p(x),$ but $\tilde{\delta}_{\ell\alpha}$ is not obtained from a linear regression. This type of
estimator of $\alpha_{0}(x)$ was used to construct standard errors for
functionals of series estimators in Newey (1994).
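A Python sketch of this construction (ours; the rows of {\tt v\_train} are the vectors $v(z_{i})$ for the same observations in $\tilde{I}_{\ell}$ used to form $\tilde{\Sigma}_{\ell}$):
\begin{verbatim}
import numpy as np

def series_riesz(p, x_train, v_train):
    # Series estimator of the Riesz representer alpha_0: only the moment
    # vector h-tilde, the sample mean of v(z_i), is averaged; no observed
    # "response" for alpha_0 is regressed on p(x).
    P = p(x_train)
    n_ell = P.shape[0]
    Sigma_tilde = P.T @ P / n_ell
    h_tilde = v_train.mean(axis=0)  # unbiased for E[p(x) alpha_0(x)]
    delta_alpha = np.linalg.pinv(Sigma_tilde) @ h_tilde
    return lambda x: p(x) @ delta_alpha
\end{verbatim}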
Now that we have specified the form of the estimators $\hat{\gamma}_{\ell}$
and $\tilde{\alpha}_{\ell}$ we can give a complete description of the
estimators in each of the examples. For the expected conditional covariance
recall that $m(z,\gamma)=a[y-\gamma(x)].$ Therefore the CF plug-in estimator
will be
\begin{equation}
\hat{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}a_{i}[y_{i}-\hat{\gamma}_{\ell}(x_{i})]. \label{ccplug}
\end{equation}
Also, as discussed above, for the expected conditional covariance $\alpha
_{0}(x)=-E[a_{i}|x_{i}=x]$ and $v(z_{i})=-a_{i}p(x_{i})$, so that
$\tilde{\alpha}_{\ell}(x)=-\tilde{\gamma}_{a\ell}(x)$ where $\tilde{\gamma
}_{a\ell}(x)=p(x)^{T}\tilde{\Sigma}_{\ell}^{-}\sum_{i\in\tilde{I}_{\ell}}p(x_{i})a_{i}/\tilde{n}_{\ell}$ is the regression of $a_{i}$ on $p(x_{i})$ for the observations indexed by $\tilde{I}_{\ell}.$ Then the DCDR estimator is
\begin{equation}
\tilde{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}+\tilde{\alpha}_{\ell}(x_{i})][y_{i}-\hat{\gamma}_{\ell}(x_{i})]=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\{a_{i}-\tilde{E}[a_{i}|x_{i}]\}[y_{i}-\hat{\gamma}_{\ell}(x_{i})], \label{ccrob}
\end{equation}
where $\tilde{E}[a_{i}|x_{i}]=-\tilde{\alpha}_{\ell}(x_{i})$ is the predicted
value from the regression of $a_{i}$ on $p(x_{i}).$ This estimator is the
average of the product of two nonparametric regression residuals, where the
average and each of the nonparametric estimators can be constructed from
different samples.
For the missing data example the estimators are based on series estimation of
$E[y_{i}|a_{i}=1,w_{i}]$. Let $q(w)$ denote a $K\times1$ vector of splines,
$x=(a,w^{T})^{T},$ and $p(x)=(aq(w)^{T},(1-a)q(w)^{T})^{T}$. The predicted
value $\hat{\gamma}(1,w)$ will be the same as from a linear regression of
$y_{i}$ on $q(w_{i})$ for observations with $a_{i}=1.$ That is, $\hat{\gamma
}(1,w)=q(w)^{T}\hat{\delta}_{\ell}$ where
\[
\hat{\delta}_{\ell}=\hat{\Sigma}_{\ell}^{-}\hat{h}_{\ell},\text{ }\hat{\Sigma}_{\ell}=\frac{1}{\hat{n}_{\ell}}\sum_{i\in\hat{I}_{\ell}}a_{i}q(w_{i})q(w_{i})^{T},\text{ }\hat{h}_{\ell}=\frac{1}{\hat{n}_{\ell}}\sum_{i\in\hat{I}_{\ell}}a_{i}q(w_{i})y_{i}.
\]
The CF plug-in estimator is
\[
\hat{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}q(w_{i})^{T}\hat{\delta}_{\ell}.
\]
The DCDR estimator is based on an estimator of the inverse propensity score
$\pi_{0}(w_{i})^{-1}=1/\pi_{0}(w_{i})$ given by
\[
\widetilde{\pi(w_{i})_{\ell}^{-1}}=q(w_{i})^{T}\tilde{\delta}_{\ell}^{\alpha},\tilde{\delta}_{\ell}^{\alpha}=\tilde{\Sigma}_{\ell}^{-}\tilde{h}_{\ell}^{\alpha},\tilde{\Sigma}_{\ell}=\frac{1}{\tilde{n}_{\ell}}\sum_{i\in\tilde{I}_{\ell}}a_{i}q(w_{i})q(w_{i})^{T},\text{ }\tilde{h}_{\ell}^{\alpha}=\frac{1}{\tilde{n}_{\ell}}\sum_{i\in\tilde{I}_{\ell}}q(w_{i}),
\]
where $\tilde{n}_{\ell}$ is the number of observation indices in $\tilde
{I}_{\ell}$. This estimator of the inverse propensity score is a version of
one discussed in Robins et al.(2007). The DCDR estimator is
\[
\tilde{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\left\{ q(w_{i})^{T}\hat{\delta}_{\ell}+a_{i}\widetilde{\pi(w_{i})_{\ell}^{-1}}[y_{i}-q(w_{i})^{T}\hat{\delta}_{\ell}]\right\} .
\]
This has the usual form for a doubly robust estimator of the mean with data
missing at random. It differs from previous estimators in having the full CF
form where the nonparametric estimators are based on distinct subsamples of
the data.
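A Python sketch of this estimator for one assignment of subsample roles (ours, illustrative; {\tt q} is a user-supplied spline basis and {\tt part} encodes the three-way split):
\begin{verbatim}
import numpy as np

def dcdr_missing_mean(a, y, w, q, part):
    # DCDR estimator of E[Y] with data missing at random; part assigns
    # each i to the averaging subsample (0), the gamma-hat subsample (1),
    # or the inverse propensity score subsample (2).
    i0, i1, i2 = (part == 0), (part == 1), (part == 2)
    # gamma-hat: regress y on q(w) among observed cases of subsample 1.
    Q1 = q(w[i1])[a[i1] == 1]
    delta = np.linalg.pinv(Q1.T @ Q1) @ (Q1.T @ y[i1][a[i1] == 1])
    # Series estimate of 1/pi_0(w): Sigma-tilde from a*q q', h-tilde the
    # sample mean of q, as in the display above.
    Q2 = q(w[i2])
    Sigma2 = (a[i2][:, None] * Q2).T @ Q2 / Q2.shape[0]
    h2 = Q2.mean(axis=0)
    delta_alpha = np.linalg.pinv(Sigma2) @ h2
    Q0 = q(w[i0])
    gam = Q0 @ delta
    inv_pi = Q0 @ delta_alpha
    return np.mean(gam + a[i0] * inv_pi * (y[i0] - gam))
\end{verbatim}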
For the average derivative example $m(z,\gamma)=\int\omega(x)\gamma(x)dx$ does
not depend on $z$ so we can use all the data in the construction of the
plug-in estimator. That estimator is given by
\begin{equation}
\hat{\beta}=\int\omega(x)\hat{\gamma}(x)dx=v^{T}\hat{\delta}\text{, }v=\int\omega(x)p(x)dx,\hat{\delta}=[\sum_{i=1}^{n}p(x_{i})p(x_{i})^{T}]^{-}\sum_{i=1}^{n}p(x_{i})y_{i}. \label{adplug}
\end{equation}
As shown in equation (\ref{adriesz}), $\alpha_{0}(x)=f_{0}(x)^{-1}\omega(x),$
where $f_{0}(x)$ is the pdf of $x$. Also here $v(z)=v$ so the estimator of
$\alpha_{0}(x)$ is $p(x)^{T}\tilde{\Sigma}_{\ell}^{-}v.$ The DCDR estimator is
then
\begin{equation}
\tilde{\beta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\{\int\omega(x)\hat{\gamma}_{\ell}(x)dx+\left[ p(x_{i})^{T}\tilde{\Sigma}_{\ell}^{-}v\right] [y_{i}-\hat{\gamma}_{\ell}(x_{i})]\}. \label{adrob}
\end{equation}
Both the plug-in and the DCDR estimators depend on the integral $v=\int\omega(x)p(x)dx.$ Generally this vector of integrals will not exist in closed
form so that construction of these estimators will require numerical
computation or estimation of $v$, such as by simulation.
We now impose some specific conditions on $p(x)$.
\bigskip
\textsc{Assumption 3:} $p(x)=aq(w)$\textit{ where i) the support of }$w_{i
$\textit{ is }$[0,1]^{r}$\textit{, }$w_{i}$\textit{ is continuously
distributed with bounded pdf that is bounded away from zero; ii)
$q(w)$\textit{ are tensor product b-splines of order }$\kappa$\textit{ with
knot spacing approximately inversely proportional to the number of knots; iii)
$q(w)$\textit{ is normalized so that }$\lambda_{\min}(E[q(w_{i})q(w_{i
)^{T}])\geq C>0$\textit{ and }$\sup_{w\in\lbrack0,1]^{r}}\left\Vert
q(w)\right\Vert \leq C\sqrt{K};$\textit{ iv) }$a_{i}$\textit{ is bounded and
}$E[a_{i}^{2}|w_{i}]$\textit{ is bounded away from zero.}
\bigskip
Under condition i) it is known that there is a normalization such that
condition iii) is satisfied, e.g. as in Newey (1997). To control the bias of
the estimator we require that the true regression function $\gamma_{0}(x)$ and
the auxiliary function $\alpha_{0}(x)$ each be in a Holder class of functions.
We define a function $g(x)$ to be Holder of order $s$ if there is a constant
$C$ such that $g(x)$ is continuously differentiable of order $\bar{s}=int[s]$
and each of its $\bar{s}$ partial derivatives $\nabla^{\bar{s}}g(x)$ satisfies
$\left\vert \nabla^{\bar{s}}g(\tilde{x})-\nabla^{\bar{s}}g(x)\right\vert \leq
C\left\Vert \tilde{x}-x\right\Vert ^{s-\bar{s}}.$
\bigskip
\textsc{Assumption 4:} $\gamma_{0}(x)$\textit{ and }$\alpha_{0}(x)$\textit{
are Holder of order }$s_{\gamma}$\textit{ and }$s_{\alpha}$\textit{
respectively.}
\bigskip
This condition implies that the population least squares approximations to
$\gamma_{0}(x)$ and $\alpha_{0}(x)$ converge at certain rates. Let
$\zeta_{\gamma}=\min\{1+\kappa,s_{\gamma}\}/r,$ $\zeta_{\alpha}=\min\{1+\kappa,s_{\alpha}\}/r,$ $\Sigma=E[p(x_{i})p(x_{i})^{T}]$, $\delta=\Sigma^{-1}E[p(x_{i})\gamma_{0}(x_{i})],$ $\gamma_{K}(x)=p(x)^{T}\delta,$ $\delta_{\alpha}=\Sigma^{-1}E[p(x_{i})\alpha_{0}(x_{i})],$ $\alpha_{K}(x)=p(x)^{T}\delta_{\alpha}.$ Then standard approximation theory for splines gives
\begin{align*}
E[\{\gamma_{0}(x_{i})-\gamma_{K}(x_{i})\}^{2}] & =O(K^{-2\zeta_{\gamma}}),\sup_{x\in\lbrack0,1]^{r}}\left\vert \gamma_{0}(x)-\gamma_{K}(x)\right\vert =O(K^{-\zeta_{\gamma}}),\\
E[\{\alpha_{0}(x_{i})-\alpha_{K}(x_{i})\}^{2}] & =O(K^{-2\zeta_{\alpha}}).
\end{align*}
We will use these results to derive the rates at which certain remainders
converge to zero.
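For concreteness, consider a worked example of ours (not part of the formal development): with $r=2$, cubic splines ($\kappa=3$), $s_{\gamma}=2$, and $s_{\alpha}=1$,
\[
\zeta_{\gamma}=\frac{\min\{1+\kappa,s_{\gamma}\}}{r}=\frac{\min\{4,2\}}{2}=1,\text{ \ }\zeta_{\alpha}=\frac{\min\{4,1\}}{2}=\frac{1}{2},
\]
so that $E[\{\gamma_{0}(x_{i})-\gamma_{K}(x_{i})\}^{2}]=O(K^{-2})$ and $E[\{\alpha_{0}(x_{i})-\alpha_{K}(x_{i})\}^{2}]=O(K^{-1})$.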
We also impose the following condition:
\bigskip
\textsc{Assumption 5: }$Var(y_{i}|x_{i})\leq C,$ $K\longrightarrow\infty$,
\textit{and} $K\ln(K)/n\longrightarrow0$.
\bigskip
These are standard conditions for series estimators of conditional
expectations. A bounded conditional variance for $y_{i}$ helps bound the
variance of series estimators. The upper bound on the rate at which $K$ grows
is slightly stronger than $K/n\longrightarrow0$. This upper bound on $K$
allows us to apply the Rudelson (1999) law of large numbers for symmetric
matrices to show that the various second moment matrices of $p(x)$ converge in
probability. Another condition we impose is:
\bigskip
\textsc{Assumption 6:} $\lambda_{\max}(E[v(z_{i})v(z_{i})^{T}])\leq Cd_{K}$ \textit{and }$\left\{ E[\{m(z_{i},\gamma_{K})-m(z_{i},\gamma_{0})\}^{2}]\right\} ^{1/2}=O(K^{-\zeta_{m}}).$
\bigskip
The first condition will be satisfied with $d_{K}=1$ in the examples under
specific regularity conditions detailed below. The second condition gives a
rate for the mean square error convergence of $m(z,\gamma_{K})-m(z,\gamma_{0})$ as $K$ grows. In all of the examples this rate will be $\zeta_{m}=\zeta_{\gamma}.$ In other examples, including those where $m(z,\gamma)$ and $v(z)$ depend on derivatives with respect to $x,$ we will have $d_{K}$ growing with $K$ and $\zeta_{m}<\zeta_{\gamma}.$
For the statement of the results to follow it is convenient to work with the remainder term
\[
\bar{\Delta}_{n}^{\ast}=\sqrt{n}K^{-\zeta_{\gamma}-\zeta_{\alpha}}+K^{-\zeta_{\gamma}}+K^{-\zeta_{\alpha}}+\sqrt{\frac{K}{n}}.
\]
This remainder coincides with the fast remainder $\Delta_{n}^{\ast}$ when the
spline order is high enough with $\kappa\geq\max\{s_{\gamma},s_{\alpha}\}-1.$
The only cases where it would not be possible to choose such a $\kappa$ are
for the Haar basis where $\kappa=0.$
\bigskip
\section{The Plug-in Estimator}
In this Section we derive bounds on the size of remainders for the plug-in
estimator. Some bounds are given for general plug-in estimators, some for
plug-ins that are series regression with Haar splines, and some for other
splines. We begin with a result that applies to all plug-ins. We drop the CF
designation because all the estimators from this point on will use cross-fitting.
The cross-fit form of the plug-in estimator allows us to partly characterize
its properties under weak conditions on a general plug-in estimator that need
not be a series regression. This characterization relies on independence of
$\hat{\gamma}_{\ell}$ from the observations in $I_{\ell}$ to obtain relatively
simple stochastic equicontinuity remainders. Also, this result accounts for
the overlap across groups in observations used to form $\hat{\gamma}_{\ell}$.
Let $\mathcal{A}_{n}$ denote an event that occurs with probability approaching
one. For example, $\mathcal{A}_{n}$ could include the set of data points where
$\hat{\Sigma}_{\ell}$ is nonsingular for each $\ell.$
\bigskip
\textsc{Lemma 1:} \textit{If Assumptions 1 and 2 are satisfied and there is }$\Delta_{n}^{m}$ \textit{such that}
\[
1(\mathcal{A}_{n})\left\{ \int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)\right\} ^{1/2}=O_{p}(\Delta_{n}^{m}),\left( \ell=1,...,L\right) ,
\]
\textit{then for }$\bar{m}(\gamma)=\int m(z,\gamma)F_{0}(dz),$
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[m(z_{i},\gamma_{0})-\beta_{0}]+\sqrt{n}\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{n}[\bar{m}(\hat{\gamma}_{\ell})-\beta_{0}]+O_{p}(\Delta_{n}^{m}).
\]
\textit{If in addition there is }$\Delta_{n}^{\phi}$ \textit{such that for each }$\left( \ell=1,...,L\right) ,$
\[
\sqrt{\hat{n}_{\ell}}[\bar{m}(\hat{\gamma}_{\ell})-\beta_{0}]=\frac{1}{\sqrt{\hat{n}_{\ell}}}\sum_{i\notin I_{\ell}}\alpha_{0}(x_{i})[y_{i}-\gamma_{0}(x_{i})]+O_{p}(\Delta_{n}^{\phi}),
\]
\textit{then for }$\psi(z)=m(z,\gamma_{0})-\beta_{0}+\alpha_{0}(x)[y-\gamma_{0}(x)],$
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\Delta_{n}^{m}+\Delta_{n}^{\phi}+n^{-1}).
\]
\bigskip
This result gives a decomposition of remainder bounds into two kinds. The
first $\Delta_{n}^{m}$ is a stochastic equicontinuity bound that has the
simple mean-square form given here because of the sample splitting. The second
$\Delta_{n}^{\phi}$ is a bound that comes from the asymptotically linear
expansion of the linear functional estimator $\bar{m}(\hat{\gamma}_{\ell})$.
For general b-splines we can apply Ichimura and Newey (2017) to obtain
$\Delta_{n}^{\phi}$. For zero order splines we give here sharper remainder bounds.
For series estimators the stochastic equicontinuity remainder bound
$\Delta_{n}^{m}$ will be
\[
\Delta_{n}^{m}=\sqrt{(d_{K}+1)\frac{K}{n}}+K^{-\zeta_{m}},
\]
where $d_{K}$ and $\zeta_{m}$ are as given in Assumption 6. As mentioned above, in the examples in this paper $d_{K}\leq C$ and $\zeta_{m}=\zeta_{\gamma}$. Here we can take $\Delta_{n}^{m}\leq C\bar{\Delta}_{n}^{\ast}$, so the stochastic equicontinuity remainder bound is the same size as $\bar{\Delta}_{n}^{\ast}$.
Our next result gives remainder bounds for the Haar basis.
\bigskip
\textsc{Theorem 2:} \textit{If Assumptions 1-6 are satisfied, }$\kappa=0,$\textit{ and }$K[\ln(n)]^{2}/n\longrightarrow0$ \textit{then}
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}+\Delta_{n}^{m}+K^{-\zeta_{\gamma}}\ln(n)).
\]
\textit{If in addition }$d_{K}$\textit{ is bounded as a function of }$K$\textit{ and }$\zeta_{m}=\zeta_{\gamma}$\textit{ then }$\Delta_{n}^{m}\leq C\bar{\Delta}_{n}^{\ast}$\textit{.}
\bigskip
Here we see that for a Haar basis the order of the remainder term for the
plug-in estimator is a sum of the stochastic equicontinuity term $\Delta
_{n}^{m}$ and $\bar{\Delta}_{n}^{\ast}$, with $K^{-\zeta_{\gamma}}\ln(n)$
being the size of the fast remainder up to $\ln(n).$ In the examples and other
settings where $d_{K}$ is bounded and $\zeta_{m}=\zeta_{\gamma}$ the
$\Delta_{n}^{m}$ remainder will just be of order $\bar{\Delta}_{n}^{\ast}$.
The following result states conditions for the examples.
\bigskip
\textsc{Corollary 3:} \textit{Suppose that Assumptions 1-3 and 5 are satisfied, }$\kappa=0$\textit{, }$K[\ln(n)]^{2}/n\longrightarrow0,$ \textit{and }$\gamma_{0}(x)$\textit{ is Holder of order }$s_{\gamma}.$\textit{ If either i) }$\hat{\beta}$\textit{ is the expected conditional covariance estimator, }$E[a_{i}|x_{i}=x]$\textit{ is Holder of order }$s_{\alpha}$\textit{, }$E[a_{i}^{2}|x_{i}]$\textit{ is bounded, or ii) }$\hat{\beta}$\textit{ is the missing data mean estimator, }$\Pr(a_{i}=1|x_{i})$\textit{ is bounded away from zero and is Holder of order }$s_{\alpha},$\textit{ or iii) }$\hat{\beta}$\textit{ is the average derivative estimator, }$\omega(x)$\textit{ and }$f_{0}(x)$\textit{ are Holder of order }$s_{\alpha}$\textit{, and }$f_{0}(x)$\textit{ is bounded away from zero on the set where }$\omega(x)>0,$\textit{ then}
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}+K^{-\zeta_{\gamma}}\ln(n)).
\]
\bigskip
The remainder bound means that the plug-in estimator can attain root-n consistency under minimal conditions when the dimension $r$ is small enough. There will exist $K$ such that $\bar{\Delta}_{n}^{\ast}$ goes to zero if and only if
\begin{equation}
1/2<\zeta_{\gamma}+\zeta_{\alpha}=\frac{\min\{1,s_{\gamma}\}+\min\{1,s_{\alpha}\}}{r}. \label{min cond}
\end{equation}
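To see where this condition comes from (a short verification of ours), set $K=n^{c}$ with $0<c<1$. The dominant terms of $\bar{\Delta}_{n}^{\ast}$ are then $\sqrt{n}K^{-\zeta_{\gamma}-\zeta_{\alpha}}=n^{1/2-c(\zeta_{\gamma}+\zeta_{\alpha})}$ and $\sqrt{K/n}=n^{(c-1)/2}$, and both exponents can be made negative for some $c\in(0,1)$ exactly when $\zeta_{\gamma}+\zeta_{\alpha}>1/2$; with $\kappa=0$ we have $\zeta_{\gamma}+\zeta_{\alpha}=[\min\{1,s_{\gamma}\}+\min\{1,s_{\alpha}\}]/r$, giving equation (\ref{min cond}).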
This condition can be satisfied for $r<4$ but not for $r\geq4.$ For $r=1$ this
condition will be satisfied if and only if
\[
s_{\gamma}+s_{\alpha}>\frac{1}{2},
\]
which is the minimal condition of Robins et al.(2009) for existence of a
semiparametric efficient estimator for the expected conditional covariance and
missing data parameters when $r=1$. For $r=2$ we note that
\[
\min\{1,s_{\gamma}\}+\min\{1,s_{\alpha}\}\geq1\text{ if and only if }s_{\gamma}+s_{\alpha}\geq1.
\]
For $r=2$ equation (\ref{min cond}) is $\min\{1,s_{\gamma}\}+\min
\{1,s_{\alpha}\}>1$, which requires both $s_{\alpha}>0$ and $s_{\gamma}>0$ and
so is slightly stronger than the Robins et al.(2009) condition $s_{\gamma
}+s_{\alpha}>1$. For $r=3$ the situation is more complicated. Equation
(\ref{min cond}) is stronger than the corresponding condition $s_{\gamma
}+s_{\alpha}>3/2$ of Robins et al.(2009), although it is the same for the set
of $(s_{\gamma},s_{\alpha})$ where $s_{\gamma}\leq1$ and $s_{\alpha}\leq1.$
Along the diagonal where $s_{\alpha}=s_{\gamma}$ the two conditions coincide, both reducing to $s_{\gamma}>3/4.$
The limited nature of these results is associated with the Haar basis, which
limits the degree to which smoothness of the underlying function results in a
faster approximation rate. If Theorem 2 and Corollary 3 could be extended to
other, higher order b-splines, this limitation could be avoided. For the
present we are only able to do this for the doubly robust estimator of a
partially linear projection, as discussed in the next Section.
There is a key result that allows us to obtain the remainder bound $\bar{\Delta}_{n}^{\ast}$ in Theorem 2. Let $\hat{h}_{2}=\sum_{i=1}^{n}p(x_{i})[\gamma_{0}(x_{i})-\gamma_{K}(x_{i})]/n$, $\hat{\Sigma}=\sum_{i=1}^{n}p(x_{i})p(x_{i})^{T}/n$, and $\Sigma=E[p(x_{i})p(x_{i})^{T}].$ We show in the Appendix that for the Haar basis
\begin{equation}
\lambda_{\max}(E[(\Sigma-\hat{\Sigma})^{j}\hat{h}_{2}\hat{h}_{2}^{T}(\Sigma-\hat{\Sigma})^{j}])\leq\frac{K^{-2\zeta_{\gamma}}}{n}\left( \frac{CK}{n}\right) ^{j}. \label{key result}
\end{equation}
If b-spline bases other than Haar also satisfied this condition then we could
obtain results analogous to Theorem 2 and Corollary 3 for these bases. We do
not yet know if other bases satisfy this condition. The Haar basis is convenient because $p(x)^{T}p(x)$ is piecewise constant. Cattaneo and Farrell
(2013) exploited other special properties of the Haar basis to obtain sharp
uniform nonparametric rates.
For b-splines of any order we can obtain remainder rates by combining Lemma 1
with Theorem 8 of Ichimura and Newey (2017).
\bigskip
\textsc{Theorem 4:} \textit{If Assumptions 1-6 are satisfied} \textit{then}
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}+\Delta_{n}^{m}+\bar{\Delta}_{n}),\bar{\Delta}_{n}=\left( \frac{K\ln K}{n}\right) ^{1/2}K^{(1/2)-\zeta_{\gamma}}.
\]
\textit{If in addition }$d_{K}$\textit{ is bounded as a function of }$K$\textit{ and }$\zeta_{m}=\zeta_{\gamma}$\textit{ then }$\Delta_{n}^{m}\leq C\bar{\Delta}_{n}^{\ast}$\textit{.}
\bigskip
Here we see that the remainder bound for splines with $\kappa>0$ has an
additional term $\bar{\Delta}_{n}$. When $\zeta_{\gamma}$ is large enough,
i.e. $\gamma_{0}(x)$ is smooth enough and the order of the spline is big
enough, so that $\zeta_{\gamma}>1/2,$ the additional $\bar{\Delta}_{n}$ will
be no larger than $\bar{\Delta}_{n}^{\ast}.$ Also, when $\zeta_{\gamma}>1/2$
the condition of Robins et al.(2009) for semiparametric efficient estimation
is met for the expected conditional covariance and missing data examples for
any $\zeta_{\alpha}$. Thus, when $\gamma_{0}(x)$ is smooth enough to meet the Robins et al.(2009) condition without imposing any smoothness on $\alpha_{0}(x)$ the plug-in estimator will have the remainder bound $\bar{\Delta}_{n}^{\ast}.$
More generally there will exist a $K$ such that $\bar{\Delta}_{n}+\bar{\Delta
}_{n}^{\ast}$ goes to zero if and only if
\begin{equation}
2\min\{\kappa+1,s_{\gamma}\}+\min\{\kappa+1,s_{\alpha}\}>r. \label{minimal}
\end{equation}
This condition is slightly stronger than that of Robins et al.(2009), which is $2s_{\gamma}+2s_{\alpha}>r.$ Also, the remainder may go to zero when $K$ is chosen to maximize the rate at which the mean square error of $\hat{\gamma}(x)$ goes to zero. Setting $K^{-2\zeta_{\gamma}}$ proportional to $K/n$ is such a choice of $K$. Here the remainder term goes to zero for $\min\{\kappa+1,s_{\gamma}\}>r/\left[ 2(1+r)\right] $ and $\min\{\kappa+1,s_{\alpha}\}>r/2,$ a stronger condition for $s_{\gamma}$ and the same condition for $s_{\alpha}$ as would hold if the remainder were $\bar{\Delta}_{n}^{\ast}$.
\section{Partially Linear Projection}
In this Section we consider a series estimator of partially linear projection
coefficients. We give this example special attention because the DCDR
estimator will have a remainder bound that is only $\bar{\Delta}_{n}^{\ast}$.
The remainder bounds we find for other doubly robust estimators may be larger.
What appears to make the partially linear projection special in this respect
is that $\alpha_{0}(x)$ is a conditional expectation of an observed variable.
In other cases where $\alpha_{0}(x)$ is not a conditional expectation we do
not know if the remainder bound will be $\bar{\Delta}_{n}^{\ast}$ for bases
other than Haar.
The parameter vector of interest in this Section is
\[
\beta_{0}=\left( E[\{a_{i}-E[a_{i}|x_{i}]\}a_{i}^{T}]\right) ^{-1}E[\{a_{i}-E[a_{i}|x_{i}]\}y_{i}].
\]
This vector $\beta_{0}$ can be thought of as the coefficients of $a_{i}$ in a projection of $y_{i}$ on the set of functions of the form $a_{i}^{T}\beta+\lambda(x_{i})$ that have finite mean square. Note that this definition of $\beta_{0}$ places no substantive restrictions on the distribution of the data, unlike the conditional expectation partially linear model where $E[y_{i}|a_{i},x_{i}]=a_{i}^{T}\beta_{0}+\xi_{0}(x_{i}).$
The object $\beta_{0}$ is of interest in a treatment effects model where $a_{i}$ is a binary treatment, $y_{i}$ is the observed response, and $x_{i}$ are covariates. Under an ignorability condition that the outcomes with and without treatment are mean independent of $a_{i}$ conditional on $x_{i}$, $E[y_{i}|a_{i}=1,x_{i}]-E[y_{i}|a_{i}=0,x_{i}]$ is the average treatment effect conditional on $x_{i}$. Also, for $\pi_{i}=\Pr(a_{i}=1|x_{i}),$
\[
\beta_{0}=\frac{E[\pi_{i}(1-\pi_{i})\{E[y_{i}|a_{i}=1,x_{i}]-E[y_{i}|a_{i}=0,x_{i}]\}]}{E[\pi_{i}(1-\pi_{i})]}.
\]
Here we have the known interpretation of $\beta_{0}$ as a weighted average of conditional average treatment effects, with weights $\pi_{i}(1-\pi_{i})/E[\pi_{i}(1-\pi_{i})].$
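This identity is easy to confirm numerically. The following simulation (our own check, with an arbitrary data generating process) compares the projection coefficient, computed with the known propensity score, to the $\pi_{i}(1-\pi_{i})$-weighted average of conditional treatment effects.
\begin{verbatim}
# Sketch: Monte Carlo check that the partially linear projection coefficient
# equals the pi(1-pi)-weighted average of conditional treatment effects.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(size=n)
pi = 0.2 + 0.6 * x                 # propensity score Pr(a = 1 | x)
tau = 1.0 + np.sin(3 * x)          # conditional average treatment effect
a = rng.binomial(1, pi)
y = tau * a + np.cos(2 * x) + rng.normal(size=n)

resid_a = a - pi                   # a_i - E[a_i | x_i], known here
beta_proj = np.mean(resid_a * y) / np.mean(resid_a * a)
beta_wate = np.mean(pi * (1 - pi) * tau) / np.mean(pi * (1 - pi))
print(beta_proj, beta_wate)        # agree up to simulation error
\end{verbatim}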
It is straightforward to construct a DCDR estimator of $\beta_{0}$. Let
$\gamma_{0}(x_{i})=E[y_{i}|x_{i}]$ and $\alpha_{0}(x_{i})=-E[a_{i}|x_{i}]$ as
before, except that $a_{i}$ may now be a vector. Also let $I_{\ell}$ denote
the index set for the $\ell^{th}$ group, and $\hat{I}_{\ell}$ and $\tilde
{I}_{\ell}$ the index sets for the observations used to obtain $\hat{\gamma
}_{\ell}$ and $\tilde{\alpha}_{\ell}$ respectively. For any function $g(z)$
let
\[
\bar{F}\{g(z)\}=\frac{1}{\bar{n}_{\ell}}\sum_{i\in I_{\ell}}g(z_{i}),\hat{F}\{g(z)\}=\frac{1}{\hat{n}_{\ell}}\sum_{i\in\hat{I}_{\ell}}g(z_{i}),\tilde{F}\{g(z)\}=\frac{1}{\tilde{n}_{\ell}}\sum_{i\in\tilde{I}_{\ell}}g(z_{i}).
\]
These represent sample averages over each of the groups of observations. Let
$\hat{\gamma}_{\ell}(x),$ $\hat{\alpha}_{\ell}(x),$ and $\tilde{\alpha}_{\ell
}(x)$ be series estimators of $\gamma_{0}(x)$ and $\alpha_{0}(x)$ given by
\begin{align*}
\hat{\gamma}_{\ell}(x) & =p(x)^{T}\hat{\delta}_{\ell},\hat{\alpha}_{\ell}(x)=p(x)^{T}\hat{\delta}_{\ell\alpha},\tilde{\alpha}_{\ell}(x)=p(x)^{T}\tilde{\delta}_{\ell\alpha},\\
\hat{\delta}_{\ell} & =\hat{\Sigma}^{-}\hat{h},\text{ }\hat{\delta}_{\ell\alpha}=\hat{\Sigma}^{-}\hat{h}_{\alpha},\text{ }\tilde{\delta}_{\ell\alpha}=\tilde{\Sigma}^{-}\tilde{h}_{\alpha},\text{ }\hat{\Sigma}=\hat{F}\{p(x)p(x)^{T}\},\text{ }\tilde{\Sigma}=\tilde{F}\{p(x)p(x)^{T}\},\\
\hat{h} & =\hat{F}\{p(x)y\},\text{ }\hat{h}_{\alpha}=\hat{F}\{p(x)a\},\text{ }\tilde{h}_{\alpha}=\tilde{F}\{p(x)a\}.
\end{align*}
The estimator we consider is
\begin{equation}
\tilde{\beta}=\left( \sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}-\tilde{\alpha}_{\ell}(x_{i})][a_{i}-\hat{\alpha}_{\ell}(x_{i})]^{T}\right) ^{-1}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}-\tilde{\alpha}_{\ell}(x_{i})][y_{i}-\hat{\gamma}_{\ell}(x_{i})]. \label{drpartlin}
\end{equation}
This estimator can be thought of as an instrumental variables estimator with left-hand-side variable $y_{i}-\hat{\gamma}_{\ell}(x_{i})$, right-hand-side variables $a_{i}-\hat{\alpha}_{\ell}(x_{i}),$ and instruments $a_{i}-\tilde{\alpha}_{\ell}(x_{i}).$ Here the instrumental variables form is used to implement the cross-fitting and not to correct for endogeneity. This form means that every element of the matrix that is inverted and of the vector it multiplies is a DCDR estimator of an expected conditional covariance like that described earlier.
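A minimal numerical sketch of this instrumental variables form (ours, with a polynomial stand-in basis and scalar $a_{i}$; all names are assumptions) is:
\begin{verbatim}
# Sketch: DCDR partially linear estimator in instrumental-variables form.
import numpy as np

def basis(x, K=6):
    return np.column_stack([x**k for k in range(K)])

def dcdr_partially_linear(y, a, x, L=3, K=6, seed=0):
    n = len(y)
    groups = np.array_split(np.random.default_rng(seed).permutation(n), L)
    num = den = 0.0
    for ell in range(L):
        I = groups[ell]
        out = np.concatenate([g for m, g in enumerate(groups) if m != ell])
        I_hat, I_til = out[:len(out)//2], out[len(out)//2:]  # distinct subsamples
        P_hat, P_til = basis(x[I_hat], K), basis(x[I_til], K)
        d_gam = np.linalg.lstsq(P_hat, y[I_hat], rcond=None)[0]   # gamma-hat
        d_ahat = np.linalg.lstsq(P_hat, a[I_hat], rcond=None)[0]  # alpha-hat
        d_atil = np.linalg.lstsq(P_til, a[I_til], rcond=None)[0]  # alpha-tilde
        PI = basis(x[I], K)
        u_til = a[I] - PI @ d_atil   # instrument: a - alpha-tilde
        u_hat = a[I] - PI @ d_ahat   # regressor: a - alpha-hat
        e_hat = y[I] - PI @ d_gam    # left-hand side: y - gamma-hat
        num += np.sum(u_til * e_hat)
        den += np.sum(u_til * u_hat)
    return num / den
\end{verbatim}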
\bigskip
\textsc{Theorem 5: }\textit{If Assumptions 1 - 3 and 5 are satisfied, }$\lambda_{0}(x)=E[y_{i}-a_{i}^{T}\beta_{0}|x_{i}=x]$\textit{ is Holder of order }$s_{\gamma}$\textit{ and each component of }$E[a_{i}|x_{i}=x]$\textit{ is Holder of order }$s_{\alpha}$\textit{, }$H=E[Var(a_{i}|x_{i})]$\textit{ exists and is nonsingular, and }$\Omega=E[\{a_{i}-E[a_{i}|x_{i}]\}\{a_{i}-E[a_{i}|x_{i}]\}^{T}\varepsilon_{i}^{2}]$\textit{ exists, then for }$\varepsilon_{i}=y_{i}-a_{i}^{T}\beta_{0}-\lambda_{0}(x_{i})$\textit{ and }$\psi(z_{i})=H^{-1}(a_{i}-E[a_{i}|x_{i}])\varepsilon_{i},$
\[
\sqrt{n}(\tilde{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}).
\]
\bigskip
The regularity conditions here are somewhat stronger than those of Donald and
Newey (1994), who do not require any restrictions on the marginal distribution
of $x_{i}$ nor use any sample splitting. This strengthening is useful to
achieve the fast remainder for partially linear projections rather than for the coefficients $\beta_{0}$ in the conditional mean model $E[y_{i}|a_{i},x_{i}]=a_{i}^{T}\beta_{0}+\lambda_{0}(x_{i})$ of Donald and Newey (1994). The upper bound on the rate at which $K$ can grow is slightly stricter
than in Donald and Newey (1994) due to the presence of the $\ln(K)$ term in
Assumption 5. Thus, under somewhat stronger conditions than those of Donald
and Newey (1994) the DCDR estimator of a partially linear projection has a
fast remainder just as in Donald and Newey (1994). Consequently, the estimator
will be root-n consistent under minimal conditions.
When the Robins et al. (2009) minimal condition $(s_{\gamma}+s_{\alpha
})/r>1/2$ holds, consider a spline with $\kappa>\max\{s_{\gamma},s_{\alpha
}\}-1$, so that $\zeta_{\gamma}+\zeta_{\alpha}=(s_{\gamma}+s_{\alpha})/r>1/2$.
Then there will exist a $K$ such that $\bar{\Delta}_{n}^{\ast}\longrightarrow0$ and hence $\tilde{\beta}$ will be semiparametric efficient. Thus we see that the DCDR estimator $\tilde{\beta}$ of equation (\ref{drpartlin})\thinspace will be semiparametric efficient under nearly minimal conditions and has a fast remainder term.
\bigskip
\section{The Doubly Robust Estimator}
In this Section we show that the DCDR estimator has improved properties relative to the plug-in estimator, in the sense that the remainder bounds are smaller for the DCDR estimator. We have not yet been able to obtain the fast remainder for the doubly robust estimator for general splines, for the same reasons as for plug-in estimators.
Before giving results for series estimators we give a result that applies to
any doubly robust estimator of a linear functional. Let $\mathcal{A}_{n}$
denote an event that occurs with probability approaching one. For example,
$\mathcal{A}_{n}$ could include the set of data points where $\hat{\Sigma
}_{\ell}$ is nonsingular.
\bigskip
\textsc{Lemma 6:} \textit{If Assumptions 1 and 2 are satisfied, }$\hat{\gamma}_{\ell}(x)$\textit{ and }$\tilde{\alpha}_{\ell}(x)$\textit{ do not use observations in }$I_{\ell}$\textit{, }$Var(y_{i}|x_{i})$\textit{ is bounded, and there are }$\Delta_{n}^{m},$ $\Delta_{n}^{\gamma}$, \textit{and} $\Delta_{n}^{\alpha}$ \textit{such that for each }$\left( \ell=1,...,L\right) ,$
\begin{align*}
1(\mathcal{A}_{n})\left\{ \int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)\right\} ^{1/2} & =O_{p}(\Delta_{n}^{m}),\\
1(\mathcal{A}_{n})\left\{ \int\alpha_{0}(x)^{2}[\hat{\gamma}_{\ell}(x)-\gamma_{0}(x)]^{2}F_{0}(dz)\right\} ^{1/2} & =O_{p}(\Delta_{n}^{\gamma}),\\
1(\mathcal{A}_{n})\left\{ \int[\tilde{\alpha}_{\ell}(x)-\alpha_{0}(x)]^{2}F_{0}(dz)\right\} ^{1/2} & =O_{p}(\Delta_{n}^{\alpha}),
\end{align*}
\textit{then}
\[
\sqrt{n}(\tilde{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})-\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[\tilde{\alpha}_{\ell}(x_{i})-\alpha_{0}(x_{i})][\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]+O_{p}(\Delta_{n}^{m}+\Delta_{n}^{\gamma}+\Delta_{n}^{\alpha}).
\]
\bigskip
This result does not require that $\hat{\gamma}_{\ell}(x)$ and $\tilde{\alpha}_{\ell}(x)$ be computed from different samples. It only uses the sample splitting in averaging over different observations than those used to construct $\hat{\gamma}_{\ell}$ and $\tilde{\alpha}_{\ell}.$ Also, it is known from Newey, Hsieh, and Robins (1998, 2004) and Chernozhukov et al. (2016) that adding the adjustment term to the plug-in estimator makes the remainder second order. The conclusion of Lemma 6 gives an explicit form of that result. Under weak conditions that only involve mean-square convergence the doubly robust estimator has a remainder that is the sum of three stochastic equicontinuity remainders and the quadratic, split sample remainder involving the product of the estimation remainders for the two nonparametric estimators $\hat{\gamma}$ and $\tilde{\alpha}$.
For series estimators the DCDR estimator will have $\bar{\Delta}_{n}^{\ast}$ as its primary remainder for the Haar basis.
\bigskip
\textsc{Theorem 7:} \textit{If Assumptions 1-6 are satisfied, }$\kappa=0$\textit{, and }$K[\ln(n)]^{2}/n\longrightarrow0$ \textit{then}
\[
\sqrt{n}(\tilde{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}+\Delta_{n}^{m}).
\]
\textit{If in addition }$d_{K}$\textit{ is bounded as a function of }$K$\textit{ and }$\zeta_{m}=\zeta_{\gamma}$\textit{ then }$\Delta_{n}^{m}\leq C\bar{\Delta}_{n}^{\ast}$\textit{.}
\bigskip
One improvement of the DCDR estimator over the plug-in estimator is that the remainder no longer contains the $K^{-\zeta_{\gamma}}\ln(n)$ term. The elimination of this term is a direct result of the doubly robust adjustment, which leaves the DCDR estimator with a smaller remainder than the plug-in estimator.
For splines of order $\kappa>0$ we can obtain a result for the DCDR estimator
that improves on the plug-in remainder bound.
\bigskip
\textsc{Theorem 8:} \textit{If Assumptions 1-6 are satisfied} \textit{then}
\[
\sqrt{n}(\tilde{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}+\Delta_{n}^{m}+\tilde{\Delta}_{n}),\tilde{\Delta}_{n}=\sqrt{\frac{K^{3}\left[ \ln(K)\right] ^{2}(1+d_{K})}{n^{3}}}K^{(1/2)-\zeta_{\gamma}}.
\]
\textit{If in addition }$d_{K}$\textit{ is bounded as a function of }$K$\textit{ and }$\zeta_{m}=\zeta_{\gamma}$\textit{ then }$\Delta_{n}^{m}\leq C\bar{\Delta}_{n}^{\ast}$\textit{.}
\bigskip
Here we see that the remainder bound for the DCDR estimator will generally be smaller than the remainder bound for the plug-in estimator because the term $K\ln(K)/n$ is raised to the 3/2 power rather than the 1/2 power. Here it turns out that there will exist a $K$ such that all of the remainder terms go to zero if
\[
4\zeta_{\gamma}+3\zeta_{\alpha}\geq2.
\]
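A quick way to see this (our verification, ignoring logarithmic factors): with $K=n^{c}$, $\tilde{\Delta}_{n}\approx n^{(3c-3)/2+c(1/2-\zeta_{\gamma})}$ goes to zero when $c(2-\zeta_{\gamma})<3/2$, while $\sqrt{n}K^{-\zeta_{\gamma}-\zeta_{\alpha}}\longrightarrow0$ requires $c>1/[2(\zeta_{\gamma}+\zeta_{\alpha})]$. A $c\in(0,1)$ satisfying both exists, up to the boundary case, exactly when $4-2\zeta_{\gamma}<6(\zeta_{\gamma}+\zeta_{\alpha})$, that is $4\zeta_{\gamma}+3\zeta_{\alpha}>2$.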
For example, if $s_{\gamma}=s_{\alpha}$ and $\kappa\geq\max\{s_{\gamma
},s_{\alpha}\}-1,$ this requires $s_{\gamma}>2r/7,$ which is only slightly
stronger than the $s_{\gamma}>r/4$ condition of Robins et al.(2009) that is
required for existence of a semiparametric efficient estimator. Also, existence of $K$ such that the remainder will be of size no larger than $\bar{\Delta}_{n}^{\ast}$ requires
\[
2\zeta_{\gamma}+\zeta_{\alpha}\geq1.
\]
For example, if $\zeta_{\gamma}=\zeta_{\alpha}$ this requires $\zeta_{\gamma
}>1/3,$ which is weaker than the condition $\zeta_{\gamma}>1/2$ for the
remainder for the plug-in estimator. In these ways the DCDR estimator improves
on the plug-in estimator.
\bigskip
\section{Appendix}
This Appendix gives the proofs of the results in the body of the paper. We
begin with the proofs of Lemma 1 and Lemma 6 because they are not restricted
to series estimators.
\bigskip
\textbf{Proof of Lemma 1:} Define $\hat{\Delta}_{i\ell}=m(z_{i},\hat{\gamma}_{\ell})-m(z_{i},\gamma_{0})-\bar{m}(\hat{\gamma}_{\ell})+\beta_{0}$ for $i\in
I_{\ell}$ and let $Z(I_{\ell})^{c}$ denote the set of observations $z_{i}$ for
$i\notin I_{\ell}$. Note that $E[\hat{\Delta}_{i\ell}|Z(I_{\ell})^{c}]=0$ by
construction for $i\in I_{\ell}.$ Also by independence of the observations,
$E[\hat{\Delta}_{i\ell}\hat{\Delta}_{j\ell}|Z(I_{\ell})^{c}]=0$ for $i,j\in
I_{\ell}.$ Furthermore, $E[\hat{\Delta}_{i\ell}^{2}|Z(I_{\ell})^{c}]\leq
\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)=O_{p}((\Delta
_{n}^{m})^{2})$ for $i\in I_{\ell}$. Then we have
\[
E[\left( \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}\right)
^{2}|Z(I_{\ell})^{c}]=\frac{1}{n}E[\left( \sum_{i\in I_{\ell}}\hat{\Delta
}_{i\ell}\right) ^{2}|Z(I_{\ell})^{c}]=\frac{\bar{n}_{\ell}}{n}E[\hat{\Delta
}_{i\ell}^{2}|Z(I_{\ell})^{c}]=O_{p}((\Delta_{n}^{m})^{2}).
\]
Therefore, by the Markov inequality we have $\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}/\sqrt{n}=O_{p}(\Delta_{n}^{m}).$ The first conclusion then follows from
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=\sum_{\ell=1}^{L}\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}+\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[m(z_{i},\gamma_{0})-\beta_{0}]+\sqrt{n}\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{n}[\bar{m}(\hat{\gamma}_{\ell})-\beta_{0}].
\]
For the second conclusion note that, because the subsamples are as close to equal size as possible,
\[
\frac{\bar{n}_{\ell}}{\hat{n}_{\ell}}=\frac{\bar{n}_{\ell}/n}{\hat{n}_{\ell}/n}=\frac{1/L}{(L-1)/L}+O(n^{-1})=\frac{1}{(L-1)}+O(n^{-1}).
\]
Then
\begin{align*}
\sqrt{n}\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{n}[\bar{m}(\hat{\gamma}_{\ell})-\beta_{0}] & =\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\bar{n}_{\ell}\sqrt{\frac{1}{\hat{n}_{\ell}}}\sqrt{\hat{n}_{\ell}}[\bar{m}(\hat{\gamma}_{\ell})-\beta_{0}]=\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{\hat{n}_{\ell}}\frac{1}{\sqrt{n}}\sum_{i\notin I_{\ell}}\phi(z_{i})+O_{p}(\Delta_{n}^{\phi})\\
& =\frac{1}{L-1}\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\notin I_{\ell}}\phi(z_{i})+O_{p}(\Delta_{n}^{\phi}+n^{-1})\\
& =\frac{1}{L-1}\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}(\sum_{i=1}^{n}\phi(z_{i})-\sum_{i\in I_{\ell}}\phi(z_{i}))+O_{p}(\Delta_{n}^{\phi}+n^{-1})\\
& =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i})+O_{p}(\Delta_{n}^{\phi}+n^{-1}).
\end{align*}
The conclusion then follows by the triangle inequality. \textit{Q.E.D}.
\bigskip
\textbf{Proof of Lemma 6:} By adding and subtracting terms it follows that for $\varepsilon_{i}=y_{i}-\gamma_{0}(x_{i})$ and $\phi(z_{i})=\alpha_{0}(x_{i})[y_{i}-\gamma_{0}(x_{i})],$
\begin{align*}
\tilde{\alpha}_{\ell}(x_{i})[y_{i}-\hat{\gamma}_{\ell}(x_{i})] & =\phi(z_{i})-\alpha_{0}(x_{i})[\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]+[\tilde{\alpha}_{\ell}(x_{i})-\alpha_{0}(x_{i})]\varepsilon_{i}\\
& -[\tilde{\alpha}_{\ell}(x_{i})-\alpha_{0}(x_{i})][\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})].
\end{align*}
The first conclusion of Lemma 1 with $m(z,\gamma)=\alpha_{0}(x)\gamma(x)$ gives
\[
\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\alpha_{0}(x_{i})[\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]=\sqrt{n}\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{n}\int\alpha_{0}(x)[\hat{\gamma}_{\ell}(x)-\gamma_{0}(x)]F_{0}(dx)+O_{p}(\Delta_{n}^{\gamma}).
\]
Assumption 1 and the first conclusion of Lemma 1 also give
\begin{align*}
\sqrt{n}\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{n}\int\alpha_{0}(x)[\hat{\gamma}_{\ell}(x)-\gamma_{0}(x)]F_{0}(dx) & =\sqrt{n}\sum_{\ell=1}^{L}\frac{\bar{n}_{\ell}}{n}[\bar{m}(\hat{\gamma}_{\ell})-\beta_{0}]\\
& =\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[m(z_{i},\hat{\gamma}_{\ell})-m(z_{i},\gamma_{0})]+O_{p}(\Delta_{n}^{m}).
\end{align*}
In addition, if we take $\gamma=\alpha$ and $m(z,\alpha)=\alpha(x)\varepsilon$
then $\int m(z,\alpha)F_{0}(dz)=0$, so that by Lemma 1
\[
\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[\tilde{\alpha}_{\ell
}(x_{i})-\alpha_{0}(x_{i})]\varepsilon_{i}=O_{p}(\Delta_{n}^{\alpha}).
\]
Then collecting terms we have
\begin{align*}
\sqrt{n}(\tilde{\beta}-\beta_{0}) & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[m(z_{i},\gamma_{0})-\beta_{0}]\\
& +\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\{m(z_{i},\hat{\gamma}_{\ell})-m(z_{i},\gamma_{0})+\tilde{\alpha}_{\ell}(x_{i})[y_{i}-\hat{\gamma}_{\ell}(x_{i})]\}\\
& =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})+\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\alpha_{0}(x_{i})[\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]+O_{p}(\Delta_{n}^{m}+\Delta_{n}^{\gamma})\\
& +\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\{-\alpha_{0}(x_{i})[\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]+[\tilde{\alpha}_{\ell}(x_{i})-\alpha_{0}(x_{i})]\varepsilon_{i}\}\\
& -\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[\tilde{\alpha}_{\ell}(x_{i})-\alpha_{0}(x_{i})][\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]\\
& =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i})-\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[\tilde{\alpha}_{\ell}(x_{i})-\alpha_{0}(x_{i})][\hat{\gamma}_{\ell}(x_{i})-\gamma_{0}(x_{i})]\\
& +O_{p}(\Delta_{n}^{m}+\Delta_{n}^{\gamma}+\Delta_{n}^{\alpha}).\text{ }Q.E.D.
\end{align*}
\bigskip
We now turn to proofs of the results involving series estimators. Let
$\Sigma=E[p(x_{i})p(x_{i})^{T}]$. It follows from Assumption 3 that $\Sigma$
is nonsingular, so we can replace $p(x)$ by $\Sigma^{-1/2}p(x)$ and so
normalize $\Sigma=I$ without changing the assumptions. We impose this
normalization throughout. Also, throughout the Appendix $C$ will denote a
generic constant not depending on $n$ or $K.$
We next prove the key result in eq. (\ref{key result}) for a zero order spline. Let $r(x)=\gamma_{0}(x)-\gamma_{K}(x)$ and $\hat{h}_{2}=\sum_{i=1}^{n}p(x_{i})r(x_{i})/n$ as in the body of the paper. Also let $\left\Vert A\right\Vert _{op}$ denote the operator norm of a symmetric matrix $A$, being the largest absolute value of its eigenvalues.
\bigskip
\textsc{Lemma A1: }\textit{If Assumptions 1-6 are satisfied, }$\kappa=0,$ \textit{and} $K[\ln(n)]^{2}/n\longrightarrow0$, \textit{then for }$\hat{U}=\sum_{j=0}^{J-1}(I-\hat{\Sigma})^{j}\hat{h}_{2},$ $\hat{W}=\hat{\Sigma}^{-1}(I-\hat{\Sigma})^{J}\hat{h}_{2},$ $J=int[\ln(n)],$ \textit{and any constant }$\Delta>0,$
\[
\left\Vert E[\hat{U}\hat{U}^{T}]\right\Vert _{op}\leq C\frac{K^{-2\zeta_{\gamma}}[\ln(n)]^{2}}{n},\hat{W}^{T}\hat{W}=o_{p}(n^{-\Delta}).
\]
Proof: Let $Q_{i}=p(x_{i})p(x_{i})^{T},$ $\Delta_{i}=I-Q_{i}$, and $h_{i}=p(x_{i})r(x_{i})$. Note that $E[\Delta_{i}]=0$ and $E[h_{i}]=0.$ For each $j$ let $L=2j+2$. Let $\hat{U}_{j}=(I-\hat{\Sigma})^{j}\hat{h}_{2}.$ Then we have
\[
E[\hat{U}_{j}\hat{U}_{j}^{T}]=\frac{1}{n^{2j+2}}\sum_{i_{1},...,i_{L}=1}^{n}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i_{j+1}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ].
\]
Consider any $(i_{1},...,i_{L})$ such that $i_{j+1}\neq i_{j+2}.$ Let $i^{\ast}=i_{j+1}$ and let $Z_{i^{\ast}}^{c}$ denote the vector of observations other than $z_{i^{\ast}}.$ Note that
\[
E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i_{j+1}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]=E[E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i^{\ast}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) |Z_{i^{\ast}}^{c}]].
\]
We proceed to show that
\[
E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i^{\ast}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) |Z_{i^{\ast}}^{c}]=0.
\]
Note that conditional on $Z_{i^{\ast}}^{c}$ we can treat all terms where $i_{\ell}\neq i^{\ast}$ as constant. Also, because $i_{j+1}\neq i_{j+2}$ all terms where $i_{\ell}=i^{\ast}$ depend only on $p(x_{i^{\ast}}).$ Therefore for the scalar $r(x)=\gamma_{0}(x)-\gamma_{K}(x)$ we have
\[
E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i^{\ast}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) |Z_{i^{\ast}}^{c}]=E[A_{1}(p(x_{i^{\ast}}))p(x_{i^{\ast}})r(x_{i^{\ast}})A_{2}(p(x_{i^{\ast}}))]=E[A(p(x_{i^{\ast}}))r(x_{i^{\ast}})],
\]
where $A_{1}(p)$ and $A_{2}(p)$ are $K\times K$ and $1\times K$ matrices of functions of $p$ and $A(p)=A_{1}(p)pA_{2}(p).$ Let $X_{k}$ denote the interval where $p_{k}(x)$ is nonzero. Note that $p_{k}(x)=1(x\in X_{k})c_{k}$ for a constant $c_{k}$, and hence
\[
A(p(x_{i^{\ast}}))=\sum_{k=1}^{K}A_{k}1(x_{i^{\ast}}\in X_{k}),A_{k}=A((0,...,0,c_{k},0,...,0)^{T}).
\]
Therefore by orthogonality of each $p_{k}(x_{i})$ with $r(x_{i})$ in the
population
\[
E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i^{\ast}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) |Z_{i^{\ast}}^{c}]=\sum_{k=1}^{K}A_{k}E[1(x_{i^{\ast}}\in X_{k})r(x_{i^{\ast}})]=\sum_{k=1}^{K}A_{k}c_{k}^{-1}E[p_{k}(x_{i^{\ast}})r(x_{i^{\ast}})]=0.
\]
Therefore by iterated expectations, if $i_{j+1}\neq i_{j+2}$ we have
\[
E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i_{j+1}}h_{i_{j+2}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]=0.
\]
It then follows that for $\Psi=E[h_{i_{j+1}}h_{i_{j+1}}^{T}]=E[r(x_{i})^{2}p(x_{i})p(x_{i})^{T}]$ and $\tilde{\Delta}_{i_{j+1}}=h_{i_{j+1}}h_{i_{j+1}}^{T}-\Psi,$
\begin{align*}
E[\hat{U}_{j}\hat{U}_{j}^{T}] & =\frac{1}{n^{2j+2}}\sum_{i_{1},..,i_{j+1},i_{j+3}....,i_{L}=1}^{n}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) h_{i_{j+1}}h_{i_{j+1}}^{T}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]=T_{1}^{j}+T_{2}^{j},\\
T_{1}^{j} & =\frac{1}{n^{2j+1}}\sum_{i_{1},..,i_{j},i_{j+3}....,i_{L}=1}^{n}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \Psi\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ],\\
T_{2}^{j} & =\frac{1}{n^{2j+2}}\sum_{i_{1},..,i_{j+1},i_{j+3}....,i_{L}=1}^{n}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ].
\end{align*}
Consider first $T_{2}^{j}$. Note that $\Delta_{i}$ and $\tilde{\Delta}_{i}$ are diagonal matrices, so that $E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]$ is a diagonal matrix, with $k^{th}$ diagonal element given by $E[\left( \Pi_{\ell=1}^{j}\Delta_{k,i_{\ell}}\right) \tilde{\Delta}_{k,i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{k,i_{\ell}}\right) ]$ where
\[
\Delta_{k,i}=p_{k}(x_{i})^{2}-E[p_{k}(x_{i})^{2}],\tilde{\Delta}_{k,i_{j+1}}=r(x_{i})^{2}p_{k}(x_{i})^{2}-E[r(x_{i})^{2}p_{k}(x_{i})^{2}].
\]
The largest absolute value of the eigenvalues of a diagonal matrix is the
maximum of the absolute values of the diagonal elements, so it suffices to
show that the conclusion holds for these diagonal elements. We will consider
the $k^{th}$ diagonal element but for notational convenience drop the $k$
subscript in what follows.
Note that $p_{k}(x_{i})^{2}\leq BK$ for some $B$ that does not vary with $k$ or $j.$ Also, for any random variable $Y_{i}$ and $\mu=E[Y_{i}],$ note that by Jensen's inequality, $\left\vert \mu\right\vert ^{s}\leq E[|Y_{i}|^{s}]$ for $s\geq1.$ Then for any positive $s$,
\[
E[|Y_{i}-\mu|^{s}]\leq E[\left( |Y_{i}|+|\mu|\right) ^{s}]\leq E[2^{s-1}\left( |Y_{i}|^{s}+|\mu|^{s}\right) ]\leq2^{s-1}\left( E[|Y_{i}|^{s}]+|\mu|^{s}\right) \leq2^{s}E[|Y_{i}|^{s}].
\]
Then for any positive integer $s$, by the triangle inequality and the definition of $\Delta_{i},$
\begin{equation}
\left\vert E[\Delta_{i}^{s}]\right\vert \leq2^{s}E[p_{k}(x_{i})^{2s}]\leq 2^{s}(BK)^{s-1}E[p_{k}(x_{i})^{2}]\leq(4BK)^{s-1}\leq(CK)^{s-1}. \label{powbound1}
\end{equation}
Also, by $r(x_{i})^{2}\leq DK^{-2\zeta_{\gamma}}$ we have
\begin{align}
\left\vert E[(\Delta_{i})^{s}\tilde{\Delta}_{i}]\right\vert & \leq E[\left\vert \Delta_{i}\right\vert ^{s}(r(x_{i})^{2}p_{k}(x_{i})^{2}+E[r(x_{i})^{2}p_{k}(x_{i})^{2}])]\label{powbound}\\
& \leq E[(p_{k}(x_{i})^{2}+E[p_{k}(x_{i})^{2}])^{s+1}]DK^{-2\zeta_{\gamma}}\nonumber\\
& \leq2^{s+1}E[p_{k}(x_{i})^{2s+2}]DK^{-2\zeta_{\gamma}}\leq2^{s+1}(BK)^{s}DK^{-2\zeta_{\gamma}}\nonumber\\
& \leq(4(D+1)BK)^{s}K^{-2\zeta_{\gamma}}\leq(CK)^{s}K^{-2\zeta_{\gamma}}.\nonumber
\end{align}
Next consider
\[
T_{2}^{j}=\frac{1}{n^{2j+2}}\sum_{i_{1},..,i_{j+1},i_{j+3}....,i_{2j+2}=1}^{n}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ].
\]
The only terms in this sum that are nonzero are those where every index
$i_{\ell}$ is equal to at least one other index $i_{\ell^{\prime}}$, i.e.
where each index is "matched" with at least one other. Let $\tilde{\imath
}=(i_{1},..,i_{j+1},i_{j+3}....,i_{2j+2})^{T}$ denote the $2j+1$ dimensional
vector of indices where each $i_{\ell}$ is an integer in $[1,n].$ Let
$\Upsilon_{d}$ denote a set of all such $\tilde{\imath}$ with specified
indices that are equal to each other, but those matched indices are not equal
to any other indices. For example, one $\Upsilon_{d}$ is the set of
$\tilde{\imath}$ with $i_{1}=i_{j+1}=i_{j+3}=\cdots=i_{2j+2}$ and another is the set of $\tilde{\imath}$ with $i_{1}=i_{2},i_{3}=\cdots=i_{2j+2},i_{2}\neq i_{3}.$ For each $d$ each group of index coordinates that are equal to each
other can be thought of as a group of matching indices that we index by
$g_{d}.$ Let $m_{g_{d}}$ denote the number of indices in group $g_{d}$ and
$G_{d}$ denote the total number of groups. Note that the total number of
indices is $2j+1=\sum_{g_{d}=1}^{G_{d}}m_{g_{d}}$. Also, by eqs.
(\ref{powbound1}) and (\ref{powbound}) for each $\tilde{\imath}\in \Upsilon
_{d}$ we have
\[
|E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]|\leq K^{-2\zeta_{\gamma}}{\displaystyle\prod\limits_{g_{d}=1}^{G_{d}}}\left( CK\right) ^{m_{g_{d}}-1}=K^{-2\zeta_{\gamma}}\left( CK\right) ^{2j+1-G_{d}}.
\]
Also, the number of indices in $\Upsilon_{d}$ is less than or equal to $n^{G_{d}}$ since each match can be regarded as a single index. Therefore
\begin{align*}
\left\vert \frac{1}{n^{2j+2}}\sum_{\tilde{\imath}\in \Upsilon_{d}}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]\right\vert & \leq\frac{1}{n^{2j+2}}\sum_{\tilde{\imath}\in \Upsilon_{d}}\left\vert E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]\right\vert \\
& \leq\left( \frac{1}{n^{2j+2}}\right) n^{G_{d}}K^{-2\zeta_{\gamma}}\left( CK\right) ^{2j+1-G_{d}}\\
& =\frac{1}{n}K^{-2\zeta_{\gamma}}\left( \frac{CK}{n}\right) ^{2j+1-G_{d}}.
\end{align*}
By hypothesis $K/n\longrightarrow0$ so that for large enough $n$ we have $CK/n<1$. For such $n$ we have $\left( CK/n\right) ^{2j+1-G_{d}}$ decreasing in $G_{d}.$ Also, the largest $G_{d}$ is $j$, because each group must contain at least two elements. Therefore, for large enough $n$ we have
\[
\left\vert \frac{1}{n^{2j+2}}\sum_{\tilde{\imath}\in \Upsilon_{d}}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]\right\vert \leq\frac{1}{n}K^{-2\zeta_{\gamma}}\left( \frac{CK}{n}\right) ^{j+1}.
\]
Note that the bound on the right does not depend on $d$. Let $D$ denote the total number of possible $\Upsilon_{d}$. Then since $E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]=0$ if $\tilde{\imath}\notin\cup_{d=1}^{D}\Upsilon_{d}$ we have
\[
\left\vert T_{2}^{j}\right\vert \leq\sum_{d=1}^{D}\left\vert \frac{1}{n^{2j+2}}\sum_{\tilde{\imath}\in \Upsilon_{d}}E[\left( \Pi_{\ell=1}^{j}\Delta_{i_{\ell}}\right) \tilde{\Delta}_{i_{j+1}}\left( \Pi_{\ell=j+3}^{L}\Delta_{i_{\ell}}\right) ]\right\vert \leq\frac{D}{n}K^{-2\zeta_{\gamma}}\left( \frac{CK}{n}\right) ^{j+1}.
\]
Note that there are at most $j^{2j+1}$ ways of forming $2j+1$ indices into $j$ groups. Ignoring the fact that we can exclude ways where any group has only one index we have the bound $D\leq j^{2j+1}$. Plugging this bound into the above inequality and maximizing over diagonal elements gives
\[
\left\Vert T_{2}^{j}\right\Vert _{op}\leq\frac{j^{2j+1}K^{-2\zeta_{\gamma}}}{n}\left( \frac{CK}{n}\right) ^{j+1}.
\]
Arguing similarly for $T_{1}^{j}$ gives
\[
\left\Vert T_{1}^{j}\right\Vert _{op}\leq\frac{j^{2j}K^{-2\zeta_{\gamma}}
{n}\left( \frac{CK}{n}\right) ^{j},
\]
where we take $0^{0}=1.$
Next note that by $K[\ln(n)]^{2}/n\longrightarrow0$ we have $CK/n\leq 1/[2\ln(n)^{2}]$ for large enough $n.$ Also, $j/\ln(n)\leq1$ for all $j<J.$ Then for $n$ large enough
\[
\sum_{j=0}^{J-1}j^{2j}\left( \frac{CK}{n}\right) ^{j}\leq\sum_{j=0}^{J-1}j^{2j}\left( \frac{1}{2\ln(n)^{2}}\right) ^{j}\leq\sum_{j=0}^{J-1}\left( \frac{j}{\ln(n)}\right) ^{2j}\left( \frac{1}{2}\right) ^{j}\leq\sum_{j=0}^{J-1}\left( \frac{1}{2}\right) ^{j}\leq\sum_{j=0}^{\infty}\left( \frac{1}{2}\right) ^{j}=2.
\]
Similarly it follows that for large enough $n,$
\[
\sum_{j=0}^{J-1}j^{2j+1}\left( \frac{CK}{n}\right) ^{j+1}\leq\frac{1}{2\ln(n)}\sum_{j=0}^{J-1}\left( \frac{j}{\ln(n)}\right) ^{2j+1}\left( \frac{1}{2}\right) ^{j}\leq\frac{1}{\ln(n)}.
\]
Then we have for large enough $n$,
\begin{align*}
\left\Vert \sum_{j=0}^{J-1}E[\hat{U}_{j}\hat{U}_{j}^{T}]\right\Vert _{op} &
\leq\left\Vert \sum_{j=0}^{J-1}\left( T_{1}^{j}+T_{2}^{j}\right) \right\Vert
_{op}\leq\sum_{j=0}^{J-1}\left( \left\Vert T_{1}^{j}\right\Vert
_{op}+\left\Vert T_{2}^{j}\right\Vert _{op}\right) \\
& \leq\frac{K^{-2\zeta_{\gamma}}}{n}\left( 2+\frac{1}{\ln(n)}\right)
\leq\frac{CK^{-2\zeta_{\gamma}}}{n}.
\end{align*}
Also by the Cauchy-Schwarz inequality, $\hat{U}\hat{U}^{T}=\left( \sum_{j=0}^{J-1}\hat{U}_{j}\right) \left( \sum_{j=0}^{J-1}\hat{U}_{j}\right) ^{T}\leq J^{2}\sum_{j=0}^{J-1}\hat{U}_{j}\hat{U}_{j}^{T}.$ Therefore, for large enough $n,$
\[
\left\Vert E[\hat{U}\hat{U}^{T}]\right\Vert _{op}\leq J^{2}\left\Vert
\sum_{j=0}^{J-1}E[\hat{U}_{j}\hat{U}_{j}^{T}]\right\Vert _{op}\leq\frac
{C\ln(n)^{2}K^{-2\zeta_{\gamma}}}{n},
\]
giving the first conclusion.
For the second conclusion note that for any $\Delta>0$
\[
\ln\{n^{\Delta}[\ln(n)]^{-2\ln(n)+2}\}=\ln(n)[\Delta-2\ln(\ln(n))]+2\ln
(\ln(n))\longrightarrow-\infty.
\]
It follows that $[\ln(n)]^{-2\ln(n)+2}=o(n^{-\Delta})$ for any $\Delta$. Also, by $K/n=o(\left[ 1/\ln(n)\right] ^{2})$ we have $K\ln\left( K\right)/n=o(1/\ln(n)),$ so that
\[
\left( \frac{K\ln(K)}{n}\right) ^{2J}=o([\ln(n)]^{-2int[\ln(n)]})=o([\ln(n)]^{-2\ln(n)+2})=o(n^{-\Delta}),
\]
for any $\Delta>0$. Then we have
\[
\hat{1}\hat{W}^{T}\hat{W}\leq4\hat{h}_{2}^{T}(I-\hat{\Sigma})^{2J}\hat{h}_{2}\leq4\hat{h}_{2}^{T}\hat{h}_{2}\left\Vert I-\hat{\Sigma}\right\Vert _{op}^{2J}=O_{p}(\frac{K^{1-2\zeta_{\gamma}}}{n}\left[ \frac{K\ln(K)}{n}\right] ^{2J})=o_{p}(n^{-\Delta}),
\]
for any $\Delta>0$ by Rudelson's (1999) law of large numbers for random
matrices, giving the second conclusion. $Q.E.D.$
\bigskip
In the Appendix we focus on one subset $\bar{I}=I_{\ell}$ of observations and let $\hat{I}$ and $\tilde{I}$ denote the observations used to compute $\hat{\delta}$ and $\tilde{\delta}_{\alpha}$ respectively. Let $\bar{n},$ $\hat{n},$ $\tilde{n}$ denote the number of elements of $\bar{I},$ $\hat{I}$, and $\tilde{I}$ respectively and
\[
\bar{F}\{g(z)\}=\frac{1}{\bar{n}}\sum_{i\in\bar{I}}g(z_{i}),\hat{F}\{g(z)\}=\frac{1}{\hat{n}}\sum_{i\in\hat{I}}g(z_{i}),\tilde{F}\{g(z)\}=\frac{1}{\tilde{n}}\sum_{i\in\tilde{I}}g(z_{i}),
\]
denote averages over the respective subsets of observations.
Next we make a few definitions we will use throughout. Let $\zeta_{\gamma},$
$\zeta_{\alpha},$ $\delta,$ $\gamma_{K},$ $\delta_{\alpha},$ and $\alpha_{K}$
be as defined following Assumption 4. Also, let
\begin{align*}
\varepsilon_{i} & =y_{i}-\gamma_{0}(x_{i}),r_{i}=\gamma_{0}(x_{i})-\gamma_{K}(x_{i}),\eta_{i}=v(z_{i})-p(x_{i})\alpha_{0}(x_{i}),r_{i}^{\alpha}=\alpha_{0}(x_{i})-\alpha_{K}(x_{i}),\\
\hat{h}_{1} & =\hat{F}\{p(x)\varepsilon\},\hat{h}_{2}=\hat{F}\{p(x)r\},\tilde{h}_{1}^{\alpha}=\tilde{F}\{\eta\},\tilde{h}_{2}^{\alpha}=\tilde{F}\{p(x)r^{\alpha}\},\hat{\Sigma}=\hat{F}\{p(x)p(x)^{T}\},\tilde{\Sigma}=\tilde{F}\{p(x)p(x)^{T}\},\\
\hat{\Delta}_{1} & =\hat{\Sigma}^{-}\hat{h}_{1},\hat{\Delta}_{2}=\hat{\Sigma}^{-}\hat{h}_{2},\tilde{\Delta}_{1}^{\alpha}=\tilde{\Sigma}^{-}\tilde{h}_{1}^{\alpha},\tilde{\Delta}_{2}^{\alpha}=\tilde{\Sigma}^{-}\tilde{h}_{2}^{\alpha},\bar{\Sigma}=\bar{F}\{p(x)p(x)^{T}\}.
\end{align*}
One piece of algebra we will use throughout is that, when $\hat{\Sigma}$ and $\tilde{\Sigma}$ are nonsingular, by adding and subtracting $\hat{\Sigma}^{-1}\hat{F}\{p(x)\gamma_{0}(x)\}$ and $\tilde{\Sigma}^{-1}\tilde{F}\{p(x)\alpha_{0}(x)\}$ respectively we have
\begin{equation}
\hat{\delta}-\delta=\hat{\Delta}_{1}+\hat{\Delta}_{2},\tilde{\delta}_{\alpha}-\delta_{\alpha}=\tilde{\Delta}_{1}^{\alpha}+\tilde{\Delta}_{2}^{\alpha}. \label{coeff}
\end{equation}
Some properties of these objects will be useful in the proofs to follow. We collect these properties in the following result. Let $\hat{1}$ and $\tilde{1}$ denote the indicator function that the smallest eigenvalue of $\hat{\Sigma}$ or $\tilde{\Sigma}$ is larger than $1/2$ respectively. As in Belloni et al.(2015), $\Pr(\hat{1}=1)\longrightarrow1$ and $\Pr(\tilde{1}=1)\longrightarrow1$. Also, let $\hat{Z}^{c},$ $\tilde{Z}^{c},$ $\bar{Z}^{c}$ denote all the observations other than those indexed by $\hat{I},$ $\tilde{I},$ or $\bar{I}$ respectively and $X=(x_{1},...,x_{n})$.
\bigskip
\textsc{Lemma A2:} \textit{If Assumptions 1-6 are satisfied then}
\begin{align*}
\text{i) }\hat{1}\left\Vert \hat{\Delta}_{1}\right\Vert ^{2} & =O_{p}\left( \frac{K}{n}\right) ;\text{ ii) }\hat{1}\left\Vert \hat{\Delta}_{2}\right\Vert ^{2}=O_{p}\left( K^{-2\zeta_{\gamma}}\frac{K}{n}\right) ;\text{ }\\
\text{iii) }\tilde{1}\left\Vert \tilde{\Delta}_{1}^{\alpha}\right\Vert ^{2} & =O_{p}\left( \frac{(1+d_{K})K}{n}\right) ,\text{ iv) }\tilde{1}\left\Vert \tilde{\Delta}_{2}^{\alpha}\right\Vert ^{2}=O_{p}\left( K^{-2\zeta_{\alpha}}\frac{K}{n}\right) ,\\
\text{v) }\hat{1}\left\Vert \hat{\delta}-\delta\right\Vert ^{2} & =O_{p}\left( \frac{K}{n}\right) \text{; vi) }\tilde{1}\left\Vert \tilde{\delta}_{\alpha}-\delta_{\alpha}\right\Vert ^{2}=O_{p}\left( \frac{(d_{K}+1)K}{n}\right) ,\text{ }\\
\text{vii) }\hat{1}E[\hat{\Delta}_{1}\hat{\Delta}_{1}^{T}|X,\hat{Z}^{c}] & \leq\frac{C}{n}I,\text{ viii) }\hat{1}\int[\hat{\gamma}(x)-\gamma_{0}(x)]^{2}F_{0}(dx)=O_{p}(\frac{K}{n}+K^{-2\zeta_{\gamma}}),\text{ }\\
\text{ix) }\tilde{1}\int[\tilde{\alpha}(x)-\alpha_{0}(x)]^{2}F_{0}(dx) & =O_{p}\left( \frac{(d_{K}+1)K}{n}+K^{-2\zeta_{\alpha}}\right) .
\end{align*}
\bigskip
Proof: Note that for $\varepsilon_{i}=y_{i}-\gamma_{0}(x_{i})$, $E[\varepsilon_{i}^{2}|x_{i}]=Var(y_{i}|x_{i})\leq C$. Note that $\hat{1}\hat{\Sigma}^{-2}\leq4I$ in the positive semi-definite semi-order so that
\[
E[\hat{1}\left\Vert \hat{\Delta}_{1}\right\Vert ^{2}]\leq4E[\hat{h}_{1}^{T}\hat{h}_{1}]=\frac{4}{\hat{n}^{2}}\sum_{i,j\in\hat{I}}E[p(x_{i})^{T}p(x_{j})\varepsilon_{i}\varepsilon_{j}]=\frac{4}{\hat{n}}E[\left\Vert p(x_{i})\right\Vert ^{2}\varepsilon_{i}^{2}]\leq\frac{4C}{\hat{n}}E[\left\Vert p(x_{i})\right\Vert ^{2}]=O(\frac{K}{n}).
\]
The first conclusion then follows by the Markov inequality. Next, we have $\sup_{x}\left\vert \gamma_{K}(x)-\gamma_{0}(x)\right\vert =O(K^{-\zeta_{\gamma}})$ and hence
\[
E[\hat{1}\left\Vert \hat{\Delta}_{2}\right\Vert ^{2}]\leq4E[\hat{h}_{2}^{T}\hat{h}_{2}]=\frac{4}{\hat{n}^{2}}\sum_{i,j\in\hat{I}}E[p(x_{i})^{T}p(x_{j})r_{i}r_{j}]=\frac{4}{\hat{n}}E[\left\Vert p(x_{i})\right\Vert ^{2}]O(K^{-2\zeta_{\gamma}})=O\left( K^{-2\zeta_{\gamma}}\frac{K}{n}\right) ,
\]
so the second conclusion also follows by the Markov inequality. Next, note that
\[
E[\eta_{i}^{T}\eta_{i}]\leq2E[v(z_{i})^{T}v(z_{i})]+2E[\alpha_{0}(x_{i})^{2}\left\Vert p(x_{i})\right\Vert ^{2}]=O(K(d_{K}+1)).
\]
Then we have
\[
E[\tilde{1}\left\Vert \tilde{\Delta}_{1}^{\alpha}\right\Vert ^{2}]\leq4E[\tilde{h}_{1}^{\alpha T}\tilde{h}_{1}^{\alpha}]=\frac{4}{\tilde{n}^{2}}\sum_{i,j\in\tilde{I}}E[\eta_{i}^{T}\eta_{j}]=\frac{4}{\tilde{n}}E[\eta_{i}^{T}\eta_{i}]=O(\frac{K(d_{K}+1)}{n}),
\]
so the third conclusion follows from the Markov inequality. The fourth conclusion follows exactly like the second conclusion. The fifth and sixth conclusions follow by eq. (\ref{coeff}) and the triangle inequality.
Next, note that by independence of the observations
\begin{align*}
E[\hat{1}\hat{\Delta}_{1}\hat{\Delta}_{1}^{T}|X,\hat{Z}^{c}] & =\hat{1}\hat{\Sigma}^{-1}E[\hat{h}_{1}\hat{h}_{1}^{T}|X]\hat{\Sigma}^{-1}=\hat{1}\hat{\Sigma}^{-1}\left\{ \frac{1}{\hat{n}^{2}}\sum_{i,j\in\hat{I}}p(x_{i})p(x_{j})^{T}E[\varepsilon_{i}\varepsilon_{j}|X]\right\} \hat{\Sigma}^{-1}\\
& =\hat{1}\hat{\Sigma}^{-1}\left\{ \frac{1}{\hat{n}^{2}}\sum_{i\in\hat{I}}p(x_{i})p(x_{i})^{T}E[\varepsilon_{i}^{2}|x_{i}]\right\} \hat{\Sigma}^{-1}\leq\hat{1}\frac{C}{\hat{n}}\hat{\Sigma}^{-1}\leq\frac{2C}{n}I,
\end{align*}
giving the seventh conclusion.
Next, note that $\int p(x)[\gamma_{K}(x)-\gamma_{0}(x)]F_{0}(dx)=0$, so that
\begin{align*}
\hat{1}\int[\hat{\gamma}(x)-\gamma_{0}(x)]^{2}F_{0}(dx) & =\hat{1}\int[\hat{\gamma}(x)-\gamma_{K}(x)+\gamma_{K}(x)-\gamma_{0}(x)]^{2}F_{0}(dx)\\
& =\hat{1}\left\Vert \hat{\delta}-\delta\right\Vert ^{2}+\hat{1}O(K^{-2\zeta_{\gamma}})=O_{p}\left( \frac{K}{n}+K^{-2\zeta_{\gamma}}\right) ,
\end{align*}
giving the eighth conclusion. The last conclusion follows similarly. \textit{Q.E.D.}
\bigskip
Next, we give an important intermediate result:
\bigskip
\textsc{Lemma A3}: \textit{If Assumptions 1-6 are satisfied then}
\[
\hat{1}\int[m(z,\hat{\gamma})-m(z,\gamma_{0})]^{2}F_{0}(dz)=O_{p}\left(
\frac{d_{K}K}{n}+K^{-2\zeta_{m}}\right) .
\]
Proof: By linearity of $m(z,\gamma)-m(z,0)$, we have $m(z,\hat{\gamma})-m(z,\gamma_{K})=v(z)^{T}(\hat{\delta}-\delta).$ Then by Lemma A2,
\begin{align*}
\hat{1}\int[m(z,\hat{\gamma})-m(z,\gamma_{0})]^{2}F_{0}(dz) & \leq2\hat{1}\int[m(z,\hat{\gamma})-m(z,\gamma_{K})]^{2}F_{0}(dz)+2\hat{1}\int[m(z,\gamma_{K})-m(z,\gamma_{0})]^{2}F_{0}(dz)\\
& \leq2\hat{1}(\hat{\delta}-\delta)^{T}E[v(z_{i})v(z_{i})^{T}](\hat{\delta}-\delta)+O(K^{-2\zeta_{m}})\\
& \leq2Cd_{K}\hat{1}\left\Vert \hat{\delta}-\delta\right\Vert ^{2}+O(K^{-2\zeta_{m}})=O_{p}\left( \frac{d_{K}K}{n}+K^{-2\zeta_{m}}\right) .\text{ }Q.E.D.
\end{align*}
\bigskip
The proof of the results for the doubly robust estimators will make use of a few lemmas, which we now state.
\bigskip
\textsc{Lemma A4:}\textit{ If Assumptions 1-6 are satisfied then the hypotheses of Lemma 6 are satisfied with}
\[
\Delta_{n}^{m}=\sqrt{\frac{d_{K}K}{n}}+K^{-\zeta_{m}},\Delta_{n}^{\gamma}=\sqrt{\frac{K}{n}}+K^{-\zeta_{\gamma}},\Delta_{n}^{\alpha}=\sqrt{\frac{d_{K}K}{n}}+K^{-\zeta_{\alpha}}.
\]
\bigskip
Proof: The first conclusion follows by Lemma A3 and the second and third by
parts viii) and ix) of Lemma A2. $Q.E.D$.
\bigskip
\textsc{Lemma A5:}\textit{ If Assumptions 1-6 are satisfied and }$\hat{\gamma}_{\ell}$ \textit{and} $\tilde{\alpha}_{\ell}$ \textit{are computed from distinct samples then for} $\bar{\Sigma}=\bar{F}\{p(x)p(x)^{T}\},$
\[
\sqrt{n}\bar{F}\{[\tilde{\alpha}_{\ell}(x)-\alpha_{0}(x)][\hat{\gamma}_{\ell}(x)-\gamma_{0}(x)]\}=\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}+O_{p}(\bar{\Delta}_{n}^{\ast}+\Delta_{n}^{m}).
\]
\bigskip
Proof: Let $\bar{h}_{2}=\bar{F}\{p(x)[\gamma_{K}(x)-\gamma_{0}(x)]\}$ and $\bar{h}_{2}^{\alpha}=\bar{F}\{p(x)[\alpha_{K}(x)-\alpha_{0}(x)]\}.$ Note that
\begin{align*}
& \bar{F}\{[\tilde{\alpha}_{\ell}(x)-\alpha_{0}(x)][\hat{\gamma}_{\ell}(x)-\gamma_{0}(x)]\}\\
& =\bar{F}\{[p(x)^{T}(\tilde{\delta}_{\alpha}-\delta_{\alpha})+\alpha_{K}(x)-\alpha_{0}(x)][p(x)^{T}(\hat{\delta}-\delta)+\gamma_{K}(x)-\gamma_{0}(x)]\}\\
& =(\hat{\delta}-\delta)^{T}\bar{\Sigma}(\tilde{\delta}_{\alpha}-\delta_{\alpha})+(\hat{\delta}-\delta)^{T}\bar{h}_{2}^{\alpha}+(\tilde{\delta}_{\alpha}-\delta_{\alpha})^{T}\bar{h}_{2}+\bar{F}\{[\alpha_{K}(x)-\alpha_{0}(x)][\gamma_{K}(x)-\gamma_{0}(x)]\}.
\end{align*}
By the Markov inequality
\begin{equation}
\sqrt{n}\bar{F}\{[\alpha_{K}(x)-\alpha_{0}(x)][\gamma_{K}(x)-\gamma_{0}(x)]\}=O_{p}(\sqrt{n}K^{-\zeta_{\gamma}-\zeta_{\alpha}}). \label{termbias}
\end{equation}
Note that
\[
E[\bar{h}_{2}^{\alpha}(\bar{h}_{2}^{\alpha})^{T}]=\frac{1}{\bar{n}}E[p(x_{i})p(x_{i})^{T}(r_{i}^{\alpha})^{2}]\leq C\frac{1}{n}I.
\]
Therefore by Lemma A2 we have
\[
E[\{\hat{1}(\hat{\delta}-\delta)^{T}\bar{h}_{2}^{\alpha}\}^{2}|\bar{Z}^{c}]=\hat{1}(\hat{\delta}-\delta)^{T}E[\bar{h}_{2}^{\alpha}(\bar{h}_{2}^{\alpha})^{T}](\hat{\delta}-\delta)\leq C\hat{1}\frac{1}{n}\left\Vert \hat{\delta}-\delta\right\Vert ^{2}=O_{p}(\frac{K}{n^{2}}).
\]
Then by the Markov inequality it follows that
\begin{equation}
\sqrt{n}(\hat{\delta}-\delta)^{T}\bar{h}_{2}^{\alpha}=O_{p}(\sqrt{\frac{K}{n}}). \label{term var1}
\end{equation}
It follows similarly that
\begin{equation}
\sqrt{n}(\tilde{\delta}_{\alpha}-\delta_{\alpha})^{T}\bar{h}_{2}=O_{p}(\sqrt{\frac{d_{K}K}{n}}). \label{term var2}
\end{equation}
Next, note that
\[
(\hat{\delta}-\delta)^{T}\bar{\Sigma}(\tilde{\delta}_{\alpha}-\delta_{\alpha
})=\hat{\Delta}_{1}^{T}\bar{\Sigma}(\tilde{\delta}_{\alpha}-\delta_{\alpha
})+\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{2}^{\alpha}+\hat{\Delta
}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}.
\]
Let $\bar{1}$ be the event that $\lambda_{\max}(\bar{\Sigma})\leq2$. Then by conclusion vii) of Lemma A2, and $\bar{1}$, $\hat{1}$, and $\tilde{1}$ all being functions of $X$, we have
\begin{align*}
E[\bar{1}\hat{1}\tilde{1}\{\hat{\Delta}_{1}^{T}\bar{\Sigma}(\tilde{\delta}_{\alpha}-\delta_{\alpha})\}^{2}|X,\hat{Z}^{c}] & =\bar{1}\hat{1}\tilde{1}(\tilde{\delta}_{\alpha}-\delta_{\alpha})^{T}\bar{\Sigma}E[\hat{\Delta}_{1}\hat{\Delta}_{1}^{T}|X,\hat{Z}^{c}]\bar{\Sigma}(\tilde{\delta}_{\alpha}-\delta_{\alpha})\\
& \leq C\frac{1}{n}\bar{1}\tilde{1}(\tilde{\delta}_{\alpha}-\delta_{\alpha})^{T}\bar{\Sigma}^{2}(\tilde{\delta}_{\alpha}-\delta_{\alpha})\leq4C\frac{1}{n}\tilde{1}\left\Vert \tilde{\delta}_{\alpha}-\delta_{\alpha}\right\Vert ^{2}\\
& =O_{p}(\frac{d_{K}K}{n^{2}}).
\end{align*}
Therefore we have
\begin{equation}
\sqrt{n}\hat{\Delta}_{1}^{T}\bar{\Sigma}(\tilde{\delta}_{\alpha}-\delta_{\alpha})=O_{p}(\sqrt{\frac{d_{K}K}{n}}). \label{term var3}
\end{equation}
Finally, note that by the Cauchy-Schwarz inequality
\[
\hat{1}\hat{\Delta}_{2}^{T}\hat{\Delta}_{2}\leq\hat{1}2\hat{h}_{2}^{T}\hat{\Sigma}^{-1}\hat{h}_{2}\leq2\hat{F}\{[\gamma_{K}(x)-\gamma_{0}(x)]^{2}\}=O_{p}(K^{-2\zeta_{\gamma}}).
\]
It follows similarly that $\tilde{1}(\tilde{\Delta}_{2}^{\alpha})^{T}(\tilde{\Delta}_{2}^{\alpha})=O_{p}(K^{-2\zeta_{\alpha}})$ so that
\begin{equation}
\sqrt{n}\bar{1}\hat{1}\tilde{1}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{2}^{\alpha}\leq2\sqrt{n}\sqrt{\hat{1}\hat{\Delta}_{2}^{T}\hat{\Delta}_{2}}\sqrt{\tilde{1}(\tilde{\Delta}_{2}^{\alpha})^{T}(\tilde{\Delta}_{2}^{\alpha})}=O_{p}(\sqrt{n}K^{-\zeta_{\gamma}-\zeta_{\alpha}}). \label{term bias2}
\end{equation}
The conclusion then follows by eqs. (\ref{termbias}), (\ref{term var1}),
(\ref{term var2}), (\ref{term var3}), (\ref{term bias2}), and the triangle
inequality. Q.E.D.
\bigskip
\textbf{Proof of Theorem 2: }It follows by Lemma A3 that the first hypothesis of Lemma 1 is satisfied with $\Delta_{n}^{m}=\sqrt{d_{K}K/n}+K^{-\zeta_{m}}\,$. Let
\[
\bar{m}(\gamma)=\int m(z,\gamma)F_{0}(dz)=E[\alpha_{0}(x_{i})\gamma(x_{i})],
\]
where the first equality is a definition and the second follows by Assumption 1. Then the first conclusion of Lemma 1 holds.
Next let $n=\hat{n}_{\ell}$ and $\hat{\gamma}=\hat{\gamma}_{\ell}$ for some $\ell$ and $\phi(z)=\alpha_{0}(x)[y-\gamma_{0}(x)]$. Then it follows as in Ichimura and Newey (2017), p. 29, that
\begin{align}
\hat{1}\sqrt{n}[\bar{m}(\hat{\gamma})-\beta_{0}-\frac{1}{n}\sum_{i=1}^{n}\phi(z_{i})] & =\hat{1}\left( \hat{R}_{1}+\hat{R}_{2}+\hat{R}_{3}\right) ,\hat{R}_{1}=\sqrt{n}E[\alpha_{0}(x_{i})\{\gamma_{K}(x_{i})-\gamma_{0}(x_{i})\}],\label{plugexp}\\
\hat{R}_{2} & =\sqrt{n}v^{T}\hat{\Sigma}^{-1}\hat{h}_{2},\hat{R}_{3}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\alpha_{K}(x_{i})-\alpha_{0}(x_{i})][y_{i}-\gamma_{0}(x_{i})].\nonumber
\end{align}
By orthogonality of $\gamma_{K}(x_{i})-\gamma_{0}(x_{i})$ to $p(x_{i})$ in the population and the Cauchy-Schwarz inequality,
\begin{align*}
\left\vert \hat{R}_{1}\right\vert & =\sqrt{n}\left\vert E[\{\alpha_{0
(x_{i})-\alpha_{K}(x_{i})\}\{\gamma_{0}(x_{i})-\gamma_{K}(x_{i})\}]\right\vert
\leq\sqrt{n}\{E[\{\alpha_{0}(x_{i})-\alpha_{K}(x_{i})\}^{2}]E[\{\gamma
_{0}(x_{i})-\gamma_{K}(x_{i})\}^{2}]\}^{1/2}\\
& =O(\sqrt{n}K^{-\zeta_{\gamma}-\zeta_{\alpha}})=O(\bar{\Delta}_{n}^{\ast}).
\end{align*}
Also,
\[
E[\hat{R}_{3}^{2}]=E[\{\alpha_{K}(x_{i})-\alpha_{0}(x_{i})\}^{2}\varepsilon
_{i}^{2}]\leq CE[\{\alpha_{K}(x_{i})-\alpha_{0}(x_{i})\}^{2}]=O(K^{-2\zeta
_{\alpha}}),
\]
so by the Markov inequality,
\[
\hat{R}_{3}=O_{p}(K^{-\zeta_{\alpha}})=O_{p}(\bar{\Delta}_{n}^{\ast}).
\]
Next, note that $\hat{R}_{2}=\hat{R}_{21}+\hat{R}_{22}$ where $\hat{R}
_{21}=\sqrt{n}v^{T}\hat{h}_{2}$ and $\hat{R}_{22}=\sqrt{n}v^{T}(\hat{\Sigma}
^{-1}-I)\hat{h}_{2}.$ As noted following Assumption 4, $\sup_{x}|\gamma
_{K}(x)-\gamma_{0}(x)|=O(K^{-\zeta_{\gamma}})$, so that
\[
E[\hat{R}_{21}^{2}]=v^{T}E[p(x_{i})p(x_{i})^{T}r_{i}^{2}]v\leq O(K^{-2\zeta
_{\gamma}})v^{T}v\leq O(K^{-2\zeta_{\gamma}})E[\alpha_{0}(x_{i})^{2}
]=O(K^{-2\zeta_{\gamma}}).
\]
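The first inequality in this display combines the sup-norm bound just cited with the normalization $E[p(x_{i})p(x_{i})^{T}]=I$ of the approximating functions (which we take to hold, as the appearance of $\hat{\Sigma}^{-1}-I$ above suggests), where $r_{i}=\gamma_{K}(x_{i})-\gamma_{0}(x_{i})$:
\[
E[p(x_{i})p(x_{i})^{T}r_{i}^{2}]\leq\sup_{x}|\gamma_{K}(x)-\gamma_{0}(x)|^{2}E[p(x_{i})p(x_{i})^{T}]=O(K^{-2\zeta_{\gamma}})I.
\]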
Then by the Markov inequality
\[
\hat{R}_{21}=O_{p}(K^{-\zeta_{\gamma}})=O_{p}(\bar{\Delta}_{n}^{\ast}).
\]
Finally, note that $(\hat{\Sigma}^{-1}-I)\hat{h}_{2}=\hat{U}+\hat{W}$ for
$\hat{U}$ and $\hat{W}$ defined in the statement of Lemma A1, so that for any
$\Delta>0$ we have
\begin{align*}
\hat{1}\hat{R}_{22}^{2} & =\hat{1}n\cdot v^{T}(\hat{U}+\hat{W})(\hat{U}
+\hat{W})^{T}v\leq2\hat{1}n\cdot v^{T}(\hat{U}\hat{U}^{T}+\hat{W}\hat{W}^{T})v\\
& \leq CK^{-2\zeta_{\gamma}}[\ln(n)]^{2}+O_{p}(n^{-\Delta+1})\text{,}
\end{align*}
for any $C$. It then follows by eq. (\ref{plugexp}) and the triangle
inequality that
\[
\sqrt{n}[\bar{m}(\hat{\gamma})-\beta_{0}]=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}
\phi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}+K^{-\zeta_{\gamma}}\ln(n)).
\]
The first conclusion then follows from the second conclusion of Lemma 1. The
second conclusion follows by $\Delta_{n}^{m}=C\sqrt{K/n}+K^{-\zeta_{\gamma}
}=O(\bar{\Delta}_{n}^{\ast})$ when $d_{K}$ is bounded and $\zeta_{m}
=\zeta_{\gamma}$. $Q.E.D.$
\bigskip
\textbf{Proof of Corollary 3: }To prove this result it suffices to show that
Assumptions 4 and 6 are satisfied in each of the examples with $d_{K}$ bounded
and $\zeta_{m}=\zeta_{\gamma}.$
For the conditional covariance $\alpha_{0}(x)=-E[a_{i}|x_{i}=x]$. This being
H\"{o}lder of order $s_{\alpha}$ is a hypothesis. Also, $v(z)=a\cdot p(x),$ so
that
\[
E[v(z_{i})v(z_{i})^{T}]=E[a_{i}^{2}p(x_{i})p(x_{i})^{T}]\leq CE[p(x_{i}
)p(x_{i})^{T}]\leq CI
\]
by $E[a_{i}^{2}|x_{i}]$ bounded. Also $\zeta_{m}=\zeta_{\gamma}$ by
\[
E[\left\{ m(z_{i},\gamma_{K})-m(z_{i},\gamma_{0})\right\} ^{2}]=E[a_{i}
^{2}\{\gamma_{K}(x_{i})-\gamma_{0}(x_{i})\}^{2}]\leq CK^{-2\zeta_{\gamma}}.
\]
For the missing data mean, $\alpha_{0}(x)=a/\pi_{0}(w)$ is H\"{o}lder of order
$s_{\alpha}$ by $\pi_{0}(w_{i})$ being bounded away from zero and H\"{o}lder of
order $s_{\alpha}.$ Furthermore $v(z)=q(w)$, so that by Assumption 3,
\[
E[v(z_{i})v(z_{i})^{T}]=E[q(w_{i})q(w_{i})^{T}]\leq CI,
\]
and by $a_{i}$ bounded and $\pi_{0}(w_{i})$ bounded away from zero,
\begin{align*}
E[\left\{ m(z_{i},\gamma_{K})-m(z_{i},\gamma_{0})\right\} ^{2}] &
=E[\{q(w_{i})^{T}\delta-E[y_{i}|a_{i}=1,w_{i}]\}^{2}]\\
& =E[\frac{a_{i}}{\pi_{0}(w_{i})}\{\gamma_{K}(x_{i})-\gamma_{0}(x_{i}
)\}^{2}]\\
& \leq CE[\{\gamma_{K}(x_{i})-\gamma_{0}(x_{i})\}^{2}]\leq CK^{-2\zeta
_{\gamma}}.
\end{align*}
For the average derivative example, $\alpha_{0}(x)=\omega(x)/f_{0}(x)$, which is
H\"{o}lder of order $s_{\alpha}$ by each of $\omega(x)$ and $f_{0}(x)$ being
H\"{o}lder of order $s_{\alpha}$ and by $f_{0}(x)$ bounded away from zero where
$\omega(x)$ is non zero. Furthermore $v(z)=\int\omega(x)p(x)dx,$ so that by
Cauchy-Schwarz,
\begin{align*}
E[v(z_{i})v(z_{i})^{T}] & =\int\omega(x)p(x)dx\int\omega(x)p(x)^{T}dx\\
& =E[\alpha_{0}(x_{i})p(x_{i})]E[\alpha_{0}(x_{i})p(x_{i})^{T}]\\
& \leq E[\alpha_{0}(x_{i})^{2}]E[p(x_{i})p(x_{i})^{T}]\leq CI.
\end{align*}
Furthermore,
\begin{align*}
E[\left\{ m(z_{i},\gamma_{K})-m(z_{i},\gamma_{0})\right\} ^{2}] &
=\{\int\omega(x)[\gamma_{K}(x)-\gamma_{0}(x)]dx\}^{2}\\
& =E[\alpha_{0}(x_{i})\{\gamma_{K}(x_{i})-\gamma_{0}(x_{i})\}]^{2}\\
& \leq E[\alpha_{0}(x_{i})^{2}]E[\{\gamma_{K}(x_{i})-\gamma_{0}(x_{i}
)\}^{2}]=O(K^{-2\zeta_{\gamma}}).\text{ }Q.E.D.
\end{align*}
\bigskip
\textbf{Proof of Theorem 4:} The conclusion follows from Lemma 1 and Theorem 8
of Ichimura and Newey (2017) similarly to the proof of Theorem 2 above, with
the conclusion of Theorem 8 of Ichimura and Newey (2017) replacing the
argument following eq. (\ref{plugexp}) in the proof of Theorem 2. $Q.E.D.$
\bigskip
\textbf{Proof of Theorem 5:} Let $\hat{\lambda}_{\ell}(x)$ denote the series
regression of $u_{i}=y_{i}-a_{i}^{T}\beta_{0}$ on $p(x_{i})$ in the $\hat{I}
_{\ell}$ sample. By a standard formula for instrumental variables estimation
and series estimation,
\begin{align}
\sqrt{n}(\hat{\beta}-\beta_{0}) & =\hat{H}^{-1}\frac{1}{\sqrt{n}}\sum
_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}-\hat{\alpha}_{\ell}(x_{i})]\left\{
y_{i}-\hat{\gamma}_{\ell}(x_{i})-[a_{i}-\hat{\alpha}_{\ell}(x_{i})]^{T}
\beta_{0}\right\} \label{plex}\\
& =\hat{H}^{-1}\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}
[a_{i}-\hat{\alpha}_{\ell}(x_{i})]\left[ u_{i}-\hat{\lambda}_{\ell}
(x_{i})\right] \nonumber
\end{align}
Assume for the moment that $a_{i}$ is a scalar and let $y_{i}=u_{i}$. Then
$\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}-\hat{\alpha}_{\ell}
(x_{i})]\left[ u_{i}-\hat{\lambda}_{\ell}(x_{i})\right] /n$ is the doubly
robust estimator with $m(z,\gamma)=a[y-\gamma(x)],$ i.e. for the expected
conditional covariance. It then follows as in the proof of Corollary 3 that
$\max\{\Delta_{n}^{m},\Delta_{n}^{\gamma},\Delta_{n}^{\alpha}\}\leq
C\bar{\Delta}_{n}^{\ast}.$ Then by Lemmas 6 and A5, for $\varphi
(z)=[a_{i}-\alpha_{0}(x_{i})]\varepsilon_{i}$,
\[
\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}-\hat{\alpha
}_{\ell}(x_{i})]\left[ u_{i}-\hat{\lambda}_{\ell}(x_{i})\right] =\frac
{1}{\sqrt{n}}\sum_{i=1}^{n}\varphi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}
)+\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}.
\]
Note that here $\alpha_{0}(x_{i})=-E[a_{i}|x_{i}]$ so that
\[
\tilde{h}_{1}^{\alpha}=\tilde{F}\{v(z)-\alpha_{0}(x)p(x)\}=-\tilde
{F}\{[a-\alpha_{0}(x)]p(x)\}.
\]
Then we have
\[
E[\tilde{1}\tilde{\Delta}_{1}^{\alpha}\tilde{\Delta}_{1}^{\alpha T}
|X,\tilde{Z}^{c}]=\tilde{1}\frac{1}{\tilde{n}}\tilde{\Sigma}^{-1}\tilde
{F}\{p(x)p(x)^{T}Var(a_{i}|x_{i}=x)\}\tilde{\Sigma}^{-1}\leq\frac{C}{n}I.
\]
Therefore it follows by Lemma A2 that
\[
E[\hat{1}\tilde{1}(\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha
})^{2}|X,\tilde{Z}^{c}]=\hat{1}\hat{\Delta}_{2}^{T}E[\tilde{1}\tilde{\Delta
}_{1}^{\alpha}\tilde{\Delta}_{1}^{\alpha T}|X,\tilde{Z}^{c}]\hat{\Delta}
_{2}\leq\hat{1}\frac{C}{n}\hat{\Delta}_{2}^{T}\hat{\Delta}_{2}=o_{p}
(K/n^{2}).
\]
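The final rate follows from $\hat{1}\hat{\Delta}_{2}^{T}\hat{\Delta}_{2}=O_{p}(K^{-2\zeta_{\gamma}})$, shown above, provided $\sqrt{n}K^{-\zeta_{\gamma}}\longrightarrow0$, which we take to be among the maintained rate conditions:
\[
\frac{C}{n}\hat{\Delta}_{2}^{T}\hat{\Delta}_{2}=O_{p}\left( \frac{K^{-2\zeta_{\gamma}}}{n}\right) =o_{p}\left( \frac{K}{n^{2}}\right) .
\]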
Then by the Markov inequality
\[
\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}
=o_{p}\left( \sqrt{\frac{K}{n}}\right) =O_{p}(\bar{\Delta}_{n}^{\ast}).
\]
Consequently we have
\begin{equation}
\frac{1}{\sqrt{n}}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}[a_{i}-\hat{\alpha
}_{\ell}(x_{i})]\left[ u_{i}-\hat{\lambda}_{\ell}(x_{i})\right] =\frac
{1}{\sqrt{n}}\sum_{i=1}^{n}\varphi(z_{i})+O_{p}(\bar{\Delta}_{n}^{\ast}).
\label{plscore}
\end{equation}
Next, note that
\begin{align*}
\bar{F}\{[a-\tilde{\alpha}_{\ell}(x)][a-\hat{\alpha}_{\ell}(x)]\} & =\bar
{F}\{[a-\alpha_{0}(x)+\alpha_{0}(x)-\tilde{\alpha}_{\ell}(x)][a-\alpha
_{0}(x)+\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]\}\\
& =\bar{F}\{[a-\alpha_{0}(x)]^{2}+[a-\alpha_{0}(x)][\alpha_{0}(x)-\tilde
{\alpha}_{\ell}(x)]\\
& +[a-\alpha_{0}(x)][\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]+[\alpha
_{0}(x)-\tilde{\alpha}_{\ell}(x)][\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]\}.
\end{align*}
Note that by Lemma A3 and $E[a_{i}^{2}|x_{i}]$ bounded,
\begin{align*}
\tilde{1}E[(\bar{F}\{[a-\alpha_{0}(x)][\alpha_{0}(x)-\tilde{\alpha}_{\ell
}(x)]\})^{2}|\tilde{Z}] & =\frac{\tilde{1}}{\bar{n}}\int[a-\alpha
_{0}(x)]^{2}[\alpha_{0}(x)-\tilde{\alpha}(x)]^{2}F_{0}(dz)\\
& \leq C\frac{\tilde{1}}{n}\int E[a_{i}^{2}|x_{i}=x][\alpha_{0}
(x)-\tilde{\alpha}(x)]^{2}F_{0}(dz)\\
& \leq C\frac{\tilde{1}}{n}\int[\alpha_{0}(x)-\tilde{\alpha}(x)]^{2}
F_{0}(dz)=O_{p}\left( \frac{1}{n}\left( \frac{K}{n}+K^{-2\zeta_{\gamma}
}\right) \right) .
\end{align*}
so that by the Markov inequality it follows that
\begin{equation}
\bar{F}\{[a-\alpha_{0}(x)][\alpha_{0}(x)-\tilde{\alpha}_{\ell}(x)]\}=O_{p}
(\bar{\Delta}_{n}^{\ast}). \label{Term1}
\end{equation}
It follows similarly that
\begin{equation}
\bar{F}\{[a-\alpha_{0}(x)][\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]\}=O_{p}
(\bar{\Delta}_{n}^{\ast}). \label{term2}
\end{equation}
Also, by the Cauchy-Schwarz inequality,
\[
\hat{1}\tilde{1}\left\vert \bar{F}\{[\alpha_{0}(x)-\tilde{\alpha}_{\ell
}(x)][\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]\}\right\vert \leq(\tilde{1}\bar
{F}\{[\alpha_{0}(x)-\tilde{\alpha}_{\ell}(x)]^{2}\})^{1/2}(\hat{1}\bar
{F}\{[\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]^{2}\})^{1/2}.
\]
Also,
\[
E[\tilde{1}\bar{F}\{[\alpha_{0}(x)-\tilde{\alpha}_{\ell}(x)]^{2}\}|\tilde
{Z}]=\tilde{1}\int[\tilde{\alpha}_{\ell}(x)-\alpha_{0}(x)]^{2}F_{0}
(dx)=O_{p}\left( \frac{K}{n}+K^{-2\zeta_{\gamma}}\right) ,
\]
so that $\tilde{1}\bar{F}\{[\alpha_{0}(x)-\tilde{\alpha}_{\ell}(x)]^{2}
\}=O_{p}(K/n+K^{-2\zeta_{\gamma}}).$ It follows similarly that $\hat{1}\bar
{F}\{[\alpha_{0}(x)-\hat{\alpha}_{\ell}(x)]^{2}\}=O_{p}(K/n+K^{-2\zeta
_{\gamma}}),$ so that
\begin{equation}
\bar{F}\{[\alpha_{0}(x)-\tilde{\alpha}_{\ell}(x)][\alpha_{0}(x)-\hat{\alpha
}_{\ell}(x)]\}=O_{p}(\bar{\Delta}_{n}^{\ast}). \label{Term3}
\end{equation}
Also, note that by $E[\left\Vert a_{i}\right\Vert ^{4}]<\infty,$
\[
\bar{F}\{[a-\alpha_{0}(x)]^{2}\}=E[\{a-\alpha_{0}(x)\}^{2}]+O_{p}\left(
\frac{1}{\sqrt{n}}\right) =E[\{a_{i}-\alpha_{0}(x_{i})\}^{2}]+O_{p}
(\bar{\Delta}_{n}^{\ast}).
\]
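The first equality in the last display is a standard variance calculation for a sample mean over $\bar{n}$ observations (with $\bar{n}$ of order $n$): assuming $E[\{a_{i}-\alpha_{0}(x_{i})\}^{4}]\leq C$, which follows from $E[\left\Vert a_{i}\right\Vert ^{4}]<\infty$ and boundedness of $\alpha_{0}$,
\[
Var(\bar{F}\{[a-\alpha_{0}(x)]^{2}\})\leq\frac{E[\{a_{i}-\alpha_{0}(x_{i})\}^{4}]}{\bar{n}}\leq\frac{C}{n},
\]
so the Markov inequality gives the $O_{p}(1/\sqrt{n})$ term.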
It then follows by eqs. (\ref{Term1}), (\ref{term2}), (\ref{Term3}) and the
triangle inequality that
\[
\bar{F}\{[a-\tilde{\alpha}_{\ell}(x)][a-\hat{\alpha}_{\ell}(x)]\}=E[\{a_{i}
-\alpha_{0}(x_{i})\}^{2}]+O_{p}(\bar{\Delta}_{n}^{\ast}).
\]
Applying this argument to each element of $\hat{H}=\sum_{\ell=1}^{L}\sum_{i\in
I_{\ell}}[a_{i}-\tilde{\alpha}_{\ell}(x_{i})][a_{i}-\hat{\alpha}_{\ell}
(x_{i})]^{T}/n$ and each group of observations $I_{\ell}$ and summing up gives
$\hat{H}=H+O_{p}(\bar{\Delta}_{n}^{\ast}).$ It then follows by a standard
argument and nonsingularity of $H$ that
\begin{equation}
\hat{H}^{-1}=H^{-1}+O_{p}(\bar{\Delta}_{n}^{\ast}). \label{plhess}
\end{equation}
Finally, it follows from eqs. (\ref{plex}), (\ref{plscore}), (\ref{plhess})
and from $\sum_{i=1}^{n}\varphi(z_{i})/\sqrt{n}=O_{p}(1)$ that
\[
\sqrt{n}(\hat{\beta}-\beta_{0})=[H^{-1}+O_{p}(\bar{\Delta}_{n}^{\ast}
)][\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\varphi(z_{i})+O_{p}(\bar{\Delta}_{n}
^{\ast})]=H^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\varphi(z_{i})+O_{p}
(\bar{\Delta}_{n}^{\ast}).\text{ }Q.E.D.
\]
\bigskip
\textbf{Proof of Theorem 7:} By Lemmas 6 and A5 it suffices to show that
$\bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta
}_{1}^{\alpha}=O_{p}(\Delta_{n}^{m}).$ Note that
\[
\hat{1}\hat{\Delta}_{2}=\hat{1}\hat{h}_{2}+\hat{1}\hat{U}+\hat{1}\hat{W}.
\]
By $E[\hat{h}_{2}\hat{h}_{2}^{T}]\leq Cn^{-1}K^{-2\zeta_{\gamma}}I$ and
Lemma A2 iii) we have
\begin{align*}
E[\left( \bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{h}_{2}^{T}\bar{\Sigma}
\tilde{\Delta}_{1}^{\alpha}\right) ^{2}|\hat{Z}^{c}] & =n\bar{1}\tilde
{1}\left( \tilde{\Delta}_{1}^{\alpha}\right) ^{T}\bar{\Sigma}E[\hat{1}
\hat{h}_{2}\hat{h}_{2}^{T}]\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}\leq
n\bar{1}\tilde{1}\left( \tilde{\Delta}_{1}^{\alpha}\right) ^{T}\bar{\Sigma
}E[\hat{h}_{2}\hat{h}_{2}^{T}]\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}\\
& \leq CK^{-2\zeta_{\gamma}}\bar{1}\tilde{1}\left( \tilde{\Delta}
_{1}^{\alpha}\right) ^{T}\bar{\Sigma}^{2}\tilde{\Delta}_{1}^{\alpha}
=O_{p}\left( \frac{(1+d_{K})K^{1-2\zeta_{\gamma}}}{n}\right) =O_{p}
((\Delta_{n}^{m})^{2}).
\end{align*}
Also, by the first conclusion of Lemma A1 and by Lemma A2 iii),
\begin{align*}
E[\left( \bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{U}^{T}\bar{\Sigma}\tilde{\Delta
}_{1}^{\alpha}\right) ^{2}|\hat{Z}^{c}] & =n\bar{1}\tilde{1}\left(
\tilde{\Delta}_{1}^{\alpha}\right) ^{T}\bar{\Sigma}E[\hat{1}\hat{U}\hat
{U}^{T}]\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}\leq n\bar{1}\tilde{1}\left(
\tilde{\Delta}_{1}^{\alpha}\right) ^{T}\bar{\Sigma}E[\hat{U}\hat{U}^{T}
]\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha}\\
& \leq CK^{-2\zeta_{\gamma}}\ln(n)^{2}\bar{1}\tilde{1}\left( \tilde{\Delta
}_{1}^{\alpha}\right) ^{T}\bar{\Sigma}^{2}\tilde{\Delta}_{1}^{\alpha}
=O_{p}\left( \frac{(1+d_{K})K^{1-2\zeta_{\gamma}}[\ln(n)]^{2}}{n}\right)
=O_{p}((\Delta_{n}^{m})^{2}).
\end{align*}
Also by the second conclusion of Lemma A1 and Lemma A2 iii), for $\Delta>0$
large enough,
\[
\bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta
}_{1}^{\alpha}=O_{p}(n^{(1/2)-\Delta}\sqrt{(1+d_{K})/n})=O_{p}(\Delta_{n}
^{m}).
\]
The conclusion then follows by the Markov and triangle inequalities.
\textit{Q.E.D.}
\bigskip
\textbf{Proof of Theorem 8: }By Lemmas 6 and A5 it suffices to show that
$\bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta
}_{1}^{\alpha}=O_{p}(\Delta_{n}^{m}+\tilde{\Delta}_{n}).$ Note that
\begin{align*}
\bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{\Delta}_{2}^{T}\bar{\Sigma}\tilde{\Delta
}_{1}^{\alpha} & =T_{1}+T_{2}+T_{3},T_{1}=\bar{1}\hat{1}\tilde{1}\sqrt
{n}\hat{h}_{2}^{T}\bar{\Sigma}\tilde{\Delta}_{1}^{\alpha},\\
T_{2} & =\bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{\Delta}_{2}^{T}(I-\hat{\Sigma
})\bar{\Sigma}\tilde{h}_{1}^{\alpha},\\
T_{3} & =\bar{1}\hat{1}\tilde{1}\sqrt{n}\hat{\Delta}_{2}^{T}(I-\hat{\Sigma
})\bar{\Sigma}(I-\tilde{\Sigma})\tilde{\Delta}_{1}^{\alpha}.
\end{align*}
By Lemma A2 iii),
\[
E[T_{1}^{2}|\hat{Z}^{c}]\leq\bar{1}\tilde{1}n(\tilde{\Delta}_{1}^{\alpha}
)^{T}\bar{\Sigma}E[\hat{h}_{2}\hat{h}_{2}^{T}]\bar{\Sigma}\tilde{\Delta}
_{1}^{\alpha}\leq CK^{-2\zeta_{\gamma}}\tilde{1}(\tilde{\Delta}_{1}^{\alpha
})^{T}\tilde{\Delta}_{1}^{\alpha}=O_{p}(K^{-2\zeta_{\gamma}}\left( \Delta
_{n}^{m}\right) ^{2}),
\]
so by the Markov inequality, $T_{1}=O_{p}(\Delta_{n}^{m}).$ By Lemma A2 ii),
\begin{align*}
E[T_{2}^{2}|\tilde{Z}^{c}] & \leq\hat{1}n\,\hat{\Delta}_{2}^{T}
(I-\hat{\Sigma})E[\tilde{h}_{1}^{\alpha}\left( \tilde{h}_{1}^{\alpha}\right)
^{T}](I-\hat{\Sigma})\hat{\Delta}_{2}\leq Cd_{K}\hat{\Delta}_{2}^{T}
(I-\hat{\Sigma})^{2}\hat{\Delta}_{2}\\
& =O_{p}((1+d_{K})\frac{K^{1-2\zeta_{\gamma}}}{n}\frac{K\ln(K)}{n}).
\end{align*}
Note that by the Markov inequality and $K\ln(K)/n\longrightarrow0$ it follows
that $T_{2}=O_{p}(\bar{\Delta}_{n}^{\ast}+\Delta_{n}^{m}).$ Finally, by the
Cauchy-Schwarz inequality and Lemma A2,
\[
T_{3}=O_{p}(\sqrt{\frac{K^{3}\ln(K)(1+d_{K})}{n^{3}}}K^{(1/2)-\zeta_{\gamma}
})=O_{p}(\tilde{\Delta}_{n}).
\]
The conclusion then follows by the triangle inequality. \textit{Q.E.D.}
\section*{Acknowledgements}
We appreciate the hospitality of the Cowles Foundation where much of the work
for this paper was accomplished. We also appreciate the comments of M.
Cattaneo, X. Chen, M. Jansson and seminar participants at UCL.
\bigskip
\setlength{\parindent}{-.5cm} \setlength{\parskip}{.1cm}
\begin{center}
\textbf{REFERENCES}
\end{center}
\textsc{Athey, S., G. Imbens, and S. Wager} (2017): "Efficient Inference of
Average Treatment Effects in High Dimensions via Approximate Residual
Balancing," \textit{Journal of the Royal Statistical Society, Series B,} forthcoming.
\textsc{Ayyagari, R. }(2010): Applications of Influence Functions to
Semiparametric Regression Models, Ph.D. Thesis, Harvard School of Public
Health, Harvard University.
\textsc{Belloni, A., V. Chernozhukov, D. Chetverikov, K. Kato} (2015):
\textquotedblleft Some New Asymptotic Theory for Least Squares Series:
Pointwise and Uniform Results,\textquotedblright\ \textit{Journal of
Econometrics} 186, 345--366.
\textsc{Bickel, P.J.} (1982): "On Adaptive Estimation," \textit{Annals of
Statistics} 10, 647-671.
\textsc{Bickel, P. and Y. Ritov} (1988): "Estimating Integrated Squared
Density Derivatives: Sharp Best Order of Convergence Estimates,"
\textit{Sankhya: The Indian Journal of Statistics}, \textit{Series A} 50, 381--393.
\textsc{Blomquist, S. and M. Dahlberg} (1999): "Small Sample Properties of
LIML and Jackknife IV Estimators: Experiments with Weak Instruments,"
\textit{Journal of Applied Econometrics }14, 69--88.
\textsc{Cattaneo, M.D., and M. Farrell} (2013): "Optimal Convergence Rates,
Bahadur Representation, and Asymptotic Normality of Partitioning Estimators,"
\textit{Journal of Econometrics} 174, 127-143.
\textsc{Cattaneo, M.D., and M. Jansson }(2017): "Kernel-Based Semiparametric
Estimators: Small Bandwidth Asymptotics and Bootstrap Consistency,"
\textit{Econometrica}, forthcoming.
\textsc{Cattaneo, M.D., M. Jansson, and X. Ma} (2017): "Two-step Estimation
and Inference with Possibly Many Included Covariates," working paper, Michigan.
\textsc{Chernozhukov, V., J.C. Escanciano, H. Ichimura, W.K. Newey, J.M.
Robins} (2016): "Locally Robust Semiparametric Estimation," arXiv:1608.00033.
\textsc{Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen,
W.K. Newey, and J.M. Robins} (2017): "Double/Debiased Machine Learning for
Treatment and Structural Parameters," \textit{Econometrics Journal}, forthcoming.
\textsc{Donald, S.G. and W.K. Newey }(1994): \textquotedblleft Series
Estimation of Semilinear Models," \textit{Journal of Multivariate Analysis}
50, 30-40.
\textsc{Firpo, S. and C. Rothe} (2016): "Semiparametric Two-Step Estimation
Using Doubly Robust Moment Conditions," working paper.
\textsc{Gine, E. and R. Nickl} (2008): "A Simple Adaptive Estimator of the
Integrated Square of a Density," \textit{Bernoulli} 14, 47--61.
\textsc{Hahn, J.} (1998): "On the Role of the Propensity Score in Efficient
Semiparametric Estimation of Average Treatment Effects," \textit{Econometrica}
66, 315-331.
\textsc{Hirano, K., G. Imbens, and G. Ridder} (2003): "Efficient Estimation of
Average Treatment Effects Using the Estimated Propensity Score,"
\textit{Econometrica} 71: 1161--1189.
\textsc{Hirschberg, D.A. and S. Wager} (2017): "Balancing Out Regression Error:
Efficient Treatment Effect Estimation without Smooth Propensities," arXiv:1712.00038.
\textsc{Ichimura, H. and W.K. Newey} (2017): "The Influence Function of
Semiparametric Estimators," CEMMAP working paper CWP06/17.
\textsc{Imbens, G., J. Angrist, and A. Krueger} (1999): "Jackknife Instrumental
Variables Estimation," \textit{Journal of Applied Econometrics }14, 57-67.
\textsc{Kandasamy, K., A. Krishnamurthy, B. Poczos, L. Wasserman, J. Robins
}(2015) "Nonparametric von Mises Estimators for Entropies, Divergences and
Mutual Informations," \textit{Advances in Neural Information Processing
Systems} 28 (NIPS 2015).
\textsc{Laurent, B.} (1996): "Efficient Estimation of Integral Functionals of
a Density," \textit{Annals of Statistics} 24, 659-681.
\textsc{Mukherjee, R., W.K. Newey, J.M. Robins} (2017): "Semiparametric
Efficient Empirical Higher Order Influence Function Estimators," arXiv:1705.07577.
\textsc{Newey, W.K.} (1994): "The Asymptotic Variance of Semiparametric
Estimators," \textit{Econometrica} 62, 1349-1382.
\textsc{Newey, W.K. }(1997): \textquotedblleft Convergence Rates and
Asymptotic Normality for Series Estimators,\textquotedblright\ \textit{Journal
of Econometrics }79, 147-168.
\textsc{Newey, W.K., F. Hsieh, {\small and} J.M. Robins} (1998):
\textquotedblleft Undersmoothing and Bias Corrected Functional Estimation,"
MIT Dept. of Economics working paper.
\textsc{Newey, W.K., F. Hsieh, {\small and} J.M. Robins} (2004):
\textquotedblleft Twicing Kernels and a Small Bias Property of Semiparametric
Estimators,\textquotedblright\ \textit{Econometrica} 72, 947-962.
\textsc{Powell, J.L., J.H. Stock, and T.M. Stoker }(1989): "Semiparametric
Estimation of Index Coefficients," \textit{Econometrica} 57, 1403-1430.
\textsc{Robins, J.M. and A. Rotnitzky} (1995): "Semiparametric Efficiency in
Multivariate Regression Models with Missing Data," \textit{Journal of the
American Statistical Association} 90, 122--129.
\textsc{Robins, J.M., A. Rotnitzky, and M. van der Laan} (2000): "Comment on
'On Profile Likelihood'\ by S. A. Murphy and A. W. van der Vaart,"
\textit{Journal of the American Statistical Association} 95, 431-435.
\textsc{Robins, J., M. Sued, Q. Lei-Gomez, and A. Rotnitzky} (2007): "Comment:
Performance of Double-Robust Estimators When Inverse Probability Weights Are
Highly Variable," \textit{Statistical Science} 22, 544--559.
\textsc{Robins, J.M., E.T. Tchetgen, L. Li, and A. van der Vaart} (2009):
"Semiparametric Minimax Rates," \textit{Electronic Journal of Statistics }3, 1305--1321.
\textsc{Robins, J.M., L. Li, E. Tchetgen, and A. van der Vaart} (2008) "Higher
Order Influence Functions and Minimax Estimation of Nonlinear Functionals," in
\textit{IMS Collections Vol. 2, Probability and Statistics: Essays in Honor of
David A. Freedman, }D. Nolan and T. Speed (eds.), Beachwood, Ohio: Institute
of Mathematical Statistics, 335-421.
\textsc{Robins, J.M., P. Zhang, R. Ayyagari, R. Logan, E. Tchetgen, L. Li, T.
Lumley, A. van der Vaart, and the HEI Health Review Committee} (2013): "New
Statistical Approaches to Semiparametric Regression with Application to Air
Pollution Research," Research Report 175, Health Effects Institute, 3-129.
\textsc{Robins, J.M., L. Li, R. Mukherjee, E. Tchetgen, A. van der Vaart}
(2017): "Minimax Estimation of a Functional on a Structured High Dimensional
Model," \textit{Annals of Statistics,} forthcoming.
\textsc{Rotnitzky, A. and J.M. Robins} (1995): "Semi-parametric Estimation of
Models for Means and Covariances in the Presence of Missing Data,"
\textit{Scandinavian Journal of Statistics} 22, 323--333.
\textsc{Rudelson, M.} (1999): "Random Vectors in the Isotropic Position,"
\textit{Journal of Functional Analysis} 164, 60-72.
\textsc{Scharfstein, D.O., A. Rotnitzky, and J.M. Robins} (1999): Rejoinder to
\textquotedblleft Adjusting For Nonignorable Drop-out Using Semiparametric
Non-response Models,\textquotedblright\ \textit{Journal of the American
Statistical Association }94, 1135-1146.
\textsc{Stoker, T. }(1986): "Consistent Estimation of Scaled Coefficients,"
\textit{Econometrica} 54, 1461-1482.
\setlength{\parindent}{.0cm} \setlength{\parskip}{.1cm}
\end{document}
\section*{Introduction}
3$d$-containing materials are well known for strong electron correlations, narrow band widths, site- and orbital-selective states, and robust magnetism whereas
4- and 5$d$ systems are recognized for strong spin–orbit coupling, increased hybridization, extended orbitals, and a tendency toward dimerization \cite{Streltsov2017}.
Combining these qualities
in mixed metal
materials leads to a variety of unexpected properties.
Examples include interpenetrating sublattices with independent spin dynamics and ground states in Sr$_2$CoOsO$_6$ \cite{Morrow2013,Yan2014}, self-healing photoelectrode materials like CuRhO$_2$ \cite{Gu2014}, covalency-driven collapse of spin-orbit coupling in Ba$_5$CuIr$_3$O$_{12}$ \cite{Ye2018}, an ultra-high coercive field in Sr$_3$NiIrO$_6$ \cite{Singleton2016,ONeal2019}, magnetoelectric coupling in Co$_4$Nb$_2$O$_9$ \cite{Khanh2016}, surprising spin entropy effects across the magnetic quantum phase transition in CoNb$_2$O$_6$ \cite{Liang2015}, and nonreciprocal directional dichroism in Ni$_3$TeO$_6$ \cite{Yokosuk2020}.
Another mixed metal system with exciting properties and curious hybridization is
Fe$_2$Mo$_3$O$_8$ - also known as the mineral Kamiokite \cite{Inosov2018}. While magnetism and magnetoelectric coupling have been widely studied \cite{Sheckelton2012,Mourigal2014,Wang2015,Kurumaji2015,Li2017,Chen2018,Solovyev2019,Nikolaev2021}, the charge excitations are highly under-explored.
Fe$_2$Mo$_3$O$_8$ is a polar magnet with giant magnetoelectric coupling, strong Dzyaloshinski-Moriya interactions, valence bond condensation (creating a cluster magnet), and the possibility of orbitally-selective
transitions \cite{Sheckelton2012,Mourigal2014,Wang2015,Kurumaji2015,Li2017,Chen2018,Solovyev2019,Nikolaev2021}.
Zinc substitution, first on the tetrahedral Fe site and then on the octahedral Fe site \cite{Kurumaji2015,Streltsov2019}, is of interest for magnetic properties as well \cite{Nakayama2011,Mourigal2014,Inosov2018,Streltsov2019}.
The structure of Fe$_2$Mo$_3$O$_8$ consists of corner-shared tetrahedral and octahedral sites separated by layers of Mo trimers [Fig. \ref{FMOBandgap}(a,b)] \cite{McCarroll1957,Ansell1966,Sheckelton2012}.
The FeO$_4$ tetrahedron is significantly elongated and distorted, and the FeO$_6$ octahedron is trigonally distorted as well, leading to a C$_{3v}$ point group on both tetrahedral and octahedral Fe sites. As a result, Fe$_2$Mo$_3$O$_8$ has no inversion symmetry.
The system has a 61 K magnetic ordering transition to a collinear antiferromagnetic state
with a concomitant structural distortion \cite{Varret1971,Czeskleba1972,Wang2015,Stanislavchuk2019,Reschke2020}. Antiferromagnetic antiphase domain boundaries have been imaged in this state \cite{Kim2018}. Fe$_2$Mo$_3$O$_8$
also displays a 5 T transition to the ferrimagnetic state with an extremely large magnetoelectric coefficient \cite{Wang2015,Kurumaji2015}.
Ni$_2$Mo$_3$O$_8$ also hosts robust magnetoelectric coupling with a field-tunable coupling mechanism \cite{Tang2021}.
Spectroscopic highlights in Fe$_2$Mo$_3$O$_8$ include (i) nonreciprocal directional dichroism \cite{Yu2018}, phonon trends across $T_{\rm N}$ \cite{Stanislavchuk2019,Reschke2020}, and a variety of magnetic excitations in the terahertz range \cite{Csizi2020}, (ii) M\"ossbauer to confirm the 2+ charge on the iron site \cite{Varret1971,Czeskleba1972}, and (iii) studies of charge transfer via time-dependent optical Kerr effects \cite{Sheu2019} complemented by first principles electronic structure calculations \cite{Biwas2017,Streltsov2019,Reschke2020}.
In order to place the charge excitations on a firm foundation, we measured the optical properties of the $A_2$Mo$_3$O$_8$ family of materials (where $A$ = Fe, Ni, Mn, Zn)
and compared our findings with complementary
electronic structure calculations.
We show that the 1.7 eV gap in Zn$_2$Mo$_3$O$_8$
is determined by the charge excitations of the Mo trimer. Replacing Zn on the octahedral site with Fe yields FeZnMo$_3$O$_8$. This system has a substantially reduced and renormalized gap determined by Fe-O hybridized bands
that appear due to the periodic lattice potential.
Further substitution yields Fe$_2$Mo$_3$O$_8$ which has both trigonally-distorted octahedral and tetrahedral sites occupied by Fe atoms, and the gap is further reduced to 1.0 eV.
Here, the charge gap is more complex to describe because it has mixed band and Mott features.
In other words, some orbitals hybridize strongly with oxygen and form very narrow bands whereas other orbitals exhibit real space localization and are Mott insulating. Mixed band and Mott gaps are commonly called orbitally- or site-selective Mott states~\cite{Chen2020,Pascut2020,Lichtenstein2001,Anisimov2002,deMedici2005,Haule2017}.
What distinguishes Fe$_2$Mo$_3$O$_8$ from other orbitally-selective Mott systems is the narrow many-body resonance
emanating from the edge of the flat valence band.
The Kondo effect is, of course, normally studied in metals. In this work, we show that the Kondo effect can also appear in mixed metal semiconductors.
In addition, the gap in Fe$_2$Mo$_3$O$_8$ is
sensitive to magnetic ordering at 61 K due to the heavily mixed character of the charge excitations.
Moreover, the $d$-to-$d$ excitations on the distorted octahedral Fe site are vibronically activated, and spin-orbit related features ride on top of the distorted tetrahedral on-site excitations below the magnetic ordering transition. We discuss these findings in terms of band-Mott mixing in 3- and 4$d$-containing quantum cluster magnets.
\begin{figure*}[tbh]
\begin{minipage}{7.0in}
\includegraphics[width = 7.0in]{Fig1_OptCond_glp_v1.pdf}
\end{minipage}
\begin{minipage}{7.0in}
\caption{\label{FMOBandgap}
(a) Crystal structure of the $A_2$Mo$_3$O$_8$ compounds where $A$ = Mn, Fe, Co, Ni, Zn. $A$(T) and $A$(O) represent ions in trigonally-distorted tetrahedral and octahedral environments, respectively. (b) Schematic view of the Mo trimer.
(c-e) Absorption spectra of Zn$_2$Mo$_3$O$_8$, FeZnMo$_3$O$_8$, and Fe$_2$Mo$_3$O$_8$ measured at room temperature.
(f) A Tauc plot reveals the direct band gap of Zn$_2$Mo$_3$O$_8$ and the Fe substituted analogs.
(g) Band gap schematic showing the impact of two different types of T and O-site substitution. The upper and lower trend lines correspond to the (Fe,Zn)$_2$Mo$_3$O$_8$ series and the $A_2$Mo$_3$O$_8$ ($A$ = Fe, Mn, Ni) materials, respectively.
(h,i) Calculated optical conductivity of the $A_2$Mo$_3$O$_8$ materials.
}
\end{minipage}
\end{figure*}
\section*{Methods}
High quality single crystals of Fe$_2$Mo$_3$O$_8$, the Zn-substituted analogs FeZnMo$_3$O$_8$, and Zn$_2$Mo$_3$O$_8$, as well as Mn$_2$Mo$_3$O$_8$ and Ni$_2$Mo$_3$O$_8$ were grown by chemical vapor transport as discussed previously \cite{Wang2015}. Special care was taken to assure the stoichiometry of FeZnMo$_3$O$_8$.
Crystals were polished to control optical density and expose the hexagonal face.
A Bruker 55 Fourier transform infrared spectrometer equipped with a microscope attachment was used to measure transmittance over the 0.41 - 2.0 eV energy range. Absorption was calculated as $\alpha(E)= -\frac{1}{d}{\rm ln}({\mathcal{T}}(E))$, where ${\mathcal{T}}$($E$) is the transmittance and \emph{d} is the thickness. Performing these measurements in transmittance rather than reflectance avoids light leakage problems. Temperature was controlled by an open-flow cryostat.
For theoretical calculations we used the density functional theory (DFT) as implemented in WIEN2k \cite{Blaha2019} and a charge-self-consistent dynamical mean field theory (DMFT) as implemented in the eDMFT code~\cite{Haule,Haule2018}. At the DFT level, we
used the generalized gradient approximation Perdew-Burke-Ernzerhof (GGA-PBE) functional \cite{Perdew1996}, with RKmax = 7.0 and 312 k-points in the irreducible part of the 1$^{st}$ Brillouin zone. At the eDMFT level, we used the fully rotationally invariant Coulomb interaction, a nominal double counting scheme \cite{Haule2015}, with the $d$-orbital occupations for double counting corrections for Mn, Fe, Co and Ni set to be 5, 6, 7 and 8, respectively. The temperature is fixed at $500\,$K. To define the DMFT projector, we used quasi-atomic
orbitals by projecting bands in a large hybridization window (-10 to +10 eV) with respect to the Fermi level, for
which the partially screened Coulomb interactions take the values
U = 10 eV and J$_H$ = 1 eV for the Mn, Fe, Co and Ni ions.
In order to solve the auxiliary quantum impurity problem, a continuous-time quantum Monte Carlo method in the hybridization-expansion (CT-HYB) was used~\cite{Haule2007}, where the five $d$ orbitals for the Mn, Fe, Co and Ni ions (grouped according to the local C$_{3v}$ point group symmetry) were chosen as our correlated subspace in a single-site DMFT approximation. For the CT-HYB calculations, up to 10$^8$ Monte Carlo steps were employed for each Monte Carlo run. The self-energy on the real axis was obtained using the analytical continuation maximum entropy method for the local cumulant as explained in \cite{ME_haule}. During the calculation, the position of the chemical potential was kept fixed within the gap.
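For reference, the densities of states and spectral functions discussed below follow from the analytically continued self-energy in the usual way; schematically (suppressing the embedding projectors and the momentum sum),
\begin{equation*}
A(\omega)=-\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\left[\left(\omega+\mu-H_{\mathrm{KS}}-\Sigma(\omega)\right)^{-1}\right],
\end{equation*}
where $H_{\mathrm{KS}}$ is the Kohn-Sham Hamiltonian, $\mu$ the chemical potential, and $\Sigma(\omega)$ the DMFT self-energy.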
The experimental crystal structures used for our computations \cite{Stanislavchuk2019} as well as details of our calculations are given in the Supplementary information \cite{Supp}.
\section*{Results and Discussion}
\subsection*{Optical response of Fe$_2$Mo$_3$O$_8$ and the $A$-substituted analogs ($A$ = Zn, Mn, Ni)}
Figure \ref{FMOBandgap}(c-e) summarizes the optical properties of the (Fe,Zn)$_2$Mo$_3$O$_8$ family of materials. The absorption spectrum of the parent compound, Zn$_2$Mo$_3$O$_8$, is low and flat in the near infrared, rising on approach to the O 2$p$ $\rightarrow$ Mo 4$d$ charge transfer excitation. The direct band gap is 1.75 eV. Because Zn$^{2+}$ has a $d^{10}$ configuration, there are no $d$-to-$d$ on-site excitations. Zn$_2$Mo$_3$O$_8$ therefore provides an opportunity to study how the Mo trimer interacts with oxygen in isolation. At the same time, it is an important
scaffold upon which additional complexity can be built.
Sequential $A$-site substitution of Fe, first into the distorted octahedral site in FeZnMo$_3$O$_8$, here denoted as Fe(O), and then into the distorted tetrahedral site in Fe$_2$Mo$_3$O$_8$, henceforth Fe(T), lowers the charge gap significantly [Fig.~\ref{FMOBandgap}(f)]. We find direct gaps of 1.2 and 1.0 eV for FeZnMo$_3$O$_8$ and Fe$_2$Mo$_3$O$_8$, respectively.
The gap values were determined from Tauc plots of
(${\alpha}{\cdot}E$)$^2$ vs. energy \cite{Pankove2010}.
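For a direct allowed transition, the Tauc construction assumes the near-edge form
\begin{equation*}
({\alpha}\cdot E)^2 \propto (E-E_{g}),
\end{equation*}
so the gap $E_{g}$ is obtained by extrapolating the linear portion of $({\alpha}\cdot E)^2$ to zero [Fig. \ref{FMOBandgap}(f)].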
\begin{figure*}[tbh]
\begin{minipage}{2.2in}
\caption{(a, d) Tauc plot of (${\alpha}{\cdot}E$)$^2$ vs. energy for Fe$_2$Mo$_3$O$_8$ and a plot of band gap vs. temperature with a fit to the Varshni model. (b, e) Close-up view of the Fe$^{2+}$ $d$-to-$d$ on-site excitation on the tetrahedral site showing the fine structure that develops due to spin-orbit coupling below the 61 K magnetic ordering transition. (c, f) Close-up view of the Fe$^{2+}$ $d$-to-$d$ on-site excitation on the octahedral site and oscillator strength vs. temperature along with a fit to a modified vibronic coupling model \cite{Ballhausen1962,Stoneham2001,ONeal2017}.\label{Temperature}}
\end{minipage}
\begin{minipage}{4.4in}
\includegraphics[width=4.4in]{FMO_temp17.pdf}
\end{minipage}
\end{figure*}
Figure~\ref{FMOBandgap}(d) displays the absorption of FeZnMo$_3$O$_8$. The charge excitations across the gap consist of mixed O 2$p$ + Mo 4$d$ + Fe(O) 3$d$ transitions. The lowest energy excitation across this gap comes from Fe(O) hybridizing with Mo-O trimers. The 1.2 eV gap is substantially lower in energy than the fundamental Mo-O band gap, which theoretically remains roughly equal to that in Zn$_2$Mo$_3$O$_8$. As Fe$^{2+}$ populates the octahedral site in FeZnMo$_3$O$_8$, an on-site $d$-to-$d$ transition arises near 0.95 eV. It overlaps strongly with the leading edge of the charge transfer band and is activated by vibronic coupling [Fig. \ref{Temperature}(c,f)] \cite{Ballhausen1962,Stoneham2001,ONeal2017}. Notice that absorption is low and flat near 0.5 eV - a sign of crystal quality and stoichiometry. Once the Fe(T) site is populated as well (as in Fe$_2$Mo$_3$O$_8$), the gap is reduced further, and two
different types of $d$-to-$d$ on-site excitations are identified inside the charge gap. As shown in Fig.~\ref{FMOBandgap}(e), Fe on the trigonally-distorted tetrahedral site contributes additional atomic-like excitations centered at 0.5\,eV. While oscillator strength is fully conserved as a function of temperature [Fig. \ref{Temperature}(e)], a great deal of fine structure due to spin-orbit coupling rides on top of the band below the 61 K magnetic ordering transition [Fig. \ref{Temperature}(b)]. A similar response develops in Fe$^{2+}$:ZnSe \cite{Evans2017}. The behavior of the Fe$^{2+}$ on-site $d$-to-$d$ excitations in Fe$_2$Mo$_3$O$_8$ is summarized in Fig. \ref{Temperature}(b,c,e,f) and further discussed in Supplementary information \cite{Supp}. The full sequence of gap values
is shown schematically in Fig.~\ref{FMOBandgap}(g).
To test the influence of $A$-site substitution on the band gap and strength of the metal to MoO-trimer hybridization, we measured the optical properties of the Mn and Ni analogs of Fe$_2$Mo$_3$O$_8$ [Fig. S2, Supplementary information] \cite{Supp}.
Mn$_2$Mo$_3$O$_8$ and Ni$_2$Mo$_3$O$_8$ have charge gaps of 1.65 and 1.7 eV, respectively - very similar to that of the Zn end member.
This result is attributable to 3$d$ orbital filling and character. The Mn system has a half-filled $d$-manifold,
which corresponds to a high spin Mott state with a large gap.
The $d^8$ configuration in the Ni analog also has two holes and is thus in the large gap Mott insulating state. As a result, there is little mixing
between the metal center and Mo-O trimer, hence these transition metals do not play an active role in determining the low-energy excitations across the gap. We mention in passing that the Mn and Ni compounds have on-site $d$-to-$d$ excitations as well [Fig. S2, Supplementary information] \cite{Supp}.
On the other hand, Fe$_2$Mo$_3$O$_8$ has both Mott-type and band-insulating orbitals, which strongly hybridize with the Mo trimer. These interactions enable the Fe centers to control the low energy physics as discussed below. Moreover, we find that the band gap of Fe$_2$Mo$_3$O$_8$ is sensitive to the 61 K magnetic ordering transition and the associated structural distortion [Fig. \ref{Temperature}(a,d)].
This is different from the other $A_2$Mo$_3$O$_8$ compounds, where the temperature dependence of the band gaps is in excellent overall agreement with the Varshni model \cite{Sarswat2012,ODonnell1991}, implying no (or extremely subtle) structural aspects to the magnetic ordering transitions. That the band gap decreases across $T_{\rm N}$ is due to coupling of charge, structure, and magnetism and the flat bands emanating from the trigonally-distorted tetrahedral Fe site.
\begin{figure*}[tbh]
\begin{minipage}{7.0in}
\includegraphics[width = 7.0in]{Fig2_DOS_Orb.png}
\end{minipage}
\begin{minipage}{7.0in}
\caption{\label{DOS}
Density of states (DOS) for $A_2$Mo$_3$O$_8$ ($A$ = Fe and Zn): (a) total DOS; (b-g) atom- and orbital-projected DOS.
The (T) and (O) symbols refer to trigonally-distorted tetrahedral and octahedral environments.
The vertical solid lines are placed at zero chemical potential. The schematic insets of gray tetrahedra$/$octahedra are guides to the eye pointing the reader to the electronic states of the transition metal ions in the corresponding environment. (h, i) Orbital projected spectral functions for Fe$_2$Mo$_3$O$_8$ and FeZnMo$_3$O$_8$ (blue - $e$(1) tetrahedra; green - $a_1$ and $e$(2) octahedra; red - $a_1$, $e$(2) tetrahedra and $e$(1) octahedra).}
\end{minipage}
\end{figure*}
\begin{figure*}[tbh]
\begin{minipage}{7.0in}
\includegraphics[width = 7.0in]{Fig4_Orb_Character.pdf}
\end{minipage}
\begin{minipage}{7.0in}
\caption{\label{DOS_orbitals}
Atom- and orbital-projected density of states (DOS) together with a schematic view of the orbital occupation and character for $A_2$Mo$_3$O$_8$ ($A$ = Mn, Fe, Co and Zn).
Each panel shows a schematic view of the orbital occupation (left side) and DOS (right side). Panels (a, c, e, g) refer to ions in trigonally-distorted tetrahedral environments ($A$(T)), whereas panels (b, d, f, h) refer to ions in trigonally-distorted octahedral environments ($A$(O)).
}
\end{minipage}
\end{figure*}
\subsection*{Strong hybridization, resonance, and interaction with the Mo trimer}
Figure~\ref{FMOBandgap}(h) displays the theoretical optical conductivity of Zn$_2$Mo$_3$O$_8$, FeZnMo$_3$O$_8$ and Fe$_2$Mo$_3$O$_8$ computed using a combination of Density Functional Theory and embedded Dynamical Mean Field Theory (DFT + eDMFT) methods \cite{Haule,Haule2018}.
Here, we find the same trend of decreasing charge gap with Fe substitution. In Zn$_2$Mo$_3$O$_8$, the size of the optical gap is $\approx$1.7$\,$eV, which decreases to approximately 1.5 and 1.4 eV in
FeZnMo$_3$O$_8$ and Fe$_2$Mo$_3$O$_8$, respectively.
Figure~\ref{FMOBandgap}(i) compares the theoretical optical conductivity of the Mn, Ni and Co analogs. We find that the predicted gap is larger in all of these compounds (near $1.55\,$eV) as compared to Fe$_2$Mo$_3$O$_8$. The edge of the gap is very smooth and temperature smeared. This is quite different from Fe$_2$Mo$_3$O$_8$ where the gap is smaller with an additional peak at the onset.
To better understand what determines the low energy excitations and character of the gap in this class of compounds, we calculated the local density of states for the full set of $A_2$Mo$_3$O$_8$ materials [Fig.~\ref{DOS}]. While the optical gap in general is different than the gap of the single-particle excitations measured by the local density of states, the two are very similar in these compounds. This is because the band gap is direct in Zn$_2$Mo$_3$O$_8$, and the hybridized bands in the Fe-containing compounds are extremely narrow. Hence momentum-conserving excitations have essentially the same gap size as the finite momentum single particle excitations.
The insets show that the position of the conduction band in FeZnMo$_3$O$_8$ and Fe$_2$Mo$_3$O$_8$ decreases slightly as compared to Zn$_2$Mo$_3$O$_8$, although the change is small. Most of the action is in the valence bands, where the FeZn band edge moves considerably upward. In Fe$_2$Mo$_3$O$_8$, a very narrow many-body excitation forms at the onset of the gap. This is the origin of the first peak in the optical conductivity [Fig.~\ref{FMOBandgap}(h,i)] and the reason that the gap is drawn strongly downward in this system.
Figure~\ref{DOS}(b-g) displays the projected density of states (DOS) per transition metal center and per orbital in the distorted tetrahedral (T) and octahedral (O) environments. The local point group symmetry around each iron center is C$_{3v}$. Therefore, the $e$ and $t_2$ levels at the distorted tetrahedral site break into $e$(1), $a_1$, and $e$(2) orbitals. Similarly symmetry at the trigonally-distorted Fe octahedral site breaks $t_{2g}$ and $e_g$ into $e$(2), $a_1$, and $e$(1) states. These energy levels are shown in Fig. \ref{DOS_orbitals}, and the symmetry breaking is diagrammed in Fig. S1, Supplementary Information \cite{Supp}.
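Written compactly as descent-in-symmetry correlations, these splittings read
\begin{equation*}
T_{d}\rightarrow C_{3v}:\; e\rightarrow e(1),\;\; t_{2}\rightarrow a_{1}\oplus e(2); \qquad O_{h}\rightarrow C_{3v}:\; t_{2g}\rightarrow a_{1}\oplus e(2),\;\; e_{g}\rightarrow e(1).
\end{equation*}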
Getting back to Figure~\ref{DOS}(b-g), we notice that the Mo states (dotted grey line) are very similar across this entire family of materials. The low-energy excitations are, however, not on the Mo site when Fe is present. Figure~\ref{DOS}(b) reveals that the sharp many-body resonance near $-0.4\,$eV emanates primarily from the Fe center in the trigonally-distorted tetrahedral environment, $e$(1). Panel (c) shows that a broader, but still reasonably sharp excitation around $-0.8\,$eV arises from the doubly-degenerate $e$(2) state on the distorted octahedral site. Since both of these fairly sharp excitations come from band formation via hybridization, the Mo partial density of states also has a small peak at the same energy (Figs. S19 and S20). This demonstrates the quasi-particle nature of these peaks, which are Kondo-like and come from local spin screening on the aforementioned $e$(1)
and $e$(2) distorted tetrahedral and octahedral
orbitals, respectively.
What is exciting about this finding is that screening and many body Kondo peak formation \cite{Yee2010,Hewson1993} are normally expected in a metal - not an insulator.
The possibility of a Kondo resonance in a semiconductor like Fe$_2$Mo$_3$O$_8$ is potentially quite interesting, opening the door to deeper exploration of the Kondo effect in a significantly wider variety of materials.
While broader peaks like that at $-0.8\,$eV are not uncommon in transition metal compounds and appear for example in monoxides~\cite{Mandal2019,Mandal2019_1},
the very narrow resonance emanating from the $e$(1) orbitals on the distorted tetrahedral site is unique to Fe$_2$Mo$_3$O$_8$. We assign it as an analog of the Zhang-Rice singlet in cuprates,~\cite{Zhang1988,Eskes1988} arising due to screening of the spin $1/2$ hole on the trigonally-distorted tetrahedral Fe $e$(1) sites in the Mott insulating state. This characteristic peak appears with very well defined energy. It also mixes strongly with the Mo-O trimers (see supplement Fig. S5).
In order to test these predictions, we compare the theoretical optical conductivity [Fig. \ref{FMOBandgap}(h)] to the measured absorption spectrum of Fe$_2$Mo$_3$O$_8$ [Fig. \ref{FMOBandgap}(f)]. Overall, the calculated optical conductivity and the measured absorption spectrum are very consistent - although the
features are not as well-defined as we might prefer. As a reminder, the spectral functions are very flat on the valence edge due to Fe occupation on the distorted tetrahedral site. This causes a sharp many-body resonance
in the density of states [Fig \ref{DOS}(b)] which manifests as a small peak on the leading edge of the theoretical optical conductivity. The contribution from the distorted octahedral site is similar but less pronounced. Our calculations therefore predict that the many-body resonance on the valence band edge [Fig \ref{DOS}(b,c)] should lower the gap.
This is exactly what we find. Obviously this structure is strongly broadened in the experimental result, but the presence of these states causes the gap to be drawn noticeably downward in this system [Fig.~\ref{FMOBandgap}(f)]. Similar reasoning applies to FeZnMo$_3$O$_8$, although only the distorted octahedral site is operative. Because the density of states is momentum-averaged, we also show the spectral functions [Fig. \ref{DOS}(h,i)].
\subsection*{Structure-property relations in the metal-substituted analogs}
Figure \ref{DOS_orbitals} compares the density of states in Fe$_2$Mo$_3$O$_8$ with several other transition metal analogs which have orbital filling between $d^5$ and $d^8$ - namely the Mn, Co and Ni analogs. We also show a schematic view of the orbital occupation for both the distorted tetrahedral and octahedral sites (left and right columns, respectively). The gap in each orbital can originate from the band structure due to the periodic potential (band gap) or from Mott localization of electrons on a given $A$ site, which we denote as a Mott gap. In principle, we can distinguish between the two because the single-particle spectral function is modified from the DFT bands by self-energy effects. We expect the self-energy to remain finite inside the gap in the band case, whereas it should diverge (develop a pole) inside the gap in the Mott case \cite{Pavarini2017,Demchenko2004,Kotliar2006}. In the Mott case, the electron state can no longer be described within the band picture.
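Schematically, for a single correlated orbital the local Green's function takes the form
\begin{equation*}
G(\omega)=\frac{1}{\omega+\mu-\varepsilon-\Delta(\omega)-\Sigma(\omega)},
\end{equation*}
where $\varepsilon$ is the orbital energy and $\Delta(\omega)$ the hybridization function: a band gap corresponds to $\Sigma(\omega)$ remaining regular inside the gap, whereas a Mott gap is signaled by a pole of $\Sigma(\omega)$, equivalently a zero of $G(\omega)$, inside the gap.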
The top panels of Fig.~\ref{DOS_orbitals} show calculations for Mn$_2$Mo$_3$O$_8$ in which the electrons are in the high-spin $d^5$ configuration. The Mott gap opens in all orbitals on both tetrahedral and octahedral sites, and it is much larger than the band gap of the Mo trimer. Consequently the charge gap and low energy excitations in the Mn and Zn compounds are determined by the same Mo-trimer states. This is why they appear so similar.
Fe$_2$Mo$_3$O$_8$ is the most interesting of the series, showing a complex interplay of band gaps, Mott gaps, and quasi-particle multiplets. The $a_1$ and $e$(2)
states on the distorted tetrahedra are Mott insulating with a large gap. As discussed above, the doubly degenerate $e$(1)
orbitals contain one hole, which is equally distributed among the two orbitals, and the Mott mechanism opens the gap - even though the self-energy pole is less strong and the gap smaller than in the $a_1$ and $e$(2)
states. Moreover, at the edge of the gap, strong hybridization, directly computable by the DMFT hybridization function, shows a very strong and narrow peak due to many body screening effects. With this mechanism, the spin-$1/2$ emanating from the $e$(1)
orbitals on the tetrahedral site is screened by the Mo-trimer electrons - a mechanism that is analogous to the Zhang-Rice state in cuprates~\cite{Zhang1988}. Note that this is different from the sharp peak due to the valence band edge in transition metal monoxides \cite{Mandal2019,Mandal2019_1}. The latter appears in the antiferromagnetic state where all gaps are band-like in nature and spin states are split by a Zeeman field. As a consequence, many-body screening of spin is not possible. Here, spin preserves SU(2) symmetry, and the resonance screens the local spin on Fe. In addition, the narrow peak due to many body screening effects disappears in the antiferromagnetic ground state [Fig. S19, Supplementary information] \cite{Supp}.
The distorted octahedral site in the Fe $d^6$ state contains a combination of a Mott gap in the $e$(1)
states and a band gap in the $a_1$ and $e$(2)
states. This unusual combination is a so-called orbitally-selective Mott state~\cite{Anisimov2002,deMedici2005}. Note that the sharp peak at the valence band edge appears as well, even though it is not as sharp as the resonance from the distorted tetrahedral Fe $e$(1)
site. Its nature is different, as it appears in band-insulating $a_1$ and $e$(2)
orbitals similar to the valence band edge discussed in transition metal monoxides~\cite{Mandal2019_1,Mandal2019}. Note that hybridization and band formation with oxygen and Mo electrons are needed to open the band gap in the octahedral Fe $a_1$ and $e$(2)
states. This is because the latter contains only four electrons in three nearly degenerate $a_1$ and $e$(2)
orbitals.
Next we discuss the Co analog, which is in the $d^7$ configuration. In this case, the $e$(1)
orbitals on the tetrahedral site are fully filled, and the $a_1$ and $e$(2)
orbitals are in the high-spin Mott insulating state with a large gap - larger than the Mo-trimer gap. On the octahedral site, the $e$(1)
orbitals are Mott-insulating with a large gap, and the $a_1$ and $e$(2)
states are band-insulating in which Mo and oxygen provide one electron to form a covalent band with the $a_1$
electrons. We note that no sharp low-energy peak is found at the valence band edge, although in principle such a peak is possible.
Figure~\ref{DOS_orbitals}(g,h) displays the partial density of states for the Ni analog with its $d^8$ configuration. In this case, the $e$(1)
and $a_1$ orbitals on the distorted tetrahedral site
are fully filled, and the doubly-degenerate $e$(2)
state shows a Mott gap which is comparable to that of the Mo trimer. On the distorted octahedral site, the $e$(2) and $a_1$
states are fully filled, and the $e$(1)
states are in the half-filled Mott insulating state. This Mott gap is again comparable to the Mo-trimer gap. Hence the reduction of the gap as
compared to the Zn analog is minimal.
\section*{Summary and outlook}
To summarize, we measured the optical properties of Fe$_2$Mo$_3$O$_8$ and compared our findings with
first principles electronic structure calculations.
We find a 1.0 eV direct gap composed of heavily mixed charge-transfer excitations that is sensitive to magnetic ordering at 61 K,
vibronic coupling that activates on-site $d$-to-$d$ excitations on the distorted octahedral Fe site, and spin-orbit related features riding on top of the $d$-to-$d$ excitation on the distorted tetrahedral Fe site below the magnetic ordering temperature.
The Kondo effect is, of course, usually studied in metals. Here, we show that it can also appear in a semiconductor that has both Mott and band gaps. Similar to the metallic Kondo effect, the orbitals with Mott-like gaps develop a many-body excitation near the valence edge. This draws the gap downward in energy (from 1.7 eV in Zn$_2$Mo$_3$O$_8$ $\rightarrow$ 1 eV in Fe$_2$Mo$_3$O$_8$) and screens the magnetic moment. This discovery opens the door to deeper exploration of the Kondo effect in semiconductors.
Fe$_2$Mo$_3$O$_8$ is also a superb platform for unraveling structure-property relationships. What differentiates Fe$_2$Mo$_3$O$_8$ from the Zn, Mn, Co, and Ni members of this series is the band-Mott mixing, the Zhang-Rice resonance,
and how the gap is hybridized.
Taken together, these findings enhance our understanding of charge transfer in quantum cluster magnets and advance the use of this powerful scaffold in new types of
charge storage devices.
\section*{Acknowledgements}
Research at the University of Tennessee and Rutgers University is supported by the NSF-DMREF program (DMR-1629079 and DMR-1629059). G.L.P.'s work was supported by a grant of the Romanian Ministry of Education and Research, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2019-1767, within PNCDI III. Access to the x-ray facilities at the Research Complex, Rutherford Appleton Laboratory is gratefully acknowledged.
\bibliographystyle{apsrev4-1}
\section{Introduction}
The study of equidistribution of zeros of random holomorphic sections has become intensively active recently; it can be applied to quantum chaotic eigenfunctions \cite{bbl, nv} and sheds light on the Quantum unique ergodicity conjecture \cite{rs, hs}.
Shiffman-Zelditch \cite{sz} established an equidistribution theorem for high powers of a positive line bundle. More results on equidistribution for singular metrics of line bundles were obtained. For example, Dinh-Ma-Marinescu \cite{dmm} explored the equidistribution of zeros of random holomorphic sections for singular Hermitian metrics with a convergence speed. Coman-Marinescu-Nguy\^{e}n \cite{cmn} studied the equidistribution of common zeros of sections of several big line bundles. Coman-Marinescu-Nguy\^{e}n \cite{cmn19} studied equidistribution for spaces of $L^2$-holomorphic sections vanishing along subvarieties. Dinh-Sibony \cite{ds1} first extended equidistribution to general measures with a good convergence speed. Shao \cite{sh1,sh2} provided a large family of singular measures satisfying equidistribution theorems. See \cite{cm, bcm, bchm, cmm, hs} for more references.
Recently, Coman-Lu-Ma-Marinescu \cite{clmm} studied equidistribution for a sequence $(L_p, h_p)$ instead of $(L^{\otimes p},h^{\otimes p})$ of a single line bundle $L$. They imposed a natural convergence, that is, the first Chern curvature currents $c_1(L_p,h_p)$ converge to a (non-integral) K\"{a}hler form $\omega$, which can be regarded as a "prequantization" of $\omega$ in the setting of geometric quantization.
In this paper, we establish an equidistribution theorem for several sequences of line bundles. Now we formulate our setting and state our main result. Let $(X, \omega)$ be a compact K\"{a}hler manifold of $\dim_{\mathbb{C}}X=n$ with a fixed K\"{a}hler form $\omega$.
Let $\{(L_{kp}, h_{kp})\}_{p=1}^{\infty}$ be $m$ sequences of Hermitian holomorphic line bundles on $X$ with (possibly singular) Hermitian metrics $h_{kp}$, where $1\leq k\leq m\leq n$. We endow the space $\mathscr{C}^{\infty}(X,L_{kp})$ of smooth sections of $L_{kp}$ with the inner product
\begin{equation*}
\langle s_1,s_2 \rangle:=\int_X \langle s_1,s_2 \rangle_{h_{kp}} \frac{\omega^n}{n!}, \quad s_1, s_2 \in\mathscr{C}^{\infty}(X,L_{kp}),
\end{equation*}
and we set $\|s\|^2=\langle s,s \rangle$. We denote by $L^2(X,L_{kp})$ the completion of $\mathscr{C}^{\infty}(X,L_{kp})$ with respect to this norm. Denote by $H_{(2)}^0(X,L_{kp})$ the Bergman space of $L^2$ holomorphic sections of $L_{kp}$ and let $B_{kp}: L^2(X,L_{kp})\rightarrow H_{(2)}^0(X,L_{kp})$ be the orthogonal projection. The integral kernel $B_{kp}(x,x')$ of $B_{kp}$ is called the Bergman kernel. The restriction of the Bergman kernel to the diagonal of $X$ is the Bergman kernel function of $H_{(2)}^0(X,L_{kp})$, which we still denote by $B_{kp}$, i.e., $B_{kp}(x)=B_{kp}(x,x)$.
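Equivalently, if $\{S_j\}_j$ is an orthonormal basis of $H_{(2)}^0(X,L_{kp})$, then the Bergman kernel function admits the standard expansion
\begin{equation*}
B_{kp}(x)=\sum_{j} |S_j(x)|_{h_{kp}}^2,
\end{equation*}
which is the form in which Assumption 1 below is typically verified.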
The first assumption is the following:
{\bf Assumption 1:}\ \ There exist a constant $M_0>1$ and $p_0>0$ such that
\begin{equation*}
\frac{A_{kp}^n}{M_0}\leq B_{kp}(x)\leq M_0A_{kp}^n,
\end{equation*}
for any $x\in X, p\geq p_0, 1\leq k\leq m$, where the $A_{kp}$ are positive numbers with $\lim\limits_{p\rightarrow \infty} A_{kp}=\infty$ for $1\leq k\leq m$, all of the same order of growth.
Denote by $\mathbb{CP}H_{(2)}^0(X,L_{kp})$ the associated projective space of $H_{(2)}^0(X,L_{kp})$. Set $d_{kp}:=\dim\mathbb{CP}H_{(2)}^0(X,L_{kp})$. By Assumption 1, we have
\begin{equation*}
d_{kp}=\int_{X}B_{kp}(x)\frac{\omega^{n}}{n!}-1\approx A_{kp}^n.
\end{equation*}
Hence there exist $M_1>1$ and $p_0>0$ such that
\begin{equation*}
\frac{A_{kp}^n}{M_1}\leq d_{kp}\leq M_1A_{kp}^n.
\end{equation*}
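Indeed, writing $V:=\int_X \frac{\omega^n}{n!}$ for the volume of $(X,\omega)$, integrating the two-sided bound in Assumption 1 gives
\begin{equation*}
\frac{A_{kp}^n V}{M_0}-1\leq d_{kp}\leq M_0 A_{kp}^n V-1,
\end{equation*}
so that, for all $p$ large enough, any constant $M_1\geq \max(M_0 V,\, 2M_0/V)$ works.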
Consider the multi-projective spaces
\begin{equation}\label{e-7283}
\mathbb{X}_p:=\mathbb{CP}H_{(2)}^0(X,L_{1p})\times\cdots \times \mathbb{CP}H_{(2)}^0(X,L_{mp}),
\end{equation}
equipped with a probability (singular) measure $\sigma_{p}$. The standard measure $\sigma_{p}$ is just the product of the Fubini-Study volume forms on all components of $\mathbb{X}_p$.
In our main theorem, $\sigma_p$ is the product of moderate measures, which is a generalization of the standard one (cf. \eqref{e-7281}).
The product space, endowed with the product measure, is $\mathbb{P}^X:=\prod\limits_{p=1}^{\infty} \mathbb{X}_p$ with $\sigma:=\prod\limits_{p=1}^{\infty} \sigma_{p}$. For $s_p=(s_{1p},\cdots,s_{mp})\in \mathbb{X}_p$, we define $[s_p=0]:=[s_{1p}=0]\wedge\cdots\wedge[s_{mp}=0]$.
With a minor change of the proof of the Bertini-type theorem with respect to a singular measure in \cite[Section 2]{sh2}, we can deduce that $[s_p=0]$ is well-defined for a.e. $\{s_p\}$ with respect to $\sigma_p$. In fact, if $\sigma_p$ has no mass on any proper analytic subset of $\mathbb{X}_p$, the above statement still holds true; see the proof of \cite[Lemma 2.2, Proposition 2.3]{sh2}.
Now, we give the second natural assumption on the convergence of Chern curvature currents $c_1(L_{kp},h_{kp})$ associated to $h_{kp}$, which can be thought as a "prequantization" process.
{\bf Assumption 2:}\ \ There exist positive closed $(1,1)$-currents $\omega_k$ ($\omega_k\geq 0$ in the sense of currents) with positive measure (i.e.$\int \omega_k\wedge \omega^{n-1}>0$) such that the norms of currents satisfy
\begin{equation*}
\left\|\frac{1}{A_{kp}}c_1(L_{kp},h_{kp})-\omega_k\right\|\leq C_0 A_{kp}^{-a_k},
\end{equation*}
where $C_0, a_k>0$ are constants. Moreover, $\omega_{j_1}\wedge \cdots \wedge \omega_{j_l}$ are well-defined with positive measures, for any multi-index $(j_1,\cdots,j_l)\subset\{1,2,\cdots,m\}$.
Note that one trivial example is $L_{kp}=L_k^{\otimes p}, h_{kp}=h_{k}^{\otimes p}, \omega_k=c_1(L_k,h_k)$ and the non-continuous set of $h_k$ are in general position (cf. \cite{cmn}).
Now we are in a position to state our main theorem:
\begin{theorem}\label{thm1.1}
Let $\{(L_{kp},h_{kp})\}_{p=1}^{\infty}$ be $m$ sequences of Hermitian holomorphic line bundles on a compact K\"{a}hler manifold $X$ of $\dim_{\mathbb{C}}X=n$. If Assumption 1 and Assumption 2 hold, then for $\sigma$-a.e. $\{s_p\}\in \mathbb{P}^{X}$, we have
\begin{equation*}
\frac{1}{\prod\limits_{k=1}^m A_{kp}}[s_p=0]\rightarrow \omega_1\wedge\cdots\wedge \omega_m
\end{equation*}
in the sense of currents.
\end{theorem}
\begin{theorem}\label{thm1.2}
With the same notations and assumptions in Theorem \ref{thm1.1}, for any $\alpha>0, $ there exist $C_1>0, C_2>0, p_1>0$ and $E_p^{\alpha}$ such that
\noindent (i)\quad $\sigma_p(E_p^{\alpha})\leq C_1(\sum\limits_{k=1}^mA_{kp})^{-\alpha}$ for any $p>p_1$;
\noindent (ii)\quad for $s_p\in X_p\backslash E_{p}^{\alpha}$ and any $(n-m,n-m)$ form $\phi$ of class $\mathscr{C}^2$,
\begin{equation*}
\begin{array}{rl}
&|\langle\frac{1}{\prod\limits_{k=1}^m A_{kp}}[s_p=0]-\omega_1\wedge\cdots\wedge \omega_m, \phi\rangle|\\
\leq & C_2 \left(\sum\limits_{k=1}^m
\frac{\log A_{kp}}{A_{kp}}+\frac{\log(\sum\limits_{k=1}^mA_{kp})}{\sum\limits_{k=1}^mA_{kp}}+\sum\limits_{k=1}^mA_{kp}^{-a_{k}}\right)\|\phi\|_{\mathscr{C}^2}.
\end{array}
\end{equation*}
\end{theorem}
In \cite{clmm}, $L_{kp}=L_p, \omega_k=\omega_1$ and $h_{kp}=h_p$ are smooth metrics. In this case, Assumption 1 is automatically satisfied under Assumption 2. In fact, they gave the asymptotic expansion of the Bergman kernel $B_{kp}(x)=B_p(x)$. Our theorem is a generalization of \cite[Theorem 0.4]{clmm}.
If $L_{kp}=L_k^{\otimes p}, \{L_k\}_{k=1}^m$ are all big line bundles and the non-continuous set of $h_{kp}=h_k^{\otimes p}$ are in general position and of H\"{o}lder continuous with singularities, our theorems recover \cite[Theorem 1.2, Theorem 1.3]{sh2} partially.
In the classical setting of equidistribution theorems, there is a single sequence $\{L_p\}$ of a positive line bundle $L$. Assumption 1 is satisfied due to the uniform estimate of $B_p(x)$ for $H^{0}(X,L^p)$, then we derive the classical result by Shiffman-Zelditch \cite{sz}.
\begin{corollary}\label{cor1.3}
Let $m=1, \{L_{p}=L^{\otimes p}\}$ be a sequence of high powers of a positive line bundle. Take $\omega=c_1(L,h)$, where $h$ is a smooth Hermitian metric, $\sigma_p$ is the Fubini-Study volume on $\mathbb{CP}H^{0}(X,L_p)$. Then, for $\sigma$-a.e. $\{s_p\}\in\mathbb{P}^{X}$, $\frac{1}{p}[s_p=0]\rightarrow \omega$ in the sense of currents.
\end{corollary}
Assumption 1 is indeed strong. We next provide a dimension growth result for sequences of pseudo-effective line bundles, which can shed a light on this assumption. To simplify, let $(L_{1p}, h_{1p})=(L_{p}, h_{p})$, where all $L_p$ are pseudo-effective line bundles, i.e. $c_1(L_p,h_p)\geq 0$ in the sense of currents. $A_p>0, \lim\limits_{p\rightarrow \infty}A_p=+\infty$ and $\frac{1}{A_p}c_1(L_p,h_p)\rightarrow \omega_1$.
Suppose that $h_p$ is continuous on $X\backslash \Sigma$, where $\Sigma$ is a proper analytic subset, $c_1(L_p,h_p)\geq A_p\eta_p \omega$, where $\eta_p:X\rightarrow [0,+\infty)$. For any $x\in X\backslash \Sigma$, there exists a neighborhood $U_x$ of $x$ and a constant $c_x>0$ such that $\eta_p(x)\geq c_x$ on $U_x$ for any $p$ large.
\begin{theorem}\label{thm1.4}
With the above setting, there exists a constant $C_3>0$, we have $\dim H_{(2)}^0(X,L_p)\geq C_3 A_p^n$.
\end{theorem}
In the rest of the paper, we abusively use $C$ to denote positive constants, where $C$ is not necessary to be the same in different places.
The paper is organized as follows. In Section 2 we recall Dinh-Sibony's technique and the notion of moderate measure
with estimates for capacities in multi-projective spaces.
In Section 3 we prove a variant of convergence result of Fubini-Study currents for multi-sequences of holomorphic line bundles.
Section 4 is devoted to proving the main theorem. We conclude Section 5 with a dimension growth estimate for pseudo-effective line bundles.
\section{Preliminaries}
\subsection{Dinh-Sibony's technique}
Let $(X, \omega)$ (resp. $(Y, \omega_{Y})$) be a compact K\"{a}hler manifold of dimension $n$ (resp. $n_{Y}$).
Recall that a $meromorphic$ $transform$ $F: X\rightarrow Y$ is the graph of an analytic subset $\Gamma\subset X\times Y$
of pure dimension $n_{Y}+k$ such that the natural projections $\pi_{1}: X\times Y\rightarrow X$
and $\pi_{2}: X\times Y\rightarrow Y$ restricted to each irreducible component of the analytic subset $\Gamma$ are surjective.
We write $F=\pi_{2}\circ (\pi_{1}|_{\Gamma})^{-1}$.
The dimension of the fiber $F^{-1}(y):=\pi_{1}(\pi_{2}^{-1}|_{\Gamma}(y))$ is equal to $k$
for the point $y\in Y$ generic. This is the codimension of the meromorphic transform $F$.
If $T$ is a current of bidegree $(l,l)$ on $Y$, $n_{Y}+k-n\leq l\leq n_{Y}$, we define
\begin{equation}\label{e-7283-1}
F^{\star}(T):=(\pi_{1})_{\ast}(\pi_{2}^{\ast}(T)\wedge [\Gamma]),
\end{equation}
where $[\Gamma]$ is the current of integration over $\Gamma$.
We introduce the notations of intermediate degrees of $F$,
\begin{equation*}
\begin{split}
& \delta^1(F):=\int_{X}F^{\ast}(\omega_{Y}^{n_{Y}})\wedge\omega^{k}, \\
& \delta^2(F):=\int_{X}F^{\ast}(\omega_{Y}^{n_{Y}-1})\wedge\omega^{k+1}. \\
\end{split}
\end{equation*}
To introduce more notions and notations, we first recall the following lemma in \cite[Proposition 2.2]{ds1}.
\begin{lemma}\label{lem2.1}
There exists a constant $r>0$ such that for any positive closed current $T$ of bidegree $(1,1)$ with mass $1$ on $(X, \omega)$,
there is a smooth $(1,1)$-form $\alpha$ which depends only on the cohomology class of $T$ and a q.p.s.h. function $\varphi$
satisfying that
\begin{equation*}
-r\omega\leq\alpha\leq r\omega, \quad dd^{c}\varphi-T=\alpha.
\end{equation*}
\end{lemma}
Denote by $r(X, \omega)$ the smallest $r$ in Lemma \ref{lem2.1}.
For example, $r(\mathbb{CP}^{N},\omega_{FS})=1$.
Consider a positive measure $\mu$ on $X$. $\mu$ is said to be a PLB measure if all q.p.s.h. functions are integrable with respect to $\mu$.
It is easy to see that all moderate measures are PLB.
Now given a PLB probability measure $\mu$ on $X$ and $t\in\mathbb{R}$, we define,
\begin{equation}\label{e-7283-2}
\begin{split}
Q(X, \omega):&=\{\varphi ~q.p.s.h. ~on~ X, dd^{c}\varphi\geq -r(X,\omega)\omega\}, \\
R(X, \omega, \mu):&=\sup\{\max_{X}\varphi: \varphi\in Q(X, \omega), \int_{X}\varphi d\mu=0\} \\
&=\sup\{-\int_{X}\varphi d\mu: \varphi\in Q(X, \omega), \max_{X}\varphi=0\}, \\
S(X, \omega, \mu):&=\sup\{\bigl|\int\varphi d\mu \bigr|: \varphi\in Q(X, \omega), \int_{X}\varphi\omega^{n}=0\}, \\
\Delta(X, \omega, \mu, t):&=\sup\{\mu(\varphi<-t): \varphi\in Q(X, \omega), \int_{X}\varphi d\mu=0\}. \\
\end{split}
\end{equation}
These constants are related to Alexander-Dinh-Sibony capacity, see \cite[A.2]{ds1}.
Let $\Phi_{p}$ be a sequence of meromorphic transforms from a projective manifold $(X, \omega)$ into
the compact K\"{a}hler manifolds $(\mathbb{X}_{p}, \omega_{p})$ of the same codimension $k$, where
$\mathbb{X}_{p}$ is defined in \eqref{e-7283}.
Let
\begin{equation*}
d_{p}=d_{1,p}+...+d_{m,p}
\end{equation*}
be the dimension of $\mathbb{X}_{p}$.
Consider a PLB probability measure $\mu_{p}$ on $\mathbb{X}_{p}$,
for every $p>0, \epsilon>0$, we define
\begin{equation}\label{e-7283-3}
E_{p}(\epsilon):=\bigcup_{\|\phi\|_{\mathscr{C}^{2}}\leq 1}\{s_{p}\in\mathbb{X}_{p}:
\bigl|\bigl<\Phi_{p}^{\ast}(\delta_{s_{p}})-\Phi_{p}^{\ast}(\mu_{p}), \phi\bigr>\bigr|\geq \delta^1(\Phi_{p})\epsilon\},
\end{equation}
where $\delta_{s_{p}}$ is the Dirac measure at the point $s_{p}$.
By the definition of the pullback of $\Phi_{p}$ on currents, we see that $\Phi_{p}^{\ast}(\delta_{s_{p}})$
and $\Phi_{p}^{\ast}(\mu_{p})$ are positive closed currents of bidimension $(k,k)$ on $X$.
Moreover, $\Phi_{p}^{\ast}(\delta_{s_{p}})$ is well-defined for $s_{p}\in\mathbb{X}_{p}$ generic.
Recall that $\omega_{Mp}$ and $c_p$ were defined in \cite[(18),(19)]{sh2}.
The following estimate from Dinh-Sibony equidistribution theorem \cite{ds1} is crucial in our paper.
\begin{theorem}\label{thm2.2}
Let $\eta_{\epsilon, p}:=\epsilon\delta^2(\Phi_{p})^{-1}\delta^1(\Phi_{p})-3R(\mathbb{X}_{p}, \omega_{Mp}, \mu_{p})$,
then
\begin{equation*}
\mu_{p}(E_{p}(\epsilon))\leq\Delta(\mathbb{X}_{p}, \omega_{Mp}, \mu_{p}, \eta_{\epsilon, p}).
\end{equation*}
\end{theorem}
We also need the following important estimate, which was deduced from \cite[Lemma 4.2(c),Proposition 4.3]{ds1}.
\begin{theorem}\label{thm2.3}
In the above setting, we have
\begin{equation*}
\bigl|\bigl<\delta^1(\Phi_{p})^{-1}(\Phi_{p}^{\ast}(\mu_{p})
-\Phi_{p}^{\ast}(\omega_{Mp}^{d_{p}})), \phi\bigr>\bigr|\leq
2S(\mathbb{X}_{p}, \omega_{Mp}, \mu_{p})\delta^2(\Phi_{p})\delta^1(\Phi_{p})^{-1}\|\phi\|_{\mathscr{C}^{2}}
\end{equation*}
for any $(k,k)$-form $\phi$ of class $\mathscr{C}^{2}$ on $X$.
\end{theorem}
\begin{theorem}\label{thm2.4}
Suppose that the sequence $\{R(\mathbb{X}_{p}, \omega_{Mp}, \mu_{p})\delta^2(\Phi_{p})\delta^1(\Phi_{p})^{-1}\}$ tends to $0$ and
\begin{equation*}
\Sigma_{p\geq 1}\Delta(\mathbb{X}_{p}, \omega_{Mp}, \mu_{p}, \delta^2(\Phi_{p})^{-1}\delta^1(\Phi_{p})t)<\infty
\end{equation*}
for all $t>0$.
Then for almost everywhere $s=(s_{p})\in\mathbb{P}^{X}$ with respect to $\mu=\prod\limits_{p=1}^\infty\mu_p$, the sequence $\langle \delta^1(\Phi_{p})^{-1}(\Phi_{p}^{\ast}(\delta_{s_{p}})-\Phi_{p}^{\ast}(\mu_{p})), \phi \rangle$
converges to $0$ uniformly on the bounded set of $(k-1,k-1)$-forms on $X$ of class $\mathscr{C}^{2}$.
\end{theorem}
Consider the $Kodaira$ $map$
\begin{equation*}
\Phi_{k,p}: X\rightarrow\mathbb{CP}(H_{(2)}^{0}(X, L_{kp})^{\ast}).
\end{equation*}
Here $H_{(2)}^{0}(X, L_{kp})^{\ast}$ is the dual space of $H_{(2)}^{0}(X, L_{kp}))$.
Choose $\{S_{k,p}^{j}\}_{j=0}^{d_{k,p}}$ as an orthonormal basis of $H_{(2)}^{0}(X, L_{kp}))$.
By an identification via the basis, it boils down to a meromorphic map
\begin{equation*}
\Phi_{k,p}: X\rightarrow \mathbb{CP}^{d_{kp}}.
\end{equation*}
Now we give a local analytic description of the above map.
Let $U\subset X$ be a contractible Stein open subset, $e_{kp}$ be a local holomorphic frame of $L_{kp}$ on $U$.
Then there exists a holomorphic function $s_{j}^{k,p}$ on $U$ such that $S_{k,p}^{j}=s_{j}^{k,p}e_{kp}$.
Then the map is expressed locally as
\begin{equation}\label{e-7283-4}
\Phi_{k,p}(x)=[s_{0}^{k,p}(x):...:s_{d_{k,p}}^{k,p}(x)], \quad \forall x\in U
\end{equation}
It is called the Kodaira map defined by the basis $\{S_{k,p}^{j}\}_{j=0}^{d_{k,p}}$.
Denote by $B_{kp}$ the Bergman kernel function defined by
\begin{equation}
B_{kp}(x)=\sum_{j=0}^{d_{k,p}}|S_{k,p}^{j}(x)|^{2}_{h_{k,p}}, \quad |S_{k,p}^{j}(x)|^{2}_{h_{k,p}}=h_{k,p}(S_{k,p}^{j}(x),S_{k,p}^{j}(x)).
\end{equation}
It is easy to see that this definition is independent of the choice of basis.
Recall that $\omega_{FS}$ is the normalized Fubini-Study form on $\mathbb{CP}^{d_{k,p}}$. We define the Fubini-Study
currents $\gamma_{k,p}$ of $H_{(2)}^{0}(X, L_{kp})$ as pullbacks of $\omega_{FS}$ by Kodaira map,
\begin{equation}\label{e-7283-5}
\gamma_{k,p}=\Phi_{k,p}^{\ast}(\omega_{FS}).
\end{equation}
We have in the local Stein open subset $U$,
\begin{equation*}
\gamma_{k,p}\bigl|_{U}=\frac{1}{2}dd^{c}\log\sum_{j=0}^{d_{kp}}|s_{j}^{k,p}|^{2}.
\end{equation*}
This yields
\begin{equation*}
\frac{1}{p}\gamma_{k,p}=c_{1}(L_{k},h_{k})+\frac{1}{2p}dd^{c}\log B_{kp}.
\end{equation*}
Since $\log B_{kp}$ is a global function which belongs to $L^{1}(X, \omega^{n})$,
$\frac{1}{p}\gamma_{k,p}$ has the same cohomology class as $c_{1}(L_{kp},h_{kp})$. We focus on the special meromorphic transforms $\Phi_{p}:X\rightarrow \mathbb{X}_p$ induced by the product map of Kodaira maps $ \Phi_{kp}: X\rightarrow \mathbb{CP}H_{(2)}^0(X,L_{kp})$. $\Phi_p$ is indeed a meromorphic transform with a graph
\begin{equation*}
\Gamma_{kp}=\{(x, s_{1p},\cdots,s_{mp})\in X\times \mathbb{X}_p: s_{1p}(x)=\cdots=s_{mp}(x)=0\},
\end{equation*}
see \cite[Section 3]{sh2}.
We also need the following
\begin{lemma}\label{lem2.5}
$\Phi_{p}^{\ast}(\delta_{s_{p}})=[s_{p}=0]$.
\end{lemma}
\begin{proposition}\label{pro2.6} \emph{\cite[Lemma 4.5]{cmn}}
$\Phi_{p}^{\ast}(\omega_{Mp}^{d_{p}})=\gamma_{1,p}\wedge...\wedge\gamma_{m,p}$ for all $p$ sufficiently large.
\end{proposition}
\subsection{Mderate measures on multi-projective spaces}
We say that a function $\phi$ on $X$ is quasi-plurisubharmonic (q.p.s.h) if it is $c\omega$-p.s.h. for some constant $c>0$.
Consider a measure $\mu$ on $X$, $\mu$ is said to be PLB if all the q.p.s.h. functions are $\mu$-integrable.
Let
\begin{equation}\label{e-7283-6}
\mathcal{F}=\{\phi ~q.p.s.h. ~on~ X: dd^{c}\phi\geq -\omega, \max_{X}\phi =0\}.
\end{equation}
$\mathcal{F}$ is compact in $L^{p}(X)$ and bounded in $L^{1}(\mu)$ when $\mu$ is a PLB measure, see \cite{ds1}.
\begin{definition}\label{def2.7}
Let $\mu$ be a {\rm PLB} measure on $X$. We say that $\mu$ is $(c,\alpha)$-moderate for some constants $c, \alpha >0$ if
\begin{equation*}
\int_{X}\exp(-\alpha \phi)d\mu\leq c
\end{equation*}
for all $\phi\in\mathcal{F}$. The measure $\mu$ is called moderate if there exist constants $c, \alpha >0$ such that it is $(c,\alpha)$-moderate.
\end{definition}
For example, $\omega^n$ is moderate in $X$. In particular, the Fubini-Study volume form is moderate in a projective space. We introduce product of moderate measures used in the main theorem. We define singular moderate measures $\sigma_{p}$ as perturbations of standard measures on $\mathbb{X}_{p}$.
For each $p\geq 1, 1\leq k\leq m, 1\leq j\leq d_{k,p}$, let $u_{j}^{kp}:\mathbb{CP}H_{(2)}^{0}(X, L_{kp})\rightarrow\mathbb{R}$
be an upper-semi continuous function. Fix $0<\rho<1$ and a sequence of positive constants $\{c_{p}\}_{p\geq 1}$.
We call $\{u_{j}^{k,p}\}$ {\it a family of $(c_{p},\rho)$-functions} if all $u_{j}^{k,p}$
satisfy the following two conditions:
\begin{flushleft}
$\bullet$ $u_{j}^{k,p}$ is of class $\mathscr{C}^{\rho}$ with modulus $c_{p}$, \\
$\bullet$ $u_{j}^{k,p}$ is a $c_{p}\omega_{FS}$-p.s.h.
\end{flushleft}
Then for each $p\geq 1$, there is a probability measure
\begin{equation}\label{e-7281}
\sigma_{p}=\prod_{k=1}^{m}\bigwedge_{j=1}^{d_{kp}}\pi_{k,p}^{\ast}(dd^{c}u_{j}^{k,p}+\omega_{FS})
\end{equation}
on $\mathbb{X}_{p}$. By \cite[Theorem 1.1, Remark 2.12]{sh1},
$\bigwedge_{j=1}^{d_{k,p}}(dd^{c}u_{j}^{k,p}+\omega_{FS})$ is a moderate measure on $\mathbb{CP}H_{(2)}^{0}(X, L_{kp})$
when $c_{p}\leq 1/c^{(\sum_{k=1}^m A_{kp})^{n}}$ for a suitable constant $c>1$, $\forall 1\leq k\leq m, p\geq 1$.
We call
\begin{equation}\label{e-7282}
\sigma=\prod_{p=1}^{\infty}\sigma_{p}=\prod_{p=1}^{\infty}\prod_{k=1}^{m}\bigwedge_{j=1}^{d_{kp}}\pi_{k,p}^{\ast}(dd^{c}u_{j}^{k,p}+\omega_{FS})
\end{equation}
a probability measure on $\mathbb{P}^{X}$ generated by a family of $(c_{p},\rho)$-functions $\{u_{j}^{k,p}\}$ on $\{\mathbb{CP}H_{(2)}^{0}(X, L_{kp})\}$.
\subsection{Capacity estimate on multi-projective space}
Now we study the estimates on multi-projective spaces.
Let $\mathbb{CP}^{\ell_{1}},...,\mathbb{CP}^{\ell_{m}}$ be $m$ projective spaces.
Let $\pi_{k}: \mathbb{CP}^{\ell_{1}}\times...\times\mathbb{CP}^{\ell_{m}}\rightarrow \mathbb{CP}^{\ell_{k}}$
be the natural projection map.
Let $\sigma_{k}$ be a probability moderate measure with respect to a family of $(c_{\ell_{k}},\rho)$-functions
$\{u_{k,j}\}_{j=1}^{\ell_{k}}$ on $\mathbb{CP}^{\ell_{k}}$ defined in \eqref{e-7281}.
Let $\ell=\ell_{1}+...+\ell_{m}$.
Recall that the notation $r(\mathbb{CP}^{\ell_{1}}\times...\times \mathbb{P}^{\ell_{m}}, \omega_{Mp})$
is defined after Lemma \ref{lem2.1}.
We have the following lemma \cite[Lemma 4.6]{cmn}.
\begin{lemma}\label{lem2.8}
Under the above hypotheses,
\begin{equation*}
r(\mathbb{CP}^{\ell_{1}}\times...\times \mathbb{CP}^{\ell_{m}}, \omega_{Mp})\leq r(\ell_{1},...\ell_{m}):=\max_{1\leq k\leq m}\frac{\ell}{\ell_{k}}.
\end{equation*}
\end{lemma}
The following proposition is taken from \cite[Proposition 3.14]{sh2}.
\begin{proposition}\label{pro2.9}
In the above setting, let $\mathbb{CP}^{\ell_{k}}$ be a projective space endowed
with a probability moderate measure $\sigma_{k}$ defined in \eqref{e-7281}, $\forall 1\leq k\leq m$.
Set $\sigma:=\sigma_{1}\times...\times\sigma_{m}$.
Suppose that $\ell_{1},..,\ell_{m}$ are chosen sufficiently large such that
\begin{equation}\label{e-7282-1}
\begin{split}
\frac{r(\ell_{1},...,\ell_{m})\log\ell}{\min(\ell_{1},...,\ell_{m})}&\ll 1, \\
(\frac{\rho}{4})^{\min(\ell_{1},...,\ell_{m})}\ell&\ll 1. \\
\end{split}
\end{equation}
Then there exist positive constants $\beta_{1}, \beta_2, \xi$ depending only on $m$ such that for $0\leq t\leq \min(\ell_{1},...,\ell_{m})$,
we have
\begin{equation*}
\begin{split}
R(\mathbb{CP}^{\ell_{1}}\times...\times \mathbb{CP}^{\ell_{m}}, \omega_{Mp}, \sigma)&\leq \beta_{1}r(\ell_{1},...\ell_{m})(1+\log\ell), \\
S(\mathbb{CP}^{\ell_{1}}\times...\times \mathbb{P}^{\ell_{m}}, \omega_{Mp}, \sigma)&\leq \beta_{1}r(\ell_{1},...\ell_{m})(1+\log\ell), \\
\Delta(\mathbb{CP}^{\ell_{1}}\times...\times \mathbb{CP}^{\ell_{m}}, \omega_{Mp}, \sigma, t)&\leq \beta_{1}\ell^{\xi}\exp(-\beta_2 t/r(\ell_{1},...\ell_{m})). \\
\end{split}
\end{equation*}
\end{proposition}
\section{Convergence of Fubini-study currents}
Recall that the Fubini-Study current of $H_{(2)}^0(X,L_{kp})$ is
\begin{equation*}
\gamma_{kp}=\frac{1}{2} dd^c\log \sum_{j=0}^{d_{kp}}\|f_{kp,j}\|^2
=c_1(L_{kp},h_{kp})+\frac{1}{2}dd^c\log B_{kp}.
\end{equation*}
In this section, we will prove the following.
\begin{proposition}\label{pro3.1}
With the same notations and assumptions in Theorem 1, there exists $C>0$ such that
\begin{equation*}
\begin{split}
&|<\frac{1}{\prod_{k=1}^m A_{kp}}\gamma_{1p}\wedge\cdots\wedge \gamma_{mp}-\omega_{1}\wedge\cdots\wedge \omega_{n}, \phi>|\\
&\leq C \sum_{k=1}^m\bigl(\frac{\log A_{kp}}{A_{kp}}+A_{kp}^{-a_k}\bigr)\|\phi\|_{\mathcal{C}^2}.
\end{split}
\end{equation*}
for any $(n-m,n-m)$-form $\phi$ of class $\mathcal{C}^2$.
\end{proposition}
\begin{proof}
Let $W_p:= \frac{1}{\prod_{k=1}^m A_{kp}} \gamma_{1p}\wedge\cdots\wedge \gamma_{mp}-\omega_1\wedge\cdots\wedge\omega_m$, $\alpha_{kp}=\frac{c_1(L_{kp},h_{kp})}{A_{kp}}-\omega_k$.
By Assumption 2, $\|\alpha_{kp}\|\leq \frac{C_0}{A_{kp}^{a_k}}$ in the norm of currents
\begin{equation*}
\frac{\gamma_{kp}}{A_{kp}}-\omega_k=\alpha_{kp}+\frac{1}{2A_{kp}}dd^c \log B_{kp}.
\end{equation*}
We have
\begin{equation*}
\begin{split}
&|<W_p,\phi>|\\
&=|\sum\limits_{k=1}^m <\omega_1\wedge\cdots\wedge \omega_{k-1}\wedge (\frac{\gamma_{kp}}{A_{kp}}-\omega_{k})\wedge\frac{\gamma_{k+1,p}}{A_{k+1,p}}\wedge\cdots\wedge\frac{\gamma_{m,p}}{A_{mp}},\phi>|\\
&\leq \sum\limits_{k=1}^m |<\omega_1\wedge\cdots\wedge \omega_{k-1}\wedge (\alpha_{kp}+\frac{1}{2A_{kp}}dd^c \log B_{kp})\wedge \frac{\gamma_{k+1,p}}{A_{k+1,p}}\wedge\cdots\wedge\frac{\gamma_{m,p}}{A_{mp}},\phi>|.
\end{split}
\end{equation*}
Note that there exists a constant $c>0$ such that
\begin{equation*}
-\frac{cC_0}{A_{kp}}\omega^{n-m+1}\leq \alpha_{kp}\wedge \phi \leq \frac{cC_0}{A_{kp}^{a_k}}\omega^{n-m+1}
\end{equation*}
in the sense of currents. Then,
\begin{equation*}
\begin{split}
&|<W_p,\phi>|\\
&= |\sum\limits_{k=1}^m \frac{cC_0}{A_{kp}^{a_k}}\|\phi\|_{\mathcal{C}^0}\int_{X}|\omega_1\wedge\cdots\wedge\omega_{k-1}\wedge\omega^{n-m+1}\wedge\frac{\gamma_{k+1,p}}{A_{k+1,p}}\wedge\cdots\wedge\frac{\gamma_{m,p}}{A_{m,p}}|\\ &+\sum\limits_{k=1}^m \int_{X}|\frac{\log B_{kp}}{2A_{kp}}dd^c\phi\wedge (\omega_1\wedge\cdots\wedge\omega_{k-1}\wedge\frac{\gamma_{k+1,p}}{A_{k+1,p}}\wedge\cdots\wedge\frac{\gamma_{m,p}}{A_{m,p}})|\\
&=I+II.
\end{split}
\end{equation*}
We choose $A_p\geq M_0$ for large $p$,
\begin{equation*}
A_{kp}^{n-1}\leq B_{kp}(x)\leq A_{kp}^{n+1},
\end{equation*}
so $|\log B_{kp}|\leq (n+1)\log A_{kp}$.
Hence
\begin{equation*}
\begin{split}
II= |\sum\limits_{k=1}^m \frac{nc\log A_{kp}}{A_{kp}}\|\phi\|_{\mathcal{C}^2}\int_{X}|\omega_1\wedge\cdots\wedge\omega_{k-1}\wedge\omega^{n-m+1}\wedge\frac{\gamma_{k+1,p}}{A_{k+1,p}}\wedge\cdots\wedge\frac{\gamma_{m,p}}{A_{m,p}}|.
\end{split}
\end{equation*}
Then we have
\begin{equation*}
\begin{split}
&|<W_p,\phi>|\\
= &|\sum\limits_{k=1}^m (\frac{nc\log A_{kp}}{A_{kp}}+\frac{cC_0}{A_{kp}^{a_k}})\|\phi\|_{\mathcal{C}^2}| \int_{X}\omega_1\wedge\cdots\wedge\omega_{k-1}
\wedge\omega^{n-m+1}\wedge\frac{\gamma_{k+1,p}}{A_{k+1,p}}\wedge\cdots\wedge\frac{\gamma_{m,p}}{A_{m,p}}|\\
=&|\sum\limits_{k=1}^m (\frac{nc\log A_{kp}}{A_{kp}}+\frac{cC_0}{A_{kp}^{a_k}})\|\phi\|_{\mathcal{C}^2}\\
&\int_{X}|\omega_1\wedge\cdots\wedge\omega_{k-1}
\wedge\omega^{n-m+1}\wedge\frac{c_1(L_{k+1,p},h_{k+1,p})}{A_{k+1,p}}\wedge\cdots\wedge\frac{c_1(L_{m,p},h_{m,p})}{A_{m,p}}|\\
\leq & \sum\limits_{k=1}^m 2^{m-k} (\frac{nc\log A_{kp}}{A_{kp}}+\frac{cC_0}{A_{kp}^{a_k}})\|\phi\|_{\mathcal{C}^2}\int_{X}|\omega_1\wedge\cdots\wedge\omega_{k-1}
\wedge\omega^{n-m+1}\wedge\omega_{k+1}\wedge\cdots\wedge\omega_{m}|.
\end{split}
\end{equation*}
The proof is completed.
\end{proof}
\section{Proof of main theorems}
Our proof is based on Dinh-Sibony's technique on equidistribution theorems. The key estimate is the following.
\begin{equation}\label{e-7291}
\begin{split}
&|\langle\frac{1}{\prod_{k=1}^m A_{kp}}[s_p=0]-\omega_1\wedge\cdots\wedge\omega_m,\phi\rangle|\\
&\leq |\langle\frac{1}{\prod_{k=1}^m A_{kp}}(\Phi_p^{\ast}(\delta_{s_p})-\Phi_p^{\ast}(\sigma_p),\phi\rangle|\\
&+|\langle\frac{1}{\prod_{k=1}^m A_{kp}}(\Phi_p^{\ast}(\sigma_p)-\Phi_p^{\ast}(\omega_{Mp}^{d_p}),\phi\rangle|\\
&+ |\langle\frac{1}{\prod_{k=1}^m A_{kp}}(\Phi_p^{\ast}(\omega_{Mp}^{d_p})-\omega_1\wedge\cdots\wedge\omega_m,\phi\rangle|\\
&= I+ II+III.
\end{split}
\end{equation}
The estimate of $III$ is already done in Section 3.
Now we deal with $II$. First, we compute $\delta_p^1$ and $\delta_p^2$,
where
\begin{equation}\label{e-7292}
\begin{split}
\delta_p^1&=\int_X (\Phi_p^{\ast}(\omega_{Mp}^{d_p})\wedge\omega^{n-m}, \\
\delta_p^2&=\int_X (\Phi_p^{\ast}(\omega_{Mp}^{d_p-1})\wedge\omega^{n-m+1}.
\end{split}
\end{equation}
\begin{lemma}\label{lem4.1}
There exists a constant $C>1$, such that
\begin{equation*}
\frac{\prod\limits_{k=1}^m A_{kp}}{C}\leq \delta_p^1\leq C \prod_{k=1}^m A_{kp},
\end{equation*}
\begin{equation*}
\frac{1}{C}\frac{\prod\limits_{k=1}^m A_{kp}}{\sum\limits_{k=1}^m A_{kp}}\leq \delta_p^2\leq C\frac{\prod\limits_{k=1}^m A_{kp}}{\sum\limits_{k=1}^m A_{kp}}.
\end{equation*}
\end{lemma}
\begin{proof}
By Proposition \ref{pro2.6}, we have
\begin{equation*}
\begin{split}
\delta_p^1&=\int_X c_1(L_{1p},h_{1p})\wedge\cdots\wedge c_1(L_{mp},h_{mp})\wedge\omega^{n-m}\\
&\approx \prod\limits_{k=1}^m A_{kp}\int_X\omega_1\wedge\cdots\wedge_m\wedge\omega^{n-m}.
\end{split}
\end{equation*}
By Assumption 1, $\int_X\omega_1\wedge\cdots\wedge_m\wedge\omega^{n-m}>0$, then we have $\delta_p^1\approx \prod\limits_{k=1}^m A_{kp}$.
To compute $\delta_p^2$, we recall that
\begin{equation*}
\dim H^{\ell,\ell}(\mathbb{CP}^{N})=1,
\end{equation*}
\begin{equation*}
\dim H^{\ell_1,\ell_2}(\mathbb{CP}^{N})=0, \ell_1\neq \ell_2
\end{equation*}
for cohomology groups associated to sheaf of currents.
Then $\omega_{FS}^{d_{kp}}$ and $\delta_{s_{kp}}, \omega_{FS}^{d_{kp}-1}$ and $[\mathcal{D}_{kp}]$ have the same cohomology classes, where $\delta_{s_{kp}}$ and $\mathcal{D}_{kp}$ are generic point and complex line respectively in $\mathbb{CP}^{d_{kp}}$. By the definition of meromorphic transform, we have
\begin{equation*}
\begin{split}
\langle \Phi_{kp}^{\ast}([\mathcal{D}_{kp}]), \phi\rangle
= & \langle (\pi_1)_{\ast} (\pi_2)^{\ast}[\mathcal{D}_{kp}]\wedge[\Gamma_{kp}], \phi\rangle\\
=&(\pi_2)^{\ast}[\mathcal{D}_{kp}]\wedge[\Gamma_{kp}], (\pi_1)^{\ast} \phi\rangle\\
=& \langle [\pi_2^{-1}([\mathcal{D}_{kp})\bigcap \Gamma_{kp}], \pi_1^{\ast}\phi\rangle\\
= &\int_{\pi_2^{-1}(\mathcal{D}_{kp})\bigcap \Gamma_{kp}} \pi_1^{\ast}\phi.
\end{split}
\end{equation*}
Note that $\pi_2^{-1}(\mathcal{D}_{kp})\bigcap \Gamma_{kp}=\{(x, s_{kp})\in X\times \mathcal{D}_{kp}, s_{kp}(x)=0\}$.
Since $\mathcal{D}_{kp}$ is generic, $\forall x\in X$, there exists a unique $s_{kp}\in \mathcal{D}_{kp}$ such that $s_{kp}(x)=0$, where $\Gamma_{kp}=\{(x,s_{kp})\in X\times\mathbb{CP}^{d_{kp}}: s_{kp}(x)=0\}$.
So $\pi_1: \pi_2^{-1}(\mathcal{D}_{kp})\bigcap \Gamma_{kp}\rightarrow X$ is bijective. Hence, $\langle\Phi_{kp}^{\ast}([\mathcal{D}_{kp}]), \phi\rangle=\int_X \phi$, i.e. $\Phi_{kp}^{\ast}([\mathcal{D}_{kp}])=[X]=1$.
Note that
\begin{equation*}
\begin{split}
\Phi_{kp}^{*}(\omega_{Mp}^{d_p-1})
= & \sum\limits_{k=1}^m\frac{d_{kp}}{c_pd_p}\Phi_{p}^{\ast}(\{s_{1p}\}\times\cdots\times\{s_{k-1,p}\}\times\{\mathcal{D}_{kp}\}\times\cdots\times\{s_{mp}\})\\
=&\sum\limits_{k=1}^m\frac{d_{kp}}{c_pd_p}[s_{1p}=0]\wedge\cdots\wedge\Phi_{kp}^{\ast}([\mathcal{D}_{kp}])\wedge\cdots\wedge[s_{mp}=0],
\end{split}
\end{equation*}
where the bounded sequence $\{c_p\}$ is mentioned before Theorem \ref{thm2.2}.
Then,
\begin{equation*}
\begin{split}
\delta_p^2&=\sum\limits_{k=1}^m\frac{d_{kp}}{c_pd_p}\int_{X} c_1(L_{1p},h_{1p})\wedge\cdots\wedge\widehat{c_1(L_{kp},h_{kp})}\wedge\cdots\wedge c_1(L_{mp},h_{mp})\wedge\omega^{n-m+1}\\
&\approx \sum\limits_{k=1}^m\frac{d_{kp}}{c_pd_p}\frac{\prod\limits_{j=1}^mA_{jp}}{A_{kp}}\int_X \omega_1\wedge\cdots\wedge\widehat{\omega_k}\wedge\omega_m\wedge^{n-m+1}\\
&\approx \frac{\prod\limits_{k=1}^mA_{kp}}{\sum\limits_{k=1}^m A_{kp}}.
\end{split}
\end{equation*}
The last approximation follows from the fact that $\{A_{kp}\}_{p=1}^{\infty}$ have the same infinity order.
The proof is completed.
\end{proof}
Now we are in a position to estimate the term $II$.
\begin{proposition}\label{pro4.2}
We have
\begin{equation*}
|\langle\frac{1}{\prod_{k=1}^mA_{kp}}(\Phi_p^{\ast}(\sigma_p)-\Phi_p^{\ast}(\omega_{Mp}^{d_p})), \phi\rangle|\leq \frac{C\log(\sum\limits_{k=1}^mA_{kp})}{\sum\limits_{k=1}^mA_{kp}}\|\phi\|_{\mathcal{C}^2}.
\end{equation*}
\end{proposition}
\begin{proof}
Recall that $d_{kp}\approx A_{kp}^n$ and $\{d_{1p}\},\cdots,\{d_{mp}\}$ satisfy the conditions in Proposition \ref{pro2.9} due to Assumption 1. By Lemma \ref{lem4.1} and Proposition \ref{pro2.9} we deduce that $S(\mathbb{X}_p, \omega_{Mp},\sigma_p)\leq C\log(\sum\limits_{k=1}^mA_{kp})$.
By Lemma \ref{lem4.1},
\begin{equation*}
\delta_p^2/\delta_p^1\approx \frac{1}{\sum\limits_{k=1}^m A_{kp}}.
\end{equation*}
Then it follows from Theorem \ref{thm2.3} that
\begin{equation*}
\begin{split}
&|\frac{1}{\prod\limits_{k=1}^m A_{kp}}(\Phi_{p}^{\ast}(\sigma_p)-\Phi_{p}^{\ast}(\omega_{Mp}^{d_p}),\phi\rangle|\\
\leq &CS(\mathbb{X}_p,\omega_{Mp},\sigma_p)(\delta_p^2/\delta_p^1)\|\phi\|_{\mathcal{C}^2}\\
\leq &C\frac{\log(\sum\limits_{k=1}^mA_{kp})}{\sum\limits_{k=1}^mA_{kp}}\|\phi\|_{\mathcal{C}^2}.
\end{split}
\end{equation*}
The proof is completed.
\end{proof}
Next we study the estimate of the term $I$.
\begin{proposition}\label{pro4.3}
For $\sigma$-a.e.$\{s_p\}\in \mathbb{P}^{X}$, we have
\begin{equation*}
\frac{1}{\prod\limits_{k=1}^m A_{kp}}(\Phi_{p}^{\ast}(\delta_{s_p})-\Phi_{p}^{\ast}(\sigma_p))
\end{equation*}
tens to $0$.
\end{proposition}
\begin{proof}
By Lemma \ref{lem2.8} and Proposition \ref{pro2.9}, we have
\begin{equation*}
R(\mathbb{X}_p,\omega_{Mp},\sigma_p)\leq C(1+\log(\sum\limits_{k=1}^m A_{kp})),
\end{equation*}
\begin{equation*}
\Delta(\mathbb{X}_p,\omega_{Mp},\sigma_p,t)\leq C(\sum\limits_{k=1}^m A_{kp})^{\xi}\exp(-\tilde\beta_2t),
\end{equation*}
where $\tilde\beta_2$ is a positive constant.
Then
\begin{equation*}
R(\mathbb{X}_p,\omega_{Mp},\sigma_p)\delta_p^2/\delta_p^1 \rightarrow 0,
\end{equation*}
\begin{equation*}
\sum\limits_{p=1}^{\infty}\Delta(\mathbb{X}_p,\omega_{Mp},\sigma_p,(\delta_p^2/\delta_p^1)t)\leq C\sum\limits_{p=1}^{\infty}(\sum\limits_{k=1}^m A_{kp})^{\xi}\exp(-\tilde\beta_2(\sum\limits_{k=1}^m A_{kp})t)<\infty.
\end{equation*}
Then the proof is completed by applying Theorem \ref{thm2.4}.
\end{proof}
\begin{proof}[End of the proof of Theorem \ref{thm1.1}]: The theorem follwos from Proposition \ref{pro3.1}, Proposition \ref{pro4.2} and Proposition \ref{pro4.3}.
\end{proof}
Now we prove Theorem \ref{thm1.2} by applying Theorem \ref{thm2.2}, which gives also an alternative proof of Theorem \ref{thm1.1}.
\begin{proof}
We take $C_4>0$ to be determined later and set
\begin{equation*}
\varepsilon_p:=\frac{C_4\log(\sum\limits_{k=1}^mA_{kp})}{\sum\limits_{k=1}^mA_{kp}},
\end{equation*}
and
\begin{equation*}
\begin{split}
\eta_{\varepsilon p} =&
\varepsilon_p \delta_p^1/\delta_p^2-3R_p\\
\geq
&C_5\varepsilon_p\log(\sum\limits_{k=1}^mA_{kp})-C\log(\sum\limits_{k=1}^mA_{kp})\\
\geq &C_6\log(\sum\limits_{k=1}^mA_{kp}),
\end{split}
\end{equation*}
where $C_6>0$ determined by $C_4$.
Note that $\log(\sum\limits_{k=1}^mA_{kp})\leq
\min\{\sum\limits_{k=1}^mA_{kp}\}$ by Assumption 1 for large $p$.
We can apply Theorem \ref{thm2.2} and derive that
\begin{equation*}
\begin{split}
\sigma_p(E_p({\varepsilon_p})) \leq
&\Delta(X_p,\omega_{Mp},\sigma_p,\eta_{\varepsilon p})\\
\leq &C_1(\sum\limits_{k=1}^mA_{kp})^{\xi}(\sum\limits_{k=1}^mA_{kp})^{-\tilde\beta_2 C_6}\\
= &C_1(\sum\limits_{k=1}^mA_{kp})^{-\alpha},
\end{split}
\end{equation*}
where $C_4$ is chosen such that $\alpha=\tilde\beta_2 C_6-\xi>0$. Since $\tilde\beta_2$ and $\xi$ are fixed constants, $\alpha>0$ can be arbitrarily chosen.
Set $E_p^\alpha :=E_p(\varepsilon_p)$.
Hence for any $s_p\in X_p\setminus E_p^\alpha$, we have
\begin{equation}\label{e-7293}
\begin{split}
&|\frac{1}{\prod\limits_{k=1}^mA_{kp}}\langle[s_p=0]-\Phi_p^{\ast}(\sigma_p),\phi\rangle|\\
&\leq
\frac{C_7\log(\sum\limits_{k=1}^mA_{kp})}{\sum\limits_{k=1}^mA_{kp}}\|\phi\|_{\mathscr{C}^{2}}.
\end{split}
\end{equation}
Combining Proposition \ref{pro3.1}, Proposition \ref{pro4.2} and \eqref{e-7293}, we obtain
\begin{equation*}
\begin{split}
&\left|\langle\frac{1}{\prod\limits_{k=1}^mA_{kp}}[s_p=0]-\omega_1\wedge \omega_2\cdots\wedge w_m,\phi\rangle\right|\\
&\leq C_2\left(\sum\limits_{k=1}^m
\frac{\log A_{kp}}{A_{kp}}+\frac{\log(\sum\limits_{k=1}^mA_{kp})}{\sum\limits_{k=1}^mA_{kp}}+\sum\limits_{k=1}^mA_{kp}^{-a_{k}}\right)\|\phi\|_{\mathscr{C}^{2}}.
\end{split}
\end{equation*}
Then the proof of Theorem \ref{thm1.2} is completed.
When
$\sum\limits_{p=1}^{\infty}(\sum\limits_{k=1}^mA_{kp})^{-\alpha}<\infty$,
we can prove Theorem 1 by standard arguments using Borel-Cantelli lemma (cf. \cite[Proposition 4.5]{sh2}).
\end{proof}
\section{Dimension growth of a sequence of pseudo-effective line bundles}.
In this section, we provide a dimension growth result which sheds a
light on Assumption 1.
It is enough to consider one sequence of holomorphic line bundles
$(L_p,h_p)$.
We are devoted to proving Theorem \ref{thm1.4}.
We first recall the $L^2$-estimate for line bundles with singular metrics (cf.\cite[Theorem 3.2]{cmn18}).
\begin{theorem}\label{thm5.1}
Let $(X,\omega)$ be a K\"{a}hler manifold of dimension $n$ which admits a complete K\"{a}hler metric. Let $(L,h)$ be a singular Hermitian holomorphic line bundle and let $\lambda: X\rightarrow [0,\infty)$ be a continuous function such that $c_1(L,h)\geq \lambda\omega$. Then for any form $g\in L_{0,1}^2(X,L,loc)$ satisfying
\begin{equation*}
\overline{\partial} g=0, \int_{X}\lambda^{-1}|g|^2_{h}\omega^n <\infty,
\end{equation*}
there exists $u\in L^2(M,L,loc)$ with $\overline\partial u=g$ and
\begin{equation*}
\int_{X}|u|^2_{h}\omega^n \leq\int_{X}\lambda^{-1}|g|^2_{h}\omega^n.
\end{equation*}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm1.4}:]
We add additional local weights to the sequence.
Let $x\in U_{\alpha}\Subset X\setminus\Sigma$, $e_p$ is the local
frame of $L_{p}$ on $U_{\alpha}$.
Fix $r_0>0$ so that the ball $V:=B(x,2r_0)\subset\subset U_{\alpha}$ and let $U:=B(x,r_0)$. Let $\theta\in \mathscr{C}^{\infty}(\mathbb{R})$ be a cut-off function such that $0\leq\theta\leq 1, \theta(t)=1$ for $|t|\leq \frac{1}{2}, \theta(t)=0$ for $|t|\geq 1$. For $z\in U$, define the quasi-psh function $\varphi_z$ on $X$ by
\begin{equation*}
\varphi_z(y))=
\left\{\begin{array}{lc}
\theta\left(\frac{|y-z|}{r_0})\log(\frac{|y-z|}{r_0}\right), \ \ \text{for} \ \ y\in U_{\alpha},\\
0, \ \ \text{for}\ \ y\in X\backslash B(z,r_0).\\
\end{array}\right.
\end{equation*}
Note that $dd^c \phi_z\geq 0$, on $\{y:|y-z|\leq \frac{r_0}{2}\}$. Since $V\Subset U_{\alpha}$, it follows that there exists a constant $c'>0$ such that for all $z\in U$ we have $dd^c\varphi_z\geq -c'\omega$ on $X$ and $dd^c\varphi_z=0$ outside $\overline{V}$.
Since
\begin{equation*}
c_1(L_p,h_p)\geq A_p\eta_pw\geq A_pc_x\omega=A_pc\omega.
\end{equation*}
We can find constants $a,b$ with $a=c-bc'>0$, such that
\begin{equation*}
\begin{split}
c_1(L_p,h_pe^{-bA_p\varphi_{z}})\geq & 0 \ \text{on} \ X.\\
c_1(L_p,h_pe^{-bA_p\varphi_{z}})= &
c_1(L_p,h_p)+bA_pdd^c\varphi_z\\
\geq & A_p(c-bc^{\prime})w=aA_pw \ \text{near} \ \overline{V}.
\end{split}
\end{equation*}
Consider a continuous function $\lambda_p: X\rightarrow
[0,+\infty)$ such that $\lambda=aA_p$ on $\overline{V}$,
$c_1(L_p,h_pe^{-bA_p\varphi_{z}})\geq\lambda_pw$.
Set $ \beta=(\beta_1,...,\beta_n)$ with
$\sum\limits_{j=1}^n\beta_j\leq [bA_p]-n$, and
\begin{equation*}
v_{z,p,\beta}(y)=(y_1-z_1)^{\beta_1}...(y_n-z_n)^{\beta_n}.
\end{equation*}
Let
\begin{equation*}
g_{z,p,\beta}=\overline{\partial}(v_{z,p,\beta}\theta(\frac{|y-z|}{r_0})e_p).
\end{equation*}
Then
\begin{equation*}
\begin{split}
&\int_X\frac{1}{\lambda}|g_{z,p,\beta}|^2_{h_p}e^{-2bA_p\varphi_{z}}\omega^n\\
&=\int_V\frac{1}{\lambda}|g_{z,p,\beta}|^2_{h_p}e^{-2bA_p\varphi_{z}}\omega^n\\
&= \frac{1}{aA_p}\int_{V\setminus
B(z,\frac{r_0}{2})}|v_{z,p,\beta}|^2|\partial\theta(\frac{|y-z|}{r_0})|^2
e^{-2\psi_p}e^{-2bA_p\varphi_{z}}\omega^n,
\end{split}
\end{equation*}
where $\psi_p$ is the local weight of $h_p$.
Note that $\varphi_z$ is bounded on $V\setminus B(z,\frac{r_0}{2})$. Then
\begin{equation*}
\int_X\frac{1}{\lambda}|g_{z,p,\beta}|^2_{h_p}e^{-2bA_p\varphi_{z}}\omega^n<\infty,\forall
p.
\end{equation*}
By applying Theorem \ref{thm5.1}, there exists
$u_{z,p,\beta}\in L^2(X,L_p)$,such that
\begin{equation*}
\overline{\partial}u_{z,p,\beta}=g_{z,p,\beta},
\end{equation*}
and
\begin{equation*}
\begin{split}
&\int_X|u_{z,p,\beta}|^2_{h_p}e^{-2bA_p\varphi_{z}}\omega^n\\
\leq&\int_X\frac{1}{\lambda}|g_{z,p,\beta}|^2_{h_p}e^{-2bA_p\varphi_{z}}\omega^n\\
\end{split}
\end{equation*}
So we construct an element
\begin{equation*}
S_{z,p,\beta}=v_{z,p,\beta}\theta(\frac{|y-z|}{r_0})
e_p-u_{z,p,\beta}
\end{equation*}
in $H_{(2)}^0(X,L_p)$.
For $y\in B(z,\frac{r_0}{2})$,
\begin{equation*}
S_{z,p,\beta}(y)=v_{z,p,\beta}(y) e_p-u_{z,p,\beta}(y),
\end{equation*}
we see that $u_{z,p,\beta}$ is a holomorphic near $z$.
Let $\mathscr{L}$ be the sheaf of holomorphic functions on $X$ vanishing at $z$ and let ${\bf m}\subset \mathscr{O}_{X,z}$ the maximal ideal of the ring of germs of holomorphic function at $z$.
Consider the natural map
\begin{equation*}
L_p\rightarrow L_p\otimes \mathscr{O}_X/\mathscr{L}^{a+1}.
\end{equation*}
This map induces a map in the level of cohomology
\begin{equation*}
J_p^a:H_{(2)}^0(X,L_p)\rightarrow H_{(2)}^0(X,L_p\otimes
\mathscr{O}_X/\mathscr{L}^{a+1})=(L_p)_z\otimes \mathscr{O}_{X,z}/\mathscr{L}^{a+1}.
\end{equation*}
The right hand side of the above map is called the space of $a$-jets of $L^2$-holomorphic sections of $L_p$ at $z$.
We recall the following fact:
\begin{equation*}
\int_{|y_1-z_1|<1,\cdots,|y_n-z_n|<1}\prod\limits_{k=1}^n|y_k-z_k|^{2r_k}|y-z|^{-2bA_p}i^ndy_1\wedge
d\overline{y}_1\wedge \cdots\wedge dy_n\wedge d\overline{y}_n<\infty,
\end{equation*}
if and only if $\sum\limits_{j=1}^nr_j\geq [bA_p]-n+1$.
Then for $u_{z,p,\beta}\in L^2(X,L_p)$, we have
\begin{equation*}
\int_X|u_{z,p,\beta}|^2e^{-2bA_p\varphi_{z}}\omega^n<\infty
\end{equation*}
if and only if $u_{z,p,\beta}$ has vanishing order of at least $[bA_p]-n+1$ at
$z$.
So the $([bA_p]-n)$-jet of $S_{z,p,\beta}$ coincides with
$v_{z,p,\beta}$.
For any such $v_{z,p,\beta},\sum\limits_{j=1}^n\beta_j\leq
[bA_p]-n$, we can construct $S_{z,p,\beta}$ as before such that
\begin{equation*}
J_p^{[bA_p]-n}(S_{z,p,\beta})=v_{z,p,\beta}.
\end{equation*}
So $J_p^{[bA_p]-n}$ is surjective.
Hence
\begin{equation*}
\begin{split}
d_p=&\dim H_{(2)}^0(X,L_p)-1\\
\geq & \dim (\mathscr{O}_{X,z}/\mathscr{L}^{[bA_p]-n+1})-1\\
= & \binom{[bA_p]}{[bA_p]-n}-1\geq C_3A_p^n,
\end{split}
\end{equation*}
for some constant $C_3>0$.
The proof is completed.
\end{proof}
\begin{remark}\label{rem5.1}
To get the upper estimate of $d_p$ by the spirit of Siegel's lemma, we need impose more conditions on the transition functions of each $L_p$.
\end{remark}
|
1,108,101,566,390 | arxiv | \subsection{Appendix: Quantum maximum via SDP}\label{SDP}
We follow the SDP method put forward by Wehner \cite{Wehner} recently, in order to prove analytically quantum bounds for the correlation type Bell inequalities of Eq.~(\ref{bellexpr}). Let us consider the $m\times m$ matrix $M$ with real coefficients
\begin{equation}
M_{ij}=1-\frac{m}{2}\delta_{ij},
\label{def}
\end{equation}
introduced in Eq.~(\ref{bellexpr}). As stated in the main text, the expectation values in the polynomial Eq.~(\ref{bellexpr}) can be replaced by dot product of unit vectors,
\begin{equation}
\max{\sum_{i,j=1}^m{M_{ij}\vec a_i\cdot\vec b_j}},
\label{q}
\end{equation}
where maximization is taken over all unit vectors $\{\vec a_1,\ldots,\vec a_m,\vec b_1,\ldots,\vec b_m\} \in R^{2m}$. As shown by Tsirelson, the maximum obtained in this way corresponds to the maximum quantum value as well \cite{Tsirelson}.
However, the above problem can be formulated as the following SDP optimization \cite{Wehner}:
\begin{equation}\begin{aligned}\label{primal}
\text{maximize}&\quad \frac{1}{2}\mathsf{Tr}{(\Gamma W)}\\
\text{subject to}&\quad
\Gamma\succeq 0,\quad \forall i \,\Gamma_{ii}=1\,.
\end{aligned}\end{equation}
Here the matrix $W$ is built up as
\begin{eqnarray}
W = \left(\begin{array}{cc}
0 & M\\
M & 0
\end{array}
\right),
\label{W}
\end{eqnarray}
and $\Gamma=(\Gamma_{ij})$ is the Gram matrix of the unit vectors $\{\vec a_1,\ldots,\vec a_m,\vec b_1,\ldots,\vec b_m\} \in R^{2m}$. Denoting the columns of the above vectors by $V$, we can write $\Gamma=V^t V$ if and only if $\Gamma$ is positive semidefinite. The constraint $\Gamma_{ii}=1$, on the other hand, owes to the unit length of vectors $\vec a_i$ and $\vec b_j$. Note, that the primal problem defined by (\ref{primal}) is the first step of the hierarchy of semidefinite programs given by Navascu\'{e}s et al. \cite{NPA07,NPA08}.
However, one can also define a dual formulation of the SDP problem (for an exhaustive review see \cite{VB04}):
\begin{equation}\begin{aligned}\label{dual}
\text{maximize}&\quad \mathsf{Tr}{(\mathop{\mathrm{diag}}(\lambda))}\\
\text{subject to}&\quad -\frac{1}{2}W + \mathop{\mathrm{diag}}(\lambda)\succeq 0,
\end{aligned}\end{equation}
where $\lambda$ is a $2m$-dimensional vector with real entries and we note that this dual problem is just the first step of the hierarchy introduced by Doherty et al. \cite{DLTW}.
Let us denote by $p^*$ and $d^*$ the optimal values for the primal and the dual problems, respectively. However, according to weak duality, $d^*\ge p^*$ \cite{VB04}. Thus, in order to prove optimality of the quantum bound one suffices to exhibit a feasible solution both for the primal (\ref{primal}) and for the dual (\ref{dual}) problem and then show that they are in fact equal to each other. For this sake let us guess the primal optimum by setting $\vec a_i, \vec b_j =(1,0,\ldots,0)$ in (\ref{q}) with a Bell matrix defined by (\ref{def}). These vectors correspond to a classical deterministic strategy and this solution yields $p^*=\sum_{i,j}^m{M_{i,j}}=m^2/2$.
Similarly, we guess the solution $\lambda^*=(m/4)(1,\ldots,1)$ for the dual problem, for which the dual value is $d^*=\mathsf{Tr}{(\mathop{\mathrm{diag}}(\lambda^*))}=m^2/2$. In order to get a feasible solution, it remains to check according to (\ref{dual}) whether $R=-(1/2)W + \mathop{\mathrm{diag}}(\lambda^*)\succeq 0$ is satisfied. This amounts to prove $\gamma_{min}[R]\ge 0$, where we use the notation $\gamma_{min}[R]$ ($\gamma_{max}[R]$) for the smallest (largest) eigenvalue of a matrix $R$. However, due to Weyl's theorem \cite{HJ85}, for two Hermitian matrices $P$ and $Q$, it holds $\gamma_{min}[P+Q]\ge \gamma_{min}[P]+\gamma_{min}[Q]$. For our particular case,
\begin{align}
\gamma_{min}[R]&\ge \gamma_{min}[-\frac{1}{2}W]+\gamma_{min}[\mathop{\mathrm{diag}}(\lambda^*)]\nonumber\\&=-\frac{1}{2}\gamma_{max}[W]+\frac{m}{4}.
\label{gamma}
\end{align}
The eigenvalues of matrix $W$ in the form (\ref{W}) are given by the singular values $\sigma_s=\sqrt{\gamma_s \gamma_s^*}$ of matrix $M$ of (\ref{def}) and their negatives. The eigenvalues of $M$ on the other hand are the roots of the characteristic polynomial $\mathrm{det}(M-\gamma_s\leavevmode\hbox{\small1\normalsize\kern-.33em1})$, where $\leavevmode\hbox{\small1\normalsize\kern-.33em1}$ is the
$m\times m$ unit matrix. In \cite{VP09} we found that the determinant of an $m\times m$ matrix with diagonal elements $p$ and non-diagonal elements $q$ is
$[p+(m-1)q](p-q)^{m-1}$. By inserting $p=1-m/2-\gamma_s$ and $q=1$ into the determinant above, we obtain the roots $\gamma_s=\pm m/2$.
This result implies $\gamma_{max}[W]=\frac{m}{2}$. By substituting this value into (\ref{gamma}) we get $\gamma_{min}[R]\ge 0$. This implies that this solution for $d^*$ is feasible, and recalling the guessed solution $p^*$, we have $d^*=p^*$. Thus the maximum quantum value of the Bell polynomial $M$ defined by Eq.~(\ref{bellexpr}) is equal to $m^2/2$, which can be achieved by classical means as well.
\end{document}
|
1,108,101,566,391 | arxiv |
\section{ Introduction}
The Meridian Axial Circle (MAC, D=180~mm, F=2.3~m) in Kiev was recently
modernized by installing a 1040x1160 CCD camera that can work
in scan mode (Telnyuk-Adamchuk et al. \cite{aa2002}; Karbovsky \cite{karb}).
The camera, designed at the Nikolaev Observatory (Ukraine),
incorporates a glass filter to enable observations
in the V band. With effective exposures of
about 108~sec for equatorial stars, the magnitude limit is V=17~mag.
The instrument was used in two observational projects.
The first long-term project was the astrometric survey of the sky in
the equatorial
zone to extend the Hipparcos-Tycho
reference frame to fainter magnitudes. This programme is still in progress.
The second project, now completed, concerns observations of star fields
in the direction of
192 extragalactic ISRF objects, a list of which, for the declination zone
from 0$^{\circ}$ to +30$^{\circ}$, was taken from Molotaj (\cite{Molotaj}).
This declination range was chosen to reduce CCD distortion effects
(Vertypolokh et al. \cite{nik2001}; Vertypolokh et al. \cite{journ}).
The project was carried out in the framework of scientific problems:
maintenance of the Hipparcos
frame of reference and the linking of optical frames to the ICRF.
This report describes the data reduction
and compilation of the Kyiv meridian axial circle catalogue (KMAC1) of stars
in fields of extragalactic radio reference frame sources.
The most important data sources used for the compilation of the
catalogue include the major catalogues
Tycho2 (Hog et al. \cite{tycho}); CMC13 (Evans et al. \cite{evans});
UCAC2 (Zacharias et al. \cite{ucac}); 2MASS (Cutri et al. \cite{2mass});
USNO-A2.0 (Monet et al. \cite{a20}) and USNO-B1.0 (Monet et al. \cite{b10}).
Also, for
calibration of the instrumental magnitude scale we used several
photometric catalogues of NGC 2264 stars.
The astrometric reduction and source catalogues
used for compilation of the KMAC1 are shown in Fig.~\ref{general}.
Compilation of
the catalogue followed the following steps of data reduction:
image processing (Sect.~2),
calibration for instrumental and magnitude-dependent errors (Sect.~3)
and correction of the magnitude scale (Sect.~4).
Conversion to the ICRF was carried out with the two alternative
types of referencing,
using the space-based catalogue Tycho2 and the modern ground-based catalogues
CMC13 and UCAC2. This resulted
in the compilation of two catalogue versions: KMAC1-T and KMAC1-CU.
The details of
referencing to the ICRF system are discussed in Sect.~5 and
the computation of proper motions in Sect.~6.
The catalogue description, its
properties and external verification are described in Sect.~7.
\begin{figure}[htb]
\centerline{\includegraphics*[width=9.0cm]{2573fg1.eps}}
\caption{Compilation of the KMAC1: main steps of reduction
and source catalogues}
\label{general}
\end{figure}
\section{ Image processing}
The catalogue is based on 1100 CCD scans each of
46x24$'$ size in the sky (right ascension x declination)
and centered on the observed ICRF object with
an accuracy of about $\pm 2'$. Each of the 192 ICRF fields
was scanned on at least 5 nights. The original
scanned data were archived and stored in a CD-ROM database.
The first stage of data reduction began with a
search and extraction of data files from the database
archive. CCD images of stellar fields were then filtered of various
instrumental and noise
features that introduce an inhomogeneity in the sky level.
The inhomogeneity pattern inherent to a scan mode is dominated by a
1D strip-like structure that changes only along the declination
(DEC) direction (the $x$-axis in the CCD),
with a possible weak
trend over right ascension (RA), the $y$-axis of the CCD.
The striped structures in the images are formed by increased noise from
a few dozen bad bright pixels, which produce vertical pixel-width
noisy strips. Images are also contaminated by
a number of flares and tracks of radioactive particles of cosmic origin
and from Chernobyl and which have coma or star-like shapes.
Also, the sky level measured along the $x$-axis has a large-scale component
which under normal observing conditions does not exceed 5\% of the
total signal level. Some scans also show vertical variations
in the sky level related to clouds or changing sky brightness.
All types of background variation were eliminated with a simple correction
model that considered these variations to be caused by additive components.
While this interpretation is reasonable for a vertical pixel width
structure, large-scale
variations along the $x$-axis can also contain
a multiplicative flat field component. To investigate this problem,
we carried out a study illuminating the CCD with a light source
placed at the telescope objective. Using bias information read from the
outer calibration regions of the CCD, the flatfield pattern was computed
and compared to the systematic trends in the
preliminary differences of instrumental
$v$ magnitudes and $r'$ CMC13 photometry. Only a partial correlation was
found, indicating a possible variation of the bias along the $x$-axis
(a similar conclusion was reached by Evans et al. (\cite{cmc}) for
observations at the Carlsberg meridian circle).
Considering the small amplitude of variations, they were treated
as additive components. However, possible inaccuracies
due to omission of multiplicative components in the image
analysis is compensated for by the
method of calibration
for errors dependent on instrumental parameters,
in which any residual systematic trend along the $x$-axis is eliminated
using information from an external catalogue (Sect.~3.1).
Thus, scans were filtered by, first, subtracting
the local sky large-scale changes in the two directions,
and then subtracting a running average taken along each column of 1 pixel
width.
Detection of objects in the noisy field was carried out
by application of a smoothing filter whose shape approximately
corresponded to the Point Spread Function,
and by the elimination of bright 1x1~pixel flares. Detection consisted of a
comparison of the pixel flux with a threshold
defined as $I_{det}=[1.1+(\sigma_{n}-12)/45]\sigma_{n}$,
where $\sigma_{n} \geq 12$ is the local sky noise. The second term
in this expression ensures approximately constant, independent of
$\sigma_{n}$ and the sky star density, the number of false
detections
(from 300 to 500 per frame). For faint images, it was required that
an object should fill at least two adjacent pixels.
For bright images a special filtration was applied to avoid
false multiple image detections.
Determination of the $x$, $y$ positions and fluxes $v$ for each object was
performed with the various approaches available for processing of
CCD images. These are: 1) the modified Center of Gravity (CoG)
method (Irwin \cite{irwin}) and 2) a group of the full profile fitting
methods based on the Gaussian linearized
least squares method (e.g. Condon \cite{condon}; Viateau \cite{vit}).
The modified CoG method used at the CMT (Evans et al. \cite{cmc})
is based on theoretical considerations by Irwin (\cite{irwin}) who
demonstrated that
its accuracy is almost equal to that obtained with a full profile
fitting.
The method, based on profile fitting, provids both for circular
and elliptical Gaussians; in the second case, horizontal orientation
of semi-axes was considered as adequate.
The original non-smoothed scans were used for the image processing.
Numerical procedures corrected for the undersampling effect that occurs
when the pixel size is large and comparable to the FWHM (Viateau \cite{vit}).
In bright images, saturated pixels were not used for the fitting.
Centroiding was performed, trying the CoG method and the
Gaussian circular and elliptic models in turn. When a solution
was not achieved at any step of the computation, the image quality index
was flagged as non-standard centroiding. This occured also when
a final solution, with reference to the initial approximate position
(found from the first CoG iteration) was shifted by more than 1.5 pixels.
The image quality index thus marks images that are possibly multiple
or of non-standard shape.
Computations made by different methods produced very similar results,
which supports the conclusions of Irwin (\cite{irwin}). Thus, the r.m.s.
difference of coordinates computed by the CoG and Gaussian methods is
about $\pm 0.05-0.06''$ for V=15--16~mag stars and is negligibly small
in comparison to the internal random error $\pm 0.2-0.3''$
of one observation.
The most important feature of the profile fitting methods is
the possibility to change their performance so as to minimize the influence
of systematic errors typical of the CCD used at the MAC and which seriously
degrade the accuracy of the DEC measurements (see discussion in the
next Section).
Preliminary processing
showed that these errors appear as a systematic trend
in declination with magnitude, which does not depend on whether computations
are made by the CoG or profile centroiding methods. The largest effect
occurs for bright magnitudes; thus for V=10~mag
stars the systematic effect, measured with reference to V=14~mag stars,
is $0.45''$. To reduce this effect, each pixel and the related
equation of the linearized system of equations was weighted by a
factor $p=\sigma _n / \sqrt{\sigma _n^2 +I}$ where $I$ is the flux received
by the pixel from the star. This modification of the least
squares procedure decreased the amplitude of the error to $0.15''$.
\section{Astrometric calibrations}
The main goal of the astrometric calibrations described in this Section
is the refinement of the
measured $x$, $y$ positional and $v$ photometric data influenced by various
bias sources that are particularly intricate for the
CCD camera used for the observations.
A problem arose from the inaccurate tuning
of the electronics which produced a slight asymmetry of the star images.
Based on visual inspection, we considered the effect to be acceptably
small and so started observations.
After a major part of the observations had
been obtained, it became clear that the data is affected by large systematic
errors caused, most likely, by the asymmetry of the images.
Thus, refinement of the data required the use of
special data processing technique.
\subsection{Calibration of instrumental errors}
The dominant componets of the MAC instrumental errors are related
to the following effects:
oversaturnation of bright V$<12$~mag images caused by
use of a 12-bit AD convertor; a slight
asymmetry of stellar profiles in the direction of the CCD
declination $x$-axis,
along the direction of
fast charge transfer to the reading register in the last row.
Also, profiles of star images are elongated along the
$x$-coordinate.
While the Gaussian image size parameter $\sigma_y$ does not show any
change with $x$, the $\sigma_x$ parameter progressively increases
in this direction (Fig.~\ref{r1}), as does the image elongation.
Only near the CCD reading register location,
at its left edge ($x=0$), are the images perfectly
round ($\sigma_x = \sigma_y$).
This effect is similar to the charge
transfer efficiency problem that occurs along the scan direction
(Evans et al. \cite{cmc}), but of different origin.
No dependency of the image elongation on the background level is seen.
The amplitude of each type of image distortion
was found to depend on the star flux. Images are fairly
symmetric along the drift scan direction, so degradation due to the
above effects
concerns mainly the DEC and photometry.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{2573fg2.ps}}
\caption{Systematic dependence of the image size parameter
$\sigma_x$ on the CCD $x$-coordinate. Different line types
correspond to instrumental magnitudes $v$ from 10 to 16}
\label{r1}
\end{figure}
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{2573fg3.ps}}
\caption{Preliminary differences
KMAC1-CMC13 in DEC plotted versus $\sigma_x$ for
a few scans; stars are devided into three groups depending on $v$ and
shifted vertically by $\pm 0.5''$ for clearness.
Open circles refer to star images with the largest $x>1050$~px
separation from the CCD reading register. Symbol size is proportional to $v$}
\label{r2}
\end{figure}
Image distortions, by affecting the $x$ positions of stars,
cause a systematic bias in the DEC.
Analysis of the KMAC1 positions, obtained with preliminary
data reduction with reference to 12--14~mag CMC13 stars,
revealed a correlation between $\Delta \delta$ differences
KMAC1-CMC13 and $\sigma_x$.
Fig.~\ref{r2} shows the typical systematic trend in $\Delta \delta$,
which is different for different magnitudes and
normally does not exceed $\pm 0.1$--$0.2''$.
The trend is quasi-linear with a slope that depends on $v$
but that cannot be approximated easily since a more complex
cross-relation between $\Delta \delta$, $\sigma_x$, $x$ and $v$
occures.
In particular, stars imaged in the 50~px edge area most distant from the
reading register escape this dependency.
To remove the dependence of $\Delta \delta$ on $x$ and $\sigma_x$,
we considered a
number of models and found that the best correction is to introduce
directly to the measured $x$ values the factor:
\begin{equation}
\label{eq:r1}
\Delta x= A_v(\sigma_x - \sigma_0)
\end{equation}
where $A_v$ is a coefficient defined for each 1-mag bin of star magnitudes
and $\sigma_0$ is a constant model parameter valid for the whole data set.
The function
(\ref{eq:r1}) adequately models the complex nature of image distortions
inherent to the MAC, the model parameter $\sigma_0$
is the $\sigma_x$ value corresponding to
non-distorted star images, such as those observed at the CCD reading
register ($x=0$) and are of circular form $\sigma_x=\sigma_y=\sigma_0$.
Thus model (\ref{eq:r1}) calibrates
the $x$ coordinates for the $\sigma_x$ deviations from
$\sigma_0$, irrespective of the star position in the CCD frame and the seeing.
The use of a fixed constant $\sigma_0$ value for any magnitudes
implies that the reduction (\ref{eq:r1}) calibrates the data to a fixed star
brightness. More complicated versions of the reduction
model that included the $x$ term or image elongation lead to no
improvement.
The coefficients $A_v$ and $\sigma_0$ were found
based on a criterion of best convergence of star declinations
computed for the nights when they were observed; the
reduction procedure used the CMC13 catalogue
as a reference. The numerical estimate of 1.11~px obtained for $\sigma_0$
corresponds to the $\sigma_x$ value
typical for well-exposed images of $v=13.1$~mag stars
measured near the CCD reading register ($x=0$).
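As an illustration, the correction (\ref{eq:r1}) may be applied per star
as in the following schematic code (Python assumed; the $A_v$ values shown
are hypothetical placeholders, since only $\sigma_0$ is quoted above):
\begin{verbatim}
SIGMA_0 = 1.11  # px, fitted constant valid for the whole data set

# Hypothetical coefficients A_v for each 1-mag bin of v;
# the actual fitted values are not quoted in the text.
A_V = {10: 0.8, 11: 0.9, 12: 1.0, 13: 1.1, 14: 1.2, 15: 1.3, 16: 1.4}

def correct_x(x, sigma_x, v):
    """Apply the correction Delta_x = A_v * (sigma_x - SIGMA_0)
    to a measured x value; the sign convention and the magnitude
    binning are assumed here."""
    a_v = A_V[min(max(int(round(v)), 10), 16)]  # select the 1-mag bin
    return x + a_v * (sigma_x - SIGMA_0)
\end{verbatim}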
A similar calibration procedure, based on a formal reduction to the
CMC13 $r'$ photometry, was applied to instrumental magnitudes $v$. The
difference of photometric bands is of minor importance here
since color residuals $v-r'$ are not correlated with the image
parameters measured at the MAC. The calibration has a form similar to
(\ref{eq:r1}):
\begin{equation}
\label{eq:r1v}
\Delta v= A'_v(\sigma_x - \sigma'_0) .
\end{equation}
Here the definition of
$\sigma'_0$ as a free model parameter led to the appearance
of an extra systematic trend of $v$ with $x$; therefore $\sigma'_0$ was
taken to be equal to the expectation (an average) of $\sigma_x$
at given $x$ and $v$.
The only model parameter $A'_v$ (defined for each 1-mag bin in $v$)
was determined similarly, from a condition of the best convergence
of individual observations.
Another systematic effect was found considering
preliminary differences KMAC1-CMC13 of positions and photometric
values (formal in the last case)
computed with calibrations (\ref{eq:r1}) and (\ref{eq:r1v}).
The differences were found to contain a small
fluctuating component along the $x$-axis, normally
within $\pm 0.04''$ in position and $\pm 0.03$~mag in photometry.
However in the $x >1050$~px area, the trend in DEC
increased to $0.2''$. The origin of this trend is unclear and possibly
can be due to the imperfect pixel geometry of the CCD.
Using KMAC1-CMC13 differences,
the trend was removed (only the variable part, so as not to incorporate
possible systematic errors of the CMC13) and the reduction
procedure, including determination of $A_v$, $A'_v$ and $\sigma_0$
values, was iteratively repeated. In Fig.~\ref{r3}, the
KMAC1-CMC13 residuals in DEC
before and after calibration are shown. Along with a complete removal of
the correlation, the random scatter of $\Delta \delta$ differences
has been noticeably reduced.
\begin{figure}[tbh]
\centerline{%
\begin{tabular}{r@{}l}
\includegraphics*[height=150pt]{2573fg4.ps} &
\includegraphics*[height=150pt]{2573fg5.ps} \\
\end{tabular}}
\caption{KMAC1-CMC13 differences $\Delta \delta$ versus $\sigma_x$ for
the stars of 12--14~mag: before and after calibration for
instrumental errors}
\label{r3}
\end{figure}
The successful refinement of the measured data suffering from
various instrumental errors was based on extensive use of the
CMC13 as a tool for error calibration. Under the reasonable assumption of
no correlation between instrumental errors of the two telescopes,
the procedure is correct, and the accuracy of calibration
depends on the accuracy of the data in the source catalogue.
\subsection{Formation of equivalent scans}
In drift scan mode, formation of star images is non-synchronous
since the median moment of the image exposure depends on RA.
For that reason, the atmospheric turbulent conditions
under which the images are formed vary as a function of RA.
The measured $x$, $y$, $v$ data are therefore affected by a time-dependent
component of atmospheric refraction, causing an effect of
image motion much larger than is inherent to the astrographic
mode of observations. The induced temporal signal
is difficult to trace and
makes referencing of the observed data to the celestial system
more difficult. A number of methods have been proposed to calibrate this
effect, see e.g. Evans et al. (\cite{cmc}); Viateau et al. (\cite{vit}).
In the case of short scans obtained at the MAC, direct calibration of
atmospheric fluctuations with use of the Tycho2 catalogue was found to give
unreliable results since scans often contained few reference stars.
We used a method which consists of substituting all
individual overlapping (normally to $\pm 2'$) scans available
for a particular ICRF field with a single specially-formed "equivalent" scan.
For this, each scan of the ICRF field was preprocessed
with the
Tycho2 catalogue so as to determine a zero point of CCD positions and
magnitudes, and to approximately (to $\pm 1''$) reduce
the relative displacement of individual night scans.
After cross-identification, a compiled list of field objects was formed with
$x$, $y$, $v$ data averaged.
This procedure is similar to the formation of subcatalogues adopted
at the Valinhos meridian circle (Viateau et al. \cite{vit}) but with
no conversion to equatorial coordinates.
The validity of this substitution is based on the linearity of the averaging
operation. Thus, the averaging can be performed either
prior to conversion
to the celestial coordinates (that is, over the CCD measured data),
or after this reduction (over equatorial positions), with equivalent results.
Strict linearity of the averaging is only achieved, however, when
the star content of individual scans is identical.
This is the case for the bright stars
that are normally detected and measured in each nightly scan.
When an object image (usually a faint one) is missing from some scans,
corrections that compensate for the scan system of the omitted
observation should be
applied to that object's $x$, $y$, $v$ data in the equivalent scan.
The information necessary to make this correction is found by
obtaining the differences between each nightly scan and the corresponding
equivalent scan. Since we consider the measured data, not yet
transformed to celestial coordinates and magnitudes, the
differences usually show systematic trends in both the $x$ and $y$
directions, and in $v$ due to possibly varying magnitude
errors.
The systematic component of these differences was approximated
with cubic spline functions.
Calibrations for omitted images started from consideration of major
systematic trends along the temporal $y$-axis. After corrections
in the equivalent scan data ($x$, $y$, $v$),
these trends were removed
from the nightly scans. As a result, the differences between each nightly
and "equivalent" scan became noise-like in shape.
Next, similar steps of calibration were applied to
the differences of "nightly scan" - "equivalent scan" registered
along the $x$ and $v$ data axes.
To obtain convergence, the whole procedure was repeated twice:
the outliers were removed and all computations rerun.
The resulting equivalent scans used for transformation to the ICRF
are less subject to atmospheric differential image motion due to
averaging over a subset of individual scans included in the output.
The averaging effect is inversely proportional to the square root of the
number of frames, which is 6 on average.
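For the mean value $N=6$, the residual image-motion noise is thus
suppressed by a factor of $\sqrt{6}\approx 2.4$ relative to a single scan.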
The output nightly scans were
of less importance since they
were tightly reduced to the system of the corresponding
equivalent scan by filtering out any systematic differences.
The differences between these scans contained
only a random noise component which provided
valuable information on internal catalogue errors (Sect.~7.1).
An important restriction to the method discussed is
that correct tracing of scan system changes is achieved only with
completely overlapped and co-centered nightly scans. Displacement
of individual scans by $\pm 10$~\% of a scan length results
in incorrect extrapolation of the offset scan system in edge areas and
causes the problems discussed in Sect.~5.
\subsection{Magnitude-dependent errors}
For investigation of magnitude-dependent systematic errors
in $x$, $y$ and $v$ data we carried out a preliminary processing of
equivalent scans with the Tycho2 catalogue.
Direct inspection of the KMAC1-Tycho2 residuals clearly indicated
the presence of errors dependent on magnitude.
Systematic effects in positions were found to be within $\pm 0.03''$
for the 9.5--13 magnitude range, sharply increasing to $\pm 0.2''$ at
$v<9$ mag. A much larger trend is seen in the KMAC1
photometry for bright $v<9.5$~mag stars where it exceeded 1.0~mag.
Systematic components of KMAC1-Tycho2 residuals were treated as
errors in the MAC data, therefore,
$x$, $y$ and $v$ values were corrected by subtracting systematic
trends. Calibration was applied only to stars
in the Tycho2 magnitude range V$<13$~mag.
It was considered that initial estimates of KMAC1-Tycho2 residuals
are somewhat
biased due to redistribution of magnitude-dependent errors
between
reference stars in the field. Therefore, to extract better estimates
of the magnitude-related errors from the KMAC1-Tycho2 residuals,
calculations were refined in an iterative manner.
After calibration, the residual trend in corrected positions,
estimated using the CMC13, does not exceed $\pm 0.04''$ for the entire
magnitude range.
\subsection{Seasonal variations of magnitude-related errors}
More explicit analysis revealed a variation
of the magnitude error in declination with season.
For the study we used $\Delta \delta$
differences KMAC1-CMC13 obtained with a preliminary data processing based on
the CMC13 as a reference. Fig.~\ref{ses} shows the systematic
trend of the differences $\Delta \delta$ for 10 groups of star fields with a
numbering that
corresponds to their arrangement in RA, or season.
Variations with season are especially strong at bright magnitudes;
the difference between "winter" (numbers 1--6) and "summer" fields (7--10)
is about $0.2''$ for $v<12$~mag stars and is small at $v \approx 13$~mag.
Note that an attempt to resolve this seasonal effect,
referring the MAC data directly
to Tycho2 and then using the KMAC1-Tycho2 differences, led
to inconclusive results due to the narrow magnitude range, insufficient
statistics, and, especially, the filtering effect produced
by the reduction procedure.
\begin{figure}[tbh]
\centerline{%
\begin{tabular}{@{}c@{}}
\includegraphics*[height=120pt, width=8.0cm]{2573fg6.ps} \\
\includegraphics*[height=120pt, width=8.0cm]{2573fg7.ps}
\end{tabular}}
\caption{ Systematic
differences KMAC1-CMC13 in DEC as a function of magnitude
for 10 groups of star fields ordered by RA;
before (upper panel) and after (bottom) calibration (\ref{eq:r4})}
\label{ses}
\end{figure}
Considering that the picture shown in Fig.~\ref{ses} may originate from errors
in the CMC13, we used this information
indirectly: we assumed the possibility of a specific error
in the MAC declinations (or in $x$ values) and defined a function
\begin{equation}
\label{eq:r4}
\Delta_{x}=\left \{ \begin{array}{ll}
(\beta \sin\alpha + \gamma \cos\alpha)(13-v), & v<13 \\
0, & v \geq 13 \\
\end{array} \right.
\end{equation}
that models the bias in $x$. The model
parameters $\beta$ and $\gamma$ were found by least squares,
minimizing the KMAC1-Tycho2 differences in the DEC.
The function (\ref{eq:r4}) is defined only for bright
$v<13$~mag stars; the positions of fainter stars cannot be corrected.
Calculations yielded a solution $\beta=0.045''/$mag,
$\gamma=-0.031''/$mag with an uncertainty of $\pm 0.010''/$mag
in each parameter.
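For illustration, these values imply a peak-to-peak seasonal amplitude of
$2\sqrt{\beta^2+\gamma^2}\,(13-v)\approx 0.11''(13-v)$/mag, i.e. about
$0.2''$ at $v=11$~mag, consistent with the winter--summer difference
quoted above.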
The effect of calibrations based on the Tycho2 catalogue is
seen in Fig.~\ref{ses}
where the KMAC1-CMC13 differences after correction
are shown in the bottom panel; the residual
variations of the magnitude-related errors in DEC
are shown to be reduced to $\pm 0.03''$ or less
at $v \geq 10$ mag.
\section{ Calibration of the magnitude scale}
Star magnitudes V of the KMAC1 have been computed using measured
$v$ values corrected for instrumental and
magnitude-dependent errors as described above.
The zero point of the V magnitude scale was
determined
using the Tycho2 photometry of bright V$<13$~mag stars.
The problem consisted in verifying the linearity of the magnitude
scale at its faint end,
which cannot be directly controlled due to the
absence of faint all-sky standards in the V band.
The study and the subsequent calibration used indirect methods
that relied upon red $r'$
and infrared J data taken from the CMC13 and 2MASS global catalogues
respectively.
The UCAC2 catalogue, as an alternative $r'$-like data source, was not
utilized since its magnitudes are only approximate and not
calibrated. Our attempt to take advantage of this
catalogue photometry resulted in a similar but slightly
less accurate calibration
compared to that provided using the CMC13.
The development of the calibration model and its validation was
based on a photometric study of scans of the open cluster NGC 2264
obtained at the MAC specially for this purpose.
\subsection{The open cluster NGC 2264}
For this study we compared KMAC1 V values computed for stars of
the open cluster NGC 2264 with those given in several
high-accuracy photometric
catalogues. As photometric V$_{st}$ standards, we used data provided by
Sung et al. (\cite{sung}) for 329 identified stars;
Kuznetsov et al. (\cite{Kuzn}) for 40 identified stars; and
the WEBDA Internet database of UBV CCD observations in
open clusters provided 523 stars (Mermilliod \cite{webda}).
First, we verified
that there are no systematic dependences of
$\Delta$V=V--V$_{st}$
residuals either on B-V color or the star CCD $x$ position.
\begin{figure}[htb]
\resizebox{\hsize}{!}{\includegraphics{2573fg8.eps}}
\resizebox{\hsize}{!}{\includegraphics{2573fg9.eps}}
\caption{Systematic effect in measured V values:
{\bf a} -- found from a comparison to photometric data by
Sung et al. (\cite{sung}) (crosses), Kuznetsov et al. (\cite{Kuzn})
(circles), Internet WEBDA database (inclined crosses) and Tycho2 (squares);
{\bf b} -- the bias simulated with the model (\ref{eq:m2})
}
\label{m1}
\end{figure}
Examination of $\Delta$V residuals, however, revealed a large 0.6~mag
systematic bias of measured data for faint V$>12$~mag stars,
shown in Fig.~\ref{m1}a. Interpretation of this plot should take
into account that
images of stars brighter than 12~mag are oversaturated
and their fluxes determined by the centroiding
procedure can be systematically biased.
The resulting effect in magnitude is however opposite
because the zero point of the V scale is referred to bright Tycho2 stars.
Another important feature of
the plot is the linearity of the magnitude scale in
both the bright V$<10.5$~mag and the faint V$>12$~mag segments of the V-axis,
however with different zero points.
We tried to simulate this systematic effect using two-color
V$-r' \sim$V--J diagrams that were built for NGC 2264 (Fig.~\ref{m2}a) and for
a complete list of the KMAC1 stars
(Fig.~\ref{m2}b). Star distributions in both plots are clearly
separated depending on V, bright stars being shifted systematically
upward relative to faint stars. The shift
does not depend on V--J color, so we attribute it entirely to
magnitude-dependent errors in the MAC photometry.
A number of other two-color
diagrams were also tested, including those that incorporate H and K
infrared data
from the 2MASS catalogue; it is the V$-r' \sim$V--J
diagram that ensures the best separation of stars in V.
\begin{figure}[htb]
\centerline{\includegraphics*[width=8.4cm]{2573fg10.eps}}
\centerline{\includegraphics*[width=8.4cm]{2573fg11.eps}}
\caption{Two-color diagrams V$-r' \sim$V--J:
{\bf a} -- for NGC 2264 stars, symbol size indicates star brightness
(9 to 16~mag);
{\bf b} -- for all stars, large dots show V$<$10.5~mag stars.
Solid line (\ref{eq:m1}) fits the bright star location, the dashed line
refers to all stars, thick arrows show the direction of interstellar reddening.
A geometry explaining the model (\ref{eq:m2})
is also shown (see text)
}
\label{m2}
\end{figure}
The differences V$-r'$ for bright KMAC1 stars are approximated
by a function (solid line in Fig.~\ref{m2}b)
\begin{equation}
\label{eq:m1}
f_{V-J}= \overline{V-r'}=-0.163+0.4016(V-J)-0.0422(V-J)^2
\end{equation}
A small residual scatter of $\pm 0.114$~mag suggests that the
distribution is uniform and the dependency
(\ref{eq:m1}) is valid for any field. In particular,
the function
$f_{V-J}$ matches reasonably well the bright star distribution for NGC 2264
(solid line in Fig.~\ref{m2}a). Note that the function
(\ref{eq:m1}) represents a zero-point of the V scale since
the bright stars, for the most part, are the Tycho2 stars used
as a reference for photometry. The dashed line refers to all stars.
The positions of faint V$>12$~mag stars
are shifted downwards and the fitted dashed line
is almost parallel to $f_{V-J}$ in the most populated 0.5--2.0 range
of V--J colors. This leads to the very important conclusion that
errors $\Delta V$ in MAC photometry do not depend on the color,
which is consistent with previous results based on the use
of photometric standards; rather, they are a function of V.
The calibration of faint star photometry to the instrumental
magnitude system defined by bright stars is based on the use of the
fitting curve (\ref{eq:m1}) as reference.
Consider the two-color diagram where the
unbiased star location A is a point V$_{st}-r'$,V$_{st}-$J in the
fitting curve (Fig.~\ref{m2}a).
The star's measured
position, B, is shifted by $\Delta V$ in both directions, to
V$_{st}+\Delta V-r'$, V$_{st}+\Delta V-$J. This geometry allows us to
express the distance of point B from the fitting curve
(\ref{eq:m1}) in two ways: as V$-r'-\overline{V-r'}$ and as
$\Delta V(1-f'_{V-J})$, where
$f'_{V-J}$ is the derivative. Hence we derive an estimate
\begin{equation}
\label{eq:m2}
\Delta V= (V-r'-\overline{V-r'})/(1-f'_{V-J})
\end{equation}
of the bias. In the first approximation (when $f' \approx 0$),
it is equal to the vertical distance between the measured
star location B in the diagram and the fitting color curve $\overline{V-r'}$.
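As an illustration, the estimate (\ref{eq:m2}) can be evaluated per star
with the following minimal sketch (Python assumed), a direct transcription
of the fit (\ref{eq:m1}) and its derivative:
\begin{verbatim}
def f_vj(c):
    """Bright-star color curve f_{V-J}, c = V - J."""
    return -0.163 + 0.4016 * c - 0.0422 * c**2

def f_vj_prime(c):
    """Derivative of the color curve with respect to V - J."""
    return 0.4016 - 2.0 * 0.0422 * c

def delta_v(V, r, J):
    """Bias estimate Delta_V for a single star."""
    c = V - J
    return (V - r - f_vj(c)) / (1.0 - f_vj_prime(c))
\end{verbatim}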
Using Eq.\ (\ref{eq:m2}), we computed errors $\Delta V$ for
each NGC 2264 star with $r'$ and J data available. The results
shown in Fig.~\ref{m1}b match well
the systematic trend found directly on photometric standards
(Fig.~\ref{m1}a), except for a
small systematic discrepancy of about 0.1~mag at the faint V end.
\subsection{Galactic extinction}
The above analysis does not take into account interstellar extinction, which
requires special considerations. With respect to bright stars we may
however assume that for the most part they are nearby objects
not affected by extinction. The distribution of bright stars in
Fig.~\ref{m2}a therefore is expected to follow a natural temperature
reddening, at least for V-J$<3$ mag. Considering that the interstellar
reddening is inversely proportional to wavelength
(Whitford \cite{whitford}),
we find a direction along which faint stars may move
in the color diagrams (arrows in Figs.~\ref{m2}a and b). The inclination
of the reddening line, 0.234, was computed using the median wavelengths
0.54, 0.623 and 1.25~$\mu$m of the V, $r'$ and J filters respectively, with no
allowance for spectral class. It is seen that the temperature and
Galactic reddening are indistinguishable since the corresponding curves are
almost parallel. Thus, both the function (\ref{eq:m1}) and the reddening
curve have equal inclination at V-J=2.0. The differential shift of faint
stars off the temperature line is therefore small and for most stars with
V-J ranging from +1 to +2 is, on average, less than 0.05~mag,
supposing that the extinction in V does not exceed 1.5~mag (1.0~mag
color excess in V-J). Since further computations are performed
by averaging over the total star sample,
the expected error in the photometric calibration is even smaller.
\subsection{Calibration of individual fields}
The point estimates of Eq.\ (\ref{eq:m2}) thus form the basis for
the calibration of the magnitude scale in isolated fields
with the use of external color information from CMC13 and 2MASS.
Taking into account the insufficient statistics
and the complicated shape of the bias to be removed,
we introduced for each field a simple one-parameter
model that represents the systematic dependence of $\Delta V$
as a function of V. The form of this function $\overline{\Delta V(V)}$ was
chosen considering the distribution of $\Delta V$ values computed with
Eq.\ (\ref{eq:m2}) for all stars with known $r'$ and J values (Fig.~\ref{m3}).
The overall distribution of $\Delta V$
is like that shown in Fig.~\ref{m1}a for NGC 2264, except with a
shallower knee, and is fitted with the function
\begin{equation}
\label{eq:m3}
\overline{\Delta V(V)}= \left \{ \begin{array}{ll}
0, & V<11\\
\nu [(V-11)/(1.65)]^2, & 11<V<12.65 \\
\nu +0.029(V-12.65), & V>12.65
\end{array}
\right.
\end{equation}
where the single $\nu$ parameter is the bias magnitude at
V=12.65~mag.
The fit obtained with $\nu=-0.340$ is shown by the solid line.
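A direct transcription of the model (\ref{eq:m3}) reads (Python assumed;
$\nu$ is fitted per field, see below):
\begin{verbatim}
def delta_v_model(V, nu):
    """One-parameter bias model; nu is the bias at V = 12.65 mag."""
    if V < 11.0:
        return 0.0
    if V < 12.65:
        return nu * ((V - 11.0) / 1.65) ** 2
    return nu + 0.029 * (V - 12.65)
\end{verbatim}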
\begin{figure}[htb]
\centerline{\includegraphics*[width=8.0cm]{2573fg12.eps}}
\caption{Individual estimates (\ref{eq:m2}) of $\Delta V$ bias
in magnitudes for all stars, and its approximation (\ref{eq:m3})
by the solid line}
\label{m3}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics*[width=8.0cm]{2573fg13.eps}}
\caption{V$-r' \sim$V--J color diagram for all stars,
with V corrected. The solid line is the fitting function (\ref{eq:m4})
}
\label{m4}
\end{figure}
Further analysis has shown that the $\nu$ value is to be determined for
each star field individually, by fitting estimates (\ref{eq:m2}) with
the model (\ref{eq:m3}). The calibration of magnitudes therefore was
performed with individual $\nu$ values that varied from
$-0.63$ to $-0.14$ ($-0.58$ for NGC 2264). The scatter of $\nu$ values,
in particular, is the cause of the large $\pm 0.157$~mag dispersion of points
in Fig.~\ref{m3}.
The efficiency of corrections is seen from
the two-color V$-r' \sim$V--J diagram built
with final calibrated V values (Fig.~\ref{m4}). The relative shift
of bright to faint stars like that shown in Fig.~\ref{m2}b
was eliminated; also, the standard deviation of points from the fitting
curve
\begin{equation}
\label{eq:m4}
V-r'=-0.147+0.381(V-J)-0.0293(V-J)^2
\end{equation}
was improved from $\pm 0.114$~mag to $\pm 0.087$~mag.
The scatter of V$-r'$ residuals from the fitting curve (\ref{eq:m4}),
plotted in Fig.~\ref{m5}a as a function of V,
includes errors in $r'$ values and indicates the upper limit
of KMAC1 magnitude errors. This plot also shows a good
elimination of systematic errors.
Fig.~\ref{m5}b shows a comparison of the KMAC1 and Valinhos meridian
circle photometry (Camargo et al. \cite{camargo}) for 1190
stars in 13 stellar fields.
The residuals contain no large systematic trend; the
standard deviation of data points is $\pm 0.13$~mag.
Consideration of local fields, however, indicates
local systematic discrepancies in magnitudes
sometimes reaching $\pm 0.10$~mag, with a random scatter of residuals
of about $\pm 0.10$~mag.
Considering the accuracy of the Valinhos photometry, which is
about $0.10$~mag (Viateau et al. \cite{vit}),
we estimate that the KMAC1 data
is of the same or better accuracy.
\begin{figure}[htb]
\centerline{\includegraphics*[width=8.0cm]{2573fg14.eps}}
\centerline{\includegraphics*[width=8.0cm]{2573fg15.eps}}
\caption{Magnitude residuals as a function of V:
{\bf a} -- V$-r'$ differences
measured with reference to the fitting curve shown in Fig.~\ref{m4};
{\bf b} -- residuals between the KMAC1 and the Valinhos catalogue V values.
}
\label{m5}
\end{figure}
In conclusion, we give a relation between V and the CMC13 $r'$ values
using $r'-$J colors:
\begin{equation}
\label{eq:m5}
V-r'=-0.015+0.376(r'-J)-0.0269(r'-J)^2
\end{equation}
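For example, at $r'-$J$\,=1.0$ this relation gives
V$-r' \approx -0.015+0.376-0.027=0.33$~mag.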
\section{ Reduction to the ICRF}
Compilation of KMAC1 started from astrometric calibration of the
measured data as described in the previous Sections. Conversion of corrected
CCD $x$, $y$ positions to equatorial coordinates originally was intended
to be performed using the Tycho2 catalogue which is the best
optical representation of the ICRF. Prior to this conversion
we performed a preliminary study to determine how well
the positions of the Tycho2 catalogue match the
modern catalogues CMC13, UCAC2
and our observations.
For this purpose we selected Tycho2 stars that:
\begin{itemize}
\item are located in the fields observed;
\item are listed at least in either the CMC13 or UCAC2;
\item have preliminary KMAC1 positions that
do not differ from those in either the CMC13 or UCAC2 by
more than $0.4''$;
\item have proper motions $\mu_{\alpha}$ and $\mu_{\delta}$ not
exceeding $\pm 0.3''$/year.
\end{itemize}
Fig.~\ref{tycho} presents individual
Tycho2-CMC13 and Tycho2-UCAC2 differences plotted versus Tycho2-KMAC1
preliminary differences for 1843 Tycho2 stars that meet the above
conditions. The almost diagonal location of data points testifies to the
good agreement between the ground-based catalogues,
which provide positions normally consistent to $0.1$--$0.2''$. Very large
deviations, $>\pm 1''$, usually seen for faint stars V$>$12~mag,
seem to
originate from errors in the Tycho2 positions.
For 1.4\%
of stars, positional errors of Tycho2 exceed $\pm 0.5''$, and for about 12\%
they are larger than $\pm 0.2''$. The r.m.s. of the difference
of "Tycho2 - CCD catalogue" in RA is 204, 166 and 210~mas,
respectively, for the CMC13, UCAC2 and KMAC1. In DEC these estimates are
148, 123 and 170~mas. Somewhat smaller deviations,
149~mas in RA and 129~mas in DEC, were found by
Camargo et al. (\cite{camargo})
from the analysis of the Valinhos transit circle observations.
Of course, the values cited include comparison catalogue errors which are
usually not large.
The above analysis establishes the
degradation of the Tycho2 data at the epoch of KMAC1 observations,
probably due to uncertainties
in proper motions. This problem is often allowed for in different ways
when referring CCD observations to equatorial coordinates.
Thus, at the Flagstaff Astrometric Scanning Telescope reductions are
made by applying weights to the Tycho2 stars depending on their brightness
(Stone et al. \cite{fasst}).
\begin{figure}[htb]
\begin{tabular}{@{}c@{}c@{}}
\includegraphics*[width=4.2cm]{2573fg16.eps} & \includegraphics*[width=4.2cm]{2573fg18.eps}\\
\includegraphics*[width=4.2cm]{2573fg17.eps} & \includegraphics*[width=4.2cm]{2573fg19.eps}\\
\end{tabular}
\caption{Correlation between Tycho2-CMC13
and Tycho2-KMAC1 preliminary differences:
{\bf a} - in RA; {\bf b} - in DEC;
correlation between Tycho2-UCAC2 and Tycho2-KMAC1 preliminary differences:
{\bf c} - in RA; {\bf d} - in DEC; only differences exceeding 0.15$''$
are shown}
\label{tycho}
\end{figure}
This discussion suggests
that a reliable reduction to the ICRF using the Tycho2 catalogue requires
the use of a sufficiently large number of reference stars which are to be
first filtered or weighted to eliminate problem stars.
This is especially important in our case since, due to the rather short
scan length,
some fields in sky areas with a low star density are represented only
by 6--8 Tycho2 stars.
Also, the accuracy of the reduction is affected by inhomogeneity in
the sky distribution of reference stars whose images,
in addition, are oversaturated and poorly measured.
At the first stage of referencing we therefore
detected and removed all
Tycho2 problem stars whose positions deviated from
the CMC13, UCAC2 and preliminary KMAC1 data by more than $\pm 0.2''$.
This greatly improved the reliability of the conversion to equatorial
coordinates and was found to be more efficient than the usual
search for outliers based on an iterative approach to the least-squares
solution. With a truncation limit of $\pm 0.2''$, reliable results
were obtained, however, for 106 fields only.
For another 53 fields,
a good transformation to the ICRF required a further rejection of
reference stars with Tycho2-CMC13 and Tycho2-UCAC2 differences in
the range from $\pm 0.15''$ to $\pm 0.2''$.
For the 33 remaining fields with a low reference star density, the
reduction was found to give
quite unstable and ambiguous solutions highly sensitive to
any changes in the reference star set.
A further comparison with the CMC13 and UCAC2 positions has shown
that large systematic deviations are present at the edges of some fields.
This concerns those fields that on some nights were observed
with incorrect telescope pointings (made by
hand since the MAC is not automatic); in a few cases
the relative displacement of sky strips exceeds $10'$ in RA.
Individual scans thus were not
exactly overlapped as was assumed at the phase of equivalent scan
formation. To eliminate this fault, the offset regions were
truncated.
A rigorous conversion to the ICRF using the Tycho2
catalogue was achieved for 159 sky fields, most of which had
a high star density.
Conversion for the complete data array
(192 sky fields) required
the use of the CMC13 and UCAC2 catalogues which are known to be in
the ICRF system. The reduction was performed with well-measured stars
not fainter than 14.5~mag and by limiting their number to 170.
Reference catalogues were used in a combined form,
with equal weights. No truncation
of offset scan edges was applied since the large number of
reference stars ensured very tight referencing using
spline fitting.
Thus, the catalogue KMAC1 exists in two
versions: with reduction to the Tycho2 (KMAC1-T)
and to the CMC13 and UCAC2 catalogues (KMAC1-CU).
No rejection of stars with large deviations from comparison catalogues
was applied. V magnitudes given in both catalogues are identical and
based on the Tycho2 photometry with the corrections described in Sect.~4.
\section{Proper motions}
The first epoch positional data used for the computation of proper motions
was taken from the USNO-A2.0 catalogue. However the epoch difference of about
50 years prevented direct identification of stars with large
proper motions. To improve the reliability of identification,
the USNO-B1.0 catalogue proper
motions were used and were applied to reduce displacement of star positions
due to the difference in epochs. For stars not found
in the USNO-B1.0, the identification was performed with no proper
motion information applied. In this case, a window
used for identification
was set from $1.4''$ to $2.4''$ depending on the star magnitude.
Stars with proper motions larger than about 40~mas/year therefore
cannot be found in the USNO-A2.0 without proper motion information
from the USNO-B1.0.
The percentage of KMAC1 stars supplemented with proper motions
varies from 53\% to 97\%, and on average is 90\%. The highest ratio
of 93\% is obtained in the magnitude range from 13 to 17~mag,
dropping to 47\% for V$<$12~mag stars and to 74\% for V$>$17~mag.
Considering the approximate precision of $\pm 250$~mas,
the mean epoch 1954 of the USNO-A2.0 and the internal positional
precision of the KMAC1 (Table~\ref{char}), we find a formal estimate
of proper motion errors of 5--6~mas/year.
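(For illustration: with a first-epoch error of $\pm 250$~mas, a
second-epoch error of $\sim 50$~mas and an epoch difference of
$2002.3-1954\approx 48$~years, one obtains
$\sqrt{250^2+50^2}/48\approx 5.3$~mas/year.)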
\section{Characteristics of the catalogue}
\subsection{Description of the catalogue}
The catalogue KMAC1, as explained above, was released in two versions
and can be obtained in electronic form from the CDS or via
anonymous ftp://ftp.mao.kiev.ua/pub/users/astro/kmac1.
The KMAC1-CU catalogue contains 115\,032 stars in 192 sky fields and is
referred to the CMC13 and UCAC2; the KMAC1-T contains 104\,796
stars (91\% of the total star number)
in 159 fields and is referred to the Tycho2 catalogue.
The location of KMAC1-CU fields in the
sky is shown in Fig.~\ref{sky}. All fields
are located in a declination zone from 0$^{\circ}$ to 30$^{\circ}$;
the mean epoch of observations
is 2002.33. The main
characteristics of the catalogue are given in Table~\ref{char}.
\begin{figure}[tbh]
\includegraphics*[bb = 33 546 545 811, width=8.0cm]{2573fg20.ps}
\caption{Distribution of KMAC1-CU fields across the sky}
\label{sky}
\end{figure}
\begin{table}[tbh]
{\footnotesize
\caption{\small Main characteristics of catalogues
KMAC1-T and KMAC1-CU}
\label{char}
\begin{center}
\begin{tabular}{@{}l@{}ll@{}}
\hline
Catalogue version & KMAC1-T & KMAC1-CU\rule{0pt}{11pt} \\
Reference catalogues & Tycho2 & CMC13, UCAC2 \\
Number of fields & 159 & 192 \\
Declination zone & 0$^{\circ}$ to +30$^{\circ}$ & 0$^{\circ}$ to +30$^{\circ}$ \\
Number of stars & 104796 & 115032 \\
Precision of positions & & \\
V$<14$~mag: & 30--50~mas$^*$ & 30--50 mas$^*$ \\
& 50--80 mas$^{**}$ & 30--70 mas$^{**}$ \\
V=16~mag: & 170~mas$^*$ & 170 mas$^*$\\
& 180 mas$^{**}$ & 170 mas$^{**}$ \\
Formal precision of & & \\
proper motions: & 5--6~mas/year & 5--6~mas/year \\
Precision of photometry & & \\
V$<15$~mag: & 0.02--0.04~mag$^{*}$ & 0.02--0.04~mag$^{*}$ \\
& 0.06--0.08~mag$^{**}$ & 0.06--0.08~mag$^{**}$ \\
\hline
\end{tabular}
\end{center}
\begin{tabular}{l}
$^*$) internal errors; \, \, \, \, $^{**}$) external errors \\
\end{tabular}
\begin{center}
\begin{tabular}{l}
Astronomical data included:
$\alpha$, $\delta$, $\mu_{\alpha}$ , $\mu_{\delta}$,
B, V, R, r$'$, J \\
\hline
\end{tabular}
\end{center}
}
\end{table}
Besides the original positions given at the epoch of observations,
proper motions and original V values, the
KMAC1 catalogue contains B, R values from the USNO-B1.0
for 83\% of the stars identified; r$'$ values are taken
from the CMC13 for 67\% of stars and J values from the 2MASS catalogue
available for 94\% of stars.
Usual supplementary information, including internal error estimates,
the number and epoch of observations, the image quality index, image size for
extended objects and cross-identification to
the USNO-B1.0 is also given.
Note that a flagged image quality index (see Sect.~2)
may indicate centroiding problems of various origins (e.g. binary or
unresolved stars);
the catalogue positions of such objects may therefore be biased. The unequal number of
observations for RA and DEC data means that a rejection of bad
measurements was applied which also may indicate certain problems with
image quality not marked in the image quality index.
These stars should not be used when very high accuracy of positions is required.
The list of sky strips, the IAU designations of the central ICRF object
and the star numbers $N_T$ and $N_{CU}$
contained, respectively, in the KMAC1-T and KMAC1-CU for each strip,
are given in Table~\ref{destbl}.
Note that $N_T$ is often lower than $N_{CU}$ due to a truncation of sky strip
edges applied to some fields (Sect.~5).
The star number distribution over
fields is highly inhomogeneous and depends on the Galactic latitude.
This distribution as a function of RA is shown in
Fig.~\ref{Gal}.
\begin{table*}[htb]
\caption{\ Fields with ICRF objects and star numbers contained
in the KMAC1-CU and KMAC1-T catalogues}
{\scriptsize
\label{destbl}
\begin{tabular}{rrr|rrr|rrr|rrr}
\hline
Identifier &$N_{CU}$ &$N_T$&Identifier &$N_{CU}$ &$N_T$&Identifier &$N_{CU}$ &$N_T$&Identifier &$N_{CU}$ &$N_T$\rule{0pt}{11pt} \\
\hline
001031.0+105829 & 274 &258 &044907.6+112128 & 462 & 462 & 105829.6+013358 & 179 & -- &164125.2+225704 & 591 & 591\rule{0pt}{11pt}\\
001033.9+172418 & 243 &232 &045952.0+022931 & 602 & 602 & 111358.6+144226 & 164 & -- &165259.3+022358 & 798 & 798 \\
002232.4+060804 & 190 &190 &050145.2+135607 & 366 & 366 & 111857.3+123441 & 162 &162 &165809.0+074127 & 701 & 701 \\
002225.4+001456 & 127 & -- &050523.1+045942 & 621 & 602 & 112027.8+142054 & 139 & -- &165833.4+051516 & 685 & 675 \\
004204.5+232001 & 308 &308 &050927.4+101144 & 715 & 715 & 112553.7+261019 & 219 & 219 &170734.4+014845 &1105 & 1105 \\
005905.5+000651 & 183 & -- &051002.3+180041 & 939 & 903 & 113320.0+004052 & 222 & -- &171521.2+214531 & 734 & 734 \\
010838.7+013500 & 167 & 167 &051601.9+245830 & 985 & 985 & 114505.0+193622 & 220 & 220 &171913.0+174506 & 776 & 763 \\
011205.8+224438 & 239 & 222 &052109.8+163822 & 984 & 984 & 115019.2+241753 & 193 & -- &172824.9+042704 &1483 & 1483 \\
011343.1+022217 & 171 & 171 &053056.4+133155 & 583 & 568 & 115825.7+245017 & 145 & -- &174535.2+172001 &1130 & 1087 \\
012141.5+114950 & 196 & 194 &053238.9+073243 & 741 & 741 & 115931.8+291443 & 131 & 131 &175132.8+093900 &1305 & 1290 \\
012156.8+042224 & 170 & 159 &053942.3+143345 &1376 & 1338 & 121923.2+054929 & 213 & 213 &175342.4+284804 & 936 & 936 \\
012642.7+255901 & 303 & 303 &054734.1+272156 &1402 & 1375 & 122006.8+291650 & 115 & -- &175559.7+182021 &1256 & 1153 \\
013027.6+084246 & 223 & 223 &055704.7+241355 &1998 & 1927 & 122503.7+125313 & 192 & 192 &182402.8+104423 &2426 & 2426 \\
014922.3+055553 & 188 & 182 &055932.0+235353 & 925 & -- & 122906.6+020308 & 198 & -- &183250.1+283335 &1419 & 1419 \\
015127.1+274441 & 360 & 327 &060309.1+174216 & 768 & 768 & 123049.4+122328 & 180 & 180 &185802.3+031316 & 301 & 301 \\
015218.0+220707 & 257 & 257 &060351.5+215937 &1911 & 1776 & 123924.5+073017 & 197 & 197 &191306.8+013423 &3941 & 3941 \\
020346.6+113445 & 192 & 192 &061350.1+260436 &2077 & 1836 & 125438.2+114105 & 164 & 164 &191254.2+051800 &1600 & 1600 \\
020434.7+090349 & 176 & 176 &061357.6+130645 &2128 & 2128 & 130020.9+141718 & 184 &147 &192218.6+084157 &2755 & 2755 \\
020450.4+151411 & 195 & 195 &064524.0+212151 &1488 & 1488 & 130933.9+115424 & 189 &-- &192559.6+210626 &1755 & 1755 \\
021748.9+014449 & 247 & 159 &070001.5+170921 &1723 & 1723 & 132700.8+221050 & 180 &180 &192840.8+084848 &2682 & 2682 \\
022428.4+065923 & 148 & -- &072516.8+142513 &1121 & 1121 & 133037.6+250910 & 150 &-- &193124.9+224331 &1701 & 1701 \\
023145.8+132254 & 268 & 192 &073807.3+174218 & 782 & 782 & 134733.3+121724 & 202 &202 &193435.0+104340 &5024 & 5024 \\
023752.4+284808 & 372 & 350 &073918.0+013704 &1216 & 1175 & 135704.4+191907 & 227 &227 &193648.0+205136 &1405 & 1405 \\
023838.9+163659 & 199 & -- &074533.0+101112 & 840 & 840 & 140501.1+041535 & 280 &280 &194606.2+230004 &2414 & 2414 \\
024229.1+110100 & 258 & -- &074625.8+254902 & 578 & 559 & 140700.3+282714 & 365 &-- & 195005.5+080713 &2668 & 2668 \\
025927.0+074739 & 201 & 188 &075052.0+123104 & 759 & 756 & 141154.8+213423 & 318 &249 & 203154.9+121941 &1739 & 1700 \\
030230.5+121856 & 166 & -- &075706.6+095634 & 708 & 708 & 141558.8+132023 & 170 &170 & 210138.8+034131 & 608 & 608 \\
030826.2+040639 & 189 & 172 &080757.5+043234 & 672 & 513 & 141908.1+062834 & 295 &291 & 210841.0+143027 & 846 & 779\\
030903.6+102916 & 202 & 202 &081126.7+014652 & 724 & 717 & 141959.2+270625 & 204 &184 & 211529.4+293338 &1858 & 1793\\
031951.2+190131 & 349 & 287 &082550.3+030924 & 431 & 431 & 142440.5+263730 & 193 &193 & 212313.3+100754 & 604 & 604\\
032153.1+122113 & 177 & -- &083052.0+241059 & 343 & 303 & 142700.3+234800 & 193 &174 & 212344.5+053522 & 622 & 622 \\
032536.8+222400 & 293 & 293 &083148.8+042939 & 451 & 447 & 143439.7+195200 & 209 &196 & 213032.8+050217 & 478 & 478 \\
032635.3+284255 & 361 & 361 &084205.0+183540 & 364 & 282 & 144516.4+095836 & 283 &283 & 213638.5+004154 & 464 & 438 \\
032957.6+275615 & 525 & 494 &085448.8+200630 & 349 & 244 & 150424.9+102939 & 267 & 259 &213901.3+142335 & 607 & 607 \\
033409.9+022609 & 182 & 182 &090910.0+012135 & 312 & -- & 150506.4+032630 & 239 & 239 &214710.1+092946 & 433 & 433 \\
033647.2+003516 & 252 & -- &091437.9+024559 & 307 & -- & 151340.1+233835 & 271 & 254 &214805.4+065738 & 448 & 448 \\
033717.1+013722 & 241 & 241 &091552.4+293324 & 223 & 196 & 151656.7+193212 & 272 & 272 &215137.8+055212 & 330 & 330 \\
034328.8+045802 & 211 & 211 &095456.8+174331 & 217 & -- & 153452.4+013104 & 496 & 496 &221205.9+235540 & 734 & 696 \\
034423.1+155943 & 219 & -- &095649.8+251516 & 177 & 177 & 154049.4+144745 & 310 & 293 &223236.4+114350 & 348 & 348 \\
034506.4+145349 & 251 & -- &100741.4+135629 & 218 & 218 & 154929.4+023701 & 448 & 448 & 223622.4+282857 & 601 & 581 \\
040305.5+260001 & 717 & 717 &101353.4+244916 & 192 & -- & 155035.2+052710 & 302 & -- & 225307.3+194234 & 395 & 395 \\
040922.0+121739 & 256 & -- &101447.0+230116 & 156 & 156 & 155930.9+030448 & 383 & 374& 225357.7+160853 & 278 & 241 \\
041243.6+230505 & 473 & 418 &102010.0+104001 & 148 & 148 & 160332.0+171155 & 297 & 297& 225717.5+024317 & 202 & -- \\
042446.8+003606 & 375 & 375 &102444.8+191220 & 168 & 165 & 160846.2+102907 & 412 & 412& 232044.8+051349 & 290 & 266 \\
042655.7+232739 & 397 & 388 &103334.0+071126 & 168 & 168 & 160913.3+264129 & 336 & 336& 232159.8+273246 & 450 & 422 \\
042952.9+272437 & 189 & 189 &104117.1+061016 & 163 & -- & 161637.5+045932 & 461 & 369& 233040.8+110018 & 328 & 323 \\
043103.7+203734 & 428 & -- &104244.6+120331 & 166 & 166 & 161903.6+061302 & 538 & 538& 234029.0+264156 & 469 & 422 \\
043337.8+290555 & 681 & 681 &105148.7+211952 & 205 & -- & 162439.0+234512 & 399 & 399& 234636.8+093045 & 262 & -- \\
\hline
\end{tabular}
}
\end{table*}
\begin{figure}[tbh]
\includegraphics*[width=8.0cm]{2573fg21.eps}
\caption{Distribution of the KMAC1 star number per field as a function of right ascension}
\label{Gal}
\end{figure}
The distribution of stars by magnitude (Fig.~\ref{figdis})
shows that the catalogue limiting magnitude is near V=17.0~mag.
Note that beyond this limit, some of the faint
V$>$17~mag objects in the catalogue may
be artifacts appearing due to the low detection
threshold. A substantial fraction of very faint stars
was not identified with any of the 2MASS, USNO-A2.0 or
USNO-B1.0 objects, which could be related to either variability
of stars or false detection. Thus, while 99.3\% of stars to 16~mag
were found in one of the major catalogues,
this ratio drops to 77\% for V$>17$~mag stars, and for yet
fainter V$>17.5$~mag stars the ratio
decreases to 56\%.
Nevertheless,
faint stars were not excluded from the catalogue because
of the very low probability of false detections since each star
was observed in at least two CCD scans.
A very powerful indicator of detection reliability is the number
of times the star was observed. Thus, among V$>17.5$~mag stars
observed at least 3 times, the ratio of identifications
with external catalogues is 98.3\%.
\begin{figure}[tbh]
\includegraphics*[width=8.0cm]{2573fg22.eps}
\caption{Distribution of the KMAC1-T (solid) and KMAC1-CU (dashed)
star magnitudes
}
\label{figdis}
\end{figure}
The internal precision of the catalogue was estimated
in a somewhat unconventional way, by comparing
CCD positions and instrumental magnitudes of stars on the nights
when they were observed. The comparison was made with nightly
scans transformed to the equivalent scan system (Sect.~3.2). Note
that an important feature of this transformation procedure
is a scan to scan fitting that works as a filter which completely
removes any systematic differences \emph{between} scans,
leaving only random components. This is the reason why
the internal precision can be estimated in the way discussed with
no use of equatorial positions and V magnitudes computed for each
night (these data were never computed).
Results presented in Fig.~\ref{intacc} show the equal accuracy of
RA and DEC positions.
\begin{figure}[tbh]
\includegraphics*[width=8.0cm]{2573fg23.eps}
\caption{Internal mean accuracy of one catalogue entry
as a function of magnitude}
\label{intacc}
\end{figure}
\subsection{External verification of the catalogue}
External verification of the KMAC1 positional accuracy was performed using
the CMC13 and UCAC2 which are the only all-sky sources of
present epoch positions available for faint stars.
We computed
the r.m.s. differences of KMAC1 positions
with the CMC13 and UCAC2 (Fig.~\ref{acc}).
The plots that refer to catalogue versions "T" and "CU" are very similar.
The increase
of errors at the bright V$<$12~mag end is caused
by oversaturation of images and affects primarily declinations.
For fainter magnitudes, the precision of RA and
DEC is the same, which is evidence of the good efficiency
of various calibrations applied to improve declination data.
Plots shown in Fig.~\ref{acc} include errors of comparison
catalogues and so mark the upper limit of KMAC1 errors.
More correct estimation of KMAC1 errors requires use of information
on the quality of the comparison catalogues.
The data on the external accuracy of the CMC13 is given by
Evans et al. (\cite{evans}) and formal (internal) errors
of the UCAC2 are given by Zacharias et al. (\cite{ucac}).
With this information,
we estimated external KMAC1 positional errors
(Fig.~\ref{ext}) separately for the "T" and "CU" catalogue versions.
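Under the assumption of uncorrelated errors, this estimation corresponds
to the quadrature relation
$\sigma_{\rm KMAC1}^2 \approx \sigma_{\rm diff}^2 - \sigma_{\rm cmp}^2$,
where $\sigma_{\rm diff}$ is the r.m.s. KMAC1$-$comparison difference and
$\sigma_{\rm cmp}$ the comparison catalogue error.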
The worse quality of the KMAC1-T catalogue is due to problems
with reduction to the ICRF system in short CCD scans containing
a limited sample of Tycho2 reference stars. Note that
external and internal (Fig.~\ref{intacc})
errors of the KMAC1-CU are almost equal, which
indicates a very good referencing of instrumental positions to the
equatorial system.
\begin{figure}[tbh]
\centerline{%
\begin{tabular}{@{}c@{}}
\includegraphics*[height=120pt, width=8.0cm]{2573fg24.ps} \\
\includegraphics*[height=120pt, width=8.0cm]{2573fg25.ps}
\end{tabular}}
\caption{ R.m.s. residuals of the KMAC1 positions
with the CMC13 and UCAC2; upper panel -- for the version "T";
bottom -- for the version "CU"}
\label{acc}
\end{figure}
\begin{figure}[tbh]
\includegraphics*[width=8.0cm]{2573fg26.ps}
\caption{External errors of the KMAC1 ("T" and "CU" versions)
positions (curves) and photometry (open circles)}
\label{ext}
\end{figure}
We were not able to perform a direct external verification
of the KMAC1 magnitudes because of the lack of all-sky reference photometry
for faint stars in the V band. Comparison to the Valinhos
photometry (Camargo et al. \cite{camargo}) was performed for
a 1\% subset of catalogue stars and yielded about a 0.1~mag error
estimate (see Sect.~4).
External photometric errors (Fig.~\ref{ext}) were found considering
the dispersion of points in Fig.~\ref{m5}, which are the deviations
of V-r$'$ residuals from the color calibration curve
(\ref{eq:m4}).
No subtraction of CMC13 photometric errors was applied since
for magnitudes fainter than V=16 these errors
were found to exceed the r.m.s. KMAC1-CMC13 differences.
Thus, at V=16.4~mag (16.0~mag in the r$'$ band)
the r.m.s. KMAC1-CMC13 differences are equal to 0.12~mag while
the CMC13 external error is 0.17~mag. Probably, the quality of the CMC13
photometry is better than cited.
Magnitude-dependent systematic errors of KMAC1-CU and KMAC1-T
do not exceed $\pm 20$~mas and $\pm 40$~mas respectively
(Fig.~\ref{syst}). No clear dependency on magnitude is seen
except for a positive hump of plots at V$\approx$16~mag, and
a negative downtrend for the version KMAC1-T in DEC at the
bright V$<$11 end.
\begin{figure}[tbh]
\centerline{%
\begin{tabular}{@{}c@{}}
\includegraphics*[height=130pt, width=8.0cm]{2573fg27.ps} \\
\includegraphics*[height=130pt, width=8.0cm]{2573fg28.ps}
\end{tabular}}
\caption{Systematic differences $\Delta \alpha$ and $\Delta \delta $
of the KMAC1-T and
KMAC1-CU positions to those in the CMC13 and UCAC2 as a function
of magnitude}
\label{syst}
\end{figure}
Individual differences between the KMAC1-T star positions and
positions in the comparison catalogues CMC13 and UCAC2 are shown
in Fig.~\ref{indiv}. We present the worst comparison in DEC
and the "T" catalogue version;
slightly better plots can be obtained for RA and
the KMAC1-CU catalogue version. No systematic trend of
the individual differences
with magnitude is observed.
\begin{figure}[tbh]
\begin{tabular}{@{}c@{}}
\includegraphics*[bb = 32 465 546 818, height=120pt, width=8.0cm]{2573fg29.eps} \\
\includegraphics*[bb = 44 467 566 825, height=120pt, width=8.0cm]{2573fg30.eps}
\end{tabular}
\caption{Individual differences of declinations in the KMAC1-T and
in CMC13 and UCAC2}
\label{indiv}
\end{figure}
\section{Conclusion}
The aim of this work was to obtain a catalogue of faint stars
in sky areas with ICRF objects whose declinations are
optimal for observations with the MAC. The catalogue contains positions
of faint V$<$17~mag objects referred to the optical Hipparcos-Tycho
reference frame and thus represents an extension of the ICRF to the
optical domain.
The catalogue described in this Paper is the first catalogue obtained
with the Kyiv meridian axial circle after it was refurbished with a CCD
camera. Realization of this project involved development
of special software for image processing, astrometric calibration
for instrumental errors, etc. A quite unexpected finding was that the measured
data (especially the DEC component) are strongly
affected by systematic errors even when star images have a relatively good
shape. A solution to this problem was found in extensive use of external
astrometric catalogues for calibrations.
Another difficulty arose from underestimation of the Tycho2
errors at the present epoch and from the inhomogeneous sky
distribution of the catalogue stars.
As a result, scan lengths turned out to be too short
to allow a rigorous reduction to the ICRF,
which forced us to use other catalogues (CMC13 and UCAC2) for referencing.
The use of the Tycho2
catalogue for astrometric work in small fields of about $0.5\times0.5^{\circ}$
or less is thus problematic, and feasible only in some sky areas.
\label{sec:intro}
In the context of fault-tolerant quantum computing, operations from
the Clifford group are relatively easy to perform and are therefore
considered inexpensive. In contrast, operations that do not belong to
the Clifford group are complicated to execute fault-tolerantly because
they require resource intensive distillation protocols
\cite{reichardt}. Since non-Clifford operations are necessary for
universal quantum computing, it has become standard to use the number
of non-Clifford gates in a circuit as a measure of its cost. This
fault-tolerant perspective on the cost of circuits has profoundly
impacted the field of quantum compiling and significant efforts have
been devoted to minimizing the number of non-Clifford operations in
circuits.
An important problem in quantum compiling is the problem of
\emph{exact synthesis}: given an operator $U$ known to be exactly
representable over some gate set $G$, find a circuit for $U$ over
$G$. An \emph{exact synthesis algorithm} is a constructive solution to
this problem. When the gate set $G$ is an extension of the Clifford
group, it is desirable that the exact synthesis algorithm for $G$ be
efficient and produce circuits that use as few non-Clifford gates as
possible.
In the past few years, methods from algebraic number theory have been
successfully applied to the exact synthesis problem associated to a
variety of single-qubit \cite{bbg2014,PhysRevA.88.012313,FGKM15,
kmm-exact,KY2015, vsynth, RS16} and single-qutrit \cite{BCKZ15,
2271, KBS2013,PhysRevA.98.032304} gate sets. In many cases, the
resulting exact synthesis algorithms efficiently produce circuits that
are \emph{optimal}, in the sense that they use the least possible
number of non-Clifford gates. These powerful exact synthesis methods
were central in the development of good unitary approximation methods,
which play a key role in the compilation of practical quantum programs
\cite{bbg2014,PhysRevA.88.012313, KBRY2015, kmm-approx,vsynth, RS16}.
Exact synthesis algorithms also exist for various instantiations of
the multi-qubit compiling problem, though each suffers shortcomings in
some respect. Optimal algorithms for two-qubit circuits over
continuous gate sets have been known for a number of years
\cite{PhysRevA.69.062321,PhysRevA.67.042313}. Unfortunately, such gate
sets are not well-suited for fault-tolerant quantum
computing. Multi-qubit exact synthesis algorithms for universal and
fault-tolerant gate sets were introduced more recently
\cite{restricted,m3,GS13,m4,CH2018,m2, MSdM2018,m1}. While the
algorithms of \cite{restricted,GS13} are far from optimal, the
algorithms of \cite{m3,m4,m2,m1} synthesize provably optimal circuits
by cleverly utilizing certain properties of fault-tolerant gate sets
containing the Clifford group. However, the runtimes of these optimal
synthesis algorithms are exponential in both qubit count and optimal
circuit length. Powerful heuristics were introduced in \cite{m1}
achieving polynomial scaling with optimal circuit
length. Unfortunately, even this improved heuristic algorithm takes
thousands of seconds to compute optimal two-qubit circuits of
practical size (40 non-Clifford operations) on modest hardware.
Not only are these multi-qubit exact synthesis algorithms impractical
in many cases, they also fail to shed much light on the
\emph{structure} of optimal circuits. In the single-qubit case,
intimate knowledge of this structure for certain gate sets was
developed by describing optimal circuits via regular expressions or,
equivalently, automata \cite{ma-remarks}. Such descriptions are of
theoretical interest, but also have practical consequences. In
particular, for certain single-qubit gate sets these descriptions
allowed researchers to derive a rigorous lower-bound on the number of
non-Clifford gates required to approximate typical elements of
$\su(2)$ \cite{Sel2015}. Analogous statements about approximations of
multi-qubit unitaries have eluded researchers thus far.
In the present paper, we introduce an efficient and optimal exact
synthesis algorithm for a two-qubit gate set that is appropriate for
universal and fault-tolerant quantum computing. We focus on two-qubit
circuits over the Clifford+$CS$ gate set, which consists of the
Clifford gates together with the non-Clifford controlled-phase gate
$CS=\diag(1,1,1,i)$. The $CS$ gate has received recent attention as an
alternative to the $T$-gate in methods for fault-tolerant quantum
computing \cite{beverland2020lower,haah2018codes} and due to its
natural implementation as an entangling operation in certain
superconducting qubit systems
\cite{cross2016scalable,garion2020synthesis,garion2020experimental,PhysRevA.93.060302}
whose fidelity is approaching that of single-qubit gates
\cite{PhysRevLett.125.120504,PhysRevLett.125.240503}. Our algorithm
produces an optimal circuit in a number of arithmetic operations
linear in the length of the optimal decomposition. This is unlike
existing multi-qubit synthesis methods. Moreover, because our
algorithm is deterministic, the circuit it associates to a
Clifford+$CS$ operator can be viewed as a normal form for that
operator. We give an explicit description of these normal forms in the
language of automata and use this description to derive a worst-case
lower bound of $5\log_2(\frac{1}{\epsilon})+O(1)$ on the number of
$CS$ gates required to $\epsilon$-approximate elements of $\su(4)$. A
Mathematica package implementing our algorithm is freely available
on-line \cite{thecode}. This code is very efficient, synthesizing
optimal circuits of $CS$-count 10000 in $1.2\pm0.1$ seconds on modest
hardware.
The paper is structured as follows. We first introduce a convenient
set of generators in \cref{sec:gens}. Then, in \cref{sec:iso}, we
describe the exceptional isomorphism
$\mbox{SU}(4)\cong\mbox{Spin}(6)$. In \cref{sec:synth}, we leverage
this isomorphism to introduce an exact synthesis algorithm for
Clifford+$CS$ operators. In \cref{sec:nfs}, we use the theory of
automata to study the structure of the circuits produced by the exact
synthesis algorithm. We take advantage of this structure in
\cref{sec:lowerbounds} to establish a worst-case lower bound on the
number of non-Clifford resources required to $\epsilon$-approximate
elements of $\su(4)$ using Clifford+$CS$ circuits. Finally, we
conclude and discuss avenues for future work in \cref{sec:conc}.
\section{Generators}
\label{sec:gens}
Throughout, we use $\N$, $\Z$, $\R$, and $\C$ to denote the usual
collection of numbers, $\Z_p$ to denote the collection of integers modulo
$p$, and $\Zi$ to denote the collection of Gaussian integers (the
complex numbers with integer real and imaginary parts). We write
$\rho$ for the canonical homomorphism $\Z \to \Z_2$ (if $n\in\Z$ then
$\rho(n)$ is the parity of $n$). For two integers $n\leq m$, we write
$[n,m]$ for the set $\s{n,\ldots,m}\subseteq \Z$ and simply write
$[m]$ for $[1,m]$. We view scalars and vectors as matrices so that any
concept defined for matrices of arbitrary dimensions also applies to
scalars and vectors. Finally, for readability, we use the symbol
$\cdot$ to denote the zero entries of a matrix.
The single-qubit \emph{Pauli} gates $X$, $Y$, and $Z$ are defined as
\[
X= \begin{bmatrix} \cdot & 1 \\ 1 & \cdot \end{bmatrix},
\qquad
Y= \begin{bmatrix} \cdot & -i \\ i & \cdot \end{bmatrix},
\qquad \mbox{ and } \qquad
Z = \begin{bmatrix} 1 & \cdot \\ \cdot & -1 \end{bmatrix}.
\]
These gates generate the \emph{single-qubit Pauli group} $\s{i^a P ~;~
a\in\Z_4 \mbox{ and } P\in\s{I, X, Y, Z}}$. The \emph{two-qubit
Pauli group}, which we denote by $\pauli$, is defined as $\pauli =
\s{i^a (P\otimes Q) ~;~ a\in\Z_4 \mbox{ and } P,Q \in
\s{I,X,Y,Z}}$. The \emph{Clifford} gates $H$, $S$, and $CZ$ are
defined as
\[
H= \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},
\quad
S= \begin{bmatrix} 1 & \cdot \\ \cdot & i \end{bmatrix},
\qquad \mbox{ and } \qquad
CZ =
\begin{bmatrix}
1 & \cdot & \cdot & \cdot \\
\cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & 1 & \cdot \\
\cdot & \cdot & \cdot & -1
\end{bmatrix}.
\]
These gates are known as the \emph{Hadamard} gate, the \emph{phase}
gate, and the \emph{controlled-$Z$} gate, respectively. The
\emph{single-qubit Clifford group} is generated by $H$ and $S$ and
contains the primitive 8-th root of unity $\omega =
e^{\frac{i\pi}{4}}$. The \emph{two-qubit Clifford group}, which we
denote by $\clifford$, consists of the operators which can be
represented by a two-qubit circuit over the gate set $\s{H, S,
CZ}$. Equivalently, $\clifford$ is generated by $H\otimes I$,
$I\otimes H$, $S \otimes I$, $I\otimes S$, and $CZ$. Up to global
phases, the Clifford groups are the normalizers of the Pauli groups.
Clifford gates are well-suited for fault-tolerant quantum computation
but the Clifford group is not universal. One can obtain a universal
group by extending $\clifford$ with the \emph{controlled-phase gate}
$CS$ defined as
\[
CS = \begin{bmatrix}
1 & \cdot & \cdot & \cdot \\
\cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & 1 & \cdot \\
\cdot & \cdot & \cdot & i \\
\end{bmatrix}.
\]
In what follows, we focus on the group $\cliffordcs$ of operators
which can be represented by a two-qubit circuit over the universal
gate set $\s{H, S, CZ, CS}$. Equivalently, $\cliffordcs$ is the group
generated by $H\otimes I$, $I\otimes H$, $S \otimes I$, $I\otimes S$,
$CZ$, and $CS$. We have $\pauli\subseteq\clifford \subseteq
\cliffordcs$. We sometimes refer to $\cliffordcs$ as the
\emph{Clifford+$CS$} group or \emph{Clifford+controlled-phase}
group. We know from \cite{restricted} that $\cliffordcs$ is the group
of $4\times4$ unitary matrices of the form
\begin{equation}
\label{eq:u4rep}
\frac{1}{\sqrt{2}^k} M
\end{equation}
where $k\in\N$ and the entries of $M$ belong to $\Zi$. In the
fault-tolerant setting, the $CS$ gate is considered vastly more
expensive than any of the Clifford gates. As a result, the cost of a
Clifford+$CS$ circuit is determined by its \emph{$CS$-count}: the
number of $CS$ gates that appear in the circuit. Our goal is to find
circuits for the elements of $\cliffordcs$ that are optimal in
$CS$-count.
We start by introducing a generalization of the $CS$ gate which will
be helpful in describing the elements of $\cliffordcs$.
\begin{definition}
\label{def:gens}
Let $P$ and $Q$ be distinct elements of $\pauli \setminus \s{\Id}$
such that $P$ and $Q$ are Hermitian and $PQ=QP$. Then $R(P,Q)$ is
defined as
\[
R(P,Q) = \exp \left( \frac{i\pi}{2} \left( \frac{\Id-P}{2} \right)
\left( \frac{\Id-Q}{2} \right) \right).
\]
\end{definition}
We have $R(Z\otimes \Id, \Id\otimes Z)=CS$. Moreover, since
$\clifford$ normalizes $\pauli$ and $CR(P,Q)C^\dagger = R(CPC^\dagger,
CQC^\dagger)$ for every $C\in \clifford$, we know that
$R(P,Q)\in\cliffordcs$ for every appropriate $P,Q\in\pauli$. We record
some important properties of the $R(P,Q)$ gates in the lemma
below. Because the proof of the lemma is tedious but relatively
straightforward, it is given in \cref{app:proof}.
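The definition is easy to check numerically. The following Python
sketch (our own illustration, assuming only NumPy and SciPy; it is not
part of the formal development) builds $R(P,Q)$ from the matrix
exponential and verifies the identity $R(Z\otimes \Id, \Id\otimes
Z)=CS$ stated above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def R(P, Q):
    """R(P,Q) = exp((i*pi/2) * ((I-P)/2) ((I-Q)/2)), cf. Definition (gens)."""
    I4 = np.eye(4)
    return expm((1j * np.pi / 2) * ((I4 - P) / 2) @ ((I4 - Q) / 2))

# Check the claim R(Z (x) I, I (x) Z) = CS = diag(1, 1, 1, i).
CS = np.diag([1, 1, 1, 1j])
assert np.allclose(R(np.kron(Z, I2), np.kron(I2, Z)), CS)
\end{verbatim}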
\begin{lemma}
\label{lem:rels}
Let $C\in\clifford$ and let $P$, $Q$, and $L$ be distinct elements
of $\pauli \setminus \s{I}$. Assume that $P$, $Q$, and $L$ are
Hermitian and that $PQ=QP$, $PL=LP$, and $QL=-LQ$. Then the
following relations hold:
\begin{align}
C R(P,Q)C^\dagger & = R(CPC^\dagger,CQC^\dagger), \label{eq:CliffordCommute}\\
R(P,Q) & = R(Q,P), \label{eq:swappable}\\
R(P,-PQ) & = R(P,Q), \label{eq:permutable}\\
R(P,-Q) & \in R(P,Q) \clifford, \label{eq:minusPauli}\\
R(P,Q)^2 & \in \clifford,\mbox{ and} \label{eq:squared}\\
R(P,L) R(P,Q) & = R(P,Q) R(P,iQL). \label{eq:sharedPauli}
\end{align}
\end{lemma}
We will use the $R(P,Q)$ gates of \cref{def:gens} to define normal
forms for the elements of $\cliffordcs$. The equivalences given by
\cref{lem:rels} show that it is not necessary to use every $R(P,Q)$
gate and the following definition specifies the ones we will be using.
\begin{definition}
\label{def:genset}
Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be the subsets of
$\pauli\times \pauli$ given below.
\begin{align*}
& \mathcal{T}_1 = \s{(P,Q) ~;~ P\in\s{X\otimes I, Y\otimes I, Z\otimes
I}, Q \in \s{I\otimes X, I\otimes Y, I\otimes Z}} \\
& \mathcal{T}_2 = \s{(P,Q) ~;~ P\in\s{X\otimes X, Z\otimes X, Y\otimes
X}, Q \in \s{Y\otimes Y, Z\otimes Y, X\otimes Y}, \mbox{ and }
PQ=QP}.
\end{align*}
The set $\gens$ is defined as $\gens = \s{R(P,Q) ~;~ (P,Q) \in
\mathcal{T}_1 \mbox{ or } (P,Q) \in \mathcal{T}_2}$.
\end{definition}
\begin{figure}
\centering
\begin{tabular}{lllll}
$R(X\otimes I,I\otimes X)$ & $R(Y\otimes I,I\otimes Y)$ &
$R(Z\otimes I,I\otimes Z)$ & $R(Y\otimes I,I\otimes Z)$ &
$R(Z\otimes I,I\otimes Y)$ \\
$R(Z\otimes I,I\otimes X)$ & $R(X\otimes I,I\otimes Z)$ &
$R(X\otimes I,I\otimes Y)$ & $R(Y\otimes I,I\otimes X)$ &
$R(X\otimes X,Y\otimes Y)$ \\
$R(X\otimes X,Z\otimes Y)$ & $R(Z\otimes X,Y\otimes Y)$ &
$R(Y\otimes X,X\otimes Y)$ & $R(Z\otimes X,X\otimes Y)$ &
$R(Y\otimes X, Z\otimes Y)$
\end{tabular}
\caption{The 15 elements of $\gens$.}\label{fig:elems}
\end{figure}
The set $\gens$ contains 15 elements which are explicitly listed in
\cref{fig:elems}. It can be verified that all of the elements of
$\gens$ are distinct, even up to right-multiplication by a Clifford
gate. It will be helpful to consider the set $\gens$ ordered as in
\cref{fig:elems}, which is to be read left-to-right and row-by-row. We
then write $\gens_j$ to refer to the $j$-th element of $\gens$. For
example, $\gens_1$ is in the top left of \cref{fig:elems}, $\gens_5$
is in the top right, and $\gens_{15}$ is in the bottom right. The
position of $R(P,Q)$ in this ordering roughly expresses the complexity
of the Clifford circuit required to conjugate $CS$ to $R(P,Q)$.
We close this section by showing that every element of $\cliffordcs$
can be expressed as a sequence of elements of $\gens$ followed by a
single element of $\clifford$.
\begin{lemma}
\label{lem:gensuff}
Let $P$ and $Q$ be distinct elements of $\pauli \setminus \s{\Id}$
such that $P$ and $Q$ are Hermitian and $PQ=QP$. Then there exists
$P',Q' \in \pauli$ and $C\in\clifford$ such that $R(P',Q')\in\gens$
and $R(P,Q) = R(P',Q')C$.
\end{lemma}
\begin{proof}
Let $P=i^p(P_1\otimes P_2)$ and $Q=i^q(Q_1\otimes Q_2)$ with $P_1,
P_2, Q_1, Q_2 \in\s{I,X,Y,Z}$. Since $P$ and $Q$ are Hermitian, $p$
and $q$ must be even. Moreover, by \cref{eq:swappable,eq:minusPauli}
of \cref{lem:rels}, we can assume without loss of generality that
$p=q=0$ so that $P=P_1\otimes P_2$ and $Q=Q_1\otimes Q_2$. Now, if
one of $P_1$, $P_2$, $Q_1$, or $Q_2$ is $I$, then we can use
\cref{eq:swappable,eq:permutable,eq:minusPauli} of \cref{lem:rels}
to rewrite $R(P,Q)$ as $R(P',Q')C$ with $C\in\clifford$ and
$(P',Q')\in\mathcal{T}_1$ as in \cref{def:genset}. If, instead, none
of $P_1$, $P_2$, $Q_1$, or $Q_2$ are $I$, then we can reason
similarly to rewrite $R(P,Q)$ as $R(P',Q')C$ with $C\in\clifford$
and $(P',Q')\in\mathcal{T}_2$.
\end{proof}
\begin{proposition}
\label{prop:gen}
Let $V\in\cliffordcs$. Then $V = R_1\cdots R_n C$ where
$C\in\clifford$ and $R_j\in\gens$ for $j\in [n]$.
\end{proposition}
\begin{proof}
Let $V\in\cliffordcs$. Then $V$ can be written as $V = C_1 \cdot CS
\cdot C_2 \cdot CS \cdot \ldots \cdot C_n \cdot CS \cdot C_{n+1}$
where $C_j\in\clifford$ for $j \in [n+1]$. Since $CS = R(Z\otimes I,
I\otimes Z)$ we have
\begin{equation}
\label{eq:v}
V = C_1 \cdot R(Z\otimes I, I\otimes Z) \cdot C_2 \cdot R(Z\otimes
I, I\otimes Z) \cdot \ldots \cdot C_n \cdot R(Z\otimes I, I\otimes
Z) \cdot C_{n+1}.
\end{equation}
Now, by \cref{eq:CliffordCommute} of \cref{lem:rels}, $C_1R(Z\otimes
I, I\otimes Z) = C_1R(Z\otimes I, I\otimes Z) C_1^\dagger C_1 =
R(P,Q)C_1$ for some $P,Q\in\pauli$. We can then apply
\cref{lem:gensuff} to get
\[
C_1R(Z\otimes I, I\otimes Z) = R(P,Q)C_1 = R(P',Q')CC_1 = R(P',Q')C'
\]
with $C' = CC_1\in\clifford$ and $R(P',Q')\in\gens$. Hence, setting
$R_1= R(P',Q')$ and $C_2'=C'C_2$, \cref{eq:v} becomes
\[
V = R_1 \cdot C_2' \cdot R(Z\otimes I, I\otimes Z) \cdot \ldots
\cdot C_n \cdot R(Z\otimes I, I\otimes Z) \cdot C_{n+1}
\]
and we can proceed recursively to complete the proof.
\end{proof}
\section{The Isomorphism
\texorpdfstring{$\mbox{SU}(4)\cong \mbox{Spin}(6)$}{SU(4)-Spin(6)}}
\label{sec:iso}
In this section, we describe the exceptional isomorphism $\su(4)\cong
\Spin(6)$ which will allow us to rewrite two-qubit operators as
elements of $\so(6)$. Consider some element $U$ of
$\mbox{SU}(4)$. Then $U$ acts on $\C^4$ by
left-multiplication. Moreover, this action is norm-preserving. Now let
$\s{e_j}$ be the standard orthonormal basis of $\C^4$. From this
basis, we construct an alternative six-component basis using the
\emph{wedge product}.
\begin{definition}[Wedge product]
\label{def:wedge}
Let $a\wedge b$ be defined as the \emph{wedge product} of $a$ and
$b$. Wedge products have the following properties given vectors
$a,b,c\in \C^n$ and $\alpha,\beta\in\C$:
\begin{itemize}
\item Anticommutativity: $a\wedge b = -b \wedge a$.
\item Associativity: $(a\wedge b)\wedge c= a\wedge (b\wedge c)$.
\item Bilinearity: $(\alpha a + \beta b)\wedge c = \alpha (a\wedge
c) + \beta (b\wedge c)$.
\end{itemize}
Note that the anticommutation of wedge products implies that
$a\wedge a=0$. For $v_j\in\C^n$, the wedge product
$v_1\wedge\cdots\wedge v_k$ is an element of $\bigwedge^k
\C^n$. To compute the inner product of two wedge
products $v_1\wedge\cdots\wedge v_k$ and $w_1\wedge\cdots\wedge
w_k$, we compute
\[
\langle v_1\wedge\cdots\wedge v_k, w_1\wedge\cdots\wedge w_k \rangle
= \det\left(\langle v_q,w_r\rangle\right)
\]
where $\langle v_q,w_r\rangle$ is the entry in the $q$-th row and
$r$-th column of a $k\times k$ matrix.
\end{definition}
\begin{remark}
The magnitude of a wedge product of $n$ vectors can be thought of as
the $n$-dimensional volume of the parallelotope constructed from
those vectors. The orientation of the wedge product defines the
direction of circulation around that parallelotope by those vectors.
\end{remark}
The wedge product of two vectors in $\C^4$ can be decomposed into a
six-component basis as anticommutativity reduces the 16 potential
wedge products of elements of $\s{e_j}$ to six. We choose this basis
as
\begin{align}
\label{eq:basis}
B =
\s{s_{-,12,34},s_{+,12,34},s_{-,23,14},s_{+,24,13},s_{-,24,13},s_{+,23,14}}
\end{align}
where
\begin{align}
s_{\pm,ij,kl} = \frac{i^\frac{1\mp 1}{2}}{\sqrt{2}}\left(e_i\wedge
e_j \pm e_k\wedge e_l\right).
\end{align}
We note that $B$ is an orthonormal basis and we assume that $B$ is
ordered as in \cref{eq:basis}.
\begin{definition}
\label{def:action}
Let $U\in \su(4)$ and $\oline{U}$ be its representation in the
transformed basis. Let $v,w\in\C^4$ with $v\wedge
w\in\bigwedge^2\C^4$. Then the actions of $U$ and $\oline{U}$ are
related by
\[
\oline{U} (v\wedge w) = (U v)\wedge(U w).
\]
\end{definition}
To avoid confusion, we use an overline, as in $\oline{O}$, to denote
the $\so(6)$ representation of an operator or set of operators $O$. We
are now equipped to define the transformation from $\su(4)$ to
$\so(6)$.
\begin{definition}
\label{def:isom}
Let $U\in \mbox{SU}(4)$ and let $j,k\in[6]$. Then the entry in the
$j$-th row and $k$-th column of the $\so(6)$ representation
$\oline{U}$ of $U$ is
\begin{align}
\label{eq:rep}
\oline{U}_{j,k} = \langle B_j, \oline{U} B_k\rangle
\end{align}
where $ B_j$ is the $j$-th element in the ordered basis $B$, the
action of $\oline{U}$ on $B_k$ is defined by
\cref{def:wedge,def:action}, and the inner product is defined by
\cref{def:wedge}.
\end{definition}
As an illustration of the process specified in \cref{def:isom} we
explicitly calculate the $\so(6)$ representation of a Clifford+$CS$
operator in \cref{app:calculation}. Moreover, we provide code to
compute this isomorphism for any input with our Mathematica package
\cite{thecode}.
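For readers who prefer a self-contained reference, the following
Python sketch computes \cref{eq:rep} directly from
\cref{def:wedge,def:action}. It is an independent illustration rather
than the implementation of \cite{thecode}, and the function names are
ours: \texttt{compound} applies $U$ to the $e_i\wedge e_j$ coordinates
and \texttt{wedge\_basis} encodes the ordered basis $B$ of
\cref{eq:basis}.
\begin{verbatim}
import numpy as np
from itertools import combinations

PAIRS = list(combinations(range(4), 2))   # ordered basis e_i ^ e_j, i < j

def compound(U):
    """Action of U on wedge products, (U v) ^ (U w), in e_i ^ e_j coordinates."""
    C = np.zeros((6, 6), dtype=complex)
    for a, (i, j) in enumerate(PAIRS):
        for b, (k, l) in enumerate(PAIRS):
            C[a, b] = U[i, k] * U[j, l] - U[i, l] * U[j, k]
    return C

def wedge_basis():
    """Columns: the ordered basis B, expressed in e_i ^ e_j coordinates."""
    def e(i, j):
        v = np.zeros(6, dtype=complex)
        v[PAIRS.index((i, j))] = 1
        return v
    s = 1 / np.sqrt(2)
    return np.column_stack([
        1j * s * (e(0, 1) - e(2, 3)),   # s_{-,12,34}
             s * (e(0, 1) + e(2, 3)),   # s_{+,12,34}
        1j * s * (e(1, 2) - e(0, 3)),   # s_{-,23,14}
             s * (e(1, 3) + e(0, 2)),   # s_{+,24,13}
        1j * s * (e(1, 3) - e(0, 2)),   # s_{-,24,13}
             s * (e(1, 2) + e(0, 3)),   # s_{+,23,14}
    ])

def so6_rep(U):
    """SO(6) representation of U in SU(4); imaginary parts vanish up to rounding."""
    B = wedge_basis()
    return np.real_if_close(B.conj().T @ compound(U) @ B)
\end{verbatim}
Applying \texttt{so6\_rep} to the diagonal operator of the remark
below reproduces the displayed $\so(6)$ matrix.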
\begin{remark}
The fact that this isomorphism yields special orthogonal operators
is ultimately due to the fact that the Dynkin diagrams for the Lie
algebras of $\su(4)$, $\Spin(6)$, and $\so(6)$ are
equivalent. However, this fact can be easily illustrated through the
Euler decomposition of $\su(4)$ \cite{tilma2002generalized}. Direct
calculation of $\oline{U}$ for the operator
\[
U = \begin{bmatrix}
1 & \cdot & \cdot & \cdot \\
\cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \alpha & \cdot\\
\cdot & \cdot & \cdot & \alpha^*
\end{bmatrix}
\]
for $|\alpha|=1$ and $\alpha = r+ic$ with $r,c\in\R$ yields
\[
\oline{U}= \begin{bmatrix}
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & r & \cdot & \cdot & c\\
\cdot & \cdot & \cdot & r & c & \cdot\\
\cdot & \cdot & \cdot & -c & r & \cdot\\
\cdot & \cdot & -c & \cdot & \cdot & r
\end{bmatrix}
\]
which is explicitly in $\so(6)$. Computation of the other 14 Euler
angle rotations required for an $\su(4)$ parameterization yields
similar matrices, likewise in $\so(6)$. Since $\so(6)$ is a group
under multiplication, the isomorphism applied to any $U\in\su(4)$
yields $\oline{U}\in\so(6)$.
\end{remark}
We close this section by explicitly calculating the $\so(6)$
representation of each of the generators of $\cliffordcs$. We multiply
the generators by overall phase factors to ensure that each operator
has determinant one, and furthermore that single-qubit operators have
determinant one on their single-qubit subspace. Later, when referring
to gates or their $\so(6)$ representation, we omit overall phases for
readability.
\begin{proposition}
\label{prop:imcliffords}
The images of the generators of $\clifford$ in $\so(6)$ are
\[
\begin{array}{rclcrcl}
\oline{(\omega^\dagger S)\otimes\Id} & = & \begin{bmatrix}
\cdot & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}, & \qquad &
\oline{ \Id\otimes (\omega^\dagger S)} & = & \begin{bmatrix}
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}, \\
~ & ~ \\
\oline{(i H)\otimes\Id} & = & \begin{bmatrix}
\cdot & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}, & \qquad &
\oline{\Id\otimes(i H)} & = & \begin{bmatrix}
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & 1\\
\cdot & \cdot & \cdot & \cdot & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot
\end{bmatrix},
\end{array}
\]
\[
\begin{array}{rcl}
\oline{\omega^\dagger CZ} & = & \begin{bmatrix}
\cdot & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & -1\\
\cdot & \cdot & \cdot & \cdot & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot
\end{bmatrix}.
\end{array}
\]
\end{proposition}
\begin{proposition}
\label{prop:imgens}
The elements of $\oline\gens$ are given in \cref{fig:olineelems}.
\end{proposition}
\begin{figure}
\centering
\[
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & -1 & \cdot & \cdot\\
\cdot & 1 & -1 & \cdot & \cdot & \cdot\\
\cdot & 1 & 1 & \cdot & \cdot & \cdot\\
1 & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & -1\\
\cdot & \cdot & \cdot & \cdot & 1 & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & -1 & \cdot\\
-1 & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & 1\\
\cdot & 1 & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & \cdot & -1 & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & -1\\
\cdot & \cdot & \cdot & 1 & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & 1 & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & 1
\end{bmatrix}
\]
\[
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & -1\\
-1 & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & 1 & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & 1\\
\cdot & \cdot & 1 & \cdot & 1 & \cdot\\
\cdot & \cdot & \cdot & -1 & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & -1 & \cdot & \cdot\\
\cdot & \cdot & 1 & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & -1\\
\cdot & \cdot & \cdot & \cdot & 1 & 1
\end{bmatrix}
\]
\[
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & \cdot & \cdot & -1\\
\cdot & 1 & -1 & \cdot & \cdot & \cdot\\
\cdot & 1 & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & -1 & \cdot\\
\cdot & \cdot & \cdot & 1 & 1 & \cdot\\
1 & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & \cdot & -1 & \cdot\\
\cdot & 1 & -1 & \cdot & \cdot & \cdot\\
\cdot & 1 & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & 1\\
1 & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & \cdot & -1 & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & -1 & \cdot & \cdot\\
-1 & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & -1\\
\cdot & \cdot & \cdot & \cdot & 1 & 1
\end{bmatrix}
\]
\[
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & 1\\
-1 & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & -1 & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & -1 & \cdot & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & -1 & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & 1\\
\cdot & \cdot & 1 & \cdot & 1 & \cdot\\
1 & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & -1 & \cdot & 1 & \cdot\\
\cdot & -1 & \cdot & \cdot & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & \cdot & \cdot & 1\\
\cdot & 1 & \cdot & \cdot & -1 & \cdot\\
\cdot & \cdot & 1 & 1 & \cdot & \cdot\\
\cdot & \cdot & -1 & 1 & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & 1 & \cdot\\
-1 & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}
\]
\[
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & \cdot & -1 & \cdot\\
\cdot & 1 & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & 1\\
\cdot & -1 & \cdot & 1 & \cdot & \cdot\\
1 & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & -1 & \cdot & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & 1\\
\cdot & \cdot & 1 & 1 & \cdot & \cdot\\
\cdot & \cdot & -1 & 1 & \cdot & \cdot\\
-1 & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & -1 & \cdot & \cdot & \cdot & 1
\end{bmatrix}\qquad
\frac{1}{\sqrt 2}\begin{bmatrix}
1 & \cdot & \cdot & \cdot & \cdot & 1\\
\cdot & 1 & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & 1 & \cdot\\
\cdot & -1 & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & -1 & \cdot & 1 & \cdot\\
-1 & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}
\]
\caption{The 15 elements of $\oline\gens$.\label{fig:olineelems}}
\end{figure}
\section{Exact Synthesis}
\label{sec:synth}
In this section, we leverage the isomorphism $\su(4)\cong \Spin(6)$
described in the previous section to find optimal decompositions for
the elements of $\cliffordcs$. We will be working extensively with the
matrix group
\begin{equation}
\label{eq:somatrix}
\matrices=\s{\frac{1}{\sqrt{2}^k}M\in\so(6)~;~ k\in\N,
M\in\Z^{6\times 6}}.
\end{equation}
Note that $\matrices\subseteq \so(6)$. Our interest in $\matrices$
stems from the following observation.
\begin{proposition}
\label{prop:CliffCSinD}
We have $\oline\cliffordcs\subseteq\matrices$.
\end{proposition}
\begin{proof}
The property holds for the generators of $\oline\cliffordcs$ by
\cref{prop:imcliffords,prop:imgens}.
\end{proof}
In the remainder of this section, we prove the converse of
\cref{prop:CliffCSinD} by defining an algorithm which inputs an
element of $\matrices$ and outputs a product of generators. We start
by introducing a few notions that are useful in discussing the
elements of $\matrices$.
\begin{definition}
\label{def:lde}
Let $V\in\matrices$. We say that $\ell\in\N$ is a \emph{denominator
exponent} of $V$ if $\sqrt{2}^\ell V\in\Z^{6\times 6}$. The least
such $\ell$ is the \emph{least denominator exponent} of $V$, which
we denote by $\lde(V)$.
\end{definition}
\begin{lemma}
\label{lem:CScountlower}
Let $U\in\cliffordcs$ and suppose that $\lde(\oline{U})=k$. Then any
Clifford+$CS$ circuit for $U$ has $CS$-count at least $k$.
\end{lemma}
\begin{proof}
The only generators with a factor of $1/\sqrt{2}$ in their $\so(6)$
representation are the elements of $\gens$. Thus, for a least
denominator exponent of $k$ there must be at least $k$ of these
operators, each of which requires a single $CS$ gate.
\end{proof}
\begin{definition}
\label{def:kparity}
Let $V\in\matrices$ and let $\ell$ be a denominator exponent of
$V$. The \emph{$\ell$-residue} of $V$ is the binary matrix
$\rho_\ell(V)\in \Z_2^{6\times 6}$ defined by
\[
(\rho_\ell(V))_{i,j} = \rho((\sqrt{2}^\ell V)_{i,j})
\]
where $\rho :\Z \to \Z_2$ is the canonical (parity) homomorphism.
\end{definition}
The residue matrices introduced in \cref{def:kparity} are important in
the definition of the exact synthesis algorithm. Indeed, the
$\ell$-residue of a Clifford+$CS$ operator $U$ determines the element
of $\gens$ to use in order to reduce the least denominator exponent of
$U$ (although not uniquely, as we discuss below). Similar residue
matrices are used in the study of other fault-tolerant circuits
\cite{restricted,ma-remarks}.
Recall that if $A$ is a set, then a \emph{partition} of $A$ is a
collection of disjoint nonempty subsets of $A$ whose union is equal to
$A$. The set of all partitions of a set $A$ is denoted
$\mathscr{B}_A$. Let $p$ and $p'$ be two partitions of $A$. If every
element of $p$ is a subset of an element of $p'$ then we say that $p'$
is \emph{coarser} than $p$ and that $p$ is \emph{finer} than $p'$.
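In code, the finer-than relation on partitions is a short check; the
following sketch (with our own naming) will be reused below.
\begin{verbatim}
def is_finer(p, q):
    """True if partition p is finer than q, i.e. every block of p
    is contained in some block of q."""
    return all(any(set(a) <= set(b) for b in q) for a in p)
\end{verbatim}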
\begin{definition}
\label{def:pattern}
Let $N\in \Z_2^{6\times 6}$ be a binary matrix with rows
$r_1,\ldots, r_6$ and let $p=\s{p_1,\ldots,p_q}$ be a partition of
the set $[6]$. Then $N$ has the \emph{pattern} $p$ if for any
$p_j$ in $p$ and any $j_1,j_2\in p_j$ we have $r_{j_1}=r_{j_2}$. In
this case we also say that $N$ has a \emph{$|p_1|\times \ldots
\times |p_q|$ pattern}.
\end{definition}
\begin{definition}
\label{def:patternmap}
Let $V\in\matrices$ with $\lde(V)=\ell$. We define the pattern map
$\partition: \matrices\rightarrow \mathscr{B}_{[6]}$ as the function
which maps $V$ to the pattern of $\rho_\ell (V)$. We say that
$p=\partition(V)$ is the pattern of $V$. If $V_1$ and $V_2$ are two
elements of $\matrices$, we say that $V_1$ is \emph{finer} than
$V_2$ or that $V_2$ is \emph{coarser} than $V_1$ if these statements
hold for $\partition(V_1)$ and $\partition(V_2)$.
\end{definition}
\begin{remark}
In a slight abuse of notation, we extend the pattern map to any
valid representation of a Clifford+$CS$ operator. Given a
Clifford+$CS$ operator with $\su(4)$ representation $U$ which can be
written as a word $W$ over the generators and with $\so(6)$
representation $\oline{U}$, we set $\partition(U) = \partition(W) =
\partition(\oline{U})$. This extension is unambiguous after fixing
our transformation from $\su(4)$ to $\so(6)$, as $\partition$ is
insensitive to relative phase changes in $U$. We incorporate all
relational notions described in \cref{def:patternmap} in this
extension.
\end{remark}
We now analyze the image in $\so(6)$ of certain subsets of
$\cliffordcs$. We start by showing that the image of the Clifford
group $\clifford$ is exactly the collection of elements of $\matrices$
with least denominator exponent $0$. In other words, $\oline\clifford$ is the
group of $6$-dimensional signed permutation matrices of determinant $1$.
\begin{lemma}
\label{lem:CliffinD}
Let $V\in\matrices$. Then $\lde(V)=0$ if and only if
$V\in\oline\clifford$.
\end{lemma}
\begin{proof}
The least denominator exponent of $\oline{H\otimes I}$,
$\oline{I\otimes H}$, $\oline{S\otimes I}$, $\oline{I\otimes S}$,
and $\oline{CZ}$ is 0. Thus, if $U\in\clifford$ then
$\lde(\oline{U})=0$. For the converse, let $C_1$ and $C_2$ be the
Clifford operators $(\omega^\dagger S)\otimes I$ and $(H \otimes H)
(\omega^\dagger CZ)(Z \otimes Z)$, respectively. Then
\[
\oline{C_1} =
\begin{bmatrix}
\cdot & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & 1
\end{bmatrix}\quad\mbox{and}\quad
\oline{C_2} =
\begin{bmatrix}
\cdot & \cdot & \cdot & \cdot & \cdot & -1\\
1 & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & \cdot
\end{bmatrix}.
\]
The operators $\oline{C_1}$ and $\oline{C_2}$ generate
$\s{V\in\matrices ~;~ \lde(V)=0}$. Hence, if $V\in\matrices$ and
$\lde(V)=0$ then $V$ can be expressed as a product of the image of
Clifford gates.
\end{proof}
\begin{lemma}
\label{lem:GensinD}
Let $V\in\matrices$. Then $\lde(V)=1$ if and only if $V=\oline{RC}$
for some $R\in\gens$ and some $C\in\clifford$. Furthermore, $V$ has
a $2\times2\times2$ pattern.
\end{lemma}
\begin{proof}
The rows of $V$ have unit norm and are pairwise orthogonal, and
$\sqrt{2}V$ has integer entries. Hence, up to a signed permutation of
rows and columns, there is only one such matrix, namely
\begin{align}
\label{eq:k1denom}
\frac{1}{\sqrt{2}}\begin{bmatrix}
1 & -1 & \cdot & \cdot & \cdot & \cdot\\
1 & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & -1 & \cdot & \cdot\\
\cdot & \cdot & 1 & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & -1\\
\cdot & \cdot & \cdot & \cdot & 1 & 1
\end{bmatrix} = \oline{\gens}_6.
\end{align}
By \cref{prop:gen} the proof is complete, since Clifford operators
correspond to signed permutations by \cref{lem:CliffinD}.
\end{proof}
\begin{lemma}
\label{lem:zerorows}
Let $V\in\matrices$ with $\lde(V)=k\geq 2$. Then $V$ has either a
$2\times2\times2$ or $2\times4$ pattern.
\end{lemma}
\begin{proof}
Let $V\in\matrices$. Since $V$ is orthogonal, we have $V^\dagger V=
I$. Hence, $(\sqrt{2}{}^kV)^\dagger(\sqrt{2}{}^k V)=2^kI$. Since
$k\geq 2$, this implies that the inner product of any column of
$\sqrt{2}{}^k V$ with itself is congruent to 0 modulo 4. Similarly,
the inner product of two distinct columns of $\sqrt{2}{}^k V$ is
congruent to 0 modulo 4. Letting $M=\rho_k(V)$, we then have the
column relations
\begin{align}
\sum_{l} M_{lm}^2 &= 0\mod 4\label{eq:columnodd}\\
\sum_{l} M_{lm} M _{ln} &= 0\mod 2 \mbox{ for } m\neq n\label{eq:rowpair}
\end{align}
as well as analogous row relations. For $x\in\Z$, $x^2=0\mod 4$ if
and only if $x=0\mod2$. Hence, there must be exactly zero or four
odd entries in every column (or row) of $M$ by
\cref{eq:columnodd}. By \cref{eq:rowpair}, we see that the inner
product of any two distinct rows must be even. Up to a permutation
of rows and columns, we can then deduce that $M$ is one of the two
matrices below, which completes the proof.
\begin{align}
\label{eq:k2denom}
\begin{bmatrix}
1 & 1 & 1 & 1 & \cdot & \cdot\\
1 & 1 & 1 & 1 & \cdot & \cdot\\
1 & 1 & 1 & 1 & \cdot & \cdot\\
1 & 1 & 1 & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot
\end{bmatrix}\quad\mbox{or}\quad
\begin{bmatrix}
1 & 1 & 1 & 1 & \cdot & \cdot\\
1 & 1 & 1 & 1 & \cdot & \cdot\\
1 & 1 & \cdot & \cdot & 1 & 1\\
1 & 1 & \cdot & \cdot & 1 & 1\\
\cdot & \cdot & 1 & 1 & 1 & 1\\
\cdot & \cdot & 1 & 1 & 1 & 1
\end{bmatrix}
\end{align}
\end{proof}
\begin{corollary}
\label{cor:rowpair}
Let $V\in\matrices$ with $\lde(V)=k\geq 1$. Then $V$ has either a
$2\times2\times2$ or $2\times4$ pattern.
\end{corollary}
\begin{lemma}
\label{lem:finer}
Let $V\in\matrices$ and assume that $\lde(V)=k\geq 1$. If
$\oline{R}\in\oline\gens$ is finer than $V$, then
$\lde(\oline{R}^\Trans V) = k-1$.
\end{lemma}
\begin{proof}
For simplicity, we assume that $\partition(\oline{R}) = \s{\s{1,2},
\s{3,4}, \s{5,6}}$. The cases in which $\partition(\oline{R})$ is
another pattern are treated similarly. For $j\in[6]$, let $r_j$
denote the rows of $\sqrt{2}^k V$. Since $\partition(V)$ is coarser
than $\partition(\oline{R})$, we have $r_1\equiv r_2$, $r_3\equiv
r_4$, $r_5\equiv r_6$ modulo 2. This implies that $r_1 \pm r_2
\equiv r_3 \pm r_4 \equiv r_5 \pm r_6 \equiv 0$ modulo 2. Hence
\[
\oline{R}^\Trans V = \frac{1}{\sqrt{2}^{k+1}}
\begin{bmatrix}
1 & 1 & \cdot & \cdot & \cdot & \cdot\\
-1 & 1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot & 1 & 1 & \cdot & \cdot\\
\cdot & \cdot & -1 & 1 & \cdot & \cdot\\
\cdot & \cdot & \cdot & \cdot & 1 & 1\\
\cdot & \cdot & \cdot & \cdot & -1 & 1
\end{bmatrix}
\begin{bmatrix}
r_1 \\
r_2 \\
r_3 \\
r_4 \\
r_5 \\
r_6
\end{bmatrix}
=
\frac{1}{\sqrt{2}^{k+1}}
\begin{bmatrix}
r_1 + r_2 \\
r_2 - r_1 \\
r_3 + r_4 \\
r_4 - r_3 \\
r_5 + r_6 \\
r_6 - r_5
\end{bmatrix}
=
\frac{1}{\sqrt{2}^{k-1}}
\begin{bmatrix}
r_1' \\
r_2' \\
r_3' \\
r_4' \\
r_5' \\
r_6'
\end{bmatrix}.
\]
where each $r_j'$ is a vector of integers.
\end{proof}
\begin{lemma}
\label{lem:denomreduce}
Let $V\in\matrices$ with $\lde(V)\geq 1$. Then there exists
$R\in\gens$ such that $\lde(\oline{R}^\Trans V) = \lde(V)-1$.
\end{lemma}
\begin{proof}
By inspection of \cref{fig:olineelems} we see that for every
$2\times 2\times 2$ pattern $q$ there exists $R\in\gens$ such that
$\partition(\oline{R})=q$. As a result, if $\partition(V)$ is a
$2\times 2 \times 2$ or a $2\times 4$ pattern, then there exists
$R\in\gens$ such that $\oline{R}$ has a pattern finer than
$\partition(V)$. By \cref{cor:rowpair}, $\partition(V)$ is in fact a
$2\times 2\times 2$ row-pattern or a $2\times 4$ row-pattern and
thus there exists $R\in\gens$ such that $\oline{R}$ is finer than
$V$. We can then conclude by \cref{lem:finer}.
\end{proof}
\begin{theorem}
\label{thm:DisCliffCS}
We have $\oline{\cliffordcs}=\matrices$.
\end{theorem}
\begin{proof}
$\oline{\cliffordcs}\subseteq\matrices$ by \cref{prop:CliffCSinD}.
We now show $\matrices\subseteq\oline{\cliffordcs}$. Let
$V\in\matrices$. We proceed by induction on the least denominator
exponent of $V$. If $\lde(V)=0$ then, by \cref{lem:CliffinD},
$V\in\oline\clifford$ and therefore $V\in \oline\cliffordcs$. Now if
$\lde(V)=k>0$, let $R$ be the element of $\gens$ with the lowest index
such that $\lde(\oline{R}^\Trans V)=k-1$. Such an element exists by
\cref{lem:denomreduce}. By the induction hypothesis we have
$\oline{R}^\Trans V\in\oline\cliffordcs$ which implies that
$\oline{R}(\oline{R}^\Trans V) = V\in\oline\cliffordcs$.
\end{proof}
The proof of \cref{thm:DisCliffCS} provides an algorithm to decompose
an arbitrary element of $\oline\cliffordcs$ into a product of elements
of $\oline \gens$, followed by an element of $\oline\clifford$. In the
proof, there is freedom in choosing the element of $\oline\gens$ used
to reduce $\lde(\oline{V})$. If there is more than one generator with
a finer pattern than $\oline{V}$, we must make a choice. The ordering
imposed on $\gens$ in \cref{sec:gens} is used to make this choice in a
uniform manner: we always choose the element of $\gens$ of lowest
index. As a result, the exact synthesis algorithm becomes
deterministic. The ambiguity in the choice of generator is a
consequence of the relations given in \cref{lem:rels}. In particular,
we have
\[
R(P,L)R(P,Q)=R(P,Q)R(P,iQL)=R(P,iQL)R(P,L)
\]
and these three distinct sequences of generators denote the same
operator. This is the source of the three-fold ambiguity in choosing a
finer $2\times2\times2$ pattern for a given $2\times 4$ pattern.
We will sometimes refer to the association between elements of $\gens$
and patterns used in the exact synthesis algorithm of
\cref{thm:DisCliffCS} as the \emph{first finer partition} association,
or FFP for short. The association is explicitly described in
\cref{table:FFP}.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|l}
\textbf{Generator} & \textbf{Associated Patterns Under First Finer Partition (FFP)}\\
\hline
$R(X \otimes I,I \otimes X)$ & $\s{\s{1,4},\s{2,3},\s{5,6}},\s{\s{1,4},\s{2,3,5,6}},\s{\s{2,3},\s{1,4,5,6}},\s{\s{5,6},\s{1,2,3,4}}$\\
$R(Y \otimes I,I \otimes Y)$ & $\s{\s{1,3},\s{2,5},\s{4,6}},\s{\s{1,3},\s{2,4,5,6}},\s{\s{2,5},\s{1,3,4,6}},\s{\s{4,6},\s{1,2,3,5}}$\\
$R(Z \otimes I,I \otimes Z)$ & $\s{\s{1,2},\s{3,6},\s{4,5}},\s{\s{1,2},\s{3,4,5,6}},\s{\s{3,6},\s{1,2,4,5}},\s{\s{4,5},\s{1,2,3,6}}$\\
$R(Y \otimes I,I \otimes Z)$ & $\s{\s{1,3},\s{2,6},\s{4,5}},\s{\s{2,6},\s{1,3,4,5}}$\\
$R(Z \otimes I,I \otimes Y)$ & $\s{\s{1,2},\s{3,5},\s{4,6}},\s{\s{3,5},\s{1,2,4,6}}$\\
$R(Z \otimes I,I \otimes X)$ & $\s{\s{1,2},\s{3,4},\s{5,6}},\s{\s{3,4},\s{1,2,5,6}}$\\
$R(X \otimes I,I \otimes Z)$ & $\s{\s{1,6},\s{2,3},\s{4,5}},\s{\s{1,6},\s{2,3,4,5}}$\\
$R(X \otimes I,I \otimes Y)$ & $\s{\s{1,5},\s{2,3},\s{4,6}},\s{\s{1,5},\s{2,3,4,6}}$\\
$R(Y \otimes I,I \otimes X)$ & $\s{\s{1,3},\s{2,4},\s{5,6}},\s{\s{2,4},\s{1,3,5,6}}$\\
$R(X \otimes X,Y \otimes Y)$ & $\s{\s{1,4},\s{2,5},\s{3,6}}$\\
$R(X \otimes X,Z \otimes Y)$ & $\s{\s{1,4},\s{2,6},\s{3,5}}$\\
$R(Z \otimes X,Y \otimes Y)$ & $\s{\s{1,6},\s{2,5},\s{3,4}}$\\
$R(Y \otimes X,X \otimes Y)$ & $\s{\s{1,5},\s{2,4},\s{3,6}}$\\
$R(Z \otimes X,X \otimes Y)$ & $\s{\s{1,5},\s{2,6},\s{3,4}}$\\
$R(Y \otimes X,Z \otimes Y)$ & $\s{\s{1,6},\s{2,4},\s{3,5}}$\\
\end{tabular}
\caption{The elements of $\gens$ and the explicit row patterns
they are associated with under FFP. \label{table:FFP}}
\end{center}
\end{table}
\begin{theorem}
\label{thm:nf}
If $U$ is a Clifford+$CS$ operator such that $\lde(\oline{U})=k$,
then $U$ can be represented by a Clifford+$CS$ circuit of $CS$-count
$k$. This circuit is optimal in $CS$-count and can be constructed in
$\mathcal{O}(k)$ arithmetic operations.
\end{theorem}
\begin{proof}
Let $U$ be as stated. If $k=0$, then $\oline{U}$ belongs to
$\oline\clifford$ and $U$ is therefore a Clifford. If $k>0$, then as in
\cref{thm:DisCliffCS}, there is a unique $R_k\in\gens$ given by FFP
such that $\lde(\oline{R}_k^\Trans\oline{U})=k-1$. By induction on
the least denominator exponent, we have a deterministic synthesis
algorithm to find a sequence such that
\[
\oline{U}=\oline{R}_k\cdots\oline{R}_1\cdot\oline{C}
\]
which then implies that $U = R_k \cdots R_1 C$. Each of these $k$ steps involves a constant number of basic arithmetic operations. This circuit has
$CS$-count $k$, which is optimal by \cref{lem:CScountlower}.
\end{proof}
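For illustration, the synthesis loop in the proof of \cref{thm:nf} can
be sketched in a few lines of Python at the level of $\so(6)$
matrices, reusing \texttt{lde}, \texttt{pattern}, and
\texttt{is\_finer} from the sketches above. The list
\texttt{oline\_gens} is assumed to hold the 15 matrices of
\cref{fig:olineelems} in order; this is a didactic companion to, not a
substitute for, the implementation of \cite{thecode}.
\begin{verbatim}
def synthesize(V, oline_gens):
    """Exact synthesis sketch: peel off FFP generators until the
    remainder has lde 0 (i.e. is the image of a Clifford).
    Returns 1-based generator indices in left-to-right circuit order
    together with the remaining signed permutation matrix."""
    word = []
    while lde(V) > 0:
        pV = pattern(V)
        # FFP: the lowest-index generator whose pattern is finer than V's.
        j = next(j for j, G in enumerate(oline_gens)
                 if is_finer(pattern(G), pV))
        V = oline_gens[j].T @ V      # reduces lde(V) by one
        word.append(j + 1)
    return word, V
\end{verbatim}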
Our Mathematica package \cite{thecode} implements the algorithm
referred to in \cref{thm:nf} as well as a significant amount of other
tools for two-qubit Clifford + $CS$ circuits. Testing of the
performance of this algorithm on a modest device is presented in
\cref{table:performance}.
\begin{table}[b]
\begin{center}
\begin{tabular}{c|c|c}
$CS$-count & Mean Time (s) & Std. Dev. (s)\\
\hline
10 & 0.0138 & 0.0044 \\
100 & 0.0281 & 0.0051 \\
1000 & 0.1135 & 0.0091 \\
10000 & 1.1883 & 0.0897
\end{tabular}
\caption{Performance of the algorithm (in seconds) of
\cref{thm:nf} as implemented in our Mathematica code
\cite{thecode}. Each run has constant overhead from
computing the $\so(6)$ representation for each
unitary. Deviations from linearity are due to arithmetic
operations on increasingly large integers. Each mean and
standard deviation is computed using a sample of 1000 runs
with pseudorandomly generated operators known to have the
given minimal $CS$-count. Times are measured using
Mathematica's in-built \texttt{AbsoluteTiming}
function. Computations performed on a laptop with an
Intel(R) Core(TM) i7 CPU running at 2.6 GHz with 6 cores and
16 GB of RAM running macOS Catalina version
10.15.7.\label{table:performance}}
\end{center}
\end{table}
\section{Normal Forms}
\label{sec:nfs}
In the previous section, we introduced a synthesis algorithm for
Clifford+$CS$ operators. The algorithm takes as input a Clifford+$CS$
matrix and outputs a circuit for the corresponding operator. The
circuit produced by the synthesis algorithm is a word over the
alphabet $\gens\cup\clifford$. Because the algorithm is deterministic,
the word it associates to each operator can be viewed as a normal form
for that operator. In the present section, we use the language of
automata to give a detailed description of the structure of these
normal forms. We include the definitions of some basic concepts from
the theory of automata for completeness. The reader looking for
further details is encouraged to consult \cite{sipser}.
\subsection{Automata}
\label{ssec:automata}
In what follows we sometimes refer to a finite set $\Sigma$ as an
\emph{alphabet}. In such a context, the elements of $\Sigma$ are
referred to as \emph{letters}, $\Sigma^*$ denotes the set of
\emph{words} over $\Sigma$ (which includes the empty word
$\varepsilon$), and the subsets of $\Sigma^*$ are called
\emph{languages over $\Sigma$}. If $w\in\Sigma^*$ is a word over the
alphabet $\Sigma$, we write $|w|$ for the \emph{length} of
$w$. Finally, if $L$ and $L'$ are two languages over an alphabet
$\Sigma$ then their \emph{concatenation} $L\circ L'$ is defined as
$L\circ L' = \s{ww'~;~ w\in L \mbox{ and } w'\in L'}$.
\begin{definition}
\label{def:automaton}
A \emph{nondeterministic finite automaton} is a 5-tuple $\automaton$
where $\Sigma$ and $Q$ are finite sets, $\In$ and $\Fin$ are subsets
of $Q$, and $\delta:Q\times (\Sigma\cup\s{\varepsilon}) \to
\mathscr{P}(Q)$ is a function whose codomain is the power set of
$Q$. We call $\Sigma$ the \emph{alphabet}, $Q$ the set of
\emph{states}, $\In$ and $\Fin$ the sets of \emph{initial} and
\emph{final} states, and $\delta$ the \emph{transition function}.
\end{definition}
\begin{remark}
\cref{def:automaton} is slightly non-standard. Indeed, automata are
typically defined as having a single initial state, rather than a
collection of them. One can then think of \cref{def:automaton} as
introducing a collection of automata: one for each element of
$\In$. Alternatively, \cref{def:automaton} can also be recovered
from the usual definition by assuming that every automaton in the
sense of \cref{def:automaton} in fact has a single initial state
$s_0$ related to the elements of $\In$ by
$\delta(s_0,\varepsilon)=\In$. We chose to introduce automata as in
\cref{def:automaton} because this results in a slightly cleaner
presentation.
\end{remark}
It is common to define an automaton $A=\automaton$ by specifying a
directed labelled graph called the \emph{state graph} of $A$. The
vertices of the graph are labelled by states and there is an edge
labelled by a letter $w\in \Sigma$ between vertices labelled $q$ and
$q'$ if $q'\in\delta(q,w)$. The initial and final states are
distinguished using arrows and double lines, respectively. For
brevity, parallel edges are drawn only once, with their labels
separated by a comma.
\begin{example}
\label{ex:aut}
The state graph for a nondeterministic finite automaton
$A=(\Sigma,Q,\In,\Fin,\delta)$ is depicted below.
\begin{center}
\begin{tikzpicture}[shorten >=1pt,node distance=2cm,on grid,auto]
\node[initial,state,initial text=] (q_0) {$q_0$};
\node[state] (q_1) [right=of q_0] {$q_1$};
\node[state] (q_2) [right=of q_1] {$q_2$};
\node[state,accepting] (q_3) [right=of q_2] {$q_3$};
\path[->]
(q_0) edge node {1} (q_1)
edge [loop above] node {0,1} ()
(q_1) edge node {0,1} (q_2)
(q_2) edge node {0,1} (q_3);
\end{tikzpicture}
\end{center}
Here, $Q=\s{q_0, q_1, q_2, q_3}$, $\Sigma=\s{0,1}$, the collection
of initial states is $\In=\s{q_0}$, the collection of final states
is $\Fin=\s{q_3}$, and we have, e.g.,
$\delta(q_0,1)=\s{q_0,q_1}$.
\end{example}
An automaton $A=\automaton$ can be used to specify a language
$\lang(A)\subseteq \Sigma^*$. Intuitively, $\lang(A)$ is the
collection of all the words over $\Sigma$ that specify a well-formed
walk along the state graph of $A$. The following definition makes this
intuition more precise.
\begin{definition}
\label{def:language}
Let $A=\automaton$ be an automaton. Then $A$ \emph{accepts} a word
$w=w_1\cdots w_m\in\Sigma^*$ if there exists a sequence of states
$s_0,s_1,\ldots, s_m\in Q$ such that
\begin{enumerate}
\item $s_0\in\In$,
\item $s_{j+1}\in\delta(s_j,w_{j+1})$ for $j\in\s{0,\ldots,m-1}$,
and
\item $s_m\in \Fin$.
\end{enumerate}
The set of words accepted by $A$ is called the language
\emph{recognized} by $A$ and is denoted $\lang(A)$.
\end{definition}
\begin{example}
\label{ex:autlang}
The alphabet for the automaton $A$ given in \cref{ex:aut} is $\Sigma
= \s{0,1}$. The language recognized by $A$ is
$\lang(A)=\s{w\in\Sigma^* ~;~ \mbox{the third rightmost letter of $w$
is $1$}}$.
\end{example}
If a language is recognized by some nondeterministic finite automaton,
then that language is called \emph{regular}. The collection of regular
languages is closed under a variety of operations. In particular,
regular languages are closed under concatenation.
\begin{definition}
\label{def:comp}
Let $A=\automaton$ and $A'=(\Sigma,Q',\In', \Fin',\delta')$ be two
automata. Then the \emph{concatenation} of $A$ and $A'$ is the
automaton $A\circ A'=(\Sigma,Q'',\In,\Fin',\delta'')$ where $Q'' =
Q\sqcup Q'$ is the disjoint union of $Q$ and $Q'$ and
\[
\delta''(q,s) =
\begin{cases}
\delta(q,s) & q\in Q \setminus \Fin, \\
\delta(q,s) & q\in \Fin \mbox{ and } s\neq \varepsilon, \\
\delta(q,s)\cup \In' & q\in \Fin \mbox{ and } s = \varepsilon, \mbox{ and} \\
\delta'(q,s) & q\in Q'.
\end{cases}
\]
\end{definition}
\begin{proposition}
\label{prop:comp}
Let $A$ and $A'$ be automata recognizing languages $L$ and $L'$,
respectively. Then $A\circ A'$ recognizes $L\circ L'$.
\end{proposition}
An example of the concatenation of two automata is provided in
\cref{fig:autsc,ex:autsc}, based on the automata defined in
\cref{def:clifauto,def:gensauton} below.
\subsection{The Structure of Normal Forms}
\label{ssec:nfstructure}
We now consider the alphabet $\gens\cup\clifford$ and describe the
words over $\gens\cup\clifford$ that are output by the synthesis
algorithm of \cref{thm:nf}.
\begin{definition}
Let $U\in\cliffordcs$. The \emph{normal form of $U$} is the unique
word over $\gens\cup\clifford$ output by the synthesis algorithm of
\cref{thm:nf} on input $U$. We write $\mathcal{N}$ for the
collection of all normal forms.
\end{definition}
To describe the elements of $\mathcal{N}$, we introduce several
automata. It will be convenient for our purposes to enumerate the
elements of $\clifford$. We therefore assume that a total ordering of
the $92160$ elements of $\clifford$ is chosen and we write
$\clifford_j$ for the $j$-th element of $\clifford$.
\begin{definition}
\label{def:clifauto}
Let $k=|\clifford|$ and $\Sigma=\gens\cup\clifford$. The automaton
$\mathfrak{C}$ is defined as $\mathfrak{C}=(\Sigma, [0,k], \s{0},
[k],\delta_\mathfrak{C})$ where, for $s\in[0,k]$ and $\ell\in
\Sigma$, we have
\[
\delta_\mathfrak{C}(s,\ell)=
\begin{cases}
\s{j} & \mbox{ if $s=0$ and $\ell = \clifford_j$, and}\\
\varnothing & \mbox{ otherwise.}
\end{cases}
\]
\end{definition}
\begin{definition}
\label{def:gensauton}
Let $\Sigma=\gens\cup\clifford$. The automaton $\mathfrak{S}_{n,m}$
is defined as $\mathfrak{S}_{n,m} = (\Sigma, [m], [n,m], [m],
\delta_{\mathfrak{S},m})$ where, for $s\in[m]$ and $\ell\in \Sigma$,
we have
\[
\delta_{\mathfrak{S},m}(s,\ell)=
\begin{cases}
\s{t ~;~ \partition(\oline{\gens_s})\cap
\partition(\oline{\gens_t})=\varnothing} & \mbox{ if
$\ell=\gens_s$ and}\\ \varnothing & \mbox{ otherwise.}
\end{cases}
\]
\end{definition}
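To make \cref{def:gensauton} concrete, the following Python sketch
models letters as generator indices and reads the $2\times2\times2$
pattern of each generator off the first column of \cref{table:FFP};
all names are ours.
\begin{verbatim}
# 2x2x2 patterns of G_1, ..., G_15 (first pattern listed in table:FFP).
GEN_PATTERNS = [
    [{1,4},{2,3},{5,6}], [{1,3},{2,5},{4,6}], [{1,2},{3,6},{4,5}],
    [{1,3},{2,6},{4,5}], [{1,2},{3,5},{4,6}], [{1,2},{3,4},{5,6}],
    [{1,6},{2,3},{4,5}], [{1,5},{2,3},{4,6}], [{1,3},{2,4},{5,6}],
    [{1,4},{2,5},{3,6}], [{1,4},{2,6},{3,5}], [{1,6},{2,5},{3,4}],
    [{1,5},{2,4},{3,6}], [{1,5},{2,6},{3,4}], [{1,6},{2,4},{3,5}],
]

def delta_S(s, letter, m=15):
    """Transition function of S_{n,m}: reading G_s in state s leads to
    every state t whose pattern shares no block with that of G_s."""
    if letter != s:                      # letters modelled as indices
        return set()
    ps = {frozenset(b) for b in GEN_PATTERNS[s - 1]}
    return {t for t in range(1, m + 1)
            if not ps & {frozenset(b) for b in GEN_PATTERNS[t - 1]}}
\end{verbatim}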
\begin{example}
\label{ex:autsc}
To illustrate \cref{def:comp,def:clifauto,def:gensauton}, the
automaton $\mathfrak{S}_{1,3}\circ\mathfrak{C}$ is represented in
\cref{fig:autsc}. It can be verified that the words $\clifford_2$,
$\gens_2\gens_1\clifford_1$, and $\gens_3\gens_1\gens_2\clifford_k$
are accepted by $\mathfrak{S}_{1,3}\circ\mathfrak{C}$ while the
words $\gens_1\gens_1\clifford_{4}$ and
$\gens_3\clifford_{7}\gens_1$ are not. Note in particular that if
$\clifford_1$ is the symbol for the identity, then
$\gens_3\clifford_1$ is distinct (as a word) from $\gens_3$. The
former is accepted by $\mathfrak{S}_{1,3}\circ\mathfrak{C}$ while
the latter is not. Despite the state graph of $\mathfrak{S}_{1,3}$
being fully-connected, full-connectivity does not necessarily hold
for state graphs of other $\mathfrak{S}_{n,m}$ automata.
\end{example}
\begin{figure}
\centering
\begin{tikzpicture}[shorten >=1pt,node distance=3cm,on grid,auto]
\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (-1,-0.4) -- (-1,2.4) node [black,midway,xshift=-0.45cm] {$\mathfrak{S}_{1,3}$};
\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (-1,-2.5) -- (-1,-0.5) node [black,midway,xshift=-0.45cm] {$(\circ)$};
\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (-1,-6.4) -- (-1,-2.6) node [black,midway,xshift=-0.45cm] {$\mathfrak{C}$};
\node[state,initial above,initial text=] at (0,0) (q_1) {$1$};
\node[state,initial above,initial text=] at (4,0) (q_2) {$2$};
\node[state,initial above,initial text=] at (8,0) (q_3) {$3$};
\node[state,accepting] at (0,-6) (s_1) {$1'$};
\node[state,accepting] at (3,-6) (s_2) {$2'$};
\node[state,accepting] at (8,-6) (s_3) {$k'$};
\node[state] at (4,-3) (t_2) {$0'$};
\node[] at (5.5,-6) (d) {$\boldsymbol{\cdots}$};
\node[] at (1.5,-4.5) (d) {$\clifford_1$};
\node[] at (3.2,-4.5) (d) {$\clifford_2$};
\node[] at (5.6,-4.5) (d) {$\clifford_k$};
\node[] at (1.5,-1.5) (d) {$\varepsilon$};
\node[] at (3.7,-1.5) (d) {$\varepsilon$};
\node[] at (5.6,-1.5) (d) {$\varepsilon$};
\path[->]
(t_2) edge node {} (s_1)
(t_2) edge node {} (s_2)
(t_2) edge node {} (s_3)
(q_1) edge node {} (t_2)
(q_2) edge node {} (t_2)
(q_3) edge node {} (t_2)
(q_1) edge [bend left=15] node [above] {$\gens_1$} (q_2)
(q_2) edge [bend left=15] node [above] {$\gens_2$} (q_1)
(q_2) edge [bend left=15] node [above] {$\gens_2$} (q_3)
(q_3) edge [bend left=15] node [above] {$\gens_3$} (q_2)
(q_1) edge [bend left=30] node [above] {$\gens_1$} (q_3)
(q_3) edge [bend right=50] node [above] {$\gens_3$} (q_1);
\end{tikzpicture}
\caption{The automaton $\mathfrak{S}_{1,3}\circ\mathfrak{C}$. The
set of states of this automata is $\s{1,2,3,0',1',\dots,k'}$,
which is the disjoint union of the states $\s{1,2,3}$ of
$\mathfrak{S}_{1,3}$ and the states $\s{0,1,\dots,k}$ of
$\mathfrak{C}$. The initial states are $\s{1,2,3}$, those of
$\mathfrak{S}_{1,3}$, and the final states are $\s{1',\dots,k'}$,
those of $\mathfrak{C}$. Because $\mathfrak{S}_{1,3}$ has $\Fin =
\s{1,2,3}$ and $\mathfrak{C}$ has $\In=\s{0'}$, the transition
function $\delta$ of $\mathfrak{S}_{1,3}\circ\mathfrak{C}$ is such
that $\delta(1,\varepsilon) = \delta(2,\varepsilon) =
\delta(3,\varepsilon) = \s{0'}$. Otherwise, $\delta$ behaves like
the transition function for $\mathfrak{S}_{1,3}$ on the subset of
states $\s{1,2,3}$ and like the transition function for
$\mathfrak{C}$ on the subset of states
$\s{0',1',\dots,k'}$.\label{fig:autsc}}
\end{figure}
We will use the automata introduced in
\cref{def:clifauto,def:gensauton} to describe the elements of
$\mathcal{N}$. Our goal is to show that
\begin{equation}
\label{eq:nfaut}
\mathcal{N}=\lang(\mathfrak{S}_{1,3} \circ \mathfrak{S}_{4,9} \circ
\mathfrak{S}_{10,15} \circ \mathfrak{C}).
\end{equation}
We start by establishing a few propositions.
\begin{proposition}
\label{prop:inclusions}
We have $\lang(\mathfrak{C})\subsetneq
\lang(\mathfrak{S}_{1,15}\circ \mathfrak{C}) \subsetneq
\lang(\mathfrak{S}_{1,9}\circ\mathfrak{S}_{10,15} \circ\mathfrak{C})
\subsetneq \lang(\mathfrak{S}_{1,3} \circ \mathfrak{S}_{4,9} \circ
\mathfrak{S}_{10,15} \circ \mathfrak{C})$, where $\subsetneq$
denotes strict inclusion.
\end{proposition}
\begin{proof}
By \cref{def:clifauto,def:gensauton}.
\end{proof}
We emphasize that the inclusions in \cref{prop:inclusions} are
strict. This implies that $\lang(\mathfrak{S}_{1,3} \circ
\mathfrak{S}_{4,9} \circ \mathfrak{S}_{10,15} \circ \mathfrak{C})$ can
be written as the disjoint union of $\lang(\mathfrak{C})$,
$\lang(\mathfrak{S}_{1,15}\circ \mathfrak{C})$, and
$\lang(\mathfrak{S}_{1,9}\circ\mathfrak{S}_{10,15}
\circ\mathfrak{C})$. The lemmas below show that these languages
correspond to disjoint subsets of $\mathcal{N}$ and, in combination,
suffice to prove \cref{eq:nfaut}.
\begin{lemma}
\label{lem:clifauto}
Let $U$ be a word over $\gens\cup\clifford$. Then
$U\in\lang(\mathfrak{C})$ if and only if $U\in\mathcal{N}$ and
$U$ has length $1$, i.e $U\in\clifford$.
\end{lemma}
\begin{proof}
By \cref{def:clifauto,thm:nf}.
\end{proof}
\begin{lemma}
\label{lem:222pattern}
Let $U$ be a word over $\gens\cup\clifford$. Then
$U\in\lang(\mathfrak{S}_{1,15}\circ
\mathfrak{C})\setminus\lang(\mathfrak{C})$ if and only if
$U\in\mathcal{N}$ and $U$ has a $2\times2\times2$ pattern.
\end{lemma}
\begin{proof}
First, note that $\lang(\mathfrak{C})$ is the set of words of length
1 accepted by $\mathfrak{S}_{1,15}\circ \mathfrak{C}$. This means
that $\lang(\mathfrak{S}_{1,15}\circ
\mathfrak{C})\setminus\lang(\mathfrak{C})$ consists of all the words
of length $k\geq 2$ accepted by $\mathfrak{S}_{1,15}\circ
\mathfrak{C}$. Furthermore, by \cref{lem:CliffinD}, there are no
normal forms of length 1 which have a $2\times2\times2$
pattern. Thus, to prove our lemma it suffices to establish the
following equality of sets
\begin{align}
\label{eq:222reword}
\s{U\in\lang(\mathfrak{S}_{1,15}\circ \mathfrak{C})~;~|U|=k} =
\s{U\in\mathcal{N}~;~ |U|=k \mbox{ and }\partition(U)\mbox{ is a
}2\times 2\times 2\mbox{ pattern}}
\end{align}
for all $k\geq2$. We proceed by induction on $k$.
\begin{itemize}
\item Note that, by definition of $\mathfrak{S}_{1,15}\circ
\mathfrak{C}$, we have $\s{U\in\lang(\mathfrak{S}_{1,15}\circ
\mathfrak{C})~;~|U|=2} = \gens\clifford$. Every element of
$\gens\clifford$ has a $2\times2\times 2$ pattern by
\cref{lem:GensinD}. Moreover, for $U=SC$ with $S\in\gens$ and
$C\in\clifford$, $\partition(SC) = \partition(S)$. Thus, $SC$
must also be the unique word produced by the synthesis algorithm
on input $U$ and hence $U\in\mathcal{N}$. This accounts for all
words of length 2 in $\mathcal{N}$. Therefore
\cref{eq:222reword} holds when $k=2$.
\item Now suppose that \cref{eq:222reword} holds for some $k\geq
2$. Let $U\in\lang(\mathfrak{S}_{1,15}\circ \mathfrak{C})$ be a
word of length $k$ whose first letter is $S\in\gens$. Then
$U\in\mathcal{N}$ and $\partition(U)=\partition(S)$ is a
$2\times2\times2$ pattern. Furthermore, the least denominator
exponent of $\oline{U}$ is $k-1$. We will show that
\cref{eq:222reword} holds for $k+1$ by establishing two
inclusions. Because it will sometimes be convenient to refer to
submatrices, if $M$ is an $n\times n$ matrix and $x,y\subseteq
[n]$, we write
\[
M[x;y]
\]
for the submatrix of $M$ formed from the rows with index in $x$
and the columns with index in $y$.
\begin{itemize}
\item[$\subseteq$:] Suppose that $U'=S'U$ is a word of length
$k+1$ accepted by $\lang(\mathfrak{S}_{1,15}\circ
\mathfrak{C})$. Then by \cref{def:gensauton} we have
$\partition(S')\cap\partition(S)=\varnothing$. Let
$\s{a,b}\in\partition(S')$, and let $r_a$ and $r_b$ be the
corresponding rows of the residue matrix of
$\oline{U}$. Explicitly, we have
\begin{align*}
\rho_{k-1}(\oline{U})[\s{a,b};[6]]&=\begin{bmatrix}
r_a \\
r_b
\end{bmatrix}
\end{align*}
with $r_a\neq r_b$ as $\s{a,b}$ is not a subset of any
element of $\partition(U)$. Direct calculation of the rows
of the residue matrix for $\oline{U}'$ yields
\begin{align*}
\rho_{k}( \oline{U}')[\s{a,b};[6]]=\begin{bmatrix}
r_a + r_b \\
r_a + r_b
\end{bmatrix}.
\end{align*}
We conclude that $\s{a,b}$ is a subset of an element of
$\partition(U')$. Furthermore, by
\cref{lem:zerorows,eq:k2denom} we see that, since
$r_a+r_b\neq 0$, $\partition(U')$ cannot be a $2\times4$
pattern, and therefore $\s{a,b}\in\partition(U')$. As this
holds for all $\s{a,b}\in\partition(S')$, we conclude that
$\partition(S')=\partition(U')$. Thus, by the induction
hypothesis, $S'U$ will be the word produced by the synthesis
algorithm when applied to $U'$. Hence, $U'\in\mathcal{N}$
and $\partition(U')$ is a $2\times2\times2$ pattern.
\item[$\supseteq$:] Suppose that $U'$ is a normal form of
length $k+1$ with a $2\times 2\times 2$ pattern. Write $U'$
as $U'=S'V$ for some unknown normal form $V$. We then have
$\partition(S')=\partition(U')$. Let
$\s{a,b}\in\partition(S')$ and let the corresponding rows of
the residue matrix of $\oline{V}$ be $r_a$ and
$r_b$. Explicitly, we have
\begin{align*}
\rho_{k-1}(\oline{V})[\s{a,b};[6]]&=\begin{bmatrix}
r_a \\
r_b
\end{bmatrix}.
\end{align*}
Direct calculation of the rows of the residue matrix for
$\oline{U}'$ yields
\begin{align*}
\rho_{k}( \oline{U}')[\s{a,b};[6]]=\begin{bmatrix}
r_a + r_b \\
r_a + r_b
\end{bmatrix}.
\end{align*}
Since $\partition(U')$ is not a $2\times4$ pattern, we
conclude that $r_a+r_b\neq0$ and thus that $r_a\neq
r_b$. Therefore, there is no element of cardinality four in
$\partition(V)$. Since $\lde(V)>0$, $\partition(V)$ must
then be a $2\times2\times2$ pattern. Consequently, we have
$V=U$ as defined above. Because
$\s{a,b}\not\in\partition(U)=\partition(S)$, we know
$\partition(S')\cap\partition(S)=\varnothing$. Given that
$S'=\gens_{j'}$ and $S=\gens_j$, we conclude that
$j\in\delta_{\mathfrak{S},15}(j',\gens_{j'})$. Because
$S=\gens_j$ is the first letter of the word $U$, we know the
initial state of $U$ must be $j$. Therefore, by the
induction hypothesis, $U'=S'U$ is accepted by
$\mathfrak{S}_{1,15}\circ\mathfrak{C}$.
\end{itemize}
We have shown that \cref{eq:222reword} holds for words of length
$k+1$ if it holds for words of length $k$. This completes the
inductive step.\qedhere
\end{itemize}
\end{proof}
\cref{lem:222pattern} characterized the normal forms that have a
$2\times 2 \times 2$ pattern. The two lemmas below jointly
characterize the normal forms that have a $2\times 4$ pattern. Because
their proofs are similar in spirit to that of \cref{lem:222pattern},
they have been relegated to \cref{app:walks}.
\begin{lemma}
\label{lem:24pattern36}
Let $U$ be a word over $\gens\cup\clifford$. Then $U\in
\lang(\mathfrak{S}_{1,9}\circ\mathfrak{S}_{10,15}
\circ\mathfrak{C})\setminus \lang(\mathfrak{S}_{1,15}\circ
\mathfrak{C})$ if and only if $U\in \mathcal{N}$ and $U$ has a
$2\times4$ pattern with
$\partition(U)\cap\s{\s{x,y}~;~(x,y)\in [3]\times[4,6]}\neq\varnothing$.
\end{lemma}
\begin{lemma}
\label{lem:24pattern3366}
Let $U$ be a word over $\gens\cup\clifford$. Then $U\in
\lang(\mathfrak{S}_{1,3}\circ\mathfrak{S}_{4,9}\circ\mathfrak{S}_{10,15}\circ
\mathfrak{C})\setminus
\lang(\mathfrak{S}_{1,9}\circ\mathfrak{S}_{10,15}\circ
\mathfrak{C})$ if and only if $U\in\mathcal{N}$ and $U$ has a
$2\times4$ pattern with
$\partition(U)\cap\s{\s{x,y}~;~(x,y)\in[3]\times[4,6]}=\varnothing$.
\end{lemma}
\begin{theorem}
\label{thm:nfstructure}
Let $U$ be a word over $\gens\cup\clifford$. Then $U\in
\lang(\mathfrak{S}_{1,3} \circ \mathfrak{S}_{4,9} \circ
\mathfrak{S}_{10,15} \circ \mathfrak{C})$ if and only if $U\in
\mathcal{N}$.
\end{theorem}
\begin{proof}
If $|U|=1$ then the result follows from \cref{lem:clifauto}. If
$|U|>1$, then $U$ has a $2\times 2 \times 2$ or a $2\times 4$
pattern and the result follows from \cref{prop:inclusions} and
\cref{lem:222pattern,lem:24pattern36,lem:24pattern3366}.
\end{proof}
\section{Lower Bounds}
\label{sec:lowerbounds}
Recall that the \emph{distance} between operators $U$ and $V$ is
defined as $\lVert U-V\rVert = \sup\s{\lVert Uv - Vv \rVert ~;~
\lVert v \rVert = 1}$. Because $\cliffordcs$ is universal, for every
$\epsilon >0$ and every element $U\in\su(4)$, there exists
$V\in\cliffordcs$ such that $\lVert U-V \rVert \leq \epsilon$. In such
a case we say that $V$ is an \emph{$\epsilon$-approximation} of
$U$. We now take advantage of \cref{thm:nfstructure} to count
Clifford+$CS$ operators and use these results to derive a worst-case
lower bound on the $CS$-count of approximations.
\begin{lemma}
\label{lem:exactlyn}
Let $n\geq 1$. There are $86400(3\cdot 8^n-2\cdot 4^n)$
Clifford+$CS$ operators of $CS$-count exactly $n$.
\end{lemma}
\begin{proof}
Each Clifford+$CS$ operator is represented by a unique normal form
and this representation is $CS$-optimal. Hence, to count the number
of Clifford+$CS$ operators of $CS$-count $n$, it suffices to count
the normal forms of $CS$-count $n$. By \cref{thm:nfstructure}, and
since Clifford operators have $CS$-count 0, a normal form of
$CS$-count $n$ is a word
\begin{equation}
\label{eq:words}
w = w_1w_2w_3w_4
\end{equation}
such that $w_1\in\lang(\mathfrak{S}_{1,3})$,
$w_2\in\lang(\mathfrak{S}_{4,9})$,
$w_3\in\lang(\mathfrak{S}_{10,15})$, $w_4\in\lang(\mathfrak{C})$ and
the $CS$-counts of $w_1$, $w_2$, and $w_3$ sum to $n$. There are
\begin{equation}
\label{eq:words1}
(6\cdot 8^{n-1}+6\cdot 4^{n-1}+3\cdot 2^{n-1})\cdot |\clifford|
\end{equation}
words of the form of \cref{eq:words} such that exactly one of $w_1$,
$w_2$, or $w_3$ is not $\varepsilon$. Similarly, there are
\begin{equation}
\label{eq:words2}
\left(\sum_{0<l<n}18\cdot 2^{2n-3-l}+\sum_{0<l<n}18\cdot 2^{3n-4-2l}
+\sum_{0<j<n}36\cdot 2^{3n-5-j} \right)\cdot |\clifford|
\end{equation}
words of the form of \cref{eq:words} such that exactly two of $w_1$,
$w_2$, or $w_3$ are not $\varepsilon$. Finally, the number of words
of the form of \cref{eq:words} such that $w_1$, $w_2$, and
$w_3$ are not $\varepsilon$ is
\begin{equation}
\label{eq:words3}
\left(\sum_{0<j<n} \sum_{0<l<n-j}108\cdot 2^{3n-6-j-2l}\right)\cdot |\clifford|.
\end{equation}
Summing \cref{eq:words1,eq:words2,eq:words3} and applying the geometric
series formula then yields the desired result.
\end{proof}
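Since all three counts above are sums of geometric series, the closed form is also easy to confirm numerically. The following Python sketch (an informal check, not part of the proof) evaluates \cref{eq:words1,eq:words2,eq:words3} directly, taking $|\clifford|=92160$ as in \cref{cor:upton}:
\begin{verbatim}
# Informal check of the lemma: sum the three word counts and compare
# with the closed form 86400 * (3 * 8^n - 2 * 4^n).
CLIFFORD = 92160  # number of two-qubit Clifford operators

def count_exactly(n):
    one = 6 * 8**(n-1) + 6 * 4**(n-1) + 3 * 2**(n-1)
    two = (sum(18 * 2**(2*n - 3 - l) for l in range(1, n))
         + sum(18 * 2**(3*n - 4 - 2*l) for l in range(1, n))
         + sum(36 * 2**(3*n - 5 - j) for j in range(1, n)))
    three = sum(108 * 2**(3*n - 6 - j - 2*l)
                for j in range(1, n) for l in range(1, n - j))
    return (one + two + three) * CLIFFORD

assert all(count_exactly(n) == 86400 * (3 * 8**n - 2 * 4**n)
           for n in range(1, 12))
\end{verbatim}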
\begin{corollary}
\label{cor:upton}
For $n\in\N$, there are $\frac{46080}{7}(45\cdot 8^n-35\cdot 4^n+4)$
distinct Clifford+$CS$ operators of $CS$-count at most $n$.
\end{corollary}
\begin{proof}
Recall that the Clifford+$CS$ operators of $CS$-count $0$ are
exactly the Clifford operators and that $|\clifford|=92160$. The
result then follows from \cref{lem:exactlyn} and the geometric
series formula.
\end{proof}
\begin{proposition}
For every $\epsilon\in\R^{>0}$, there exists $U\in \su(4)$ such that
any Clifford+$CS$ $\epsilon$-approximation of $U$ has $CS$-count at
least $5\log_2 (1/\epsilon) -0.67$.
\end{proposition}
\begin{proof}
The proof proceeds by a volume-counting argument. Since every element of $\su(4)$ lies within distance $\epsilon$ of some Clifford+$CS$ operator, the $\epsilon$-balls centred at these operators must cover the 15-dimensional space $\su(4)$, whose total volume is $(\sqrt{2}\pi^9)/3$. The number of circuits up to
$CS$-count $n$ is taken from \cref{cor:upton} (we must divide the
result by two to account for the absence of overall phase $\omega$
in the special unitary group) and a 15-dimensional $\epsilon$-ball
has a volume of
\[
\frac{\pi^{\frac{15}{2}}}{\Gamma \left(\frac{15}{2}+1\right)}
\epsilon^{15}. \qedhere
\]
\end{proof}
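To make the bound concrete, the inequality underlying the proof can be evaluated numerically. The sketch below (an illustration only) uses the count from \cref{cor:upton} to find the smallest $n$ for which the balls could possibly cover $\su(4)$, and compares it with $5\log_2(1/\epsilon)-0.67$:
\begin{verbatim}
import math

def min_cs_count(eps):
    # Volume of SU(4) and of a 15-dimensional eps-ball.
    total = math.sqrt(2) * math.pi**9 / 3
    ball = math.pi**7.5 / math.gamma(8.5) * eps**15
    # Operators of CS-count <= n, halved to remove the global phase.
    n = 0
    while (46080/7) * (45 * 8**n - 35 * 4**n + 4) / 2 * ball < total:
        n += 1
    return n

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, min_cs_count(eps), 5 * math.log2(1/eps) - 0.67)
\end{verbatim}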
Let $U$ be an element of $\cliffordcs$ of determinant 1. By
\cref{eq:u4rep} of \cref{sec:gens}, $U$ can be written as
\[
U=\frac{1}{\sqrt{2}^k} M
\]
where $k\in\N$ and the entries of $M$ belong to $\Zi$. We can
therefore talk about the least denominator exponent of the $\su(4)$
representation of $U$. We finish this section by relating the least
denominator exponent of the $\su(4)$ representation of $U$ and the
$CS$-count of the normal form of $U$.
\begin{proposition}
\label{prop:ldecount}
Let $U$ be an element of $\cliffordcs$ of determinant 1, let $k$ be
the least denominator exponent of the $\su(4)$ representation of
$U$, and let $k'$ be the $CS$-count of the normal form of $U$. Then
\[
\frac{k-3}{2}\leq k' \leq 2k+2.
\]
\end{proposition}
\begin{proof}
The $CS$-count of the normal form of $U$ is equal to the least
denominator exponent of the $\so(6)$ representation of
$U$. \cref{eq:rep} then implies the upper bound for $k'$. Likewise,
examination of \cref{thm:nfstructure} reveals that the $CS$
operators in the circuit for $U$ must be separated from one another
by a Clifford with a least denominator exponent of at most 2 in its
unitary representation. Combining this with the fact that the
largest least denominator exponent of an operator in $\clifford$ is
3, we arrive at the lower bound for $k'$.
\end{proof}
\begin{remark}
It was established in \cite{RS16} that, for single-qubit
Clifford+$T$ operators of determinant $1$, there is a simple
relation between the least denominator exponent of an operator and
its $T$-count: if the least denominator exponent of the operator is
$k$, then its $T$-count is $2k-2$ or $2k$. Interestingly, this is
not the case for Clifford+$CS$ operators in $\su(4)$, as suggested
by \cref{prop:ldecount}. Clearly, the $CS$-count of an operator
always scales linearly with the least denominator exponent of its
unitary representation. For large $k$, computational experiments
with our code \cite{thecode} suggest that most operators are such
that $k'\approx k$, though there are examples of operators with
$k'\approx 2k$. One example of such an operator is $\left[R(X
\otimes I,I \otimes Z)R(X \otimes I,I \otimes X)R(Z \otimes I,I
\otimes X)R(Z \otimes I,I \otimes Z)\right]^m$ for $m\in\N$.
\end{remark}
\section{Conclusion}
\label{sec:conc}
We described an exact synthesis algorithm for a fault-tolerant
multi-qubit gate set which is simultaneously optimal, practically
efficient, and explicitly characterizes all possible outputs. The
algorithm establishes the existence of a unique normal form for
two-qubit Clifford+$CS$ circuits. We showed that the normal form for
an operator can be computed with a number of arithmetic operations
linear in the gate-count of the output circuit. Finally, we used a
volume counting argument to show that, in the typical case,
$\epsilon$-approximations of two-qubit unitaries will require a
$CS$-count of at least $5\log_2(1/\epsilon)$.
We hope that the techniques developed in the present work can be used
to obtain optimal multi-qubit normal forms for other two-qubit gate
sets, such as the two-qubit Clifford+$T$ gate set. Indeed, it can be
shown that the $\so(6)$ representations of Clifford+$T$ operators are
exactly the $\so(6)$ matrices with entries in the ring
$\Z[1/\sqrt{2}]$. Further afield, the exceptional isomorphism for
$\su(8)$ could potentially be leveraged to design good synthesis
algorithms for three-qubit operators. Such algorithms would provide a
powerful basis for more general quantum compilers.
An interesting avenue for future research is to investigate whether
the techniques and results presented in this paper can be used in the
context of \emph{synthillation}. Quantum circuit synthesis and magic
state distillation are often kept separate. But it was shown in
\cite{CH161} that performing synthesis and distillation simultaneously
(synthillation) can lead to overall savings. The analysis presented in
\cite{CH161} uses $T$ gates and $T$ states. Leveraging
higher-dimensional synthesis methods such as the ones presented here,
along with distillation of $CS$ states, could yield further savings.
\section{Acknowledgements}
\label{sec:acknowledgements}
AG was partially supported by the Princeton Center for Complex
Materials, a MRSEC supported by NSF grant DMR 1420541. NJR was
partially supported by the Natural Sciences and Engineering Research
Council of Canada (NSERC), funding reference number RGPIN-2018-04064.
We would like to thank Matthew Amy, Xiaoning Bian, and Peter Selinger
for helpful discussions. In addition, we would like to thank the
anonymous reviewers whose comments greatly improved the paper.
\section{Contributions}
\label{sec:contributions}
All authors researched, collated, and wrote this paper.
\section{Competing Interests}
\label{sec:compinterests}
The authors declare no competing interests.
\section{Data Availability}
\label{sec:dataavail}
The sets of various $CS$-count operators used to generate the
algorithmic performance information in \cref{table:performance} are
available at \cite{thecode}.
\bibliographystyle{abbrv}
Social network analysis tools have attracted significant attention in the literature \cite{getoor2005link,koschutzki2005centrality,fortunato2010community}. Such tools are typically used under an assumption that the members of the network are not strategic, i.e., they do not manipulate the topology of the network to their advantage. However, as argued by Michalak et al.~\cite{michalak2017strategic}, this assumption does not hold in many situations, ranging from privacy-savvy users of social media platforms \cite{luo2009facecloak}, through political activists \cite{youmans2012social}, to the members of criminal and terrorist organizations whose primary concern is to evade attention of security agencies \cite{kenney2013organisational}.
The first attempt to fill this gap in the literature was carried out by Waniek et al.~\cite{waniek2018hiding}, who considered how one could evade popular centrality measures, such as degree, closeness, and betweenness. More specifically, the authors studied how a member of the network---called the \emph{evader}---can rewire the network (by adding or removing edges) in order to optimally decrease the value of her centrality while maintaining her influence over other members of the network. The authors proved that, even without taking influence into consideration, the problem of decreasing the value of either closeness or betweenness centrality is NP-complete, while for the degree centrality the problem is in P.
Indeed, this study is the first in the literature to consider a strategic evader. Nevertheless, it has a number of limitations. Firstly, in their complexity analysis, the authors considered the problem of decreasing the \textit{value} of the evader's centrality, which is insufficient if the evader is concerned with decreasing her \textit{position} in the centrality-based ranking of all nodes, i.e., decreasing her centrality \emph{relative} to that of other nodes in the network. Secondly, the complexity analysis assumed that the evader is able to add and remove edges in the \textit{entire network}. This seems unrealistic in many settings such as social media platforms, where members are unable to view, let alone modify, any edge in the network. Finally, the authors assumed that the party using the social network analysis tools---the \textit{seeker}---is \textit{not strategic}, i.e., she is unaware of the evasion efforts made by the evader. While this assumption may hold in some settings, there are many others in which the seeker expects the evader to go to great lengths in order to mislead any analysis, as is the case with covert networks.
In this paper, we address all of the above limitations, and present the first analysis of evading centrality measures in settings where \textit{both parties act strategically}. We start by analyzing the complexity of decreasing the evader's \textit{position} in the centrality-based ranking, as opposed to decreasing the \textit{value} of the evader's centrality. More specifically, we require that the evader decreases her ranking by at least $d$ positions, and allow the evader to add or remove edges only \textit{locally}, i.e., in her immediate neighbourhood. We prove that this problem is NP-complete not only for closeness and betweenness centralities but also for degree centrality. Table~\ref{table:comparison} presents the main theoretical contributions of this paper.
We then model the interaction between the seeker and the evader as a \textit{Bayesian Stackelberg game} \cite{fudenberg1991game,paruchuri2007efficient,jain2008bayesian}, whereby the strategy set of the seeker consists of degree, closeness, betweenness, and eigenvector centralities, while the strategy set of the evader consists of all possible sets of changes in her network neighbourhood. Our extensive experimental analysis of this game draws the first conclusions in the literature regarding which centralities the seeker should use to maximize the chances of detecting a strategic evader.
\begin{table}[t]
\centering
\begin{tabular}{lccc}
\toprule
Centrality & Disguising & Hiding & Local Hiding \\
& Centrality~\cite{waniek2018hiding} & {Leader~\cite{waniek2017construction}} & (this paper) \\
\midrule
Degree & P & NP-complete & \textbf{NP-complete}\\
Closeness & NP-complete & NP-complete & \textbf{NP-complete}\\
Betweenness & NP-complete & unknown & \textbf{NP-complete} \\
\bottomrule
\end{tabular}
\caption{Comparing our complexity results to the literature.}
\label{table:comparison}
\end{table}
\section{Preliminaries}
Let $G = (V, E)$ denote a network, where $V$ is the set of $n$ nodes and $E \subseteq V \times V$ is the set of edges, and let $\mathbb{G}(V)$ denote the set of all possible networks whose set of nodes is $V$.
We denote by $(v,w)$ the edge between nodes $v$ and $w$.
We restrict our attention to undirected networks, and thus we do not discern between edges $(v,w)$ and $(w,v)$.
We also assume that networks do not contain self-loops, i.e., $\forall_{v \in V}(v,v) \notin E$. We denote by $N(v)$ the set of neighbours of $v$, i.e., $N(v) = \{w \in V : (v,w) \in E\}$.
A \textit{path} in $(V,E)$ is an ordered sequence of nodes, $p=\langle v_1, \ldots, v_k\rangle$, in which every two consecutive nodes are connected by an edge in $E$.
The length of a path equals the number of edges therein. For any pair of nodes, $v,w \in V$, we denote by $\Pi(v,w)$ the set of all shortest paths between these two nodes, and denote by $d(v,w)$ the \textit{distance} between the two, i.e., the length of a shortest path between them.
A \textit{centrality measure} is a function, $c \colon \mathbb{G}(V) \times V \rightarrow \mathbb{R}$, that expresses the importance of any given node in the network~\cite{bavelas1948mathematical}. We consider four fundamental centrality measures, namely degree, closeness, betweenness, and eigenvector.
\textit{Degree centrality}~\cite{shaw1954group} of node $v$ is proportional to its degree: $c_{degr}(G,v) = |N(v)|$.
\textit{Closeness centrality}~\cite{beauchamp1965improved} assigns the highest importance to the node with the shortest average distance to all other nodes: $c_{clos}(G,v) = \frac{1}{\sum_{w \in V}d(v,w)}$.
\textit{Betweenness centrality}~\cite{anthonisse1971rush,freeman1977set} of node $v$ is proportional to the percentage of shortest paths between every pair of other nodes that go through $v$: $c_{betw}(G,v) = \sum_{w\neq w'\neq v} \frac{|\{ p \in \Pi(w,w') : v \in p \}|}{|\Pi(w,w')|}.$
\textit{Eigenvector centrality}~\cite{bonacich1987power} evaluates each node based on the importance of its neighbours. Formally, $c_{eig}(G,v) = x_v$, where $x$ is the eigenvector corresponding to the largest eigenvalue of the adjacency matrix of $G$.
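All four measures are available in standard network libraries; the following Python sketch (using networkx, whose implementations normalize some of the measures, which does not affect the induced rankings) computes the centrality-based ranking of every node:
\begin{verbatim}
import networkx as nx

G = nx.karate_club_graph()  # any example network

measures = {
    "degree":      nx.degree_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
}
for name, scores in measures.items():
    ranking = sorted(G.nodes(), key=scores.get, reverse=True)
    print(name, "top node:", ranking[0])
\end{verbatim}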
We consider two influence models: \textit{independent cascade} and \textit{linear threshold}. Both models can be described in terms of spreading the ``activation'' of nodes across the network.
The process starts with an \textit{active} subset of nodes called the seed set. The activation then propagates through the network in discrete time steps, whereby nodes become influenced by their previously-activated neighbours.
Formally, let $I(t)$ denote the set of nodes that are active at round $t$, with $I(1)$ being the seed set. In the independent cascade model, an activation probability $p: V \times V \rightarrow \mathbb{R}$ is assigned to each pair of nodes. In every round $t > 1$, each node $v$ that became active in round $t-1$ has a single chance to activate each of its inactive neighbours $w$, succeeding with probability $p(v,w)$.
In our experiments we assume that $p(v,w)=0.15$ for every pair of nodes $v,w$. As for the linear threshold model, every node $v$ is assigned a threshold $t_v$ sampled from the set $\{0, \ldots, |N(v)|\}$. Then, in every round $t > 1$, each inactive node $v$ becomes active if $|I(t-1) \cap N(v)| \geq t_v$. In our experiments, the threshold of a node $v$ is sampled from the set $\{1, \ldots, |N(v)|\}$ uniformly at random.
Notice that this variant is slightly different from the standard linear threshold model~\cite{kempe2003maximizing}, in which edges are assigned random weights. We use this variant to stay consistent with the previous literature on the topic~\cite{waniek2017construction,waniek2018hiding}.
In both models, the process ends when there are no new active nodes, i.e., when $I(t-1)=I(t)$. The influence of $v$ is then measured as the expected number of active nodes at the end of the process, when starting with $\{v\}$ as the seed set. Computing the exact influence requires exponential time under both models, which is intractable even for relatively small networks. Thus, in our experiments we approximate the influence using Monte Carlo sampling, stopping the process when the improvement over the last $1,000$ iterations is smaller than $0.00001$. Note that even approximating the influence of a node becomes challenging when the number of nodes reaches thousands or more.
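As an illustration, the independent cascade influence of a node can be estimated with the following Monte Carlo sketch (assuming a networkx graph; for simplicity it uses a fixed number of samples rather than the convergence criterion described above):
\begin{verbatim}
import random

def ic_influence(G, seed, p=0.15, samples=10000):
    # Estimate the expected cascade size when {seed} is the seed set.
    total = 0
    for _ in range(samples):
        active, frontier = {seed}, [seed]
        while frontier:  # nodes activated in the previous round
            new = []
            for v in frontier:
                for w in G.neighbors(v):
                    if w not in active and random.random() < p:
                        active.add(w)
                        new.append(w)
            frontier = new
        total += len(active)
    return total / samples
\end{verbatim}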
\section{Complexity of Local Hiding}
\label{sec:complexity}
We now formally define the main computational problem of our study, and analyze its computational complexity.
\begin{definition}[Local Hiding]
This problem is defined by a tuple $(G, v_e, b, c, \hat{A}, \hat{R}, d)$, where $G= (V,E)$ is a network, $v_e \in V$ is the evader, $b \in \mathbb{N}$ is a budget specifying the maximum number of edges that can be added or removed, $c \colon \mathbb{G}(V) \times V \rightarrow \mathbb{R}$ is a centrality measure, $\hat{A} \subseteq N(v_e) \times N(v_e)$ is the set of edges allowed to be added, $\hat{R} \subseteq \{v_e\} \times N(v_e)$ is the set of edges allowed to be removed, and $d \in \mathbb{N}$ is the safety margin.
The goal is to identify a set of edges to be added, $A^* \subseteq \hat{A}$, and a set of edges to be removed, $R^* \subseteq \hat{R}$, such that $|A^*| + |R^*| \leq b$ and the resulting network $(V, (E \cup A^*) \setminus R^* )$ contains at least $d$ nodes with centrality $c$ greater than that of the evader.
\end{definition}
As mentioned in the introduction, the two key differences between the above problem of \textit{Local Hiding} and the problem of \textit{Disguising Centrality} studied by Waniek et al.~\cite{waniek2018hiding} are as follows. Firstly, instead of seeking the optimal way of decreasing the value of the evader's centrality (which may not provide sufficient cover, especially if she is still ranked among the top nodes in the network), we want the position of the evader in the centrality-based ranking of all nodes to drop below $d$. Secondly, we assume that the evader is only capable of rewiring edges within her network neighbourhood---an assumption that holds in many realistic settings, e.g., the evader is able to disconnect herself from any of her friends, or even ask two of them to befriend one another, but is unable to connect to a complete stranger at will, or ask two strangers to befriend or unfriend one another. Notice that we do not allow adding any edges incident to the evader, since for most centrality measures such an operation can only increase the evader's ranking.
We also comment on the key differences between our Local Hiding problem and the problem of \textit{Hiding Leaders} studied by Waniek et al.~\cite{waniek2017construction} in the context of constructing covert networks. Firstly, the authors divide the nodes into leaders and the followers, where the changes in the network are allowed only among the followers. Secondly, they only allow edges to be \textit{added} among the followers, meaning that no edge can be removed from the network.
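Before analyzing hardness, observe that small instances of Local Hiding can be solved exactly by exhaustive search over all subsets of at most $b$ permitted modifications. A minimal sketch follows, where \texttt{centrality} stands for any function mapping a network to a dictionary of node scores (e.g., one of the networkx functions above):
\begin{verbatim}
from itertools import combinations

def local_hiding_brute_force(G, v_e, b, centrality, A_hat, R_hat, d):
    moves = [("add", e) for e in A_hat] + [("rem", e) for e in R_hat]
    for size in range(b + 1):
        for subset in combinations(moves, size):
            H = G.copy()
            H.add_edges_from(e for op, e in subset if op == "add")
            H.remove_edges_from(e for op, e in subset if op == "rem")
            scores = centrality(H)
            if sum(scores[v] > scores[v_e] for v in H) >= d:
                return subset  # a successful set of modifications
    return None  # no solution within budget b
\end{verbatim}
The results below show that, in general, this exponential search cannot be avoided unless P~$=$~NP.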
\begin{theorem}
\label{thrm:degree-npcomplete}
The problem of Local Hiding is NP-complete given the degree centrality measure.
\end{theorem}
\begin{proof}
The problem is trivially in NP, since after the addition of a given set of edges $A^*$ and the removal of a given set of edges $R^*$ it is possible to compute the degree centrality of all nodes in polynomial time. Next, we prove that the problem is NP-hard.
To this end, we give a reduction from the NP-complete problem of Finding $k$-Clique, where the goal is to determine whether there exist $k$ nodes in $G$ that form a clique.
Given an instance of the problem of Finding $k$-Clique, defined by $k \in \mathbb{N}$ and a network $G=(V,E)$, let us construct a network, $H=(V',E')$, as follows:
\begin{itemize}
\item $V' = \{v_e\} \cup V \cup \bigcup_{v_i \in V} \bigcup_{j=1}^{|N(v_i)|} \{x_{i,j}\} \cup \bigcup_{i=1}^{k-2} \{z_i\}$,
\item $E' = \bigcup_{v_i \in V'} \{(v_i,v_e) \} \cup \bigcup_{x_{i,j} \in V'} \{(v_i,x_{i,j})\} \cup \bigcup_{z_i \in V'} \{(z_i,v_e)\} \cup \bigcup_{(v_i,v_j) \notin E} \{(v_i,v_j)\}$.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=.6\linewidth]{figures/degr-nphard}
\caption{Network used in the proof of Theorem~\ref{thrm:degree-npcomplete} for $k=3$.}
\label{fig:degree-nphard}
\end{figure}
An example of such a network $H$ is illustrated in Figure~\ref{fig:degree-nphard}. Now, consider the instance $(H,v_e,b,c,\hat{A},\hat{R},d)$ of the problem of Local Hiding where $H=(V',E')$ is the network we just constructed, $v_e$ is the evader, $b=\frac{k(k-1)}{2}$, $c$ is the degree centrality measure, $d = k$, $\hat{A} = E$, and $\hat{R} = \emptyset$.
From the definition of the problem we know that the edges to be added to $H$ must be chosen from $E$, i.e., from the network in the Finding $k$-Clique problem.
Out of those edges, we need to choose a subset, $A^* \subseteq E$, as a solution to the Local Hiding problem.
In what follows, we will show that a solution to the above instance of the Local Hiding problem in $H$ corresponds to a solution to the problem of Finding $k$-Clique in $G$.
First, note that $v_e$ has the highest degree in $H$, which is $n + k -2$.
Thus, in order for $A^*$ to be a solution to the Local Hiding problem, the addition of $A^*$ to $H$ must increase the degree of at least $k$ nodes in $V$ such that each of them has a degree of at least $n + k - 1$ (note that the addition of $A^*$ only increases the degrees of nodes in $V$, since we already established that $A^* \subseteq E$).
Now since in $H$ the degree of every node $v_i$ equals $n$ (because of the way $H$ is constructed), then in order to increase the degree of $k$ such nodes to $n + k-1$, each of them must be an end of at least $k-1$ edges in $A^*$.
But since the budget in our problem instance is $\frac{k(k-1)}{2}$, then the only possible choice of $A^*$ is the one that increases the degree of exactly $k$ nodes in $V$ by exactly $k-1$.
If such a choice of $A^*$ is available, then surely those $k$ nodes form a clique in $G$, since all edges in $A^*$ are taken from $G$.
\end{proof}
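The construction of $H$ is mechanical; the following sketch builds it with networkx (the labels of the auxiliary nodes are illustrative choices):
\begin{verbatim}
import networkx as nx

def build_H(G, k):
    H = nx.Graph()
    for v in G.nodes():
        H.add_edge("v_e", v)              # evader adjacent to all of V
        for j in range(G.degree(v)):      # pendant nodes x_{i,j}
            H.add_edge(v, ("x", v, j))
    for i in range(k - 2):                # padding nodes z_i
        H.add_edge("v_e", ("z", i))
    H.add_edges_from(nx.non_edges(G))     # complement edges within V
    return H
\end{verbatim}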
\begin{theorem}
\label{thrm:closeness-npcomplete}
The problem of Local Hiding is NP-complete given the closeness centrality measure.
\end{theorem}
\begin{proof}
The problem is trivially in NP, since after the addition of a given $A^*$, and the removal of a given $R^*$, it is possible to compute the closeness centrality of all nodes in polynomial time.
Next, we prove that the problem is NP-hard.
To this end, we propose a reduction from the NP-complete $3$-Set Cover problem.
Let $U=\{u_1, \ldots, u_l\}$ be the universe, and let $S = \{S_1, \ldots, S_m\}$ be the set of subsets of the universe, where for every $S_i$ we have $|S_i|=3$.
The goal is then to determine whether there exist $k$ elements of $S$ the union of which equals $U$.
Given an instance of the $3$-Set Cover problem, let us construct a network, $G=(V,E)$, as follows:
\begin{itemize}
\item $V = \{v_e, t\} \cup \bigcup_{S_i \in S} \{S_i\} \cup \bigcup_{u_i \in U} \{u_i,w_i\} \cup \bigcup_{i=1}^{l+m-k+1} \{x_i\}$,
\item $E = \{(t,v_e)\} \cup \bigcup_{x_i \in V} \{(x_i,t)\} \cup \bigcup_{w_i \in V} \{(w_i,v_e),(w_i,u_i)\} \cup \bigcup_{S_i \in V} \{(S_i,v_e)\} \cup \bigcup_{u_j \in S_i} \{(S_i,u_j)\}$.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=.5\linewidth]{figures/clos-nphard}
\caption{Network used in the proof of Theorem~\ref{thrm:closeness-npcomplete}.}
\label{fig:closeness-nphard}
\end{figure}
An example of the resulting network, $G$, is illustrated in Figure~\ref{fig:closeness-nphard}. Now, consider the following instance of the problem of Local Hiding, $(G,v_e,b,c,\hat{A},\hat{R},d)$, where $G$ is the network we just constructed, $v_e$ is the evader, $b=k$ (where $k$ is the parameter of the $3$-Set Cover problem), $c$ is the closeness centrality measure, $d = 1$, $\hat{A} = \{(t,S_i): S_i \in S\}$, and $\hat{R} = \emptyset$.
From the definition of the problem, we see that the only edges that can be added to the graph are those between $t$ and the members of $S$.
Notice that any such choice of $A^*$ corresponds to selecting a subset of $|A^*|$ elements of $S$ in the $3$-Set Cover problem.
In what follows, we will show that a solution to the above instance of Local Hiding corresponds to a solution to the $3$-Set Cover problem.
First, we will show that for every $v \in V \setminus \{t,v_e\}$ and every $A^* \subseteq \hat{A}$ we either have $c(G',v) < c(G',t)$ or have $c(G',v) < c(G',v_e)$, where $G' = (V,E \cup A^*)$.
To this end, let $D(G',v)$ denote the sum of distances from $v$ to all other nodes, i.e., $D(G',v) = \sum_{w \in V \setminus \{v\}} d(v,w)$.
Note that $D(G',v)= \frac{1}{c(G',v)}$, so a smaller value of $D$ corresponds to a higher closeness centrality.
We will show that the following holds:
$$
\forall_{v \in V \setminus \{t,v_e\}} \forall_{A^* \subseteq \hat{A}} \left(D(G',v) > D(G',t) \lor D(G',v) > D(G',v_e)\right).
$$
Let $d_t$ denote $\sum_{u_i \in U} d(t,u_i) + \sum_{S_i \in S} d(t,S_i)$.
Notice also that $k \leq m$.
Next, we compute $D(G',v)$ for the different types of node $v$:
\begin{itemize}
\item $D(G',v_e) = 5l + 3m - 2k + 3$;
\item $D(G',t) = 3l + m - k + 2 + d_t$;
\item $D(G',x_i) = 6l + 3m - 2k + 3 + d_t > D(G',t)$;
\item $D(G',w_i) = 8l + 5m - 3k + 2 > D(G',v_e)$;
\item $D(G',u_i) \geq 9l + 4m - 3k + 2 > D(G',v_e)$ as $\sum_{S_j \in S}d(u_i,S_j) \geq m$;
\item $D(G',S_i)\! \geq\! 7l + 4m - 2k - 4\! >\! D(G',v_e)$ as $d(S_i,v_e)\! \geq\! 1$.
\end{itemize}
Based on this, either $t$ or $v_e$ has the highest closeness centrality, therefore $A^* \subseteq \hat{A}$ is a solution to the problem of Local Hiding if and only if $D(G',t) < D(G',v_e)$.
This is the case when $d_t < 2l + 2m - k + 1.$
Let $U_A=\{u_i \in U: \exists_{S_j \in S} u_i \in S_j \land (t,S_j) \in A^*\}$.
We have that $d_t = |A^*| + 2 (m - |A^*|) + 2 |U_A| + 3 (l-|U_A|)$ which gives us $d_t = 3l - |U_A| + 2m -|A^*|$.
Since by definition $|U_A| \leq l$ and $|A^*| \leq k$, it is possible that $d_t < 2l + 2m - k + 1$ only when $|U_A| = l$ and $|A^*| = k$, i.e., $\forall_{u_i \in U} \exists_{S_j \in S} u_i \in S_j \land (t,S_j) \in A^*$.
This solution to the problem of Local Hiding corresponds to a solution to the given instance of the $3$-Set Cover problem, which concludes the proof.
\end{proof}
\begin{theorem}
\label{thrm:betweenness-npcomplete}
The problem of Local Hiding is NP-complete given the betweenness centrality measure.
\end{theorem}
\begin{proof}
The problem is trivially in NP, since after the addition of a given set of edges $A^*$, and the removal of a given set of edges $R^*$, it is possible to compute the betweenness centrality of all nodes in polynomial time.
Next, we prove that the problem is NP-hard.
To this end, we propose a reduction from the NP-complete $3$-Set Cover problem.
Let $U=\{u_1, \ldots, u_l\}$ be the universe, and let $S = \{S_1, \ldots, S_m\}$ be the set of subsets of the universe, where for every $S_i$ we have $|S_i|=3$.
The goal is then to determine whether there exist $k$ elements of $S$ the union of which equals $U$.
Given an instance of the $3$-Set Cover problem, let us construct a network $G=(V,E)$ as follows:
\begin{itemize}
\item $V = \{v_e,t,w_1,w_2 \} \cup S \cup U \cup \bigcup_{i=1}^{\alpha} \{x_i\} \cup \bigcup_{i=1}^{\beta} \{y_i\}$, where $\alpha = m^2l(m+l+2)$ and $\beta = m^2l(k+l+2)$,
\item $E = \{(t,v_e),(w_1,w_2)\} \cup \bigcup_{x_i \in V} \{(x_i,t)\} \cup \bigcup_{y_i \in V} \{(y_i,v_e)\} \cup \bigcup_{S_i \in V} \{(S_i,v_e),(S_i,w_1)\} \cup \bigcup_{u_j \in S_i}\{(S_i,u_j)\} \cup \bigcup_{u_i \in V} \{(u_i,w_2)\} \cup \bigcup_{x_i,x_j \in V} \{(x_i,x_j)\} \cup \bigcup_{y_i,y_j \in V} \{(y_i,y_j)\}$.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=.5\linewidth]{figures/betw-nphard}
\caption{The network used in the~proof of Theorem~\ref{thrm:betweenness-npcomplete}.}
\label{fig:betweenness-nphard}
\end{figure}
An example of the resulting network is illustrated in Figure~\ref{fig:betweenness-nphard}. Consider the instance $(G,v_e,b,c,\hat{A},\hat{R},d)$ of the problem of Local Hiding, where $G$ is the network we just constructed, $v_e$ is the evader, $b=k$ (where $k$ is the parameter of the $3$-Set Cover problem), $c$ is the betweenness centrality measure, $d = 1$, $\hat{A} = \{(t,S_i): S_i \in S\}$, and $\hat{R} = \emptyset$.
From the definition of the problem, one can see that the only edges that can be added to the graph are those between $t$ and the members of $S$.
Notice that any such choice of $A^*$ corresponds to selecting a subset of $|A^*|$ elements of $S$ in the $3$-Set Cover problem.
In what follows, we will show that a solution to the above instance of Local Hiding corresponds to a solution to the $3$-Set Cover problem.
First, we will show that for every node $v \in V \setminus \{t,v_e\}$ and every $A^* \subseteq \hat{A}$ we have $c(G',v) < c(G',t)$, where $G' = (V,E \cup A^*)$.
To this end, let $B(v)$ denote the sum of percentages of shortest paths controlled by $v$ between pairs of other nodes, i.e., $B(v) = \sum_{w,w' \in V \setminus \{v\}} \frac {|\{ p \in \Pi(w,w') : v \in p \}|} {|\Pi(w,w')|}$.
Note that $B(v)$ equals $c(G',v)$ up to a positive constant factor, so comparing values of $B$ is equivalent to comparing betweenness centralities.
Next, we will show that the following holds:
$$
\forall_{v \in V \setminus \{t,v_e\}} \forall_{A^* \subseteq \hat{A}} B(v) < B(t).
$$
Let $X = \{x_1, \ldots, x_\alpha\}$ and $Y = \{y_1, \ldots, y_\beta\}$. Since $t$ controls all shortest paths between the nodes in $X$ and those in $\{v_e, w_1, w_2\} \cup Y \cup S \cup U$, we have:
$$
B(t) \geq \alpha (\beta + m + l + 3) \geq m^4 l^3 (m+l+2) + m^2l(m+l+2)^2
$$
Moreover, since $\alpha = m^2l(m+l+2)$, $\beta = m^2 l(k+l+2)$, and $k < m$, then $\alpha + \beta < 2m^2l(m+l+2)$.
For nodes other than $t$ we have:
\begin{itemize}
\item $B(x_i) = B(y_i) = 0 < B(t)$, since the nodes in $X \cup Y$ do not control any shortest paths.
\item $B(w_1) \leq (\alpha + \beta + m + 2) + \frac{m(m-1)}{2} + ml \leq 2m^2l(m+l+2) + m^2 + m + ml < (2m^2l + m)(m+l+2)< B(t)$, because $w_1$ controls some shortest paths between $w_2$ and nodes in $\{t,v_e\} \cup X \cup Y \cup S$ (there are $\alpha + \beta + m + 2$ such pairs), some shortest paths between pairs of nodes in $S$ (there are at most $\frac{m(m-1)}{2}$ such pairs), and some shortest paths between nodes in $U$ and nodes in $S$ (there are at most $ml$ such pairs).
\item $B(w_2) \leq \frac{l(l-1)}{2} + l + ml < \frac{l^2+l}{2} + ml < B(t)$, because $w_2$ controls some shortest paths between pairs of nodes in $U$ (there are at most $\frac{l(l-1)}{2}$ such pairs), some shortest paths between nodes in $U$ and $w_1$ (there are at most $l$ such pairs), and some shortest paths between nodes in $U$ and nodes in $S$ (there are at most $ml$ such pairs).
\item $B(u_i) \leq (\alpha + \beta + m + 2) + \frac{m(m-1)}{2} < B(t)$, because $u_i$ controls
some shortest paths between $w_2$ and nodes in $\{t,v_e\} \cup X \cup Y \cup S$ (there are $\alpha + \beta + m + 2$ such pairs), and some shortest paths between pairs of nodes in $S$ (there are at most $\frac{m(m-1)}{2}$ such pairs).
\item $B(S_i) \leq 3(\alpha + \beta + l + m + 2) + l + 2(\alpha + \beta + 2) \leq 5(\alpha + \beta + l + m + 2) \leq (10 m^2l + 5)(m + l + 2) < B(t)$, because $S_i$ controls some shortest paths between the nodes in $U$ that are connected to $S_i$ and the nodes in $\{t,v_e\} \cup X \cup Y \cup S \cup U$ (there are at most $3(\alpha + \beta + l + m + 2)$ such pairs), some shortest paths between $w_1$ and the nodes in $U$ (there are at most $l$ such pairs), and some of the shortest paths between nodes in $\{w_1,w_2\}$ and nodes in $\{t,v_e\} \cup X \cup Y$ (there are at most $2(\alpha + \beta + 2)$ such pairs).
\end{itemize}
Therefore, either $t$ or $v_e$ has the highest betweenness centrality.
Hence, $A^* \subseteq \hat{A}$ is a solution to the problem of Local Hiding if and only if $B(t) > B(v_e)$.
We now compute the values of $B(t)$ and $B(v_e)$, writing $N(v,w)$ for the set of common neighbours of nodes $v$ and $w$.
We have that:
$$
B(t) = \alpha(\beta + m + l + 3) + \sum_{\substack{S_i,S_j \in S :\\ (t,S_i) \in E \land (t,S_j) \in E}}\frac{1}{|N(S_i,S_j)|} + \sum_{S_i \in N(t)}\sum_{u_j \in U \setminus N(S_i)}\frac{|N(t,u_j)|}{|N(t,u_j)|+|N(v_e,u_j)|+1}
$$
as $t$ controls all shortest paths between every pair $(x_i,v)$ where $x_i\in X$ and $v\in V\setminus (X \cup \{t\})$ (there are $\alpha(\beta + m + l + 3)$ such pairs), one shortest path between each pair of nodes in $N(t)\cap S$, and the shortest paths between every pair $(v,w)$ where $v\in N(t)\cap S$ and $w\in U:N(t)\cap N(w)\neq \emptyset$ (other paths run through $v_e$ and nodes in $S$, or through $w_1$ and $w_2$). On the other hand, we have that:
$$
\begin{aligned}
B(v_e) = \beta(\alpha + m + l + 3) + \sum_{S_i,S_j \in S}\frac{1}{|N(S_i,S_j)|} & + \sum_{S_i \notin N(t)}(\alpha+1) + \sum_{u_i \in U : N(t,u_i) = \emptyset}(\alpha+1) \\
& + \sum_{S_i \in S}\sum_{u_j \in U \setminus N(S_i)}\frac{|N(v_e,u_j)|}{|N(t,u_j)|+|N(v_e,u_j)|+1}
\end{aligned}
$$
as $v_e$ controls all shortest paths between nodes in $Y$ and all other nodes (there are $\beta(\alpha + m + l + 3)$ such pairs), one shortest path between each pair of nodes in $S$, paths between nodes in $S$ and nodes in $U$, and all shortest paths between $\{t\} \cup X$ and nodes $\{S_i \in S : S_i \notin N(t)\} \cup \{u_i \in U : N(t,u_i) = \emptyset\}$. Thus, we have:
$$
B(v_e)-B(t) = (\beta-\alpha) (m+l+3)+\sum_{\substack{S_i,S_j \in S :\\ (t,S_i) \notin E \lor (t,S_j) \notin E}} \frac{1}{|N(S_i,S_j)|}+\Delta SU+\sum_{S_i \notin N(t)}(\alpha+1)+\sum_{u_i \in U : N(t,u_i) = \emptyset}(\alpha+1)
$$
where $0 < \Delta SU \leq ml$.
Note that $B(v_e)$ decreases with $\left|A^*\right|$ and also decreases with $\left|\{u_i \in U:\exists_{S_j \in N(t)} u_i \in S_j\}\right|$. Next, we prove that:
\begin{enumerate}[leftmargin={1cm}]
\item \label{pt1:leaders-betweenness-npcomplete} If $|A^*| = k$ and for every $u_i \in U$ there exists $S_j \in N(t)$ such that $u_i \in S_j$, then $B(v_e) < B(t)$;
\item \label{pt2:leaders-betweenness-npcomplete} If $|A^*| = k$ and there exists $u_i \in U$ such that for every $S_j \in N(t)$ we have $u_i \notin S_j$, then $B(v_e) > B(t)$.
\end{enumerate}
Regarding point~\ref{pt1:leaders-betweenness-npcomplete}, we have:
$$
B(v_e)-B(t) = (\beta-\alpha) (m + l + 3) + (m - k)(\alpha + 1) +\sum_{\substack{S_i,S_j \in S :\\ (t,S_i) \notin E \lor (t,S_j) \notin E}}\frac{1}{|N(S_i,S_j)|} + \Delta SU.
$$
Now since $|\{S_i,S_j \in S : (t,S_i) \notin E \lor (t,S_j) \notin E\}|=\frac{m(m-1)-k(k-1)}{2}=\frac{(m-k)(m+k-1)}{2}$, and $|N(S_i,S_j)| \geq 2$, then we have:
$$
B(v_e)-B(t) \leq (\beta - \alpha) (m + l + 3) +(m - k)\left(\alpha + 1 + \frac{\Delta SU}{m-k} + \frac{m+k-1}{4}\right).
$$
By substituting the values of $\alpha$ and $\beta$, and observing that $\Delta SU < ml$ and $k < m$, we get:
$$
B(v_e) - B(t) < m^2l(k-m) (m+l+3) +(m-k)(m^2l(m+l+2)+1+ml+2m-1),
$$
which gives us:
$$
B(v_e) - B(t) < (k-m)m^2l + (m-k)(ml+2m) = (k-m)m(ml-l-2) < 0.
$$
Hence, if $|A^*| = k$ and for every $u_i \in U$ there exists $S_j \in N(t)$ such that $u_i \in S_j$, then $B(v_e) < B(t)$.
Regarding point~\ref{pt2:leaders-betweenness-npcomplete}, since there exists $u_i \in U$ such that for every $S_j \in N(t)$ we have $u_i \notin S_j$, we obtain:
$$
B(v_e)-B(t) \geq (\beta-\alpha) (m+l+3)+(m-k)(\alpha+1)+(\alpha+1) + \sum_{\substack{S_i,S_j \in S :\\ (t,S_i) \notin E \lor (t,S_j) \notin E}}\frac{1}{|N(S_i,S_j)|} + \Delta SU.
$$
Since $\sum_{\substack{S_i,S_j \in S :\\ (t,S_i) \notin E \lor (t,S_j) \notin E}}\frac{1}{|N(S_i,S_j)|} > 0$ and $\Delta SU > 0$, then we have:
$$
B(v_e) - B(t) > (\beta-\alpha) (m + l + 3)+(m-k+1)(\alpha+1).
$$
By substituting the values of $\alpha$ and $\beta$ we get:
$$
B(v_e)-B(t)>m^2l(k-m)(m + l + 3)+(m-k+1) (m^2l(m+l+2)+1)
$$
which gives us:
$$
B(v_e)-B(t)>m^2l(k-m)+m^2l(m+l+2)=m^2l(k+l+2) > 0
$$
Hence, if $|A^*| = k$ and there exists $u_i \in U$ such that for every $S_j \in N(t)$ we have $u_i \notin S_j$, then $B(v_e) > B(t)$.
Thus, the solution to the problem of Local Hiding corresponds to a solution to the given instance of the $3$-Set Cover problem, which concludes the proof.
\end{proof}
\section{The Seeker-Evader Game}
\label{sec:seeker-evader-game}
\textbf{Player strategies:}
We model the problem of strategically hiding in a network as a game between two players: the \emph{evader} and the \emph{seeker}. In particular, the seeker analyzes the network using a set of strategies, $T_s$, consisting of the fundamental centrality measures: degree, closeness, betweenness, and eigenvector. On the other hand, the goal of the evader is to decrease her position in the centrality-based ranking of all nodes, while maintaining her influence within the network (notice that the theoretical problems presented in Section~\ref{sec:complexity} are focused on providing safety to the evader by lowering her ranking position, while here we additionally allow the evader to take into consideration her influence in the network). To this end, she utilizes a set of strategies, $T_e$, consisting of combinations of edge modifications in her neighbourhood, with the maximum number of permitted modifications being specified by a budget, $b$.
In our experiments, we pay particular attention to the only available evader strategy in the literature, namely ROAM (Remove One Add Many)~\cite{waniek2018hiding}. In particular, the ROAM heuristic involves two steps.
\textit{Step~1:} Remove the edge between the evader, $v_e$, and its neighbour of choice, $v_0$; \textit{Step~2:} Connect $v_0$ to $b-1$ nodes of choice, who are neighbours of $v_e$ but not of $v_0$. This simple heuristic has been shown to be rather effective in practice.
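A minimal sketch of a single ROAM step is given below; since the heuristic leaves the choices of $v_0$ and of the $b-1$ reconnected neighbours open, the sketch simply picks them in iteration order:
\begin{verbatim}
def roam(G, v_e, b):
    neighbours = list(G.neighbors(v_e))
    if not neighbours:
        return
    v0 = neighbours[0]
    G.remove_edge(v_e, v0)                 # Step 1
    candidates = [w for w in G.neighbors(v_e)
                  if not G.has_edge(v0, w)]
    for w in candidates[:b - 1]:           # Step 2
        G.add_edge(v0, w)
\end{verbatim}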
\medskip
\noindent \textbf{Utility functions:}
For any given pair of strategies, $(t_s,t_e)$, such that $t_s\in T_s$ and $t_e\in T_e$, the utility of the evader is:
$$
U_e(\phi,t_s,t_e) = \phi U^R_e(t_s,t_e) + (1-\phi)U^{I}_e(t_e)
$$
where:
\begin{itemize}
\item $U^R_e(t_s,t_e) \in \mathbb{R}$ is the evader's utility from the change in her rank according to the centrality measure $t_s$ chosen by the seeker, when the evader plays strategy $t_e$,
\item $U^{I}_e(t_e) \in \mathbb{R}$ is the evader's utility from the change in her \textit{influence} within the network when she plays strategy $t_e$,
\item $\phi \in \left\{\frac{1}{m+1},\ldots,\frac{m}{m+1}\right\}$ represents the evader's evaluation of $U^R_e(t_s,t_e)$ relative to $U^{I}_e(t_e)$; we refer to $\phi$ as the \emph{type} of the evader, with $m$ being the number of types.
\end{itemize}
Next, we specify how $U^R_e(t_s,t_e)$ and $U^{I}_e(t_e)$ are calculated (Figure~\ref{fig:util} depicts both functions). Let $r_e(t_s,t_e)$ be the evader's ranking when she plays strategy $t_e$ and the seeker plays strategy $t_s$. Then, $U^R_e(t_s,t_e)$ is calculated as follows:
\[
U^R_e(t_s,t_e) = \frac{1}{\alpha \left(1+e^{-k(r_e(t_s,t_e)-d)}\right)}-\frac{\beta}{\alpha},
\]
where $e$ is Euler's number, $k$ is the curve steepness, $d$ is the inflection point, $\beta = \frac{1}{1+e^{-k(1-d)}}$ and $\alpha = (1-2\beta)$. This formula has the following desirable properties:
\begin{itemize}
\item The evader's utility is $0$ when ranked first, i.e., fully exposed. Formally, $U^R_e(t_s,t_e) = 0$ when $r_e(t_s,t_e)=1$.
\item The evader's utility increases when she becomes more hidden. Formally, $U^R_e(t_s,t_e)$ increases with $r_e(t_s,t_e)$.
\item $U^R_e(t_s,t_e)$ is \textit{convex} for $1 \leq r_e(t_s,t_e)\leq d$, meaning that the marginal gain in utility increases with ranking drop, as long as the evader does not reach position $d$.
\item $U^R_e(t_s,t_e)$ is \textit{concave} for $d \leq r_e(t_s,t_e) \leq n$, i.e., dropping beyond position $d$ produces diminishing returns to the evader.
\end{itemize}
Finally, note that $U^R_e(t_s,t_e) \rightarrow 1+\frac{\beta}{\alpha}$ when $r_e(t_s,t_e) \rightarrow n$. Having specified how $U^R_e(t_s,t_e)$ is calculated, we now move to $U^{I}_e(t_e)$. Recall that the evader's influence is measured according to either the \emph{independent cascade} model or the \emph{linear threshold} model~\cite{kempe2003maximizing}. Regardless of which model is used, let $\Delta_e(t_e)$ denote the relative change in the evader's influence when she plays strategy $t_e$, i.e., $\Delta_e(t_e) = (I_e(t_e)-I^0_e)/I^0_e$, where $I_e(t_e)$ is the evader's influence when she plays strategy $t_e$, and $I^0_e$ is the evader's initial influence before playing. Then, $U^I_e(t_e)$ is calculated as follows:
$$
U^I_e(t_e) = \begin{cases}
\Delta_e(t_e), & \mbox{if } \Delta_e(t_e) > 0 \\
-\Delta_e(t_e)^2, & \mbox{if } \Delta_e(t_e) \le 0
\end{cases}
$$
This formula has some desired properties. Firstly, $U^I_e(t_e)$ is concave when $\Delta_e(t_e) \le 0$, meaning that the marginal loss in utility grows with the loss in influence (this is intuitive in scenarios where the evader does not mind a negligible drop in influence in return for a better disguise, but strongly opposes a significant drop in influence). Secondly, when $\Delta_e(t_e) \geq -1$, we have $U^I_e(t_e) \geq -1$, and as $\Delta_e(t_e)$ increases, $U^I_e(t_e)$ reaches a similar order of magnitude as that of $U^R_e(t_s,t_e)$, meaning that the equilibrium is not dominated by any of those two utilities.
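Both utility components are straightforward to evaluate; the sketch below implements them with the parameter values used in Figure~\ref{fig:util} ($d=15$, $k=3/d$) as defaults:
\begin{verbatim}
import math

def u_rank(r, d=15, k=None):
    k = 3 / d if k is None else k
    beta = 1 / (1 + math.exp(-k * (1 - d)))
    alpha = 1 - 2 * beta
    return 1 / (alpha * (1 + math.exp(-k * (r - d)))) - beta / alpha

def u_influence(delta):  # delta = (I_e - I_e^0) / I_e^0
    return delta if delta > 0 else -delta**2

def u_evader(phi, r, delta):
    return phi * u_rank(r) + (1 - phi) * u_influence(delta)
\end{verbatim}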
\begin{figure}[t]
\center
\includegraphics[width=.3\linewidth]{figures/utility-rank}
\includegraphics[width=.3\linewidth]{figures/utility-infl}
\caption{The evader's utility functions for $d = 15$ and $k = \frac{3}{d}$.
}
\label{fig:util}
\end{figure}
Let us now turn our attention to the utility of the seeker. In our analysis we consider two different versions of the game: \emph{zero-sum game} and \emph{non-zero-sum game}. In the zero-sum version of the game we assume that the seeker is interested in minimizing the total utility of the evader, i.e., the seeker's utility is $U_s = -U_e$. In the non-zero-sum version of the game we assume that the seeker is interested solely in identifying the evader, i.e., the seeker's utility is $U_s = -U_e^R$. Notice that in the latter version of the game the seeker completely disregards any utility that the evader might gain from the change in her influence. We assume that the payoffs and the distribution of evader types are common knowledge, while the actual evader's type is private.
\medskip
\noindent\textbf{The Stackelberg game:}
Our model allows for mixed strategies. More specifically, let $p_{s}(t_s)$ be the probability that the seeker plays pure strategy $t_s\in T_s$. Moreover, let $p(\phi)$ be the probability that the evader type is $\phi$, and let $p^{\phi}_{e}(t_e)$ be the probability that an evader of type $\phi$ plays pure strategy $t_e\in T_e$. Since the evader moves second, i.e., she knows the strategy of the seeker, we can restrict her available strategies to pure ones only. Hence, the probability that an evader of type $\phi$ plays pure strategy $t_e\in T_{e}$ is $p_{e}^{\phi}(t_e) \in \{0,1\}$. The seeker's objective is to maximize her expected payoff. This optimization problem can be formulated as a Mixed-Integer Quadratic Program:
\begin{alignat*}{2}
& \text{max} && \sum_{\phi \in \Phi} \sum_{t_s\in T_{s}} \sum_{t_e\in T_{e}} p(\phi) p^{\phi}_{e}(t_e) p_{s}(t_s) U_{s}(\phi,t_s,t_e) \\
& \text{s.t.} & \quad & \begin{aligned}[t]
\ &\sum_{t_s\in T_{s}} p_{s}(t_s) = 1 \\
\ &\sum_{t_e\in T_{e}} p^{\phi}_{e}(t_e) = 1 \\
\ &\lambda \ge \sum_{t_s\in T_{s}} p_{s}(t_s)U_e(\phi,t_s,t_e) \\
\ &\lambda \le (1-p^{\phi}_{e}(t_e))\eta + \sum_{t_s\in T_{s}} p_{s}(t_s)U_e(\phi,t_s,t_e)
\end{aligned}
\end{alignat*}
The first and second constraints correspond to the probability distributions over the sets of strategies available to the players. As for $\eta\in\mathbb{R}$, it is an arbitrarily large number. This way, the third and fourth constraints ensure that, by solving the problem, we get:
$$
\lambda = \max\limits_{t_e\in T_e} \sum_{t_s\in T_{s}} p_{s}(t_s) U_e(\phi,t_s,t_e).
$$
This is because, when $\eta$ is arbitrarily large, $(1-p^{\phi}_{e}(t_e))\eta$ reflects the fact that the evader will play the strategy that maximizes her expected payoff. Finally, in order to solve the problem efficiently, we linearize it by substituting variables: $z^{\phi}(t_s,t_e) = p^{\phi}_{e}(t_e)p_{s}(t_s)$.
We use the linearization procedure described by Paruchuri et al.~\cite{paruchuri2008playing}.
\section{Empirical Analysis}
\begin{table}[t]
\centering
\begin{tabular}{lccc}
\toprule
Network & Network & All & Undominated \\
& size & strategies & strategies \\
\midrule
WTC & 36 & 14190 & 60 \\
Bali & 17 & 280840 & 7 \\
Madrid & 70 & 45760 & 5 \\
Scale-Free & 30 & 61365 & 17 \\
Small-World & 30 & 902 & 36 \\
Erdos-Renyi & 30 & 4122 & 47 \\
\bottomrule
\end{tabular}
\caption{The number of possible strategies vs. the number of undominated strategies (for random networks, the number is taken as the average over $100$ such networks).}
\label{table:datasets}
\end{table}
\subsection{Network Datasets}
We now briefly describe the network datasets used in our analysis.
We consider three standard models of random networks (for each model, we generate $100$ networks consisting of $30$ nodes):
\begin{itemize}
\item \emph{Scale-free} networks, generated using the Barabasi-Albert model~\cite{barabasi1999emergence}. The number of links added with each node is $3$.
\item \emph{Small-world} networks, generated using the Watts-Strogatz model~\cite{watts1998collective}. In our experiments, the expected average degree is $10$.
\item \emph{Random graphs} generated using the Erdos-Renyi model ~\cite{erdds1959random}. In our experiments, the expected average degree is $10$.
\end{itemize}
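All three models are available in networkx; a sketch of the generation step follows (the rewiring probability of the small-world model is not fixed by the description above, so the value used here is an illustrative assumption):
\begin{verbatim}
import networkx as nx

n = 30
scale_free  = nx.barabasi_albert_graph(n, 3)        # 3 links per node
small_world = nx.watts_strogatz_graph(n, 10, 0.25)  # avg degree 10
random_g    = nx.erdos_renyi_graph(n, 10 / (n - 1)) # exp. avg degree 10
\end{verbatim}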
We also analyze a number of real-life network datasets.
We consider three terrorist networks, namely:
\begin{itemize}
\item \emph{WTC}---the network of terrorists responsible for the WTC 9/11 attack~\cite{Krebs:2002a};
\item \emph{Bali}---the network of terrorists behind the 2002 Bali attack~\cite{hayes2006connecting};
\item \emph{Madrid}---the network of terrorists responsible for the 2004 Madrid train bombing~\cite{hayes2006connecting}.
\end{itemize}
Finally, we consider anonymized fragments of three social media networks, namely Facebook, Twitter and Google+~\cite{leskovec2012learning}.
The networks that we consider in our experiments are of moderate size, since for every strategy of the evader we need to compute the ranking produced by each centrality measure, which in turn requires us to compute the centrality of all nodes.
\subsection{Experimental Process}
For each network, following the work by Waniek et al.~\cite{waniek2018hiding}, the evader is chosen as the node with the smallest sum of centrality ranks (based on Degree, Closeness, Betweenness and Eigenvector); ties are broken uniformly at random. The evader type $\phi$ is sampled uniformly at random from the set $\{0.2,0.4,0.6,0.8\}$.
All results for random networks are presented as an average over $100$ samples.
While the number of pure strategies of the seeker is rather small (we assume them to be the four main centrality measures), the number of pure strategies of the evader is much larger, since every possible way of rewiring the evader's neighbourhood may be considered a unique strategy. This very quickly becomes computationally challenging even for small networks and small budgets. For instance, in the case of the WTC network, the number of the evader's strategies for budget $b=3$ is $14,190$, for $b=4$ it is $148,995$, and for $b=5$ it is $1,221,759$.
With this in mind, to study the evader's entire space of possible strategies, we focus first on a version of the game that is more computationally feasible. More specifically, we analyze the \emph{zero-sum} version of the game, where the seeker's gain equals the evader's loss. This implies that the seeker is not only interested in the evader's centrality (as in the aforementioned model), but is also interested in the evader's influence (this is implied by the fact that the evader's utility does not only depend on her centrality but also on her influence). Importantly, this version of the game can be formulated as a linear program; hence, it is much easier to solve. By analyzing the zero-sum version of the model, we aim to understand the properties of the evader's most rewarding strategies. This understanding will help us identify effective heuristics for the evader, which in turn would enable us to study the original, more computationally-challenging version of the game.
\subsection{The Zero-Sum Version}
For each network we generated the payoff matrices corresponding to budgets $3$ and $4$ and both influence measures. We were also able to consider $10\%$ of the strategies corresponding to budget $5$ (except for the WTC network, where we considered $100\%$). Our main observations regarding the strategies are threefold.
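Given such a payoff matrix, the zero-sum game can be solved with a standard minimax linear program. In the sketch below (using scipy), entry $A[i,j]$ is the evader's utility when the seeker plays centrality $i$ and the evader plays strategy $j$, and the seeker chooses the mixed strategy that minimizes the evader's best response:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def seeker_minimax(A):
    m, n = A.shape
    # Variables: p_1, ..., p_m, v.  Minimize v subject to
    # (A^T p)_j <= v for every evader strategy j and sum(p) = 1.
    c = np.zeros(m + 1); c[-1] = 1.0
    A_ub = np.hstack([A.T, -np.ones((n, 1))])
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * m + [(None, None)])
    return res.x[:m], res.x[-1]  # seeker's mix, game value
\end{verbatim}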
Firstly, \textit{most of the evader's strategies are strongly dominated, regardless of the evader's type}. Specifically, given different networks, Table~\ref{table:datasets} specifies the number of all strategies as well as those that are undominated. As shown, less than 1\% of strategies are undominated, and this percentage is even smaller for larger networks.
\begin{figure}[t]
\center
\includegraphics[width=0.32\linewidth]{figures/histograms/new_wtc_3}
\includegraphics[width=0.32\linewidth]{figures/histograms/new_bali_3}
\includegraphics[width=0.32\linewidth]{figures/histograms/new_madrid_3}
\includegraphics[width=0.32\linewidth]{figures/histograms/new_BA_3}
\includegraphics[width=0.32\linewidth]{figures/histograms/new_WS_3}
\includegraphics[width=0.32\linewidth]{figures/histograms/new_ER_3}
\caption{The distributions of the evader's payoffs for budget $3$. Values are provided for evader type $\phi = 0.5$ and averaged over the seeker's equilibrium strategies. For each network, the red and black lines denote the average payoff and 0, respectively.}
\label{fig:hists}
\end{figure}
\begin{figure*}[t]
\center
\includegraphics[width=.26\linewidth]{figures/heat/wtc3-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/bali3-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/madrid3-heat}
\includegraphics[width=.26\linewidth]{figures/heat/wtc4-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/bali4-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/madrid4-heat}
\includegraphics[width=.26\linewidth]{figures/heat/wtc5-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/bali5-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/madrid5-heat}
\caption{
The evader's average payoff given the three terrorist networks (WTC, Bali, and Madrid), and given budgets $3$, $4$, and $5$. The x-axis represents the number of neighbours the evader is disconnected from, while the y-axis represents the number of edges added between the evader's neighbours. The color intensity of each cell represents the evader’s average payoff for given strategy.
}
\label{fig:heat:WS:ER}
\end{figure*}
\begin{figure}[t!]
\center
\includegraphics[width=.26\linewidth]{figures/heat/BA3-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/WS3-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/ER3-heat}
\includegraphics[width=.26\linewidth]{figures/heat/BA4-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/WS4-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/ER4-heat}
\includegraphics[width=.26\linewidth]{figures/heat/BA5-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/WS5-heat}\hfill
\includegraphics[width=.26\linewidth]{figures/heat/ER5-heat}
\caption{
Same as Figure~\ref{fig:heat:WS:ER}, but for scale-free, small-world, and random-graph networks.
}
\label{fig:heat:wtc:bali}
\end{figure}
Secondly, for any given equilibrium strategy of the evader, the difference in the seeker's payoff between her optimal strategy and other strategies is minimal (less than 1\%). This suggests that, for the zero-sum game, the seeker could, in principle, use any centrality measure to analyse the network, without compromising much efficiency. Conversely, for any given equilibrium strategy of the seeker, the difference in the evader's payoff between her optimal strategy and other strategies is much more pronounced (more than 100\%, see Figure~\ref{fig:hists}). Hence, \textit{the outcome of the game relies heavily on the evader's choice of strategy, while the seeker's choice of centrality measure has negligible impact}.
Thirdly, the strategies that yield similar payoffs seem to involve rewiring the network in similar ways; see Figures~\ref{fig:heat:WS:ER} and~\ref{fig:heat:wtc:bali}. Interestingly, \textit{the ROAM heuristic of Waniek et al.~\cite{waniek2018hiding} is often among the evader's most rewarding strategies}.
Based on these observations, we next analyze the non-zero-sum version of the game when the evader uses the ROAM heuristic.
\subsection{The Non-Zero-Sum Version}
In this version of the game, we assume that the evader's strategies are instances of the ROAM heuristic. More specifically, the evader's total budget $b$ is used to repeatedly run ROAM. We write ROAM($x$), where $x$ is the number of edges added between the evader's neighbours. The budget of a single iteration is between $1$ and $\frac{b}{2}$, i.e., there are at least two iterations. The evader repeatedly runs ROAM until the entire budget $b$ is spent. For example, for $b = 10$, we have the following set of evader strategies: $\{$ROAM($1$) repeated $5$ times, ROAM($2$) repeated $3$ times + ROAM($0$), ROAM($3$) repeated twice + ROAM($1$), ROAM($4$) repeated twice$\}$. This strategy set can be enumerated as sketched below.
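Under the assumption (consistent with the $b=10$ example) that a single call of ROAM($x$) consumes a budget of $x+1$, i.e., one edge removal plus $x$ additions, the enumeration reads:
\begin{verbatim}
def roam_strategies(b):
    # Each strategy repeats ROAM(x) and spends any remainder on a
    # final, smaller ROAM call; assumes ROAM(x) costs x + 1.
    strategies = []
    for x in range(1, b // 2):
        reps, rem = divmod(b, x + 1)
        plan = [x] * reps + ([rem - 1] if rem else [])
        strategies.append(plan)
    return strategies

# roam_strategies(10) == [[1,1,1,1,1], [2,2,2,0], [3,3,1], [4,4]]
\end{verbatim}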
We calculate the equilibrium strategy profiles for different networks. For each network, we consider budgets $b \in \{5,10,15,20,25,35\}$, assuming that $b$ is no more than $25\%$ of all edges in the network. This cap is meant to limit the changes in the network characteristics resulting from the evader's actions.
Figure~\ref{fig:mixed} illustrates the mixed strategies played by the seeker in equilibrium for different networks and evader budgets. For each centrality, Tables~\ref{tab:averages_rg} and~\ref{tab:averages_reallife} present the average probability of it being used in different networks.
The equilibrium strategies show, on one hand, which heuristics the evader should use to minimize her centrality while maintaining as much influence as possible. On the other hand, they indicate which centrality the seeker should adopt to have the greatest chance of identifying the evader among the top nodes in the network. Our first key observation in the non-zero-sum game setting is that the choice of the strategy by the seeker has a much greater impact on her payoff than in the zero-sum game. Hence, in what follows, we will focus particularly on the strategies of the seeker, i.e., we will consider which centrality a network analyzer should use when facing a strategic evader.
Regarding the results for the randomly-generated networks, we observe clear, robust patterns, suggesting that it is possible to identify some combination(s) of centrality measures that can be used against the evader. In particular:
\begin{itemize}
\item \emph{Scale-free networks:} degree centrality is used almost exclusively. Due to the power-law distribution of nodes' degrees in scale-free networks, the ``hubs'' have extremely high degree, and the evader is most certainly one of them. As such, even with a large budget, any attempts to reduce the evader's position in the degree-based ranking have limited impact.
\item \emph{Small-world networks:} eigenvector centrality consistently proves to be the most difficult to manipulate; it is played by the seeker in almost every small-world network.
\item \emph{Random graph networks:} for low values of the evader's budget, eigenvector centrality is the most effective. However, for larger budgets, it is often replaced by closeness centrality. This shift occurs when the budget reaches about $15$, regardless of the network size.
\end{itemize}
\begin{figure*}[th]
\centering
\begin{subfigure}[b]{.32\linewidth}
\center
\includegraphics[width=\linewidth]{figures/heatmap_ba}
\caption{Scale-free}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\center
\includegraphics[width=\linewidth]{figures/heatmap_er}
\caption{Random graphs}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\center
\includegraphics[width=\linewidth]{figures/heatmap_ws}
\caption{Small-world}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\center
\includegraphics[width=\linewidth]{figures/heatmap_social-big}
\caption{Social media}
\end{subfigure}
\begin{subfigure}[b]{.32\linewidth}
\center
\includegraphics[width=\linewidth]{figures/heatmap_ter}
\caption{Terrorist networks}
\end{subfigure}
\caption{
The seeker's equilibrium strategies given the evader types $\{0.2,0.4,0.6,0.8\}$, in (a) \emph{scale-free}, (b) \emph{random graph} and (c) \emph{small-world} networks with $100$, $250$, $500$, $750$ and $1000$ nodes, as well as in (d) social media and (e) terrorist networks. Results are presented for $d=15$, and the independent cascade influence model. A darker color indicates that the corresponding centrality measure has a greater weight in the seeker's mixed strategy.
}
\label{fig:mixed}
\end{figure*}
\begin{table}[t]
\centering
\begin{tabular}{lcccc}
\toprule
Network & $c_{betw}$ & $c_{clos}$ & $c_{degr}$ & $c_{eig}$ \\
\midrule
\emph{Scale-free} & 0 & 0.04 & 0.94 & 0.04 \\
\emph{Random graphs} & 0.05 & 0.08 & 0.25 & 0.62 \\
\emph{Small-world} & 0 & 0 & 0.06 & 0.94 \\
\bottomrule
\end{tabular}
\caption{The average probability of using each centrality given randomly-generated networks.}
\label{tab:averages_rg}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{lcccc}
\toprule
Network & $c_{betw}$ & $c_{clos}$ & $c_{degr}$ & $c_{eig}$ \\
\midrule
WTC & 0.04 & 0.03 & 0.03 & 0.89 \\
Bali & 0.39 & 0.27 & 0.33 & 0 \\
Madrid & 0.27 & 0 & 0 & 0.73 \\
\midrule
Overall Terrorist & 0.23 & 0.10 & 0.12 & 0.54 \\
\midrule
Facebook & 0 & 0.14 & 0 & 0.86 \\
Google+ & 0 & 0.14 & 0 & 0.86 \\
Twitter & 0 & 0.56 & 0.44 & 0 \\
\midrule
Overall Social & 0 & 0.28 & 0.15 & 0.57 \\
\bottomrule
\end{tabular}
\caption{The average probability of using each centrality given different real-life networks.}
\label{tab:averages_reallife}
\end{table}
Regarding the results for the real-life networks, we also find regularities. Overall, for the networks with lower average clustering coefficient and lower density (Madrid and WTC attacks, Facebook, Google+), eigenvector centrality seems to be played most often. Furthermore, degree centrality is never played against the evader in larger networks. In more detail:
\begin{itemize}
\item \emph{Covert organizations}: for the WTC 9/11 attack and the Madrid train attack networks, eigenvector centrality is played almost exclusively. On the other hand, for the Bali attack network, degree and betweenness centralities are chosen. This last network, in addition to being the smallest, consists of two subnetworks connected by one node---Samudra---the leader of the terrorist organization. This atypical topology of the network may be responsible for the difference. Moreover, the average clustering coefficient and the density for the Bali network are much greater than for the other networks.
\item \emph{Social media}: eigenvector centrality is the most frequent choice for Facebook and Google+ networks, but for the Twitter network it is replaced by closeness and betweenness. This could be due to the former networks having a lower density and average clustering coefficient than the last one, making them more similar to small-world networks.
\end{itemize}
The above analysis of equilibrium strategies, both for real-life and randomly-generated networks, allows us to derive a number of policy recommendations:
\begin{itemize}
\item Eigenvector centrality should be used by the seeker in networks exhibiting small-world properties. This finding is supported by the results for both randomly generated small-world networks and real-life social media networks.
\item Degree centrality should be used by the seeker in scale-free networks, as evidenced by the results for Barab\'asi-Albert networks. However, since those networks exhibit some small-world properties, eigenvector centrality can be considered as a second choice.
\item For networks that resemble random graphs, eigenvector centrality proves to be useful, at least against evaders whose budget is small. As for larger budgets, closeness centrality yields superior results.
\item For two of the three terrorist networks under consideration, eigenvector centrality dominates the alternatives, highlighting its potential benefits when facing covert networks.
\end{itemize}
In general, eigenvector centrality seems to be a reliable choice for a variety of network types. Although for some networks it is the second best choice, generally it outperforms other measures, and seems to be more resilient against strategic manipulation.
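As an aside, in the zero-sum variant of the game the seeker's equilibrium strategy reduces to the max-min mixed strategy of a matrix game, which can be computed by linear programming; the non-zero-sum equilibria analyzed above require more general solvers. The following hedged Python sketch illustrates the computation; the payoff matrix is a made-up placeholder, not data from our experiments:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Rows: seeker's centralities; columns: evader's heuristics (placeholders).
payoff = np.array([[0.6, 0.2, 0.5],    # betweenness
                   [0.5, 0.4, 0.3],    # closeness
                   [0.9, 0.1, 0.4],    # degree
                   [0.7, 0.6, 0.5]])   # eigenvector
m, n = payoff.shape
# Variables: mixed strategy x_1..x_m and game value v; maximize v.
c = np.zeros(m + 1)
c[-1] = -1.0                                    # linprog minimizes, so use -v
A_ub = np.hstack([-payoff.T, np.ones((n, 1))])  # v - x^T A[:, j] <= 0 for all j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(x) = 1
b_eq = np.array([1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)])
print("seeker mixed strategy:", res.x[:m], "value:", res.x[-1])
\end{verbatim}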
\section{Conclusions}
We investigated the problem of concealing the importance of an individual in a social network, where both the evader, i.e., the person who wishes to hide, and the seeker, i.e., the party analyzing the network, act strategically. We focused on settings where the evader cannot rewire edges between complete strangers, but instead can only modify connections involving her neighbours in the network. We showed that even in this simplified setting, the problem of finding an optimal way to hide from the most fundamental centrality measures is NP-complete. In light of these hardness results, we analyzed a number of instances of the game under both the zero-sum and the non-zero-sum payoffs; this highlighted some potential policy implications for network analyzers in the face of a strategic evader.
For future work, we intend to study this setting more rigorously, e.g., by analyzing the case in which multiple evaders are acting simultaneously, and more broadly, e.g., by considering a wider range of centrality measures available to the seeker. Another interesting follow-up of this study is to analyze the problem of hiding from link-prediction algorithms under the assumption that both the evader and the seeker act strategically.
\section*{Acknowledgments}
Tomasz Michalak was supported by the Polish National Science Centre (grant 2016/23/B/ST6/03599).
Yevgeniy Vorobeychik was supported by the National Science Foundation (IIS-1903207, IIS-1905558) and Army Research Office (MURI W911NF1810208).
Kai Zhou was supported by PolyU (UGC) Internal Fund (1-BE3U).
For an earlier version of this work, Marcin Waniek was supported by the Polish National Science Centre (grant 2015/17/N/ST6/03686).
\clearpage
\bibliographystyle{abbrv}
|
1,108,101,566,394 | arxiv | \section{Introduction}
By charged current quasielastic scattering (CCQE) one usually understands the reaction in which the elementary process
\begin{equation}
\nu_l (p) \,+ \,n(k) \rightarrow l^-(k')\, + \, p(p')
\label{qe}
\end{equation}
takes place inside the nucleus. CCQE, the largest reaction channel for $E_\nu \lesssim 2$~GeV, is of cardinal importance for oscillation experiments that rely on the detection of muons (electrons) in $\nu_\mu$ disappearance ($\nu_e$ appearance) searches. It is also the channel that can be more reliably used for a kinematical neutrino energy reconstruction, an indispensable exercise for a precise determination of oscillation parameters in long-baseline accelerator experiments. Moreover, quasielastic scattering is interesting by itself and has, indeed, been carefully investigated with electron beams both experimentally and theoretically with a large variety of models~\cite{Paviabook,Benhar:2006wy,Gil:1997bm,Amaro:2006if}. With neutrinos it is possible to study different properties of the nuclear response in the axial sector that are not (easily) accessible in electron scattering experiments. Provided that nuclear effects are under control, CCQE could be a source of information about the nucleon axial form factor, often parametrized as
\begin{equation}
\label{FA}
F_A (Q^2) = g_A \left( 1 + \frac{Q^2}{M_A^2} \right)^{-2} \,,
\end{equation}
where $Q^2 = -(k-k')^2$ and $M_A$ is the so-called axial mass.
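For reference, Eq.~(\ref{FA}) is straightforward to evaluate numerically; in the Python sketch below, the larger axial mass in the second call is only an illustrative value:
\begin{verbatim}
def axial_form_factor(Q2, MA=1.0, gA=1.267):
    """Dipole axial form factor F_A(Q^2); Q2 in GeV^2, MA in GeV."""
    return gA / (1.0 + Q2 / MA**2) ** 2

print(axial_form_factor(0.25))           # F_A at Q^2 = 0.25 GeV^2, M_A = 1 GeV
print(axial_form_factor(0.25, MA=1.35))  # same Q^2 with a larger axial mass
\end{verbatim}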
The definition of CCQE given above already implies the assumption that the neutrino-nucleus interaction takes place predominantly on a single nucleon (impulse approximation) although at small momentum transfer $|\mathbf{q}| = |\mathbf{k} - \mathbf{k'}|$ (in the Laboratory frame), collective effects involving several nucleons should play a role. On the other side, as the excitation energy ($\omega=k_0-k_0'$) increases, inelastic channels
\begin{equation}
\nu_l (p) \,+ \,N(k) \rightarrow l^-(k')\, + \, X(p')
\label{inel}
\end{equation}
with $X=(\mathrm{ph}) \,N$, $\pi \,N$, $\dots$ start to open. As these processes are not always identified experimentally (pions can be absorbed and the nuclear products may not be all detected), they cannot be separated from CCQE in a model independent way.
The MiniBooNE experiment, running with $\langle E_\nu \rangle \sim 750$~MeV on a CH$_2$ target,
has collected the largest sample available so far for low energy $\nu_\mu$ CCQE~\cite{:2007ru}. After subtracting the non CCQE background, mainly from $\Delta(1232)$ excitation, using the NUANCE event generator~\cite{Casper:2002sd}, the CCQE data set was analyzed with the relativistic Global Fermi Gas model of Smith and Moniz (SM)~\cite{Smith:1972xh}. The shape of the muon angular and energy distributions averaged over the $\nu_\mu$ flux $\langle d\sigma / d\cos{\theta_\mu} dE_\mu \rangle$ could be described with rather standard values of the Fermi momentum $p_F = 220$~MeV and binding energy $E_B = 34$~MeV, but restricting the available phase space for the final proton by means of an ad hoc parameter $\kappa = 1.019 \pm 0.011$ such that $p'^0_{\mathrm{min}} = \kappa ( \sqrt{M^2 + p_F^2} - \omega + E_B )$,
and taking $M_A = 1.23 \pm 0.20$~GeV~\cite{:2007ru}; this value of $M_A$ is in agreement with the K2K result $M_A = 1.2 \pm 0.12$~GeV~\cite{Gran:2006jn} on $^{16}$O, but considerably higher than the one obtained from $\nu_\mu$-deuterium data $M_A=1.0137 \pm 0.0264$~GeV~\cite{Bodek:2007vi} or by NOMAD at high energies (3-100~GeV) also on $^{12}$C [$M_A = 1.05 \pm 0.02(stat) \pm 0.06(syst)$~GeV]~\cite{Lyubushkin:2008pe}. A recent reanalysis using charged current single pion production (CC1$\pi$) data to adjust the Monte Carlo simulation employed to subtract the background obtains $\kappa = 1.007 \pm 0.007$ and $M_A = 1.35 \pm 0.17$~GeV~\cite{Katori:2009du}. While the first shape-only fit falls short compared to the measured integrated cross section~\cite{:2007ru}, the second one underestimates it only by 10~\%~\cite{Katori:2009du}.
While such a modified SM model might be convenient to parametrize the CCQE cross section using a small number of parameters, it is important to understand the MiniBooNE CCQE data with more realistic nuclear models that implement the knowledge gathered through many years of research in electron-nucleus scattering. Besides, the fact that the description changes with the background subtraction procedure indicates the need of making theoretical predictions that could be compared to (more) inclusive and less model dependent data.
\section{The model}
The scattering amplitude for the elementary process [Eq.~(\ref{qe})] is proportional to the product of the leptonic and hadronic currents
\begin{equation}
\mathcal{M} = \frac{G_F \cos{\theta_C}}{\sqrt{2}} l^\alpha J_\alpha \,.
\label{ampl}
\end{equation}
While the charged-current leptonic current is given by the Standard Model, the hadronic one can be written in terms of form factors that contain the information about nucleon properties
\begin{equation}
J_\alpha = \bar{u}(p')
\left[ \left( \gamma_{\alpha} - \frac{q \hspace{-1.5mm}/ \,q_{\alpha}}{q^2} \right) F^V_1 + \frac{i}{2m_N} \sigma_{\alpha\beta} q^{\beta} F^V_2 -\gamma_{\alpha}\gamma_5 F_A - \frac{q_{\alpha}}{m_N} \gamma_5 F_P \right] u(p) \,.
\label{curr}
\end{equation}
The vector form factors $F^V_{1,2}$ are obtained from electron scattering~\cite{Bodek:2007vi}; using PCAC $F_P$ can be expressed as a function of $F_A$, given by Eq.~(\ref{FA}) with $g_A=1.267$; for $M_A$ we adopt a value of 1~GeV, consistent with the world data.
Our description of the CCQE reaction on nuclei is based on a Local Fermi Gas model, i.e. at each space point the initial-nucleon momentum distribution is given by a Fermi sphere $f(\mathbf{r},\mathbf{p})=\Theta (p_F (r) -|\mathbf{p}|)$ with radius $p_F (r) = [\frac{3}{2} \pi^2 \rho(r)]^{1/3}$, where $\rho(r)$ is the empirical nuclear density. A Pauli blocking factor for the final nucleon $P_{\mathrm {Pauli}} = 1 - \Theta (p_F (r) -|\mathbf{p}|)$ also applies.
Such a simple model already incorporates a space-momentum correlation which is absent for the Global Fermi Gas~\cite{Leitner:2008ue}, and provides a framework where more elaborate many-body dynamics can be naturally incorporated. In contrast to the constant binding of the SM model, here all the nucleons, initial and final, are embedded in a density and momentum dependent potential $V(\mathbf{p},\mathbf{r})$ whose parameters have been fixed by proton-nucleus scattering data~\cite{Leitner:2008ue,Teis:1996kx}. As a consequence, the nucleons acquire effective masses $m_{\mathrm{eff}}(\mathbf{p},\mathbf{r})$ given by $\sqrt{\mathbf{p}^2 + m_N^2} + V(\mathbf{p},\mathbf{r}) = \sqrt{\mathbf{p}^2 + m^2_{\mathrm{eff}}}$.
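As a numerical illustration of the local-density prescription, the following Python sketch evaluates $p_F(r)$ for a two-parameter Fermi density of $^{12}$C; the density parameters are assumptions chosen for illustration, not necessarily the empirical profile used in the actual calculations:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm

def profile(r, c=2.36, a=0.52):
    """Unnormalized two-parameter Fermi shape (c, a in fm; assumed values)."""
    return 1.0 / (1.0 + np.exp((r - c) / a))

A = 12
norm, _ = quad(lambda r: 4 * np.pi * r**2 * profile(r), 0, 20)
rho0 = A / norm  # central density in fm^-3, normalized to A nucleons

def p_fermi(r):
    """Local Fermi momentum p_F(r) = (3 pi^2 rho(r) / 2)^(1/3), in MeV."""
    rho = rho0 * profile(r)
    return (1.5 * np.pi**2 * rho) ** (1.0 / 3.0) * HBARC

for r in (0.0, 2.0, 4.0):
    print(f"r = {r:3.1f} fm, p_F = {p_fermi(r):6.1f} MeV")
\end{verbatim}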
The presence of nucleon-nucleon (NN) interactions inside nuclei implies that nucleon propagators are dressed with complex selfenergies $\Sigma$. This leads to spectral functions
\begin{equation}
S(p) = - \frac{1}{\pi} \frac{\mathrm{Im} \Sigma(p)}{[p^2-m_N^2-\mathrm{Re} \Sigma(p)]^2 + [\mathrm{Im} \Sigma(p)]^2} \,.
\label{SF}
\end{equation}
As most of the nucleons in the nucleus can be described as occupying single-particle states in a mean field potential~\cite{Ankowski:2007uy}, we can neglect NN interactions for the initial nucleons (holes) and take $\mathrm{Im}\Sigma =0$. Then, $S_h(p) \rightarrow \delta (p^2 - m_{\mathrm{eff}}^2)$ and we recover the description of the initial state outlined in the previous paragraph. On the contrary, for the final nucleons (particle states) NN interactions should be considered. For this purpose in Eq.~(\ref{SF}) we take $\mathrm{Im} \Sigma = - \sqrt{p^2}\, \Gamma_{\mathrm{coll}} (p,r)$, with the collisional broadening $\Gamma_{\mathrm{coll}} = \rho \sigma_{NN} v_{\mathrm{rel}}$ fixed according to the parametrizations of the Giessen Boltzmann-Uehling-Uhlenbeck (GiBUU) framework~\cite{GiBUU}. As for $\mathrm{Re} \Sigma$, it is obtained from $\mathrm{Im}\Sigma$ with a once-subtracted dispersion relation, demanding that the pole be located at $p_0^{\mathrm{(pole)}} = \sqrt{\mathbf{p}^2 + m_{\mathrm{eff}}^2}$. More details can be found in Ref.~\cite{Leitner:2008ue}.
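A direct numerical transcription of Eq.~(\ref{SF}) may help to clarify how the spectral function enters; the self-energies below are simple placeholders rather than the GiBUU parametrizations described above:
\begin{verbatim}
import numpy as np

M_N = 0.938  # nucleon mass in GeV

def spectral_function(p2, re_sigma, im_sigma):
    """S(p) of the text as a function of p^2 (GeV^2); Sigma in GeV^2."""
    denom = (p2 - M_N**2 - re_sigma) ** 2 + im_sigma**2
    return -im_sigma / (np.pi * denom)

# Placeholder: a constant 50 MeV collisional width, Im Sigma = -sqrt(p^2)*Gamma,
# and Re Sigma = 0 (in the text Re Sigma follows from a dispersion relation).
p2 = np.linspace(0.6, 1.2, 5)
gamma_coll = 0.050  # GeV
print(spectral_function(p2, re_sigma=0.0, im_sigma=-np.sqrt(p2) * gamma_coll))
\end{verbatim}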
To complete our model we take into account that inside nuclei, the strength of the electroweak couplings may change from their free nucleon values due to the presence of strongly interacting nucleons~\cite{Singh:1992dc}. In the nuclear medium, the axial coupling $g_A$ is renormalized in the same way as the electric field of a dipole is screened in a dielectric medium~\cite{EricsonWeise}
\begin{equation}
\frac{(g_A)_{\mathrm{eff}}}{g_A} = \frac{1}{1+g' \chi_0} \,,
\end{equation}
where $\chi_0$ is the elementary dipole susceptibility and $g'$ the Lorentz-Lorenz factor whose classical value is $g' \sim 1/3$. This quenching of $g_A$ in nuclear Gamow-Teller $\beta$ decay is well established experimentally: $(g_A)_{\mathrm{eff}} / g_A \sim 0.9$~\cite{Wilkinson:1973} and was first applied to CCQE scattering by Singh and Oset~\cite{Singh:1992dc}. Such medium polarization effects involve several nucleons in the nucleus and are therefore important at low $|\mathbf{q}|$ where the space resolution of the probe is large compared with the average NN separation. This corresponds to the region where MiniBooNE data exhibit a reduction with respect to the SM model handled in the analysis by introducing the $\kappa$ parameter. Following Nieves {\it et al.}~\cite{Nieves:2004wx}, we modify the lepton-nucleon interaction by an infinite sum of particle-hole (ph) states (RPA), as illustrated in Fig.~\ref{fig1},
\begin{figure}[ht]
\includegraphics[width=.9\textwidth]{correl_new.eps}
\caption{Long range correlations in CCQE scattering. Solid lines pointing to the right (left) denote particle (hole) states. The double line stands for the $\Delta(1232)$.}
\label{fig1}
\end{figure}
which interact with an effective potential cast as \footnote{Only the terms that contribute to CCQE are shown.}
\begin{equation}
\label{pot}
V=\pmb\tau_1 \pmb\tau_2 \sigma_1^i \sigma_2^j [\hat{q}_{i}\hat{q}_{j}V_{L}(q)
+({\delta}_{ij}- \hat{q}_{i}\hat{q}_{j})V_{T}(q) ] + c_0 f' \pmb\tau_1 \pmb\tau_2 \,;
\end{equation}
$V_L$ ($V_T$) contain explicit $\pi$ ($\rho$) exchange
\begin{equation}
\label{ex}
V_L = \frac{f_{NN\pi}^2}{m^2_\pi}\left
\{\left(\frac{\Lambda_\pi^2-m_\pi^2}{\Lambda_\pi^2-q^2 }\right)^2
\frac{\mathbf{q}{\,^2}}{q^2-m_\pi^2} + g^{\prime} \right \} \,, \,\,
V_T = \frac{f_{NN\pi}^2}{m^2_\pi}\left
\{C_\rho \left(\frac{\Lambda_\rho^2-m_\rho^2}{\Lambda_\rho^2-q^2 }\right)^2
\frac{\mathbf{q}{\,^2}}{q^2-m_\rho^2} + g^{\prime} \right \}
\end{equation}
and a short range part effectively included in the phenomenological constant $g'$ with values in the range $g'=0.6 \pm 0.1$. Details about couplings and cutoff parameters $\Lambda_{\pi, \rho}$ can be found in Ref.~\cite{Nieves:2004wx}. No meson exchange is directly associated with the scalar term in Eq.~(\ref{pot}), assumed to be density dependent
\begin{equation}
f'=\frac{\rho(r)}{\rho(0)} f'^{(in)} + \left[ 1 - \frac{\rho(r)}{\rho(0)} \right] f'^{(ex)} \,
\end{equation}
where the parameters $f'^{(in)}= 0.33$, $f'^{(ex)}=0.45$ (and $c_0=380$~MeV~fm$^3$) are tuned to describe collective nuclear excitations~\cite{Speth:1980kw}. The RPA sum also includes $\Delta$-hole excitations as shown in Fig.~\ref{fig1}. The ph-$\Delta$h and $\Delta$h-$\Delta$h interactions can be obtained by replacing $\pmb\sigma$ ($\pmb\tau$) with the spin (isospin) $1/2 \rightarrow 3/2$ transition operators $\mathbf{S}$ ($\mathbf{T}$) in Eq.~(\ref{pot}) and $f_{NN\pi}$ by $f_{\Delta N\pi}$ in Eq.~(\ref{ex}). Explicit expressions for the RPA corrections to the hadronic tensor are given in Appendix A of Ref.~\cite{Nieves:2004wx}.
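Schematically, and suppressing the longitudinal/transverse and isospin structure of Eq.~(\ref{pot}), the series of Fig.~\ref{fig1} is geometric in the effective interaction, so that the bare particle-hole response $\chi^0$ is replaced by
\begin{equation*}
\chi_{\mathrm{RPA}}=\chi^0+\chi^0 V \chi^0+\chi^0 V \chi^0 V \chi^0+\cdots={\chi^0\over 1-V\chi^0}\,.
\end{equation*}
This standard resummation is quoted here only to make the structure of the RPA corrections transparent; the complete expressions are those of Ref.~\cite{Nieves:2004wx}.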
This RPA approach, built up with the single-particle states of the Local Fermi Gas is simpler than other more sophisticated methods such as continuum RPA and applies only to inclusive processes, but it incorporates explicitly $\pi$, $\rho$ exchange and $\Delta$-hole states and can be naturally inserted in a unified framework to study different neutrino-induced reactions like inclusive quasielastic scattering, nucleon knockout and pion production~\cite{Leitner:2006ww}. Moreover, it had been successfully applied to photo- and electro-nuclear reactions~\cite{Carrasco:1989vq,Gil:1997bm} and allows to describe simultaneously inclusive muon capture on $^{12}$C and the low energy LSND CCQE measurements (with $\nu_e$ and $\nu_\mu$)~\cite{Nieves:2004wx}.
Finally, in order to compare to model-independent data it is necessary to include the contributions from the processes in Eq.~(\ref{inel}) that look like CCQE events in the detector. The main source of such a background is pion production from $\Delta(1232)$ excitation ($\nu_\mu \, N \rightarrow \mu^- \, \Delta$) followed by absorption ($\Delta \, N \rightarrow N \, N$). Pion final state interactions
in the nuclear medium are treated with a semiclassical BUU model in coupled channels (GiBUU, see Refs.~\cite{Leitner:2009ec,Leitner:2008wx} for details).
\section{Results}
In Fig.~\ref{fig2} we present the predictions of our CCQE model on $^{12}$C averaged over the MiniBooNE flux~\cite{AguilarArevalo:2008yp} for four different distributions. It is important to stress that, while the upper two correspond to directly measurable quantities (energy and scattering angle of the outgoing muons), the lower ones can only be experimentally obtained by reconstruction, assuming that the target nucleon is at rest~\cite{:2007ru}. Further corrections can be made by mapping reconstructed to true energy with the help of a reaction model, as done by MiniBooNE with their Monte Carlo simulation~\cite{AguilarArevalo:2009eb}.
\begin{figure}[h]
\begin{minipage}{.49\textwidth}
\includegraphics[width=.98\textwidth]{QE_incl_dsdcos.eps}
\includegraphics[width=.98\textwidth]{QE_incl_dsdQs.eps}
\end{minipage} \hfill
\begin{minipage}{.49\textwidth}
\includegraphics[width=.98\textwidth]{QE_incl_dsdelep.eps}
\includegraphics[width=.98\textwidth]{QE_incl_dsdenu.eps}
\end{minipage}
\caption{Differential cross sections for the CCQE reaction~(\ref{qe}) on $^{12}$C and with $l=\mu$ averaged over the MiniBooNE flux as a function of the cosine of the outgoing muon angle (upper left), its energy (upper right), the (true) four-momentum transfer squared (lower left) and the (true) neutrino energy (lower right). Dotted lines represent the Local Fermi Gas model with Fermi motion and Pauli blocking. In the dash-dotted lines the nucleons are exposed to the mean field potential while the dashed ones also incorporate spectral functions for the outgoing nucleons. The full model with long range (RPA) correlations is denoted by solid lines.}
\label{fig2}
\end{figure}
The plots reveal that the nuclear many-body corrections taken into account reduce the cross section. The effect of the spectral functions is rather small for these observables but the long range correlations cause a considerable reduction at forward angles ($\cos{\theta_{\mu}} < 0.8$) and low $Q^2 < 0.3$~GeV$^2$.
As discussed above, model-independent comparisons to data must include the fake CCQE background. Our prediction for this background from $\Delta$ excitation
and the full CCQE-like yield is shown in Fig.~\ref{fig3} for $d\sigma/dQ^2$. In total we obtain a fraction of fake CCQE over the total CCQE-like events of 10~\%, slightly smaller than the prediction of the MiniBooNE Monte Carlo: 12~\% (9.4~\% CC1$\pi^+$ resonant plus 2.5~\% CC1$\pi^0$)~\cite{AguilarArevalo:2009eb}.
\begin{figure}[h]
\includegraphics[width=.48\textwidth]{MiniBooNE_dsdQs_QElike_full.eps}
\caption{$Q^2$ distribution of CCQE-like events averaged over the MiniBooNE flux (solid line). It is given by the sum of the pure CCQE contribution (dashed line) plus fake CCQE events where the $\Delta$ is excited but no pion is produced (dash-dotted line).}
\label{fig3}
\end{figure}
CCQE-like data from MiniBooNE are not yet available. Nevertheless, it is useful to compare our pure CCQE predictions with the results of the MiniBooNE analysis that describe the shape of $\langle d\sigma / dQ^2 \rangle$ with a modified SM model~\cite{:2007ru,Katori:2009du}. Such a shape-only comparison for the $Q^2$ distribution is presented in the left panel of Fig.~\ref{fig4}. The two sets of ($\kappa, M_A$) values, the new $\kappa = 1.007$, $M_A = 1.35$~GeV [denoted as (1)] and the original $\kappa = 1.019$, $M_A = 1.23$~GeV [denoted as (2)] are shown, both actually leading to very similar shapes. The comparison reveals that the RPA correlations, which reduce the size of the peak with respect to the tail,
bring the shape of our distribution close to those extracted from data while keeping $M_A = 1$~GeV.
\begin{figure}[h]
\begin{minipage}{.49\textwidth}
\includegraphics[width=.98\textwidth]{QE_incl_dsdQs_RPA_twicemodSmithMoniz_NORMALIZED.eps}
\end{minipage} \hfill
\begin{minipage}{.49\textwidth}
\includegraphics[width=.98\textwidth]{QE_incl_dsdQs_MA_variation_NORMALIZED_wSM.eps}
\end{minipage}
\caption{Shape of the $Q^2$ distribution for the $\nu_\mu$-induced CCQE reaction on $^{12}$C averaged over the MiniBooNE flux. On the left panel, the prediction of our model without (solid line) and with RPA correlations (dashed line) is compared to the modified SM model with $\kappa = 1.007$, $M_A = 1.35$~GeV (1) and $\kappa = 1.019$, $M_A = 1.23$~GeV (2). On the right panel, the modified SM model (2) is confronted with the present model (including RPA) evaluated for different values of $M_A$. All curves are normalized to the same area.}
\label{fig4}
\end{figure}
On the other hand, as is clear from Fig.~\ref{fig2}, the integrated CCQE cross section obtained with our full model ($\langle \sigma \rangle = 3.2 \times 10^{-38}$~cm$^2$) is smaller than the one obtained within a standard Fermi gas model with $M_A = 1$~GeV. Instead, MiniBooNE gets a considerably larger value: $\langle \sigma \rangle = 5.65 \times 10^{-38}$~cm$^2$ with an error of 10.8~\%~\cite{Katori:2009du}. We have explored the sensitivity of the results to some of the uncertain magnitudes in the model. In particular, changing $g'$, whose contribution to the correlations is the largest at low $Q^2$, within acceptable values $g' = 0.5-0.7$ leaves the shape of the $Q^2$ distribution practically unchanged; the impact on the integrated cross section is also small: $\langle \sigma \rangle = 3.1-3.4 \times 10^{-38}$~cm$^2$. Increasing the value of $M_A$ causes an increase in the total cross section, but at the same time, the description of the shape gets worse, as illustrated on the right panel of Fig.~\ref{fig4}.
\section{Concluding remarks}
The theoretical model for the CCQE reaction presented here incorporates important many-body corrections to the basic Local Fermi Gas picture like spectral functions and long range RPA correlations. The latter are important at low $Q^2$ where collective effects play a role. We find a good agreement with the shape of the CCQE $Q^2$ distribution extracted from MiniBooNE data using $M_A = 1$~GeV, a value favored by early neutrino data, by the analysis of pion electroproduction close to threshold and by a recent neutrino experiment at high energies (NOMAD). However, our description clearly underestimates the MiniBooNE integrated CCQE cross section. The situation is common to other models that take into account short-range correlations~\cite{Meloni} or apply the phenomenological scaling function extracted from electron scattering~\cite{Amaro:2006tf}. Other many-body mechanisms like meson exchange currents might add some additional strength and should be investigated. One should also bear in mind that the MiniBooNE result is not at all model independent. It relies on simulations to determine the neutrino beam and to subtract the fake contributions to the CCQE-like cross section. The plot of the MiniBooNE integrated CCQE cross section as function of the reconstructed neutrino energy together with NOMAD data (Fig.~6 of Ref.~\cite{Katori:2009du}) shows that, to make both experimental results compatible, the CCQE cross section would have to exhibit an unusual behavior, decreasing $20-30$~\% in the $E_\nu = 2-4$~GeV region to saturate afterwards.
Further joint theoretical and experimental work is necessary to reconcile the available CCQE theoretical calculations and experimental values.
\begin{theacknowledgments}
LAR thanks Juan Nieves for useful discussions and the S\'eneca Foundation for financial support during his stay in the University of Murcia. This work has been supported in part by the Deutsche Forschungsgemeinschaft.
\end{theacknowledgments}
|
1,108,101,566,395 | arxiv | \section{Introduction}
\label{sec.int}
In recent years the study of the structure and behavior of real world networks has received wide attention.
The degree sequence of these networks appear to have special properties (like power law degree distribution).
Classical random graph models (like the Erd\H{o}s-R\'enyi model) have very different degree sequences.
An obvious solution is to study a random graph with given degree sequence.
More generally, one can generate a random graph whose degree sequence is taken from a family of degree sequences.
In \cite{Chatterjee-Diaconis-Sly} Chatterjee, Diaconis and Sly studied random dense graphs (graphs whose number of edges is comparable to the square of the number of vertices) with a given degree sequence.
It is not always easy to generate a truly random graph with a given degree sequence.
There is a fairly large literature on the configuration model (for the exact definition of the model see \cite{Bollobas}), where, for a given degree sequence, we consider $d_i$ stubs for each node $i$, take a random pairing of the stubs, and connect the corresponding nodes with an edge.
This model creates the required degree distribution, but gives a graph with possible loops and parallel edges.
A notion of convergence for (dense) graph sequences was developed by Borgs, Chayes, Lov\'asz, S\'os and Vesztergombi in \cite{Borgs-Chayes-Lovasz-Sos-Vesztergombi}.
The limit objects were described by Lov\'asz and Szegedy in \cite{Lovasz-Szegedy}.
Using this limit theory, the authors in \cite{Chatterjee-Diaconis-Sly} described the structure of random (dense) graphs from the configuration model.
They defined the convergence of degree sequences and
for convergent degree sequences they gave a sufficient condition on the degree sequence, which implies the convergence of the random graph sequence (taken from the configuration model).
What can we say if the graphs we want to study are sparse (the number of edges is comparable to the number of vertices) and not dense?
Is there a similar characterization for sparse graphs with given degree sequence?
We establish a characterization for random trees with given (possibly random) degree sequence.
There are various limit theories and convergence notions for trees introduced
by Aldous \cite{Aldous} and by Elek and Tardos \cite{Elek-Tardos}.
We use the notion of convergence introduced for bounded degree graphs (that is the degree of each vertex is bounded above by some uniform constant $d$) first introduced by Benjamini and Schramm \cite{Benjamini-Schramm}.
This notion was extended by Lyons \cite{Lyons} to bounded average degree graphs.
In \cite{Deak} the author described the behavior of a random tree sequence with a given degree distribution.
In this paper we extend this result and prove a similar characterization as in \cite{Chatterjee-Diaconis-Sly} for random trees with given degree sequence.
We define the convergence of degree sequences and give a necessary and sufficient condition on the degree sequence, which implies the convergence of the tree sequence ${\bm T}({\bm D}_n)$ in the sense of Lyons \cite{Lyons}.
In the case of convergence we describe the limit object.
This paper is organized as follows:
In Section \ref{sec.def.not}, we give the basic definitions and notations.
In Section \ref{sec.lim.deg.seq}, we describe the basic properties and the limit of a sequence of random degree sequences.
At the end of the section we state our main theorem.
In Section \ref{sec.lab.subg.dens}, we deal with labeled homomorphisms and in Section \ref{sec.limit}, we describe the limit object.
\section{Basic definitions and notations}
\label{sec.def.not}
\subsection{Random weak limit of graph sequences}
Let $G=G(V,E)$ be a finite simple graph on $n$ nodes.
For $S\subseteq V(G)$ denote by $G[S]$ the subgraph of $G$ spanned by the vertices $v\in S$.
For a finite simple graph $G$ on $n$ nodes, let $B_G(v,R)$
be the rooted $R$-ball around the node $v$, also called as the $R$-neighborhood of $v$, that is the subgraph induced by the nodes at distance at most $R$ from $v$:
$$
B_G(v,R)=G[\{u\in V(G): dist_G(u,v)\leq R\}].
$$
Two rooted graphs $G_1, G_2$ are rooted isomorphic if there is an isomorphism between them which maps the root of $G_1$ to the root of $G_2$.
Given a positive integer $R$, a finite rooted graph $F$ and a probability distribution $\rho$ on rooted graphs, let $p(R,F,\rho)$ denote the probability that the graph $F$ is rooted isomorphic to the $R$-ball around the root of a rooted graph chosen with distribution $\rho$.
It is clear that $p(R,F,\rho)$ depends only on the component of the root of the graph chosen from $\rho$.
So we will assume that $\rho$ is concentrated on connected graphs.
For a finite graph $G$, let $U(G)$ denote the distribution on rooted graphs obtained by choosing a uniform random vertex of $G$ as the root of $G$.
It is easy to see that for any finite graph $G$ we have
$$
p(R,F,U(G))={|\{v\in V(G):B_G(v,R)\textrm{ is rooted isomorphic to } F\}|\over |V(G)|}.
$$
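For small graphs, $p(R,F,U(G))$ can be computed by brute force directly from this formula. The following Python sketch (an illustration only; it relies on the \texttt{networkx} library) enforces rooted isomorphism by marking the root with a node attribute:
\begin{verbatim}
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

def rooted(G, root):
    """Copy of G with a boolean node attribute marking the root."""
    H = G.copy()
    nx.set_node_attributes(H, False, "is_root")
    H.nodes[root]["is_root"] = True
    return H

def p_neigh(R, F, F_root, G):
    """Fraction of vertices of G whose R-ball is rooted isomorphic to (F, F_root)."""
    Fr = rooted(F, F_root)
    match = categorical_node_match("is_root", False)
    hits = 0
    for v in G:
        ball = rooted(nx.ego_graph(G, v, radius=R), v)
        if nx.is_isomorphic(ball, Fr, node_match=match):
            hits += 1
    return hits / G.number_of_nodes()

# Example: on a path with 10 nodes, only the two endpoints have a 1-ball
# that is a single edge rooted at an endpoint, so the density is 0.2.
G = nx.path_graph(10)
F = nx.path_graph(2)
print(p_neigh(1, F, 0, G))
\end{verbatim}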
\begin{definition}
Let ($G_n$) be a sequence of finite graphs on $n$ nodes, $\rho$ a probability distribution on infinite rooted graphs. We say that the random weak limit of $G_n$ is $\rho$, if for any positive integer $R$ and finite rooted graph $F$, we have
\begin{equation}
\label{eqn.conv}
\lim_{n\rightarrow \infty}p(R,F,U(G_n))=p(R,F,\rho).
\end{equation}
\end{definition}
If $G_n$ is a sequence of random finite graphs, then $p(R,F,U(G_n))$ is a random variable, so by convergence we mean convergence in probability.
\begin{definition}
Let $(G_n)$ be a sequence of random finite graphs on $n$ nodes, $\rho$ a probability distribution on infinite rooted graphs.
We say that the random weak limit of $G_n$ is $\rho$, if $\forall\epsilon > 0,R\in \mathds{N}^+$ and finite rooted graph $F$, we have
\begin{equation}
\label{eqn.rnd.conv}
\lim_{n\rightarrow \infty}\mathds{P}(|p(R,F,U(G_n))-p(R,F,\rho)|>\epsilon)=0.
\end{equation}
\end{definition}
The formal meaning of this formula is that the statistics $p(R,F,U(G_n))$, as random variables, are concentrated.
\subsection{Other notations}
We will denote random variables with bold characters.
For a probability space $(\Omega,{\cal B},\mu)$ and $A\in {\cal B}$ denote by ${\bm I}(A)$ the indicator variable of the event $A$.
Denote by ${\cal D}_n$ the set of possible degree sequences of a labeled tree on $n$ nodes.
Throughout the paper we consider labeled trees on $n$ nodes unless stated otherwise.
Let ${\bm D}_n$ be a random variable on ${\cal D}_n$.
Denote by ${\bm T}({\bm D}_n)$ the uniform random tree on $n$ labeled nodes, with degree sequence ${\bm D}_n$.
Denote the degree sequence of a tree $T$ by $D_T=(D_T(i))_{i=1}^n$.
For a given degree sequence $D=(D(i))_{i=1}^n$ there are
$${n-2 \choose D(1)-1,D(2)-1,\cdots ,D(n)-1}$$
labeled trees with degree sequence $D$.
It follows that for an arbitrary tree $T$
\begin{equation*}
\mathds{P}({\bm T}({\bm D}_n)=T)=\frac{\mathds{P}({\bm D}_n=D_T)}{
\displaystyle{{n-2\choose D_T(1)-1,D_T(2)-1,\cdots ,D_T(n)-1}}}.
\end{equation*}
If it does not cause any confusion, we will use $D,{\bm T}_n$ instead of $D_T,{\bm T}({\bm D}_n)$ respectively.
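Sampling ${\bm T}({\bm D}_n)$ is straightforward via the Pr\"ufer bijection behind the counting formula above: node $i$ appears exactly $D(i)-1$ times in the Pr\"ufer sequence, so decoding a uniformly shuffled multiset with these multiplicities yields a uniform random tree with the prescribed degrees. A minimal Python sketch:
\begin{verbatim}
import heapq
import random
from collections import Counter

def random_tree_with_degrees(deg):
    """Uniform random labeled tree with degree sequence deg (a list)."""
    n = len(deg)
    assert sum(deg) == 2 * (n - 1) and all(d >= 1 for d in deg)
    seq = [i for i in range(n) for _ in range(deg[i] - 1)]
    random.shuffle(seq)                    # uniform Pruefer sequence
    remaining = Counter(seq)
    leaves = [i for i in range(n) if remaining[i] == 0]
    heapq.heapify(leaves)
    edges = []
    for v in seq:                          # standard Pruefer decoding
        u = heapq.heappop(leaves)
        edges.append((u, v))
        remaining[v] -= 1
        if remaining[v] == 0:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

print(random_tree_with_degrees([1, 1, 1, 3, 2]))
\end{verbatim}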
A finite rooted graph $G$ with root $v$ is said to be $l$-deep if the largest distance from the root is $l$, that is
$$l=\max_{u\in V(G)}dist(v,u).$$
Denote by $U^l$ the set of equivalence classes of finite unlabeled $l$-deep rooted graphs with respect to root-preserving isomorphisms.
Let $T_x^l$ be an $l$-deep rooted tree on $k$ nodes with root $x$.
Denote the vertices at distance $i$ from the root by $T_i$, and let $t_i=|T_i|$ ($t_0$ is $1$, $t_1$ is the degree of the root).
For every finite graph $G$, $p(R,F,U(G))$ induces a probability measure on $U^R$ which we call the $R$-neighborhood statistics of $G$.
If $G$ is a tree then $p(R,F,U(G))$ is concentrated on rooted trees.
Let $\cal T$ be the set of all countable, connected infinite rooted trees.
For an infinite rooted tree $T\in {\cal T}$ denote by $T(R)$ the $R$-neighborhood of the root of $T$.
For an $R$-deep rooted tree $F$ define the set
$$
{\cal T}(F)=\{T\in {\cal T}: T(R) \textrm{ is rooted isomorphic to } F\}.
$$
Let ${\cal F}$ be the sigma-algebra generated by the sets $({\cal T}(F))_F$, where $F$ is an arbitrary finite rooted tree.
$({\cal T},{\cal F})$ is a measurable space.
We call a probability measure $\mu$ on ${\cal T}$ an infinite rooted random tree.
Every infinite random tree $\mu$ has the property that for any $F\in U^R$:
\begin{equation}
\label{eqn.neigh.cons}
p(R,F,\mu)=\sum_{H\in U^{R+1},\ H(R)\cong F}p(R+1,H,\mu).
\end{equation}
Actually every distribution on rooted infinite graphs has the above property.
Note that if we want to prove the convergence of a random tree sequence to a certain limit distribution $\rho$, then we need to have the convergence of the neighborhood densities and also (\ref{eqn.neigh.cons}), the consistency of these densities, which ensures that $\rho$ will be concentrated on infinite rooted trees.
These together will imply (\ref{eqn.rnd.conv}).
\section{Limits of degree sequences}
\label{sec.lim.deg.seq}
Consider a random degree sequence ${\bm D}_n=({\bm D}_{n}(i))_{i=1}^n$ and construct a labeled tree ${\bm T}({\bm D}_n)$ with uniform distribution given the degree sequence.
We want to describe the limit of ${\bm T}({\bm D}_n)$ as $n\to \infty$.
We give a characterization of the degree sequences for which ${\bm T}({\bm D}_n)$ has an infinite random tree as a limit.
To describe the model and the limit, we need to define and understand the limit of a random degree sequence ${\bm D}_n$.
Here we only deal with degree sequences of trees.
We further assume that ${\bm D}_n$ is an exchangeable sequence, that is for any $\sigma\in S_n$ we have
$$
({\bm D}_n(i))_{i=1}^n\sim ({\bm D}_n(\sigma(i)))_{i=1}^n.
$$
Exchangeability is a way to eliminate exceptional vertices and allows us to use the limit theory of exchangeable sequences.
For more on exchangeable random variables we refer to \cite{Aldous.ex}.
\begin{definition}
We say that an exchangeable sequence ${\bm D}_n$ is convergent and ${\bm D}_n\to \bm{D}$, where ${\bm D}$ is a random infinite sequence, if for every $k\in \mathds{N}$ we have
\begin{equation*}
({\bm D}_{n}(i))_{i=1}^k\stackrel{\mathds{P}}{\rightarrow} ({\bm D}(i))_{i=1}^k.
\end{equation*}
\end{definition}
It is easy to see that if ${\bm D}_n$ is exchangeable and ${\bm D}_n\to {\bm D}$ then ${\bm D}$ is also an exchangeable sequence.
The following theorem of Hewitt and Savage (see \cite{Hewitt-Savage}), which is a generalization of de Finetti's theorem, describes the limits of exchangeable sequences.
\begin{theorem}
\label{thm.deFinetti}
Let $\bm X$ be a random infinite exchangeable sequence.
Then $\bm X$ is a mixture of infinite dimensional IID distributions
\begin{equation*}
{\bm X} =\int_{IID}\lambda dp(\lambda),
\end{equation*}
\noindent
where $p$ is a distribution on infinite dimensional IID distributions $\lambda$.
\end{theorem}
As a result we have that the limit of an exchangeable degree sequence is an infinite exchangeable sequence and so a mixture of IID distributions.
Note that if ${\bm D}_n$ is not exchangeable then we can take a random permutation $\sigma\in S_n$ and define the exchangeable degree sequence $\tilde{{\bm D}}_n(i)={\bm D}_n(\sigma(i))$.
\begin{lemma}
\label{lem.exchange.degen}
Let $\bm X$ be an infinite exchangeable random sequence. Further assume that for every $i$ we have
$$
\mathds{P}({\bm X}(1)=i,{\bm X}(2)=i)=\mathds{P}({\bm X}(1)=i)\mathds{P}({\bm X}(2)=i).
$$
Then $\bm X$ is an infinite IID distribution ($p$ is concentrated on one distribution).
\end{lemma}
\noindent
{\bf Proof:}
From Jensen's inequality we have that
\begin{equation}
\label{eqn.jensen}
\int_{IID}\lambda(i)^2dp(\lambda)\geq \left(\int_{IID}\lambda(i)dp(\lambda)\right)^2.
\end{equation}
Also from Theorem \ref{thm.deFinetti} we have
\begin{multline*}
\label{eqn.exchange.degen}
\int_{IID}\lambda(i)^2dp(\lambda)=\mathds{P}({\bm X}(1)=i,{\bm X}(2)=i)=\\
=\mathds{P}({\bm X}(1)=i)\mathds{P}({\bm X}(2)=i) =\left(\int_{IID}\lambda(i)dp(\lambda)\right)^2
\end{multline*}
It follows that equality holds in (\ref{eqn.jensen}) for every $i$, which means that $\lambda(i)$ is $p$-almost surely constant for each $i$; hence $p$ is a degenerate distribution, and this proves our lemma.
\qed
We will see that if ${\bm T}({\bm D}_n)$ is convergent, then ${\bm D}_n$ satisfies the assumptions in Lemma \ref{lem.exchange.degen}.
So for a convergent random tree sequence ${\bm T}({\bm D}_n)$ the limit of the degree sequence ${\bm D}_n$ needs to be an infinite IID distribution.
\begin{example}
\label{example.star}
Let ${\bm X}$ be a uniform random element of $[n]$.
Consider the degree sequence
\begin{equation*}
{\bm D}_n(i)=\left \{
\begin{array}{ll}
n-1, & \textrm{if } i={\bm X}\\
1, & \textrm{otherwise}
\end{array}
\right.
\end{equation*}
Let ${\bm S}{\bm t}_n={\bm T}({\bm D}_n)$ be the star-graph on $n$ nodes.
The limit degree sequence is just the constant $1$ vector $\mathds{1}=(1,1,\cdots )$.
Obviously in the limit the expected degree of a node is $1$.
It is not hard to see that if $F$ is not a single edge, then $p(R,F,U({\bm S}{\bm t}_n))=0$ for every $n>|V(F)|$.
Thus there is no limit distribution $\rho$ on infinite graphs such that $\mathds{P}(|p(R,F,U({\bm S}{\bm t}_n))-p(R,F,\rho)|>\epsilon)\to 0$ for every $F$.
\end{example}
Example \ref{example.star} shows that if only the average degree is bounded, too many unbounded degree vertices destroy convergence.
As the average degree of a tree on $n$ nodes is $2{n-1\over n}$, one would expect that in the limit distribution the expected degree of a node is $2$, that is $\mathds{E}({\bm D}(i))=2$ for every $i$.
It turns out that it is enough to have that the degree sequence converges and $\mathds{E}({\bm D}(i))=2$ holds $\forall i$.
Now we are ready to state our main theorem which describes the degree sequence of convergent random tree sequences.
\begin{theorem}
\label{thm.main}
Let ${\bm D}_n$ be a sequence of random degree sequences (${\bm D}_n\in {\cal D}_n$).
The random tree sequence ${\bm T}({\bm D}_n)$ is convergent and converges to an infinite random tree if and only if ${\bm D}_n\to {\bm D}$, where ${\bm D}=({\bm D}_0,{\bm D}_0,\cdots)$ is an infinite IID sequence and $\mathds{E}({\bm D}_0)=2$.\\
\end{theorem}
\section{Labeled subgraph densities}
\label{sec.lab.subg.dens}
To prove convergence we need to understand the neighborhood statistics of the random tree ${\bm T}({\bm D}_n)$.
First we will count subgraph densities and then relate them to neighborhood statistics.
For fixed unlabeled graphs $F$ and $G$ denote by
$$
inj(F,G)={|\{\phi: \phi \textrm{ is an injective homomorphism from } F \textrm{ to } G\}|\over |V(G)|}
$$
the normalized number of copies of $F$ in $G$.
We call $F$ the test graph.
We call $inj(F,G)$ the injective density of $F$ in $G$.
For bounded degree graphs the convergence of injective densities for every $F$ is equivalent to the convergence of neighborhood densities for every $H$ rooted finite graph.
For bounded average degree graphs subgraph statistics may be unbounded.
For the random star tree ${\bm S}{\bm t}_n$ we have
$$
inj(
\begin{tikzpicture}[every node/.style={circle, fill=black, inner sep=0mm,
minimum size=1mm}]
\node (A) at (-0.2,0) {};
\node (B) at (0.2,0) {};
\node (C) at (0,0.2) {};
\draw (A) -- (C) -- (B);
\end{tikzpicture},
{\bm S}{\bm t}_n)={(n-1)(n-2)\over n}.
$$
To avoid unbounded subgraph statistics we add a further structure to the test graph $F$.
We call a pair $(F,r)$ a numbered graph, where $r=(r_i)_{i=1}^{|V(F)|}$ and $r_i\in \mathds{N}$.
We call $r_i$ the remainder degree of the node $i\in V(F)$.
Let $(F,r)$ be a numbered graph and $\phi$ be a homomorphism from $F$ to a graph $G$.
We say that $\phi$ is a labeled homomorphism if $\phi$ is a homomorphism and
$$D_G(\phi(v))=D_F(v)+r_v,\ \forall v\in V(F).$$
Let
$$
inj_{lab}((F,r),G)={|\{\phi: \phi \textrm{ is an injective labeled homomorphism from } F \textrm{ to } G\}|\over |V(G)|}
$$
be the normalized number of numbered copies of $F$ in $G$.
First we want to derive properties of degree sequences ${\bm D}_n$ for which $inj_{lab}((F,r),{\bm T}({\bm D}_n))$ is convergent for every finite graph $F$ and remainder degrees $r$.
Then in Section \ref{sec.limit} we will turn to the convergence of neighborhood statistics.
\begin{remark}
The convergence of $inj_{lab}(.,G_n)$ for every $(F,r)$ does not imply the random weak convergence of $G_n$ in general.
$inj_{lab}((F,r),{\bm S}{\bm t}_n)$ is convergent for every $(F,r)$, but as we saw earlier, ${\bm S}{\bm t}_n$ is not a convergent tree sequence.
\end{remark}
\begin{remark}
\label{rem.lab.hom.bounded}
Let $(F,r)$ be an arbitrary numbered graph on $k$ nodes.
One can easily see that $inj_{lab}((F,r),G)$ is uniformly bounded for every $G$.
\end{remark}
\medskip
\noindent
{\bf Proof:}
To see this we will bound the number of ways we can construct an injective labeled homomorphism $\psi$ from $(F,r)$ to $G$.
Let $R=\max\{r_i\}$.
If we define $\psi(1)=v\in V(G)$, then $D_G(\psi(1))=D_F(1)+r_1\leq k+R$.
There are at most $D_G(\psi(1))^{D_F(1)}\leq (k+R)^k$ possibilities for the $\psi(u)$'s $(u\in N_F(1))$, where $N_F(1)$ is the set of neighbors of $1$ in $F$.
Following this idea we get that for every $v$ there are at most $((k+R)^k)^k=(k+R)^{k^2}$ possible ways to extend $\psi$, given $\psi(1)=v$.
Hence there are at most $n(k+R)^{k^2}$ injective labeled homomorphisms from $F$ to $G$ and the remark follows.
\qed
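For small instances, $inj_{lab}((F,r),G)$ can also be computed by brute force directly from the definition. The following Python sketch (exponential in $|V(F)|$ and meant only as an illustration; it relies on the \texttt{networkx} library) enumerates injections and checks both the edge and the remainder-degree conditions:
\begin{verbatim}
from itertools import permutations
import networkx as nx

def inj_lab(F, r, G):
    """Normalized count of injective labeled homomorphisms from (F, r) to G."""
    nodes_F = list(F.nodes)
    count = 0
    for image in permutations(G.nodes, len(nodes_F)):
        phi = dict(zip(nodes_F, image))
        if all(G.has_edge(phi[u], phi[v]) for u, v in F.edges) and \
           all(G.degree[phi[v]] == F.degree[v] + r[v] for v in nodes_F):
            count += 1
    return count / G.number_of_nodes()

# Example: a single edge whose endpoints each have one extra neighbour.
# On a path with 4 nodes only the middle edge qualifies (in both orders).
F = nx.path_graph(2)
G = nx.path_graph(4)
print(inj_lab(F, {0: 1, 1: 1}, G))  # -> 0.5
\end{verbatim}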
\medskip
\noindent
For an arbitrary numbered tree $(T,r)$, and $\phi:V(T)\mapsto [n]$ let
$$
{\bm I}_n((T,r),\phi)={\bm I}(\{\phi \textrm{ is an injective labeled homomorphism of } T \textrm{ to } {\bm T}_n\})
$$
\begin{equation}
\label{eqn.sum.of.indicator}
{\bm X}_n^{(T,r)}=\sum_{\phi: V(T)\mapsto [n]}{\bm I}_n((T,r),\phi)=n\cdot inj_{lab}((T,r),{\bm T}({\bm D}_n)).
\end{equation}
We define ${\bm I}_n((F,r),\phi),{\bm X}^{(F,r)}_n$ similarly for a numbered forest $(F,r)$.
If it does not cause any confusion, we will omit $r$ from the formulas above and use ${\bm I}_n(T,\phi)$, $X^T_n,$ ${\bm I}_n(F,\phi)$ and ${\bm X}^{F}_n$ instead to simplify notation.
For random graph sequences ${\bm G}_n$ by the convergence of $inj_{lab}(.,{\bm G}_n)$ we mean convergence in probability.
Let ${\bm D}_n$ be a random degree sequence and ${\bm T}_n={\bm T}({\bm D}_n)$ be the associated random tree sequence.
Let $(T,r)$ be a numbered tree.
As $inj_{lab}((T,r),{\bm T}_n)$ is bounded, we have that $inj_{lab}((T,r),{\bm T}_n)$ is convergent for every $(T,r)$ if and only if we have that
\begin{equation}
\label{eqn.lab.conc}
\mathds{D}^2\left ({X_n^T\over n}\right )=\mathds{D}^2(inj_{lab}((T,r),{\bm T}_n))\rightarrow 0.
\end{equation}
We will use this formula to prove properties of the degree sequence.
We can expand the above formula using (\ref{eqn.sum.of.indicator})
\begin{multline}
\label{eqn.expand.deviation}
\mathds{D}^2\left({X_n^T\over n}\right)={1\over n^2}
\Bigl(\sum_{\psi,\phi: V(T)\mapsto [n]}\mathds{E}({\bm I}_n(T,\psi){\bm I}_n(T,\phi))-\\
-\sum_{\psi,\phi: V(T)\mapsto [n]}\mathds{E}({\bm I}_n(T,\psi))\mathds{E}({\bm I}_n(T,\phi))\Bigr)\rightarrow 0.
\end{multline}
The following two lemmas will establish a connection between the degree sequence and the probabilities
$\mathds{P}({\bm I}_n(T,\phi)=1)$.
Then we will use (\ref{eqn.expand.deviation}) to prove that the degree sequence satisfies the conditions in Lemma \ref{lem.exchange.degen}.
\begin{remark}
\label{rem.relab}
As the degree sequence is exchangeable we have that for any $\psi,\phi: V(T)\mapsto [n]$
$$\mathds{P}({\bm I}_n(T,\phi)=1)=\mathds{P}({\bm I}_n(T,\psi)=1).$$
\end{remark}
\noindent
Let $T$ be an arbitrary tree on $k$ nodes.
For a random degree sequence ${\bm D}_n$ and $\phi:V(T)\mapsto [n]$ let ${\bm D}_{\phi}=({\bm D}_n(\phi(i)))_{i=1}^k$.
\begin{lemma}
\label{lem.forest.prob}
Let ${\bm D}_n\in {\cal D}_n$ be a random degree sequence and ${\bm T}_n={\bm T}({\bm D}_n)$.
Let $F$ be an arbitrary forest on $m\ (m\leq n)$ nodes with remainder degrees $r=(r_1,\cdots , r_m)$.
Let $R=\sum_ir_i$ and denote by $C_1,C_2,\cdots ,C_c$ the connected components of $F$. The probability that an arbitrary $\phi:V(T)\mapsto [n]$ is an injective labeled homomorphism is
$$
\mathds{P}({\bm I}_n(F,\phi)=1)={(n-m+c-2)!\over (n-2)!}H(r,F)\mathds{P}({\bm D}_{\phi}=D_T),
$$
where $H(r,F)=\prod_{i=1}^c\left[\left(\sum_{j\in C_i}r_j\right)\prod_{j\in C_i}{(D_F(j)+r_j-1)!\over (r_j!)}\right]$ is a constant depending only on $F$ and the remainder degrees $r$.
\end{lemma}
\noindent
{\bf Proof:}
We may assume that $\phi(i)=i,\forall i\in V(F)$.
Let $R_i=\sum_{j\in C_i}r_j$.
Fix a degree sequence $D=(D(i))_{i=1}^n$.
It follows from the Pr\"ufer sequence that the number of trees realizing this degree sequence is ${n-2\choose D(1)-1,\cdots ,D(n)-1}$.
We need to count the trees with degree sequence $D$ which have $F$ spanned by the first $m$ nodes and the remainder degree condition holds.
Contract every connected component $C_i$ of $F$ to a single vertex $u_i$.
Also contract the images of these components in ${\bm T}({\bm D}_n)$.
We get a tree on $n-m+c$ nodes with degree sequence
$$D'=(R_1, R_2, \cdots , R_c,D(m+1),\cdots , D(n)).$$
There are
$${n-m+c-2 \choose R_1-1, R_2-1, \cdots , R_c-1,D(m+1)-1,\cdots , D(n)-1}$$
trees realizing the degree sequence $D'$.
For each connected component $C_i$ we can connect the $R_i$ edges to the vertices in
$$R_i!\over \prod_{j\in C_i}r_j!$$
ways.
It follows that the number of labeled trees realizing the degree sequence $D$ and having $F$ on the first $m$ vertices is
$$
{n-m+c-2\choose R_1-1,\cdots ,R_c-1,D(m+1)-1,\cdots ,D(n)-1}
\prod_{i=1}^c\left[ {R_i!\over \prod_{j\in C_i}r_j!}\right].
$$
From this it follows that
\begin{multline}
\label{eqn.forest.prob}
\mathds{P}({\bm I}_n(F,\phi)=1|{\bm D}_n=D)=\\
={\displaystyle {n-m+c-2\choose R_1-1,\cdots ,R_c-1,D(m+1)-1,\cdots ,D(n)-1}\over \displaystyle
{n-2\choose D(1)-1,\cdots ,D(n)-1}}\prod_{i=1}^c {R_i!\over \prod_{j\in C_i}r_j!}.
\end{multline}
Note that the degree sequence $D$ should be such that $D(i)=D_F(i)+r_i,\,i=1,\cdots,m$ holds for the first $m$ degrees.
We need to sum this probability for every possible degree sequence.
In our case we sum over degree sequences for which $D(i)=D_F(i)+r_i,\,i=1,\cdots,m$ holds.
As in equation (\ref{eqn.forest.prob}) the right hand side does not depend on $D(i),\,i>m$, we have
\begin{multline*}
\mathds{P}({\bm I}_n(F,\phi)=1)=\\
={(n-m+c-2)!\over (n-2)!}
\prod_{i=1}^c\left[R_i\prod_{j\in C_i}{(D_F(j)+r_j-1)!\over (r_j!)}\right]
\mathds{P}({\bm D}_{\phi}=D_T).
\end{multline*}
If we take $H(r,F)=\prod_{i=1}^c\left[R_i\prod_{j\in C_i}{(D_F(j)+r_j-1)!\over (r_j!)}\right]$, we get the desired equation.
\qed
Let $(F_1,r_1),(F_2,r_2)$ be two labeled graphs, $\phi:V(F_1)\mapsto [n]$ and $\psi:V(F_2)\mapsto [n]$.
We denote by $F_{1,2}$ the graph obtained by identifying nodes $i\in V(F_1),\ j\in V(F_2)$ if and only if $\phi(i)=\psi(j)$.
We can define remainder degrees $r_{1,2}$ on $F_{1,2}$ in a straightforward way if $\phi(i)=\psi(j)\Rightarrow r_1(i)=r_2(j)$.
\begin{lemma}
\label{lem.forest.cond.prob}
Let ${\bm D}_n\in {\cal D}_n$ be a random degree sequence and ${\bm T}_n={\bm T}({\bm D}_n)$.
Let $F_1,F_2$ be two forests on $m_1$ and $m_2$ nodes $(m_1,m_2\leq n)$ with remainder degrees $r_1,r_2$.
Let $\phi:V(F_1)\mapsto [n]$ and $\psi:V(F_2)\mapsto [n]$.
If $F_{1,2}$ is a forest and we can define $r_{1,2}$, then let
$m_{1,2}=|V(F_{1,2})|$, $c_{1,2}=\{$the number of components of $F_{1,2}\}$ and $R_{1,2}=\sum_{V(F_{1,2})}r_{1,2}(i)$. We have
\begin{multline}
\label{eqn.cond.forest.prob}
\mathds{P}({\bm I}_n(F_1,\phi)=1|{\bm I}_n(F_2,\psi)=1)=\\
{(n-m_{1,2}+c_{1,2}-2)! \over (n-m_2+c_2-2)!}
{H(r_{1,2},F_{1,2})\over H(r_2,F_2)}
\mathds{P}({\bm D}_{\phi}=D_{F_1}|{\bm D}_{\psi}=D_{F_2}).
\end{multline}
\end{lemma}
\noindent
{\bf Proof:}
The proof follows immediately from the definition of conditional probability.
\qed
Let ${\bm D}_n$ be a degree sequence and ${\bm T}_n={\bm T}({\bm D}_n)$ be the associated random tree.
Assume that $inj_{lab}((T,r),{\bm T}_n)$ is convergent.
Then by (\ref{eqn.lab.conc}) we have that
$\mathds{D}^2({\bm X}_n^{T}\slash n)\to 0$.
For any tree $T$ on $k$ nodes we have
\begin{equation}
\label{eqn.2nd.moment}
\mathds{D}^2\left({{\bm X}_n^T\over n}\right)={1\over n^2}\sum_{\phi,\psi:V(T)\mapsto [n]}
\Big( \mathds{E}({\bm I}_n(T,\phi){\bm I}_n(T,\psi))-\mathds{E}({\bm I}_n(T,\phi))\mathds{E}({\bm I}_n(T,\psi))\Big)
\end{equation}
Now if we split the sum by the size of the intersection of $\phi(V(T))$ and $\psi(V(T))$ and use Remark \ref{rem.relab}, we have
\begin{multline}
\label{eqn.2nd.moment.b}
\mathds{D}^2\left({{\bm X}_n^T\over n}\right)=
{1\over n^2}\sum_{\substack{i=0\\|\phi(V(T))\cap \psi(V(T))|=i}}^k n(n-1)\cdot\ldots\cdot (n-2k+i+1)\cdot\\
\cdot\big(\mathds{E}({\bm I}_n(T,\phi){\bm I}_n(T,\psi))-\mathds{E}({\bm I}_n(T,\phi))\mathds{E}({\bm I}_n(T,\psi))\big)
\end{multline}
From Lemma \ref{lem.forest.prob} and \ref{lem.forest.cond.prob} we can easily derive that the order of the terms corresponding to $i\neq 0$ is ${\cal O}({1\over n})$.
It follows that the condition $\mathds{D}^2({\bm X}_n^{T}\slash n)\to 0$ is equivalent to
$$
{(n-1)\cdot\ldots\cdot (n-2k+1)\over n}\left(\mathds{P}({\bm I}_n(T,\phi){\bm I}_n(T,\psi))-\mathds{P}({\bm I}_n(T,\phi))\mathds{P}({\bm I}_n(T,\psi))\right)\to 0.
$$
Using again Lemma \ref{lem.forest.prob} and \ref{lem.forest.cond.prob} we can easily derive the following:
\begin{multline}
\label{eqn.independency}
\forall T,\ \mathds{D}^2\left({{\bm X}_n^{T}\over n}\right)\to 0 \Leftrightarrow
\forall \phi,\psi:V(T)\mapsto [n],\, \phi(V(T))\cap \psi(V(T))=\emptyset\\
\mathds{P}({\bm D}_{\phi}=D_{T}, {\bm D}_{\psi}=D_{T})\to \mathds{P}({\bm D}_{\phi}=D_{T})\mathds{P}({\bm D}_{\psi}=D_{T})
\end{multline}
\noindent
The following corollary is an easy application of Lemma \ref{lem.exchange.degen} and (\ref{eqn.independency}).
\begin{corollary}
\label{cor.conv.deFinetti}
The labeled subgraph densities of a random tree sequence converge in probability if and only if the corresponding degree sequence converges to an infinite IID sequence.
\end{corollary}
\begin{remark}
\label{rem.deg.conn}
The formula in Lemma \ref{lem.forest.prob} yields an easy result on the probability that two vertices $i,j$ with degrees $d_i,d_j$ are connected:
$$
\mathds{P}(ij\in E({\bm T}({\bm D}_n))\ |\ {\bm D}_n(i)=d_i,{\bm D}_n(j)=d_j)={d_i+d_j-2\over n-2}.
$$
Similarly for a given edge $ij\in E({\bm T}({\bm D}_n))$ the degree distribution of the vertices $i$ and $j$ can be expressed:
\begin{multline*}
\mathds{P}({\bm D}_n(i)=d_i,{\bm D}_n(j)=d_j\ |\ ij\in E({\bm T}({\bm D}_n)))=\\
{n\over n-2}{d_i+d_j-2\over 2}\mathds{P}({\bm D}_n(i)=d_i,{\bm D}_n(j)=d_j)
\end{multline*}
\end{remark}
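For completeness, the first formula follows by specializing Lemma \ref{lem.forest.prob} to the forest $F$ consisting of the single edge $ij$ with remainder degrees $r_i=d_i-1$ and $r_j=d_j-1$ (so $m=2$ and $c=1$): in this case
\begin{equation*}
H(r,F)=(d_i+d_j-2)\,{(d_i-1)!\over (d_i-1)!}\,{(d_j-1)!\over (d_j-1)!}=d_i+d_j-2,
\end{equation*}
and hence
\begin{equation*}
\mathds{P}({\bm I}_n(F,\phi)=1)={(n-3)!\over (n-2)!}\,(d_i+d_j-2)\,
\mathds{P}({\bm D}_n(i)=d_i,{\bm D}_n(j)=d_j)
={d_i+d_j-2\over n-2}\,\mathds{P}({\bm D}_n(i)=d_i,{\bm D}_n(j)=d_j),
\end{equation*}
which gives the conditional probability above after dividing by $\mathds{P}({\bm D}_n(i)=d_i,{\bm D}_n(j)=d_j)$.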
\section{The limit of ${\bm T}({\bm D}_n)$}
\label{sec.limit}
In the last section we discussed tree sequences ${\bm T}_n$ for which $inj_{lab}((T,r),{\bm T}_n)$ was convergent for every $(T,r)$.
We now turn to neighborhood statistics.
First we want to relate them to labeled subgraph densities.
We will express the neighborhood statistics as functions of the labeled subgraph densities.
As before let $U^l$ denote the set of all finite $l$-deep rooted trees.
Consider an $l$-deep rooted tree with root $x$: $T^l_x\in U^l$, with $|T^l_x|=k$.
Let us denote the nodes at distance $i$ from the root by $T_i$, and $|T_i|=t_i$ ($t_0$ is just $1$, $t_1$ is the degree of the root).
$B_G(v,l)$ is the rooted $l$-ball around $v$ in $G$ and ${\bm T}_n={\bm T}({\bm D}_n)$ is a random labeled tree with degree distribution ${\bm D}_n$.
Let $\sigma, \rho\in Aut(T_x^l)$ be two rooted automorphisms of the rooted tree $T_x^l$.
We say that $\sigma \sim \rho$ if and only if there exists $\tau \in Aut(T_x^l)$, such that $\tau$ fixes every vertex not in $T_{l}$ and $\sigma \circ \tau = \rho$.
$\sim$ is an equivalence relation.
The equivalence classes have $\prod_{i\in T_{l-1}}(D(i)-1)!$ elements, hence it follows
\begin{equation}
\label{eqn.aut}
|Aut(T_x^l)|=|Aut(T_x^l)\slash\sim|\prod_{i\in T_{l-1}}(D(i)-1)!.
\end{equation}
It is easy to see that
\begin{equation}
\label{eqn.neigh.lab.equiv}
p(l,T_x^l,{\bm T}_n)={1\over n}{\displaystyle {\bm X}_n^{(T',(r_i')_{i=1}^{|T'|})}\over |Aut(T_x^l)\slash \sim|}, \textrm{ where}
\end{equation}
\begin{equation}
\label{eqn.neig.lab.eqv}
\begin{array}{l}
T'=T_x^l\setminus T_l \\
r_i'=\left \{
\begin{array}{ll}
0 & i\notin T_l\cup T_{l-1}\\
D_{T_x^l}(i)-1 & i\in T_{l-1}.
\end{array}
\right.
\end{array}
\end{equation}
If ${\bm T}_n$ is a convergent random tree sequence then from (\ref{eqn.rnd.conv}) and (\ref{eqn.neigh.lab.equiv}) we have that for any $T'$ defined above
\begin{equation}
\mathds{D}^2\left({X_n^{T'} \over n}\right) \to 0.
\end{equation}
For bounded degree graphs the convergence of the neighborhood densities implies the convergence of the graph sequence in the sense of Benjamini and Schramm.
We saw earlier in Example \ref{example.star} that for bounded average degree graphs this is not the case.
The convergence of the neighborhood densities alone is not enough.
We need also (\ref{eqn.neigh.cons}) to hold.
The reason is that for fixed $k$ the $k$-neighborhood of the large degree nodes is large (${\cal O}(n)$).
In Example \ref{example.star}, even for $k=1$ every node ``sees'' the center node (i.e., every neighborhood of radius $1$ contains the center node), and so every radius-$2$ neighborhood contains ${\cal O}(n)$ vertices, which is unbounded.
Assign remainder degrees $r$ ($r_i=0,\ \forall i\notin T_l$) to the rooted tree $T_x^l$ and forget the root, then using Lemma \ref{lem.forest.prob}
\begin{multline}
\label{eqn.exp.subtree}
\mathds{E}({\bm X}_n^T)=\mathds{E}\left(\sum_{\phi:V(T)\mapsto [n]}{\bm I}_n(T,\phi)\right)={n!\over (n-k)!}\mathds{P}({\bm I}_n(T,\phi)=1)=\\
n{n-1\over n-k}\mathds{P}({\bm D}_n(\{1,2,\cdots ,k\})=D_T)H(r,T).
\end{multline}
From (\ref{eqn.neig.lab.eqv}) we have that
$$
p(l,T_x^l,{\bm T}_n)={1\over n}{{\bm X}_n^{T'}\over |Aut(T_x^l)\slash\sim|}.
$$
We want to define an infinite random rooted tree which is the limit of ${\bm T}_n$.
Let
$$
\mu_n(T_x^l)={1\over n}{\mathds{E}({\bm X}_n^{T'})\over |Aut(T_x^l)\slash\sim|}.
$$
Assume we have a convergent sequence of random trees ${\bm T}_n$ with degree sequence ${\bm D}_n$.
Further assume that ${\bm D}_n\to {\bm D}=({\bm D}_0,{\bm D}_0,\cdots)$ and let $\gamma=\mathds{E}({\bm D}_0)-1$.
Define
\begin{multline}
\label{eqn.prob.dist}
p(T_x^l)=\lim_{n\to\infty}\mu_n(T_x^l)=\\
\lim_{n\to\infty}{1\over |Aut(T_x^l)\slash\sim|}
{n-1\over n-k}\mathds{P}({\bm D}_n(\{1,2,\cdots ,k\})=D_{T'})H(r,T')=\\
{\prod_{i\notin T_{l}}\mathds{P}(D_0=d_i)(d_i-1)!\over |Aut(T_x^{l})|}t_l
\end{multline}
We can expand the formula
\begin{multline*}
H(r,T')=\sum_{i\in V(T')}r_i\prod_{j\in V(T')}{(d_j+r_j-1)!\over r_j!}=\\
\sum_{i\in T_{l-1}}(d_i-1)\prod_{j\notin T_{l-1}\cup T_l}(d_j-1)!=t_l\prod_{j\notin T_{l-1}\cup T_l}(d_j-1)!.
\end{multline*}
Then the last equation in (\ref{eqn.prob.dist}) follows using equation (\ref{eqn.aut}) and the expansion of $H(r,T')$.
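As a quick numerical sanity check of this expansion (our own worked example, in Python): for a $2$-deep tree whose root has two children, each with three leaf children, both sides evaluate to $t_l=6$. Degrees $d_i$ are taken in $T'=T_x^l\setminus T_l$, and the remainder degrees follow the definition of $r'$ above.
\begin{verbatim}
from math import factorial

# T' vertices: (degree in T', remainder degree, is it in T_{l-1}?)
Tprime = [
    (2, 0, False),  # root: two children in T'
    (1, 3, True),   # depth-1 vertex: 3 removed leaf edges
    (1, 3, True),   # depth-1 vertex: 3 removed leaf edges
]

H = sum(r for _, r, _ in Tprime)
for d, r, _ in Tprime:
    H *= factorial(d + r - 1) // factorial(r)

t_l = sum(r for _, r, in_lm1 in Tprime if in_lm1)  # 6 nodes at depth l
rhs = t_l
for d, r, in_lm1 in Tprime:
    if not in_lm1 and r == 0:  # i not in T_{l-1} union T_l
        rhs *= factorial(d - 1)

print(H, rhs)  # both 6: H(r,T') = t_l * prod (d_i - 1)!
\end{verbatim}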
Define $\mu({\cal T}(F))=p(F)$.
As the sets ${\cal T}(F)$ generate the $\sigma$-algebra, we can extend $\mu$ to $\cal T$ if $\mu$ satisfies (\ref{eqn.neigh.cons}).
If this is the case then $\mu$ is a random infinite rooted tree.
\begin{lemma}
\label{lem.mu.prob.measure}
Let ${\bm D}_n$ be an exchangeable random degree sequence and assume that ${\bm D}_n\to {\bm D}$, where ${\bm D}$ is an infinite sequence of i.i.d.\ copies of the variable ${\bm D}_0$.
Let $\mu$ be the associated measure defined above.
$\mu$ extends to a probability measure on $\cal G$ if and only if
$\mathds{E}({\bm D}_0)=2$ (or equivalently $\gamma=1$).
\end{lemma}
\noindent
{\bf Proof:}
We only need to show that $\mu$ satisfies (\ref{eqn.neigh.cons})
\begin{equation}
\label{eqn.prob.measure}
p(T_x^{l-1})=\sum_{\displaystyle T_x^{l}: B_{T_x^{l}}(x,l-1)\cong T_x^{l-1}}p(T_x^{l})\Leftrightarrow \gamma=1
\end{equation}
We have
$$
p(T_x^l)={\prod_{i\notin T_l}\mathds{P}({\bm D}_0=d_i)(d_i-1)!\over |Aut(T^l_x)|}t_l.
$$
Now rearranging the sum by the degrees of the leaves of $T_x^{l-1}$ in $T_x^l$ we have
\begin{multline*}
\sum_{\displaystyle T_x^{l}: B_{T_x^{l}}(x,l-1)\cong T_x^{l-1}}p(T_x^{l})=
\sum_{d_i=1,\,i\in T_{l-1}}^\infty p(T_x^{l-1}\cup (d_i)_{i\in T_{l-1}})=\\
\prod_{j\notin T_{l-1}\cup T_l}\mathds{P}({\bm D}_0=d_j)(d_j-1)!
\sum_{d_i=1,\,i\in T_{l-1}}^\infty {\prod_{i\in T_{l-1}}\mathds{P}({\bm D}_0=d_i)(d_i-1)!\over |Aut(T_x^l)|}t_l=\\
{\prod_{j\notin T_l\cup T_{l-1}}\mathds{P}({\bm D}_0=d_j)(d_j-1)!\over
|Aut(T_x^l\setminus T_l)|}t_{l-1}\gamma,
\end{multline*}
where the last equation follows from the fact that for fixed $d_i,\ i\in T_{l-1}$, every $\sigma\in Aut(T_x^{l-1})$ has only one extension in $Aut(T_x^{l})\slash\sim$.
Thus (\ref{eqn.prob.measure}) holds if and only if $\gamma=1$, i.e., $\mathds{E}({\bm D}_0)=2$.
\qed
\noindent
{\bf Proof of Theorem \ref{thm.main}:}\\
\noindent
Let ${\bm D}_n$ be a degree sequence and ${\bm T}({\bm D}_n)={\bm T}_n$ be the associated random tree sequence.
First assume, that the degree sequence converges to the distribution ${\bm D}=({\bm D}_0,{\bm D}_0,\cdots)$ and $\mathds{E}({\bm D}_0)=2$.
From equation (\ref{eqn.independency}) we get that for an arbitrary tree $T$, $\mathds{D}^2\left({{\bm X}_n^T\over n}\right)\rightarrow 0$.
Then by equation \eqref{eqn.neigh.lab.equiv} we have that for every $T_x^l$ $l$-deep rooted tree, the neighborhood statistics converge in probability to a limiting distribution $p(T_x^{l})$.
As the assumptions of Lemma \ref{lem.mu.prob.measure} hold we have that $p(T_x^l)$ defines a measure $\mu$ on infinite rooted trees and so ${\bm T}_n\rightarrow \mu$.
On the other hand assume that ${\bm T}_n$ converges to a random infinite rooted tree $\mu$.
Then by equation (\ref{eqn.neigh.lab.equiv}) we get that the number of degree $d$ vertices is concentrated.
Using that our degree distribution is exchangeable we get that ${\bm D}_n\rightarrow {\bm D}=({\bm D}_0,{\bm D}_0,\cdots )$ and $\mathds{E}({\bm D}_0)=2$.
This completes the proof of Theorem \ref{thm.main}.
|
1,108,101,566,396 | arxiv | \section{Introduction}
\label{sec:intro}
\begin{table*}[t]
\centering
{\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{6pt}
\small
\begin{tabular}{@{}l c c c c c c c c@{}}
\toprule
{\bf Methods} & \makecell{{\bf Frozen}\\{\bf LMs}} & {\bf Automated} & \makecell{{\bf Gradient-}\\{\bf free}} & \makecell{{\bf Guided}\\{\bf Optimize}} & \makecell{{\bf Few-}\\{\bf shot}} & \makecell{{\bf Zero-}\\{\bf shot}} & \makecell{{\bf Transferrable}\\ {\bf b/w LMs}} & {\bf Interpret.} \\ \midrule
Finetuning & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{alizarin} \ding{55}} & {\color{alizarin} \ding{55}} & {\color{alizarin} \ding{55}} \\
In-context Demo. & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} \\
Instructions & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} \\
Manual Prompt & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} \\
Soft Prompt Tuning & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{alizarin} \ding{55}} & {\color{alizarin} \ding{55}} \\
Discrete Prompt Enum. & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} \\
AutoPrompt & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{alizarin} \ding{55}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} \\
\midrule
RLPrompt ({\bf Ours}) & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} & {\color{kellygreen} \ding{51}} \\
\bottomrule
\end{tabular}
}
\vspace{-5pt}
\caption{
Comparison of different (prompting) paradigms for using pretrained LMs on downstream tasks, in terms of a number of desirable properties.
\emph{Guided Optimize} means the optimization or search is guided by either gradient or reward signals, and thus tends to be more efficient than those without guidance (e.g., enumeration).
Prompts consisting of discrete tokens (as opposed to embeddings) are often \emph{transferrable}/reusable by different LMs. Our approach with RL can optimize prompts with rewards without any supervised data (\emph{zero-shot}).
\emph{Discrete Prompt Enum.} enumerates discrete prompts (e.g., by paraphrasing or generation) from which the best is selected \cite[e.g.,][]{jiang2020can, gao2021LMBFF, liu2021KATE, prasad2022grips}.
\emph{AutoPrompt} \cite{shin2020autoprompt} uses gradients to edit the discrete prompt tokens.
See \S\ref{sec:relatedwork} for more details.
}
\label{tab:summary}
\vspace{5pt}
\end{table*}
Prompting has emerged as a promising approach to perform a wide range of NLP problems using large pretrained language models (LMs), including left-to-right models such as GPTs \cite{radford2019GPT2, brown2020language} and masked LMs such as BERT \cite{devlin-etal-2019-bert}, RoBERTa \cite{liu2019roberta}, etc. Compared to conventional fine-tuning that expensively updates the massive LM parameters for each downstream task, prompting concatenates the inputs with an additional piece of text that steers the LM to produce desired outputs.
A key question with prompting is how to find the optimal prompts to improve the LM's performance on various tasks, often with only a few training examples.
One of the most popular
schemes of prompt optimization is to
tune \emph{soft} prompts (i.e., continuous embedding vectors) as they are amenable to gradient descent \citep[][\textit{etc.}]{lester2021promptuning, li2021prefix, vu2021spot, gu2021ppt, liu2021ptuningv1, mokady2021clipcap, qian2022contrastiveprefix, an2022input}. However,
the resulting continuous embedding learned with an LM is, by its nature, hard for humans to understand \cite{khashabi2021prompt, lester2021promptuning, hambardzumyan2021warp, mokady2021clipcap} and incompatible with other LMs. Besides, the required LM-internal gradients are often expensive to compute, or
simply unavailable for LMs deployed with only inference APIs (e.g., GPT-3). It is thus often desirable to
use \emph{discrete} prompts which consist of concrete tokens from a vocabulary. However, the discrete nature of the prompts renders the optimization very difficult. Previous work has typically relied on manual engineering with heuristics \cite{petroni2019KB, brown2020language, schick2021exploiting, tam2021improving}, or automatic enumeration of multiple prompt candidates, from which the best one is picked
\cite{jiang2020can, gao2021LMBFF, liu2021KATE, prasad2022grips}.
AutoPrompt \cite{shin2020autoprompt} uses gradient information to edit the prompt tokens, which suffers from training instability as well as the same applicability issue as gradient-based soft prompting, showing limited effectiveness in practice.
This paper presents {{\textsc{RLPrompt}}}\xspace, a new discrete prompt optimization approach based on reinforcement learning (RL). This approach brings together a wide range of desirable properties for broad and efficient use on diverse tasks and LMs (Table~\ref{tab:summary}).
Crucially, rather than directly optimizing/editing the discrete prompt tokens, which has been difficult and inefficient, {{\textsc{RLPrompt}}}\xspace parameterizes a policy network that, once trained, generates the desired prompts. Discrete prompt optimization thus amounts to learning a small number of policy parameters, which we set as an MLP layer inserted into a frozen compact LM such as distilGPT-2 \cite{2019distilgpt2}.
This formulation also allows us to employ off-the-shelf RL algorithms \citep[e.g., ][]{guo2021text} that learn the policy with arbitrary reward functions---defined either with available data (e.g., in few-shot classification) or other weak signals when no supervised data is accessible (e.g., in controllable text generation).
On the other hand, RL for prompt optimization poses new challenges to learning efficiency: the large black-box LM presents a highly complex environment that, after receiving the prompt (i.e., actions), has to go through a long series of complex transitions (e.g., reading the input and inferring the outputs) before computing the rewards. This makes the reward signals extremely unstable and hard to learn from.
To overcome the difficulty, we propose two simple yet surprisingly effective ways to normalize and stabilize the rewards, and improve the optimization efficiency.
Experiments on few-shot classification and unsupervised text style transfer show our approach improves over a wide range of finetuning and prompting methods (as those in Table~\ref{tab:summary}). The resulting discrete prompts also facilitate rich interpretations and analyses for new insights into LM prompting. We also show the automatic optimization is robust to different choices of verbalizers in classification.
In particular, the optimized prompts, though inducing strong task performance, tend to be gibberish text without clear human-understandable meaning, echoing the recent research \citep{webson2021prompt, zhao2021calibrate, prasad2022grips} that LMs making use of prompts do not necessarily follow human language patterns. Perhaps surprisingly, those gibberish prompts learned with one LM can be used in other LMs for significant performance, indicating that those different pretrained LMs have grasped shared structures for prompting.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure-2-d5.pdf}
\vspace{-15pt}
\caption{Overview of {{\textsc{RLPrompt}}}\xspace for discrete prompt optimization. All LMs (white boxes) are frozen. We build our policy network by training a task-specific MLP module inserted into a frozen pre-trained LM.
The figure above illustrates generation of a prompt (left), example usages in a masked LM for classification and a left-to-right LM for generation (top-right and bottom-right, respectively), and update of the MLP using RL reward signals.
}
\label{fig:prompt-generator-implementation}
\vspace{5pt}
\end{figure*}
\section{Discrete Prompt Optimization with RL}
\label{sec:method}
We present {{\textsc{RLPrompt}}}\xspace, a framework for learning prompts of discrete tokens for pre-trained LMs to succeed in a wide range of NLP tasks.
As discussed in \S\ref{sec:intro}, discrete prompts are easier to interpret and use
than continuous prompts, but are also more challenging to learn due to intractable optimization over discrete tokens.
To solve this difficulty, we formulate discrete prompt optimization as a reinforcement learning (RL) problem, using a continuous policy network to explore the prompt space.
The policy network is highly parameter-efficient, only training a small MLP layer over a frozen compact LM (such as distilGPT-2).
Below, we present our RL formulation of discrete prompt optimization (\S\ref{subsec:method:formulation}-\ref{subsec:method:rl}).
After that, we discuss the design of our policy network (\S\ref{subsec:method:model}).
Finally, we describe our reward engineering techniques to improve RL training (\S\ref{subsec:method:reward-engineering}).
\subsection{The Discrete Prompt Optimization Problem}
\label{subsec:method:formulation}
Recent work \cite{brown2020language, jiang2020can, khashabi2021prompt, gao2021LMBFF} shows it is possible to combine a discrete text prompt $\mathbf{z}$ with input $\mathbf{x}$ to directly perform various NLP tasks using a pre-trained LM's generative distribution $P_{\text{LM}}(\mathbf{y} | \mathbf{z}, \mathbf{x})$, without needing to fine-tune the model.
For instance, in classification, the LM can be a masked language model (MLM) such as BERT \cite{devlin-etal-2019-bert}, and $\mathbf{y}$ is the class-label token (a.k.a. verbalizer like \texttt{positive} and \texttt{negative}) in the mask position; in a generation task, the LM can be a left-to-right model such as GPT-2 \cite{radford2019GPT2}, and $\mathbf{y}$ is the generated text. See Figure~\ref{fig:prompt-generator-implementation} for illustrative examples. We use $\mathbf{y}_\text{LM}(\mathbf{z} , \mathbf{x})$ to denote the LM output on $\mathbf{x}$ prompted by $\mathbf{z}$.
Our goal is to find the optimal discrete prompt $\mathbf{z}^*$ from vocabulary $\mathcal{V}$ to maximize some downstream performance measure $R$ of $\mathbf{y}_\text{LM}(\mathbf{z}^* , \mathbf{x})$.\footnote{Technically $\mathcal{V}$ can be any set of tokens. Here we assume $\mathcal{V}$ is the same as the LM's vocabulary for simplicity.}
The metric $R(\mathbf{y})$ can be as simple as match with ground truth $\mathbf{y}^*$ (e.g., in classification when data is available), but can also be more complex like the success of controllable text generation, which composes quality aspects such as style accuracy, language quality, and semantic preservation.
Assuming the prompts have fixed length $L$, we write the task of \emph{discrete text prompt optimization} in the general format below:
\begin{equation}
\max\nolimits_{\mathbf{z} \in \mathcal{V}^L} R\left(\mathbf{y}_\text{LM}(\mathbf{z}, \mathbf{x})\right).
\end{equation}
The optimization above, however, can be intractable because the discrete tokens of $\mathbf{z}$ are not amenable to gradient-based optimization, while the brute-force search space grows exponentially in the order of $\mathcal{O}(|\mathcal{V}|^L)$.
Previous work either approximates gradients over $\mathbf{z}$ using their continuous LM embeddings \cite{shin2020autoprompt} or tweaks human-written prompts with heuristics \cite{jiang2020can,mishra2021reframing,prasad2022grips}, with some success.
\subsection{The Reinforcement Learning Formulation}
\label{subsec:method:rl}
To overcome the difficulty, we formulate discrete text prompt optimization as an RL problem,
in which an agent learns to select prompt tokens $[z_1, \dots, z_L]$ one-by-one to maximize the downstream reward $R(\mathbf{y}_\text{LM}(\mathbf{z}, \mathbf{x}))$. At each time step $t$, the agent receives previous prompt tokens $\mathbf{z}_{<t}$ and generates the next prompt token $z_t$
according to a policy $\pi(z_t | \mathbf{z}_{<t})$.
After the agent finishes the entire prompt $\hat{\mathbf{z}}$, it receives the task reward $R(\mathbf{y}_\text{LM}(\hat{\mathbf{z}}, \mathbf{x}))$.
Parameterizing the policy with $\thetav$, we can rewrite the problem above as
\begin{equation}
\max\nolimits_{\thetav} R(\mathbf{y}_\text{LM}(\hat{\mathbf{z}}, \mathbf{x})),\ \hat{\mathbf{z}} \sim \prod_{t=1}^L \pi_{\thetav}(z_t | \mathbf{z}_{<t}).
\label{eq:rl}
\end{equation}
Compared to typical (soft) prompt tuning approaches, the RL formulation above has the key advantage of not needing gradient access to the LM,
treating them instead as black-box functions.
This enables us to optimize prompts for LMs whose gradients are too expensive to compute, or LMs that are solely available as inference APIs (e.g., GPT-3), using arbitrary reward functions. Compared to previous discrete prompt enumeration/paraphrasing, the RL approach explores the prompt space more efficiently guided by the reward signals.
The policy formulation also brings added flexibility. For instance, it can accommodate other information such as the input $\mathbf{x}$, leading to input-specific prompts (e.g., as used in text style transfer in \S\ref{subsec:method:reward-engineering}).
During training, we explore the prompt space by sampling from the policy network.
After the policy is trained, during inference, we select tokens greedily at each step to produce a deterministic prompt.
The reward objective in Eq.\eqref{eq:rl} can be optimized with any off-the-shelf RL algorithm. We use the recent soft Q-learning \citep[SQL,][]{guo2021text}, which has shown strong
learning efficiency and performance on various text generation problems and has an open-source implementation.\footnote{Our preliminary experiments indicate SQL often achieves better performance than common policy gradient methods.
} Specifically, we use only its on-policy learning component. We refer interested readers to \citet{guo2021text} for more details.
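To make the setup concrete, the sketch below (ours, in Python/PyTorch) implements a plain REINFORCE-style training step for the objective in Eq.\eqref{eq:rl}, rather than the soft Q-learning algorithm we actually use; \texttt{policy} maps a partial prompt to next-token logits and \texttt{task\_reward} wraps the frozen task LM as a black box. All names here are our own illustrative choices.
\begin{verbatim}
import torch

def train_step(policy, task_reward, optimizer, bos_id, L=5):
    # sample a length-L prompt token by token from the policy network
    prefix = torch.full((1, 1), bos_id, dtype=torch.long)
    log_probs = []
    for _ in range(L):
        logits = policy(prefix)[:, -1]          # next-token distribution
        dist = torch.distributions.Categorical(logits=logits)
        z_t = dist.sample()
        log_probs.append(dist.log_prob(z_t))
        prefix = torch.cat([prefix, z_t[:, None]], dim=1)
    prompt = prefix[:, 1:]                      # drop the BOS starter
    reward = task_reward(prompt)   # black box: no task-LM gradients needed
    loss = -reward * torch.stack(log_probs).sum()   # REINFORCE estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(reward)
\end{verbatim}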
\subsection{Efficient Parameterization of Policy}
\label{subsec:method:model}
We present an efficient parameterization of the policy network $\pi_{\thetav}$, which adapts a frozen pre-trained LM (i.e., policy LM) with a simple MLP layer that contains all the parameters $\thetav$ to be trained. The policy LM need not be the same as the LM we optimize the prompt for (i.e., task LM), and can be any LM with accessible gradients.
In practice, we only use compact models such as distilGPT-2 \cite{2019distilgpt2} for the policy LM.
Specifically, we use the LM to extract contextual embeddings of the partial prompt $\hat{\mathbf{z}}_{<t}$, apply the added task-specific MLP layer to compute the adapted embeddings, and pass the output into the model's original LM head to obtain the next prompt token probabilities, as illustrated in the left of Figure \ref{fig:prompt-generator-implementation}.
During training, we compute the MLP gradients by back-propagating through the policy LM.
Our policy network saves parameters by keeping the LM frozen, including its expensive LM head.
As a concrete example, using a distilGPT-2 with hidden size 768, we implement a generously parameterized MLP with 1 hidden layer and 2048 hidden units that only requires 3.1M parameters, a small fraction of the 82M parameters of distilGPT-2, itself a very small LM.
Despite training relatively few parameters, this efficient parameterization performs well, producing good results in our experiments.
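The sketch below (our illustrative reconstruction, not the released implementation) shows this parameterization using the Hugging Face \texttt{transformers} interface for GPT-2; the MLP dimensions mirror the text, and the residual connection is our own design choice.
\begin{verbatim}
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

class PromptPolicy(nn.Module):
    def __init__(self, name="distilgpt2", hidden=2048):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(name)
        for p in self.lm.parameters():   # freeze the whole LM, head included
            p.requires_grad = False
        d = self.lm.config.n_embd        # 768 for distilGPT-2
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d))

    def forward(self, input_ids):
        h = self.lm.transformer(input_ids).last_hidden_state  # frozen features
        h = h + self.mlp(h)           # task-specific adaptation (trainable;
                                      # residual connection is our choice)
        return self.lm.lm_head(h)     # frozen head -> next-token logits

policy = PromptPolicy()
# ~3.1M trainable parameters, matching the count quoted in the text
print(sum(p.numel() for p in policy.mlp.parameters()))
\end{verbatim}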
\subsection{Reward Engineering and Stabilization}
\label{subsec:method:reward-engineering}
Proper design of reward functions, a.k.a. reward engineering, is crucial to training efficiency and success in RL \cite{sutton2018reinforcement}.
Discrete prompt optimization, in particular, poses new challenges due to its highly complex reward functions---to receive the reward, each prompt has to go through many processing steps (e.g., combining with the input, passing through a large black-box LM, and inferring the outputs), each step introducing its own variations. This makes the reward signal highly unstable, and progress towards the task goal difficult to assess.
To solve these difficulties, we propose two simple reward engineering techniques that are effective at encouraging and stabilizing discrete prompt training.
\paragraph{Piecewise Reward}
With a misspecified or vulnerable reward function, the policy may maximize it without moving towards the desired goal. For example, while learning text classification using the probability of ground-truth labels as the reward function, the policy can sometimes find adversarial prompts \cite{Wallace2019UniversalAdversarial,xu2022exploring} that lead to very high probabilities for a single class given arbitrary inputs.
To overcome the issue, we propose to design piecewise reward functions \cite{Yu2020MetaWorldAB, Rengarajan2022ReinforcementLW} with both smooth and disjoint components to better express the task priorities and improve robustness.
Typically, we can include a dense, quantitative signal (e.g., label probability) to measure fine-grained progress towards the goal, and a sparse, qualitative signal only when certain states are achieved (e.g., accurate prediction on all classes) which can be encouraged by a large sudden increase in the reward.
We illustrate an example design of piecewise reward in text classification (\S\ref{subsec:classifysetting}).
\paragraph{Input-Specific $z$-Score Reward}
\label{reward: z-score}
Different inputs can have different levels of difficulties for reasoning or prediction. Prompted LMs can thus see different reward scales
for different inputs.
In text style transfer (\S\ref{subsec:tst-experiment}), for instance, some sentences may only require changing a few words to alter the style, so the LM naturally achieves higher rewards on them than other sentences, which may require a more significant rewrite.
Naively optimizing for all inputs with the same reward scale, therefore, can lead to training bias and instability.
To mitigate this problem, we propose to transform the reward using input-specific $z$-score, which normalizes the rewards by the input-specific means and standard deviations.
This can be seen as an analog to environment-specific reward normalization, a commonly-used technique in RL.
During prompt optimization, we sample a batch of prompts $Z(\mathbf{x})$ for each input $\mathbf{x}$, and compute the reward $R(\mathbf{y}_{\text{LM}}(\mathbf{z}, \mathbf{x}))$ for each prompt $\mathbf{z} \in Z(\mathbf{x})$.
After that, we compute the reward $z$-scores across prompts $Z(\mathbf{x})$. Using the shorthand $R_{\mathbf{x}}(\mathbf{z}) = R(\mathbf{y}_{\text{LM}}(\mathbf{z}, \mathbf{x}))$, we can write the transformation as below:
\begin{equation*}
z\text{-score}(\mathbf{z}, \mathbf{x}) = \frac{R_{\mathbf{x}}(\mathbf{z}) - \mean\nolimits_{\mathbf{z}' \in Z(\mathbf{x})} R_{\mathbf{x}}(\mathbf{z}')}{\std\nolimits_{\mathbf{z}' \in Z(\mathbf{x})}R_{\mathbf{x}}(\mathbf{z}')}.
\end{equation*}
To distinguish the $z$-scores of different inputs in the same batch, we condition our policy network on the inputs, i.e., $\pi_{\thetav}(\mathbf{z}|\mathbf{x})$.
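Concretely, the transform can be sketched as below (our illustration; the small constant guarding against zero variance is our own addition):
\begin{verbatim}
import numpy as np

def z_score_rewards(rewards_per_prompt):
    # rewards for the batch of prompts Z(x) sampled for ONE input x
    r = np.asarray(rewards_per_prompt, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)  # eps guards zero variance

print(z_score_rewards([0.2, 0.5, 0.9, 0.4]))
\end{verbatim}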
\section{Experiments}
The proposed {{\textsc{RLPrompt}}}\xspace is generally applicable to various types of pretrained LMs, to perform different NLP tasks using different prompt formats (Figure~\ref{fig:prompt-generator-implementation}). We evaluate our approach on both classification (in few-shot setting, \S\ref{subsec:classifysetting}) and generation (unsupervised text style transfer, \S\ref{subsec:tst-experiment}), and perform rich analyses for new insights of LM prompting (\S\ref{sebsec:analysis}).
\input{tab-NLU-dataset}
\subsection{Few-Shot Text Classification}
\label{subsec:classifysetting}
Learning text classification with few labeled examples has been a problem of interest in many applications \cite{xu2018lifelong,yu2018diverse}.
Previous work on prompting has applied various methods to the problem
\cite{brown2020language,shin2020autoprompt,schick2021exploiting,lester2021promptuning}.
We adopt the typical prompting approach which formulates classification using prompted LM as a generation problem, such as token infilling for an MLM like BERT, or next-token prediction for a left-to-right LM like GPT-2.
Classification, therefore, amounts to selecting tokens that correspond to a set of predetermined class labels, a.k.a., \emph{verbalizers} (e.g., \texttt{great} for positive sentiment and \texttt{terrible} for negative sentiment).
For instance, to classify the sentiment of an input sentence ``\texttt{food is delicious}'' using an MLM, we first fill our prompt and the input into a template ``\texttt{[MASK] [Prompt] [Input]}''.
After that, we select the verbalizer token with the highest probability of filling into the \texttt{[MASK]} position.
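To make this concrete, the sketch below (ours) performs verbalizer-based prediction with a masked LM under the \texttt{[MASK] [Prompt] [Input]} template, assuming the Hugging Face \texttt{transformers} interface and that each verbalizer maps to a single vocabulary token; the prompt string is only a placeholder, not a learned prompt.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-large")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

def classify(prompt, text, verbalizers=("terrible", "great")):
    s = f"{tok.mask_token} {prompt} {text}"   # [MASK] [Prompt] [Input]
    ids = tok(s, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**ids).logits[0]          # [seq_len, vocab]
    pos = (ids.input_ids[0] == tok.mask_token_id).nonzero()[0, 0]
    # assumes every verbalizer maps to a single vocabulary token
    vids = [tok(" " + v, add_special_tokens=False).input_ids[0]
            for v in verbalizers]
    return verbalizers[int(torch.argmax(logits[pos][vids]))]

print(classify("It was", "food is delicious"))  # e.g., "great"
\end{verbatim}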
\paragraph{Reward Function}
\label{reward:classification-piecewise}
The text classification task aims to correctly assign input text $\mathbf{x}$ to its ground truth label $c^*$ from a set of classes $\mathcal{C}$. In the context of prompting, it means assigning the highest probability to verbalizer $\mathbf{y}_{c^*}$ (e.g., \texttt{great})
which corresponds to class $c^*$ (e.g., positive sentiment).
To mitigate the adversarial cases discussed in \S\ref{subsec:method:reward-engineering}, we design
a piecewise reward function that
encourages prompts to be sensitive to \textit{all} classes.
Specifically, we compute the reward for prompt $\mathbf{z}$ using one example from each class $c \in \mathcal{C}$ for a total of $|\mathcal{C}|$ examples.
For each example $(\mathbf{x}_{c}, \mathbf{y}_c)$, we compute the prediction scores $S_\mathbf{z}(\mathbf{y}) := \log P_{\text{LM}}(\mathbf{y} | \mathbf{z}, \mathbf{x}_c)$ for each verbalizer $\mathbf{y}$, and take the argmax as the prediction $\hat{\mathbf{y}}_c$.
After that, we compare $\hat{\mathbf{y}}_c$ with the ground truth $\mathbf{y}_c$.
If any prediction $\hat{\mathbf{y}}_c$ is incorrect, we compute the reward as a hinge-loss-style objective for the gap between the target class score
and the highest score from other classes, written as $\text{Gap}_\mathbf{z}(c) = S_\mathbf{z}(\mathbf{y}_c) - \max_{c'\neq c} S_\mathbf{z}(\mathbf{y}_{c'})$. If all predictions are correct, we introduce a large sudden increase in the reward to express the prompt's desirability. Thus, we define the reward function for prompt $\mathbf{z}$ as below:
{
\begin{equation}
\begin{split}
R(&\{\mathbf{x}_c, \mathbf{y}_c, \hat{\mathbf{y}}_c\}_{c=1}^{|\mathcal{C}|}) = \\
&\begin{cases}
\lambda_1
\sum_{c=1}^{|\mathcal{C}|}
\min[0, \text{Gap}_\mathbf{z}(c)]
\hspace{0.6cm}
\begin{aligned}[t]
\text{if any } \hat{\mathbf{y}}_c \neq \mathbf{y}_c
\end{aligned}
\\
\lambda_2
\sum_{c=1}^{|\mathcal{C}|}
\text{Gap}_\mathbf{z}(c)
\hspace{3.4cm}
\text{o.w.,}
\end{cases}
\end{split}
\end{equation}
}%
where $\lambda_2 > \lambda_1$ are balancing weights. Intuitively, the reward function above stays negative as long as any output $\hat{\mathbf{y}}_c$ is incorrect, but provides a large positive signal when the opposite is true. During training, we take the examples $\mathbf{x}_c$ from our few-shot training set, and set $\lambda_1 = 1.2$ and $\lambda_2 = 2.0$ by tuning on the validation set.
In the experiments, we combine the above piecewise reward with part of the $z$-score normalization (\S\ref{reward: z-score}) by subtracting the mean reward over examples in a batch.
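A minimal sketch of this reward (ours, with the balancing weights from the text) is given below; \texttt{scores[c]} holds the verbalizer log-probabilities $S_\mathbf{z}(\cdot)$ for the example drawn from class $c$.
\begin{verbatim}
import numpy as np

def gap(S, c):
    # S_z(y_c) minus the best competing class score
    others = np.delete(S, c)
    return S[c] - others.max()

def piecewise_reward(scores, lam1=1.2, lam2=2.0):
    correct = all(np.argmax(S) == c for c, S in enumerate(scores))
    gaps = np.array([gap(S, c) for c, S in enumerate(scores)])
    if not correct:
        return lam1 * np.minimum(0.0, gaps).sum()  # negative, hinge-style
    return lam2 * gaps.sum()                       # large positive bonus

# toy 2-class example: both predictions correct
scores = [np.array([0.3, -1.2]), np.array([-0.5, 0.1])]
print(piecewise_reward(scores))  # 2.0 * (1.5 + 0.6) = 4.2
\end{verbatim}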
\paragraph{Dataset} Following \cite{gao2021LMBFF, hu2021knowledgeable, sun2022black}, we conduct our experiments on several text classification benchmarks including sentiment analysis and topic classification. For sentiment analysis, we choose SST-2 \cite{socher2013recursive}, Yelp polarity \cite{zhang2015character}, MR \cite{pang2005MR}, and CR \cite{hu2004CR}. For topic classification, we choose AG's News \cite{zhang2015character}.
The statistics of the datasets are shown in Table~\ref{tab:nlu-dataset}.
\paragraph{Few-Shot Setting}
Following previous work \cite{gao2021LMBFF, min2021noisy, sun2022black}, we randomly draw $16$ samples per class from the original training set to form a $16$-shot training set. We also draw another $16$ samples from the original training set to form a validation set, following the standard few-shot learning setting \cite{perez2021trueFS}. We pick the three prompts that show the highest performance on our validation set within each experiment. Due to the instability and inherent randomness of the setup~\cite{henderson2018RLrandom, gao2021LMBFF}, we sample different training and validation sets with $5$ random seeds. Similarly, we run each experiment with $3$ random seeds, and report the average accuracy and standard deviation.
\paragraph{Baselines}
We compare our method with all training and prompting paradigms shown in Table~\ref{tab:summary}, described in the list below (more implementation details in Appendix \S\ref{appendix:implementation:fstc}):
\begin{itemize}[noitemsep, nolistsep, leftmargin=0.35cm]
\item \textbf{Finetuning}: fine-tuning the entire PLM with the classification head on our few-shot training examples.
\item \textbf{Manual Prompt}: taking the hand-crafted prompts from \cite{schick2021exploiting}.
\item \textbf{In-context Demonstration}~\cite{brown2020language}: randomly selecting one training sample per class and concatenating them with the input texts.
\item \textbf{Instructions}: manually creating task descriptions and label definitions following the \textit{natural instructions} protocol \cite{mishra2021NI} as shown in Table~\ref{tab:nlu-instruction} in appendix, and prepending the instructions to the input texts.
\item \textbf{Prompt Tuning} \cite{lester2021promptuning}: a soft prompt approach using gradient for prompt tuning.
\item \textbf{Black Box Tuning} \cite{sun2022black}: mixing discrete and soft prompts, and tuning the soft part in a gradient-free manner.
\item \textbf{GrIPS} \cite{prasad2022grips}: a discrete prompt enumeration approach that performs phrase-level edits on instructions and selects the best one.
\item \textbf{AutoPrompt}~\cite{shin2020autoprompt}: adding discrete trigger tokens as prompts and iteratively updating prompts by gradient-guided search.
\end{itemize}
\paragraph{Experiment Setup}
We use RoBERTa-large \cite{liu2019roberta} as our backbone model.
For our approach, we set the prompt length $L=2$, and insert the prompt tokens at the same positions as our manual prompts \cite{schick2021exploiting, tam2021improving}\footnote{It is known that increasing the prompt length and/or inserting prompt tokens at multiple positions can often lead to improved performance. We leave further exploration to future work.}. We use distilGPT-2 as the frozen base of the policy network. Please see Appendix \S\ref{appendix:implementation:classify} for more training details.
\paragraph{Results}
\input{tab-NLU-main-result}
We present the few-shot classification results in Table~\ref{tab:cls-main}.
Compared to existing discrete prompt optimization frameworks (GrIPS and AutoPrompt), our method finds more powerful prompts, achieving substantially higher accuracy on all benchmarks.
When comparing against soft Prompt Tuning, {{\textsc{RLPrompt}}}\xspace achieves higher and more stable (e.g., lower std-dev) accuracy, as our approach does not suffer from the sensitivity to initialization, a common issue of soft prompt tuning methods in few-shot settings \cite{gu2021ppt, vu2021spot, su2021transferability, li2021prefix}.
Our approach achieves comparable performance to Black Box Tuning, a mixed-prompt approach that specifically tunes the soft part of the prompt. Integrating the two paradigms to enable joint discrete and soft optimization is an interesting direction to explore.
Compared to model finetuning in the few-shot setting, our approach achieves higher performance along with better stability on most of our benchmarks. Unlike finetuning, which perturbs the LM's original knowledge by changing its parameters, prompting methods better stimulate the LM's power without hurting its inherent generic capability.
\subsection{Text Style Transfer}
\label{subsec:tst-experiment}
Controlling the attributes of generated text has long been a challenging problem for natural language generation \cite{yu-2017-seqgan,hu2017toward}.
Specifically, the goal of text style transfer (TST) is to (1) change the style of an input sentence while (2) preserving its content, usually without access to supervised training data. For instance, in a sentiment transfer task,
given a negative sentence ``The food is disgusting'', a good output would be the positive sentence ``The food is delicious''. The training data, however, only comprise negative and positive sentences with no input-output relationships.
Even without supervised data, our method can learn prompts using weak signals as the reward function, which is not possible with previous prompt optimization methods.
Compared to previous TST work that trained models from scratch \citep[][\textit{inter alia}]{dai2019styleTransformer,luo2019dual,madaan-etal-2020-politeness} or fine-tuned pre-trained LMs \cite{liu2021DIRR}, our method presents a more efficient solution that learns discrete prompts for a LM without updating the latter's parameters.
\paragraph{Reward Function}
Given input sentence $\mathbf{x}$, the goal of text style transfer is to generate output $\mathbf{y}^*$
that preserves the information in $\mathbf{x}$ while showing style attribute $s^*$. Following these priorities, we define the task reward as a simple sum of content preservation and target style intensity, described formally below:
\begin{equation}
\small
R(\mathbf{x}, \mathbf{y}, s^*) = \ \text{Preservation}(\mathbf{x}, \mathbf{y})
+ \text{Style}(\mathbf{y}, s^*).
\end{equation}
We implement our preservation reward using the CTC metric \cite{deng-etal-2021-compression}, which measures the bi-directional information alignment between input $\mathbf{x}$ and output $\mathbf{y}$. We compute the alignment by matching token embeddings from RoBERTa-large similarly to BERTScore \cite{zhang2019bertscore}, a technique that shows the highest correlation with human judgments. For the style reward, we compute the target style probability under a BERT base classifier learned from the Yelp training set, which achieves 98.4\% accuracy on the validation set.
\paragraph{Dataset}
We test our method on the Yelp \cite{shen2017style} dataset, which contains customer reviews of positive and negative sentiment. The training set contains 266K positive and 177K negative reviews, the validation set 38K and 25K, and the test set 76K and 50K, respectively. We perform evaluation on a separate dataset consisting of 500 reviews for each sentiment, with reference outputs collected by \citet{li-etal-2018-delete}.
\begin{table*}[t]
\centering
{\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{5pt}
\small
\begin{tabular}{lrrrrr | rrr}
\toprule
{Model} & {Content} & {Style} & {Fluency} & {\bf $\bm{J}$({\scriptsize C, S, F})} & {\bf GM({\scriptsize C, S, F})} & {BLEU} & {BERTScore} & {PPL}$\downarrow$ \\
\midrule
\rowcolor{Gray}
\multicolumn{9}{l}{\textit{Oracles}} \\
Copy & 100 \scriptnumber{0.0} & 1.4 \scriptnumber{0.0} & 92.2 \scriptnumber{0.0} & 11.9 \scriptnumber{0.0} & 23.5 \scriptnumber{0.0} & 30.1 \scriptnumber{0.0} & 62.2 \scriptnumber{0.0} & 20.6 \scriptnumber{0.0} \\
Reference & 62.2 \scriptnumber{0.0} & 78.9 \scriptnumber{0.0} & 88.7 \scriptnumber{0.0} & 55.9 \scriptnumber{0.0} & 75.8 \scriptnumber{0.0} & 100 \scriptnumber{0.0} & 100 \scriptnumber{0.0} & 30.8 \scriptnumber{0.0} \\
\rowcolor{Gray}
\multicolumn{9}{l}{\textit{Training Baselines}} \\
Style Transformer & 75.2 \scriptnumber{0.1} & 96.4 \scriptnumber{0.1} & 58.6 \scriptnumber{0.2} & 46.1 \scriptnumber{0.2} & 75.2 \scriptnumber{0.1} & 27.6 \scriptnumber{0.1} & 56.1 \scriptnumber{0.0} & 78.2 \scriptnumber{0.3} \\
DiRR & \textbf{78.8 \scriptnumber{0.0}} & \textbf{97.7 \scriptnumber{0.1}} & 75.6 \scriptnumber{0.2} & 59.6 \scriptnumber{0.2} & 83.5 \scriptnumber{0.1} & \textbf{30.0 \scriptnumber{0.0}} & \textbf{61.7 \scriptnumber{0.0}} & 40.6 \scriptnumber{0.1} \\
\rowcolor{Gray}
\multicolumn{9}{l}{\textit{Prompting Baselines (GPT-2 xlarge)}} \\
Null Prompt & 37.4 \scriptnumber{0.1} & 94.8 \scriptnumber{0.1} & 97.6 \scriptnumber{0.1} & 33.6 \scriptnumber{0.1} & 70.2 \scriptnumber{0.1} & 6.6 \scriptnumber{0.1} & 35.8 \scriptnumber{0.1} & 59.5 \scriptnumber{2.0} \\
Random Prompt & 39.6 \scriptnumber{0.1} & 93.8 \scriptnumber{0.2} & \textbf{97.8 \scriptnumber{0.1}} & 34.7 \scriptnumber{0.2} & 71.3 \scriptnumber{0.1} & 7.3 \scriptnumber{0.1} & 37.4 \scriptnumber{0.1} & 60.5 \scriptnumber{1.6} \\
Manual Prompt & 64.2 \scriptnumber{1.0} & 91.5 \scriptnumber{0.6} & 93.2 \scriptnumber{0.2} & 53.4 \scriptnumber{1.2} & 81.8 \scriptnumber{0.5} & 19.2 \scriptnumber{0.6} & 53.1 \scriptnumber{0.8} & 35.5 \scriptnumber{1.4} \\
\rowcolor{Gray}
\multicolumn{9}{l}{\textbf{\textit{{{\textsc{RLPrompt}}}\xspace (Ours)}}} \\
distilGPT-2 & 57.3 \scriptnumber{0.3} & 96.5 \scriptnumber{0.1} & 85.3 \scriptnumber{0.3} & 46.0 \scriptnumber{0.2} & 77.9 \scriptnumber{0.1} & 15.7 \scriptnumber{0.1} & 49.1 \scriptnumber{0.1} & 43.6 \scriptnumber{0.6} \\
GPT-2 small & 60.0 \scriptnumber{0.1} & 96.4 \scriptnumber{0.1} & 89.0 \scriptnumber{0.5} & 50.7 \scriptnumber{0.3} & 80.1 \scriptnumber{0.1} & 16.5 \scriptnumber{0.1} & 51.3 \scriptnumber{0.1} & 37.8 \scriptnumber{0.9} \\
GPT-2 medium & 65.7 \scriptnumber{0.2} & 95.2 \scriptnumber{0.2} & 89.3 \scriptnumber{0.2} & 56.1 \scriptnumber{0.6} & 82.3 \scriptnumber{0.1} & 20.0 \scriptnumber{0.2} & 55.1 \scriptnumber{0.2} & 34.4 \scriptnumber{0.3} \\
GPT-2 large & 65.1 \scriptnumber{0.3} & 94.6 \scriptnumber{0.4} & 91.6 \scriptnumber{0.2} & 56.5 \scriptnumber{0.5} & 82.6 \scriptnumber{0.1} & 19.8 \scriptnumber{0.1} & 54.7 \scriptnumber{0.1} & 34.9 \scriptnumber{0.3} \\
GPT-2 xlarge & 72.1 \scriptnumber{0.2} & 94.2 \scriptnumber{0.4} & 89.5 \scriptnumber{0.1} & \textbf{61.4 \scriptnumber{0.7}} & \textbf{84.7 \scriptnumber{0.2}} & 24.2 \scriptnumber{0.2} & 59.0 \scriptnumber{0.1} & \textbf{34.3 \scriptnumber{0.3}} \\
\bottomrule
\end{tabular}
}
\vspace{-5pt}
\caption{Automatic evaluation of our method vs. baselines on the Yelp \cite{shen2017style} sentiment transfer dataset. Content measures the content preservation using the CTC metric \cite{deng-etal-2021-compression}. Style is the accuracy under our sentiment classifier. Fluency is accuracy under \citet{krishna-etal-2020-reformulating}'s grammaticality classifier. $J(\cdot)$ is our main metric which measures the average joint sentence level score defined in \S\ref{subsec:tst-experiment}. We also report the following popular metrics: GM, the geometric average of Content, Style, and Fluency; BLEU and BERTScore between outputs and references; PPL, the perplexity under a GPT-2 language model. Copy and Reference are oracles that duplicate the input sentence and use the human-written reference, respectively.
Numbers in (parentheses) are standard errors of performance.
}
\label{tab:tst-main}
\vspace{8pt}
\end{table*}
\paragraph{Baselines}
We evaluate our method against both training- and prompting-based baselines.
For the training baselines, we compare with two strong existing methods, Style Transformer \cite{dai2019styleTransformer} and DiRR \cite{liu2021DIRR}. In particular, DiRR fine-tunes a GPT-2 \cite{radford2019GPT2} model with RL and auxiliary objectives, so it can be seen as a full-model tuning analogue to our method.
For the prompting baselines, we compare with (1) Null Prompt, which does not use any prompt, (2) Random Prompt, which samples 5 tokens from the vocabulary as prompts, and (3) Manual Prompt, which averages the performance of three human-written templates, one by \citet{reif2021recipe} and two written for this experiment.
\paragraph{Experiment Setup}
For our prompt optimization method, we experiment with all 5 GPT-2 models as the task LM, ranging from the smallest distilGPT-2 \cite{2019distilgpt2} with 82M parameters to the largest GPT-2 xlarge with 1.5B parameters.
Because TST shows different reward scales across inputs, we normalize our rewards using input-specific $z$-score during training, as discussed in \S\ref{subsec:method:reward-engineering}.
To generate text $\hat{\mathbf{y}}$, we sample 32
output candidates from the prompted LM, and pick the one with the highest reward as the final output.
We also fix the prompt length $L=5$.
To reduce the performance variance caused by RL initialization and sample selection, we average the performance from 3 RL experiments for each result from our own method.
Additionally, we perform the same sample selection for all our baselines for comparable performance.
We describe more training details in Appendix \S\ref{appendix:implementation:tst}.
\paragraph{Evaluation}
Following previous work, we evaluate the semantic preservation, style accuracy, and fluency of our test outputs. We measure semantic preservation using the CTC metric \cite{deng-etal-2021-compression} discussed earlier, which we denote as Content for convenience. For style accuracy (Style), we compute the outputs' match with their target styles using a BERT base classifier trained on both training and testing data, with 98.4\% accuracy on the validation set.
To evaluate fluency (Fluency), we rate output grammaticality using the same classifier as \citet{krishna-etal-2020-reformulating}.\footnote{\url{https://huggingface.co/cointegrated/roberta-large-cola-krishna2020}}
To evaluate how outputs balance and maximize all aspects, we aggregate the quality dimensions by averaging the joint sentence-level scores strictly following \citet{krishna-etal-2020-reformulating}'s protocol,
defined as
\begin{align*}
J(\text{\small Content}, &\text{\small Style}, \text{\small Fluency}) = \\ &
\sum_{\mathbf{x} \in \mathcal{X}} \frac{\text{\small Content}(\mathbf{x}) \cdot \text{\small Style}(\mathbf{x}) \cdot \text{\small Fluency}(\mathbf{x})}{|\mathcal{X}|}
,
\end{align*}
which requires each sentence to preserve input content, have the correct style, and be fluent.
We also report popular metrics such as geometric average (GM) of Content, Style, and Fluency scores as another aggregation method, BLEU and BERTScore
between the outputs and human-written references, and output perplexity (PPL) under a GPT-2 language model fine-tuned on the training data.
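For clarity, the joint score can be computed as in the sketch below (ours; per-sentence scores are assumed to lie in $[0,1]$):
\begin{verbatim}
import numpy as np

def joint_score(content, style, fluency):
    # each argument: per-sentence scores in [0, 1]
    c, s, f = map(np.asarray, (content, style, fluency))
    return float(np.mean(c * s * f))   # sentence-level product, then average

print(joint_score([0.9, 0.6], [1.0, 1.0], [0.8, 0.9]))  # 0.63
\end{verbatim}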
\paragraph{Results}
We present the TST results in Table \ref{tab:tst-main}, and discuss prompting baselines and our method in terms of performance using GPT-2 xlarge.
Compared to training baselines such as Style Transformer and DiRR, our method shows slightly lower semantic preservation and style accuracy, but markedly better fluency, which leads to a higher overall joint score ($J(\cdot)$) and geometric average score (GM$(\cdot)$). This may be because our method better preserves the LM's fluent generation capability by not tuning its parameters. In contrast, DiRR, which fine-tunes a GPT-2 model, suffers from lower fluency despite doing well on other aspects.
Relative to prompting baselines, our prompt optimization clearly improves on the default performance. In particular, our trained prompts perform better on average with lower variance than manual prompts, which see good performance with some templates but much worse performance with other templates of similar meaning. We present the performance of each manual prompt along with that of our learned prompts in Table \ref{tab:tst-prompt-examples}.
Within our own method, the model size plays an important role in TST success, primarily through content preservation. As model size increases from the smallest distilGPT-2 to the largest GPT-2 xlarge, the Content score generally increases while Style and Fluency stay high, resulting in improved $J(\cdot)$ and GM$(\cdot)$ scores.
\subsection{Analysis}
\label{sebsec:analysis}
\subsubsection{Fluent vs. Gibberish Prompts}
\label{subsec:analysis:fluent-prompts}
We also study the interaction of prompt fluency with downstream task performance, because fluent prompts are valuable for interpretability and insights into what LMs may consider as useful task instructions. Our results show that \emph{prompts optimized for the downstream task are indeed often not linguistically coherent, but instead tend to be gibberish}.
For instance, one set of prompts we learned for style transfer to positive and to negative sentiments are ``\texttt{Affect differed judgments
(- analysis}'' and ``\texttt{Difference
experiences (- contrasting
experience}'', respectively, which are far from grammatical.
The observation suggests that frozen LMs also make use of prompts differently from humans, in line with previous discoveries in prompt-based model fine-tuning \cite{webson2021prompt}.
To compare fluent and gibberish prompts, we use the task of text style transfer (\S\ref{subsec:tst-experiment}). Whereas our standard prompt optimization does not require prompt fluency, we propose to optimize \emph{fluent} prompts with top-k filtering \citep{qin2022cold}. That is, we limit our policy's action space at each step $t$ to the tokens with top-10 probabilities under a GPT-2 language model, conditioning on the previous prompt tokens $\mathbf{z}_{<t}$. Other than that, we train the policy using the same routine. We evaluate prompt fluency using its perplexity under a GPT-2 language model, and compare with our standard method (without fluency constraint) in Table \ref{tab:tst-fluency}.
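A sketch of this constraint (ours, assuming the Hugging Face causal-LM interface) is given below: at each step, the policy may only choose among the $k$ most probable next tokens under a frozen GPT-2 LM, conditioned on the partial prompt.
\begin{verbatim}
import torch

def allowed_actions(lm, prefix_ids, k=10):
    # lm: a causal LM such as GPT2LMHeadModel; prefix_ids: [1, t] tokens
    with torch.no_grad():
        logits = lm(prefix_ids).logits[0, -1]  # next-token logits
    return torch.topk(logits, k).indices       # restrict the policy to these
\end{verbatim}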
Results show that the fluency-constrained prompts have remarkably lower perplexity, which indicates higher language coherence. For instance, a pair of fluent prompts we learned for to-positive and to-negative transfers are ``\texttt{We love and thank}'' and ``\texttt{We are not in}'', respectively, which make sense as prompts for generating sentences of the target sentiments.
However, these prompts receive much lower task performance in terms of joint score $J(\cdot)$ (44.4 vs. 61.4) and geometric average score GM$(\cdot)$ (76.9 vs. 84.7). We present the learned fluent prompts along with their full performance in Table \ref{tab:tst-prompt-examples} in the appendix.
\begin{table*}[h]
\setlength{\tabcolsep}{4pt}
\centering
{\renewcommand{\arraystretch}{1.2}
\small
\begin{tabular}{lrrrrrrrrr}
\toprule
{Method} & {Prompt PPL} $\downarrow$ & {Content} & {Style} & {Fluency} & {$J$({\scriptsize C, S, F})} & {GM({\scriptsize C, S, F})} & {BLEU} & {BERTScore} & {PPL}$\downarrow$ \\ \midrule
RLPrompt & $2.54\mathrm{e}5$ {\scriptsize ($4.34\mathrm{e}4$)} & \textbf{72.1 {\scriptsize(0.2)}} & 94.2 {\scriptsize(0.4)} & 89.5 {\scriptsize(0.1)} & \textbf{61.4 {\scriptsize(0.7)}} & \textbf{84.7 {\scriptsize(0.2)}} & \textbf{24.2 {\scriptsize(0.2)}} & \textbf{59.0 {\scriptsize(0.1)}} & \textbf{34.3 {\scriptsize(0.3)}} \\
+ Fluency & \textbf{65.0 {\scriptsize(1.7)}} & 50.8 {\scriptsize(0.6)} & \textbf{95.4 {\scriptsize(0.2)}} & \textbf{93.9 {\scriptsize(0.2)}} & 44.4 {\scriptsize(0.5)} & 76.9 {\scriptsize(0.3)} & 11.9 {\scriptsize(0.2)} & 45.2 {\scriptsize(0.4)} & 42.0 {\scriptsize(1.0)} \\ \bottomrule
\end{tabular}
}
\vspace{-7pt}
\caption{
Comparison of prompt optimization with fluency constraint vs. no constraint on the Yelp dataset. Both experiments use GPT-2 xlarge as the text generation model. Prompt PPL is the prompt's perplexity under a GPT-2 language model. The text style transfer metrics are the same as in Table \ref{tab:tst-main}.}
\label{tab:tst-fluency}
\vspace{5pt}
\end{table*}
\subsubsection{Transferring Prompts Across LMs}
One unique advantage of discrete prompts over soft prompts is they are transferrable across models, due to the common text space instead of the model-specific latent space.
This enables us to study the connections between different LMs by comparing the performance of one model using prompts trained from other models (e.g., we learn a prompt from a distilGPT-2 model, and use it to prompt a GPT-2 xlarge model).
Experiments show that prompts transfer better from smaller to larger models than vice versa, suggesting that models with larger capacity may contain the structures that allow a smaller model's prompts to perform at their best level, but not the other way around.
Specifically, we use the task of text style transfer (TST) for our study (\S\ref{subsec:tst-experiment}).
We take the prompts trained for each GPT-2 model, and use them to perform TST using every other model as the text generator.
We evaluate the outputs from each experiment (averaging 5 evaluation runs per prompt-model pair), and tabulate them in the heatmap of Figure \ref{fig:prompt-transfer}. We also include Manual Prompt for comparison and Random Prompt to represent the performance without any transfer.
Manual Prompt shows uniformly worse performance than learned prompts with smaller models like distilGPT-2 and GPT-2 small, but generally better results with larger models like GPT-2 large and xlarge, suggesting that human-written instructions may better activate larger models.
Overall, all optimized prompts see some transfer, as evidenced by uniformly better performance than Random Prompt, but the level of success depends on both the prompt training and text generation models.
For example, prompts learned from larger models see sharp performance declines when applied to smaller models, indicating that the LM structures they activate to achieve good performance may be less present in smaller models.
On the other hand, prompts learned from smaller models transfer better to larger models (e.g., distilGPT-2 to GPT-2 xlarge), achieving similar performance to using the smaller model itself.
This opens up a promising and exciting direction for future research --- enabled by the transferrability across LMs, we may learn a prompt cheaply from smaller models, and apply it to a larger, more expensive model for inference.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{prompt-transfer-heatmap-d3.png}
\caption{Heatmap of Yelp style transfer performance with transferred discrete prompts. The columns represent the models used to learn the prompts, and the rows represent the models we perform text generation with. \texttt{manual} and \texttt{random} refer to the baselines presented in Table \ref{tab:tst-main}. Brighter color represents better joint score $J(\cdot)$. }
\label{fig:prompt-transfer}
\end{figure}
\subsubsection{Robustness to Classification Verbalizers}
Classification with prompting has been shown to be sensitive to the choice of verbalizers. Manual design of the verbalizers requires domain expertise and an understanding of the base LMs. Previous research has devised various methods for automatic verbalizer search \cite{schick2020label, shin2020autoprompt, gao2021LMBFF}, such as rule-based filtering, likelihood pruning, classifier learning, or enumerating in the vocabulary space.
In the few-shot classification tasks, our {{\textsc{RLPrompt}}}\xspace can be used to optimize the prompts given any verbalizers. Table~\ref{tab:verbalizers} shows the results for several intuitive verbalizers. We test on RoBERTa-large and run experiments across three RL random seeds on the SST-2 sentiment classification dataset. Given different pairs of verbalizers, our performance consistently outperforms the manual prompt by a large margin, validating the robustness of our approach to verbalizers.
\begin{table}[t]
\centering
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{lcccl}
\toprule
\textbf{Verbalizers} & \textbf{SST-2} & \textbf{Prompt Template (Ours/Manual)} \\\midrule
\multirow{2}{*}{negative, positive} & 85.4 (1.6) & \texttt{\textless{}S\textgreater{}}. iously Totally \texttt{{[}MASK{]}}. \\
& 76.8 & \texttt{\textless{}S\textgreater{}}. It was \texttt{{[}MASK{]}}. \\\midrule
\multirow{2}{*}{terrible, great} & 88.5 (1.9) & \texttt{\textless{}S\textgreater{}}. itivity absolutely \texttt{{[}MASK{]}}. \\
& 82.8 & \texttt{\textless{}S\textgreater{}}. It was \texttt{{[}MASK{]}}. \\\midrule
\multirow{2}{*}{bad, good} & 88.2 (2.6) & \texttt{\textless{}S\textgreater{}}. 493 things \texttt{{[}MASK{]}}. \\
& 79.7 & \texttt{\textless{}S\textgreater{}}. It was \texttt{{[}MASK{]}}. \\\bottomrule
\end{tabular}}
\vspace{-5pt}
\caption{Performance when directly applying our framework to any intuitive verbalizers for sentiment classification.
}
\label{tab:verbalizers}
\end{table}
\section{Related Work}
\label{sec:relatedwork}
\subsection{Prompting Paradigms}
\paragraph{Fine-Tuning}
The conventional approach to using pre-trained LMs is fine-tuning the model parameters on downstream datasets \cite{devlin-etal-2019-bert, liu2019roberta, lewis2020bart, raffel2020T5, radford2019GPT2}.
While driving progress in a wide range of NLP tasks, fine-tuning expensively updates all model parameters and shows limited success with small datasets.
Prompt-based fine-tuning \cite{gao2021LMBFF,schick-schutze-2021-just} uses prompting to improve few-shot performance, but the problem of costly training remains unsolved.
\paragraph{Manual Prompt}
Researchers first used hand-crafted fill-in-the-blank prompts to extract knowledge from powerful pre-trained LMs for probing analyses \cite{petroni2019KB,jiang2020can}. Later on, \citet{brown2020language} showed that using manually-written prompts, large LMs can perform a number of NLU and NLG tasks without any training examples.
Meanwhile, other studies \cite{raffel2020T5,schick2021exploiting,sanh2021T0} formulated a wide variety of NLP tasks as manual prompts.
\paragraph{Instructions}
Separate from but related to manual prompts, another line of work \cite{weller2020learning,efrat2020turking,mishra2021NI,wang2022NI2} makes use of instructional prompts which provide task descriptions instead of fill-in-the-blank questions.
In particular, instruction meta-tuning \cite{mishra2021NI,zhong-etal-2021-adapting-language,wei2022finetuned} trains models on some tasks with instructions and supervised data in order to generalize to unseen tasks formulated as instructions without training examples.
\paragraph{In-Context Demonstration}
Besides zero-shot learning, \citet{brown2020language} achieve more remarkable performance on few-shot learning by inserting training examples into the input context. More recent works \cite{gao2021LMBFF,liu2021KATE,lu2021fantastically,min2022rethinking} further explore the selection and analysis of in-context demonstrations.
\citet{reif2021recipe} propose augmented zero-shot learning, which inserts training examples from related tasks as demonstrations for tasks that do not have supervised training data, such as text style transfer.
\paragraph{Discrete Prompt Enumeration}
Because discrete prompts are difficult to optimize and susceptible to small design variations \cite{zhao2021calibrate, webson2021prompt, lu2021fantastically}, a number of existing works seek to locate better prompts by augmenting human-written prompts with heuristics such as paraphrasing \cite{jiang2020can, gao2021LMBFF}, editing \cite{prasad2022grips}, and reframing \cite{mishra2021reframing}. The final prompt is typically selected to maximize some downstream performance metric.
\paragraph{AutoPrompt}
\citet{shin2020autoprompt} optimize discrete prompts by editing prompt tokens with guidance from model gradients.
While seeing some success with large training data, the method relies heavily on approximation, which leads to less stable training and limited applicability to few-shot settings.
\paragraph{Soft Prompt Tuning}
Replacing discrete prompts with continuous embeddings, a set of parallel work \cite{qin-eisner-2021-learning,li2021prefix,liu2021ptuningv1} proposed to optimize soft prompts using gradient-based tuning. Soft prompt tuning can be seen as a variant of parameter-efficient transfer learning \cite{houlsby2019parameter, he2021towardsunified, ding2022delta}, and inspired a number of follow-up works that boosted its performance \citep[e.g.,][]{liu2021ptuningv2, gu2021ppt, vu2021spot, clive2021control} or explored novel applications \citep[e.g.,][]{tan-etal-2022-msp,zhou2022conditional,levine2022standing}.
By their nature, however, soft prompts are difficult for humans to understand because of their continuous form \cite{khashabi2021prompt,lester2021promptuning,hambardzumyan2021warp,mokady2021clipcap}. Because they are defined in the latent space of a specific model, it is virtually impossible to use learned soft prompts with a different model. Furthermore, their training typically requires gradient information from the models they prompt, which can be expensive to compute or simply inaccessible for models deployed as inference APIs, such as GPT-3 \cite{brown2020language}.
\citet{sun2022black} and \citet{diao2022black} proposed black-box tuning, which updates continuous prompts using gradient-free techniques with some success.
\subsection{Prompting for Controllable Generation}
Existing state-of-the-art models for controllable text generation typically fine-tune entire pre-trained LMs \citep[e.g.,][]{ziegler2019fine, keskar2019ctrl, ziegler2019encoder,liu2021DIRR}.
Recent work instead employs various prompts to steer the LM to generate text with desired properties such as topic \cite{guo2021text, qian2022contrastiveprefix} and (lack of) toxicity \cite{liu2021dexperts,perez2022red}, or from modalities such as image \cite{mokady2021clipcap,zhou2022conditional}, structured data \cite{li2021prefix, an2022input}, and numbers \cite{wei2022chain}.
However, these works either control simple attributes, perform no explicit prompt optimization, or have access to supervised training data.
For unsupervised controllable generation tasks with more complex requirements, such as text style transfer \cite{hu2017toward, jin2022deep}, \citet{reif2021recipe} proposed augmented zero-shot prompting, an in-context demonstration method that achieves some success using huge LMs like GPT-3 \cite{brown2020language}.
\section{Conclusion}
We have presented {{\textsc{RLPrompt}}}\xspace, a new approach for optimizing discrete text prompts with reinforcement learning (RL), which combines various desirable properties of previous prompt learning paradigms.
With an efficient policy network and effective reward engineering techniques, our flexible approach accommodates different types of LMs and
improves over a wide range of fine-tuning and prompting methods in experiments on few-shot classification and unsupervised text style transfer.
Enabled by the transparency of discrete prompts, our analyses reveal that strong optimized prompts tend to be incoherent gibberish, but can be transferred between different LMs to achieve similar performance. The observations open up many promising possibilities of prompting. For instance, we may be able to learn prompts cheaply from smaller models and perform inference with larger models for better performance. We are excited to explore further.
\section{Introduction}
\label{sec:intro}
Many algorithms in machine learning and other scientific computing fields rely on optimizing a function with respect to a parameter space. In many cases, the objective function being optimized takes the form of a sum over a large number of terms that can be treated as identically distributed: for instance, labeled training samples. Commonly, the problem that we are trying to solve consists of minimizing the negated log-likelihood:
\begin{equation}
f(\btheta) = -\log(p(\Y|\X;\btheta)) = -\sum_{i=1}^N\log(p(\y_i|\x_i;\btheta)) \label{eqn:objf}
\end{equation}
where $(\X,\Y)$ are our observations and labels respectively, and $p$ is the posterior probability of our labels which is modeled by a deep neural network with parameters $\btheta$. In this case it is possible to use subsets of the training data to obtain noisy estimates of quantities such as gradients; the canonical example of this is Stochastic Gradient Descent (SGD).
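For concreteness, a minimal sketch of such a minibatch SGD loop on the objective~\eqref{eqn:objf} is shown below; \texttt{grad\_nll} is a hypothetical routine returning the average per-sample gradient of the negated log-likelihood on a batch, and all names here are illustrative rather than part of the method proposed in this paper.
\begin{verbatim}
import numpy as np

def sgd(theta, X, Y, grad_nll, lr=0.1, batch=128, epochs=10, seed=0):
    # Plain minibatch SGD on an averaged negated log-likelihood.
    rng = np.random.default_rng(seed)
    n = len(X)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            theta = theta - lr * grad_nll(theta, X[idx], Y[idx])
    return theta
\end{verbatim}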
The simplest reference point to start from when explaining our method is Newton's method with line search, where on iteration $m$ we do an update of the form:
\begin{equation}
\theta_{m+1} = \theta_m - \alpha \H_m^{-1} \g_m, \label{eqn:newton}
\end{equation}
where $\H_m$ and $\g_m$ are, respectively, the Hessian and the gradient on iteration $m$ of the objective function~\eqref{eqn:objf}; here, $\alpha$ would be chosen to minimize~\eqref{eqn:objf} at $\theta_{m+1}$. For high dimensional problems it is not practical to invert the Hessian; however, we can efficiently approximate~\eqref{eqn:newton} using only multiplication by $\H_m$, by using the Conjugate Gradients (CG) method with a truncated number of iterations. In addition, it is possible to multiply by $\H_m$ without explicitly forming it, using what is known as the ``Pearlmutter trick''~\cite{pearlmutter1994fast} (although it was known to the optimization community prior to that; see~\cite[Chapter 8]{Nocedal2006NO}) for multiplying an arbitrary vector by the Hessian; this is described for neural networks but is applicable to quite general types of functions. This type of optimization method is known as ``truncated Newton'' or ``Hessian-free inexact Newton''~\cite{Morales00enrichedmethods}. In~\cite{byrd2011use}, this method is applied but using only a subset of data to approximate the Hessian $\H_m$. A more sophisticated version of the same idea was described in the earlier paper~\cite{martens2010deep}, in which preconditioning is applied, the Hessian is damped with the unit matrix in a Levenberg-Marquardt fashion, and the method is extended to non-convex problems by substituting the Gauss-Newton matrix for the Hessian. We will discuss the Gauss-Newton matrix and its relationship with the Hessian in Section~\ref{sec:gn}.
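The exact Hessian-vector product is obtained with the Pearlmutter trick cited above; purely as an illustration of why only products with $\H_m$ (and never $\H_m$ itself) are needed, the sketch below shows the standard central-difference approximation, which requires just two gradient evaluations. This is a schematic stand-in, not the implementation used in this work.
\begin{verbatim}
import numpy as np

def hvp_fd(grad, theta, v, eps=1e-4):
    # Central-difference approximation to H(theta) @ v;
    # `grad` maps parameters to the objective gradient.
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2.0 * eps)
\end{verbatim}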
Our method is quite similar to the one described in~\cite{martens2010deep}, which we will refer to as Hessian Free (HF). We also multiply by the Hessian (or Gauss-Newton matrix) using the Pearlmutter trick on a subset of data, but on each iteration, instead of approximately computing $(\H_m + \lambda \I)^{-1} \g_m$ using truncated CG, we compute a basis for the Krylov subspace spanned by $\g_m, \H_m \g_m, \ldots \H_m^{K-1} \g_m$ for some $K$ fixed in advance (e.g. $K=20$), and numerically optimize the parameter change within this subspace, using BFGS to minimize the original nonlinear objective function measured on a subset of the training data. It is easy to show that, for any $\lambda$, the approximate solution of $(\H_m + \lambda \I)\d = -\g_m$ found by $K$ iterations of CG will lie in this subspace, so we are in effect automatically choosing the optimal $\lambda$ in the Levenberg-Marquardt smoothing method of HF (although our algorithm is free to choose a solution more general than this). We note that both our method and HF use preconditioning, which we have glossed over in the discussion above. Compared with HF, the advantages of our method are:
\begin{itemize}
\item Greater simplicity and robustness: there is no need for heuristics to initialize and update the smoothing value $\lambda$.
\item Generality: unlike HF, our method can be applied even if $\H$ (or whatever approximation or substitute we use) is not positive semidefinite.
\item Empirical advantages: our method generally seems to work better than HF in both optimization speed and classification performance.
\end{itemize}
The chief disadvantages versus HF are:
\begin{itemize}
\item Memory requirement: we require storage of $K$ times the parameter dimension to store the subspace.
\item Convergence properties: the use of a subset of data to optimize over the subspace will prevent convergence to an optimum.
\end{itemize}
Regarding the convergence properties: we view this as more of a theoretical than a practical problem, since for typical setups in training deep networks the residual parameter noise due to the use of data subsets would be far less than that due to overtraining.
Our motivation for the work presented here is twofold: firstly, we are interested in large-scale non-convex optimization problems where the parameter dimension and the number of training samples are large and the Hessian has large condition number. We had previously investigated quite different approaches based on preconditioned SGD to solve an instance of this type of optimization problem (our method could be viewed as an extension to~\cite{le2007topmoumoute}), but after reading~\cite{martens2010deep} our interest switched to methods of the HF type. Secondly, we have an interest in deep neural nets, particularly to solve problems in speech recognition, and we were intrigued by the suggestion in~\cite{martens2010deep} that the use of optimization methods of this type might remove the necessity for pretraining, which would result in a welcome simplification. Other recent work on the usefulness of second order methods for deep neural networks includes~\cite{GlorotAISTATS2010,NgICML11}.
\section{The Hessian matrix and the Gauss-Newton matrix}
\label{sec:gn}
The Hessian matrix $\H$ (that is, the matrix of second derivatives w.r.t. the parameters) can be used in HF optimization whenever it is guaranteed positive semidefinite, i.e. when minimizing functions that are convex in the parameters. For non-convex problems, it is possible to substitute a positive definite approximation to the Hessian. One option is the Fisher information matrix,
\begin{equation}
\F = \sum_i \g_i \g_i^T,
\end{equation}
where indices $i$ correspond to samples and the $\g_i$ quantities are the gradients
for each sample. This is a suitable stand-in for the Hessian because
it is in a certain sense dimensionally the same, i.e. it changes the same way under
transformations of the parameter space. If the model can be interpreted as producing
a probability or likelihood, it is possible under certain assumptions (including model
correctness) to show that close to convergence, the Fisher and Hessian matrices have
the same expected value. The use of the Fisher matrix in this way is known as
Natural Gradient Descent~\cite{Amari:1998:NGW:287476.287477}; in~\cite{le2007topmoumoute},
a low-rank approximation of the Fisher matrix was used instead.
Another alternative that has less theoretical justification
but which seems to work better in practice in the case of neural networks is the
Gauss-Newton matrix, or rather a
slight generalization of the Gauss-Newton matrix that we will now describe.
\subsection{The Gauss-Newton matrix}
The Gauss-Newton matrix is defined when we have a function (typically nonlinear) from a
vector to a vector, $f: \Re^n \rightarrow \Re^m$. Let the Jacobian of this function be
$\J \in \Re^{m \times n}$, then the Gauss-Newton matrix is $\G = \J^T \J$,
with $\G \in \Re^{n\times n}$. If the problem is least-squares on the output
of $f$, then $\G$ can be thought of as one term
in the Hessian on the input to $f$. In its application to neural-network training,
for each training example we consider the network as a nonlinear function from the
neural-network parameters $\btheta$ to the output of the network, with the neural-network input
treated as a constant. As in~\cite{schraudolph}, we
generalize this from least squares to general convex error functions by using the
expression $\J^T \H \J$, where $\H$ is the (positive semidefinite) second derivative
of the error function w.r.t. the neural network output. This
can be thought of as the part of the Hessian that remains after ignoring the
nonlinearity of the neural-network in the parameters. In the rest of this document,
following~\cite{martens2010deep} we will refer to this matrix $\J^T \H \J$ simply as
the Gauss-Newton matrix, or $\G$, and depending on the context, we may actually be
referring to the summation of this expression over a number of neural-network training
samples.
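To make the definition concrete, the product $\G \v = \J^T \H (\J \v)$ can be written directly for a small dense problem; the sketch below assumes the Jacobian is available explicitly, which is only feasible for toy models (the next subsection shows how to compute the same product without ever forming $\J$).
\begin{verbatim}
import numpy as np

def gauss_newton_vp(J, H_out, v):
    # G v = J^T H (J v) without forming G itself.
    # J: (m, n) Jacobian of the network output w.r.t. parameters;
    # H_out: (m, m) PSD Hessian of the error in the network output
    # (the identity matrix for squared loss); v: (n,) direction.
    return J.T @ (H_out @ (J @ v))
\end{verbatim}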
\subsection{Efficiently multiplying by the Gauss-Newton matrix}
As described in~\cite{schraudolph}, it is possible to efficiently multiply a
vector by $\G$ using a version of the ``Pearlmutter trick''; the algorithm is similar
in spirit to backprop and we give it as Algorithm~\ref{alg:gn}. Our notation and our derivation for this
algorithm differ from~\cite{pearlmutter1994fast,schraudolph}, and we will explain this briefly;
we find our approach easier to follow. The idea is this: we first imagine
that we are given a parameter set $\btheta$, and two vectors $\btheta_1$
and $\btheta_2$ which we interpret as directions in parameter space; we then
write down an algorithm to compute the scalar $s = \btheta_2^T \G \btheta_1$.
Assume the neural-network input is given and fixed;
let $\v$ be the network output, and write it as $\v(\btheta)$ to emphasize the
dependence on the parameters, and then let $\v_1$ be defined as
\begin{equation}
\v_1 = \lim_{\alpha \rightarrow 0} \frac{1}{\alpha} \left[ \v(\btheta + \alpha \btheta_1) - \v(\btheta) \right], \label{eqn:v1}
\end{equation}
so that $\v_1 = \J \btheta_1$. We define $\v_2$ similarly. These can both
be computed in a modified forward pass through the network. Then, if $\H$ is the
Hessian of the error function in the output of the network (taken at parameter value $\btheta$),
$s$ is given by
\begin{equation}
s = \v_2^T \H \v_1, \label{eqn:s:1}
\end{equation}
since $\v_2^T \H \v_1 = \btheta_2^T \J^T \H \J \btheta_1 = \btheta_2^T \G \btheta_1$.
The Hessian $\H$ of the error
function would typically not be constructed as a matrix, but we would compute~\eqref{eqn:s:1}
given some analytic expression for $\H$. Suppose we have written down the algorithm
for computing $s$ (we have not done so here because of space constraints). Then
we treat $\btheta_1$ as a fixed quantity, but compute the derivative of $s$ w.r.t.
$\btheta_2$ (taking $\btheta_2$ around zero for convenience). This derivative equals
the desired product $\G \btheta_1$. Taking the derivative of a scalar
w.r.t. the input to an algorithm can be done in a mechanical fashion via ``reverse-mode''
automatic differentiation through the algorithm, of which neural-net backprop is a special
case. This is how we obtained Algorithm~\ref{alg:gn}.
In the algorithm we denote the derivative of $s$ w.r.t. a quantity $x$ by $\hat{x}$, i.e.
by adding a hat. We note that in this algorithm, we have a ``backward pass'' for
quantities with subscript 2,
which did not appear in the forward pass, because
they were zero (since we take $\btheta_2 = 0$) and we optimized them out.
Something to note here is that when the linearity of the last layer is softmax and the error
is negated cross-entropy (equivalently negated log-likelihood, if the label is known),
we actually view the softmax nonlinearity as part of the error function. This is a
closer approximation to the Hessian, and it remains positive semidefinite.
To explain the notation of Algorithm~\ref{alg:gn}: $\h^{(i)}$ is the input
to the nonlinearity of the $i$'th layer and $\v^{(i)}$ is the output;
$\odot$ means elementwise multiplication; $\phi^{(i)}$ is the nonlinear function of
the $i$'th layer, and when we apply it
to vectors it acts elementwise; $\W^{(1)}$ is the neural-network weights for the
first layer (so $\h^{(1)} = \W^{(1)} \v^{(0)}$, and so on); we use the subscript
$1$ for quantities that represent how quantities change when we move the parameters
in direction $\btheta_1$ (as in~\eqref{eqn:v1}).
The error function is written as ${\cal E}(\v^{(L)}, y)$ (where $L$ is the last layer),
and $y$, which may be a discrete value, a scalar or a vector, represents the
supervision information the network is trained with. Typically ${\cal E}$ would
represent a squared loss or negated cross-entropy.
In the squared-loss case, the quantity $\frac{\partial^2}{\partial \v^2} {\cal E}(\v^{(L)}, y)$
in Line~\ref{line:e} of Algorithm~\ref{alg:gn} is just the unit matrix. The other
case we deal with here is negated cross entropy. As mentioned above, we include
the soft-max nonlinearity in the error function,
treating the elements of the output layer $\v^{(L)}$ as unnormalized log probabilities. If
the elements of $\v^{(L)}$ are written as $v_j$ and we let $\p$ be the vector of
probabilities, with $p_j = \exp(v_j) / \sum_i \exp(v_i)$, then the
matrix of second derivatives is given by
\begin{equation}
\frac{\partial^2}{\partial \v^2} {\cal E}(\v^{(L)}, y) = \diag(\p) - \p \p^T .
\end{equation}
\begin{algorithm}
\caption{Compute product $\hat{\btheta}_2 = \G \btheta_1$: MultiplyG$(\btheta, \btheta_1, \x, y)$ }
\label{alg:gn}
\begin{algorithmic}[1]
\STATE \algorithmiccomment{ Note, $\btheta = ( \W^{(1)}, \W^{(2)}, \ldots )$ and $\btheta_1 = ( \W_1^{(1)}, \W_1^{(2)}, \ldots )$. }
\STATE $\v^{(0)} \gets \x$
\STATE $\v_1^{(0)} \gets {\mathbf 0}$
\FOR { $l = 1 \ldots L$ }
\STATE $\h^{(l)} \gets \W^{(l)} \v^{(l-1)}$
\STATE $\h_1^{(l)} \gets \W^{(l)} \v_1^{(l-1)} + \W_1^{(l)} \v^{(l-1)}$
\STATE $\v^{(l)} \gets \phi^{(l)}(\h^{(l)})$
\STATE $\v_1^{(l)} \gets {\phi'}^{(l)}(\h^{(l)}) \odot \h_1^{(l)}$
\ENDFOR
\STATE $\hat{\v}_2^{(L)} \gets \frac{\partial^2}{\partial \v^2} {\cal E}(\v^{(L)}, y) \v_1^{(L)}$ \label{line:e}
\FOR { $l = L \ldots 1$ }
\STATE $\hat{\h}_2^{(l)} \gets \hat{\v}_2^{(l)} \odot {\phi'}^{(l)}(\h^{(l)})$
\STATE $\hat{\v}_2^{(l-1)} \gets \left.\W^{(l)}\right.^T \hat{\h}_2^{(l)}$
\STATE $\hat{\W}_2^{(l)} \gets \hat{\h}_2^{(l)} \left.\v^{(l-1)}\right.^T$
\ENDFOR
\RETURN $\hat{\btheta}_2 \equiv \left( \hat{\W}_2^{(1)}, \ldots, \hat{\W}_2^{(L)} \right)$
\end{algorithmic}
\end{algorithm}
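A schematic Python transcription of Algorithm~\ref{alg:gn} is given below for a fully connected network with logistic hidden layers, a linear output layer and squared loss (so the matrix in Line~\ref{line:e} is the identity); biases are omitted for brevity, and the code is a sketch under these simplifying assumptions rather than a full implementation.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multiply_G(W, W1, x):
    # W, W1: lists of weight matrices for theta and theta_1.
    L = len(W)
    v, v1 = x, np.zeros_like(x)
    acts, derivs = [v], []
    for l in range(L):                      # modified forward pass
        h = W[l] @ v
        h1 = W[l] @ v1 + W1[l] @ v
        if l < L - 1:                       # logistic hidden layer
            v = sigmoid(h)
            phi_prime = v * (1.0 - v)
        else:                               # linear output layer
            v = h
            phi_prime = np.ones_like(h)
        v1 = phi_prime * h1
        acts.append(v)
        derivs.append(phi_prime)
    v2_hat = v1                             # identity output Hessian
    W2_hat = [None] * L
    for l in range(L - 1, -1, -1):          # backward pass
        h2_hat = v2_hat * derivs[l]
        W2_hat[l] = np.outer(h2_hat, acts[l])
        v2_hat = W[l].T @ h2_hat
    return W2_hat                           # G theta_1, layer by layer
\end{verbatim}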
\section{Krylov Subspace Descent: overview}
\label{sec:overview}
Now we describe our method, and how it relates to Hessian Free (HF) optimization.
The discussion in the previous section (on the Hessian versus Gauss-Newton matrix) is orthogonal
to the distinction between KSD and HF, because either method can use any Hessian substitute, with the
proviso that our method can use the Hessian even when it is not positive definite.
In the rest of this section we will use $\H$ to refer to either the Hessian or a substitute such
as $\G$ or $\F$. In \cite{martens2010deep} and the work we describe here, these matrices are approximated using a subset of data samples.
In both HF and KSD, the whole computation is preconditioned using the diagonal of $\F$ (since this is
easy to compute); however, in the discussion below we will gloss over this preconditioning.
In HF, on each iteration the CG algorithm is used to approximately compute
\begin{equation}
\d = - (\H + \lambda \I)^{-1} \g,
\end{equation}
where $\d$ is the step direction, and $\g$ is the gradient. The step size is determined by a
backtracking line search. The value of $\lambda$ is kept updated by Levenberg-Marquardt style
heuristics. Other heuristics are used to control the stopping of the CG iterations. In addition,
the CG iterations for optimizing $\d$ are not initialized from zero (which would be the natural
choice) but from the previous value of $\d$; this loses some convergence guarantees but seems to
improve performance, perhaps by adding a kind of momentum to the updates.
In our method (again glossing over preconditioning), we compute a basis for the subspace
spanned by $\{ \g, \H \g, \ldots, \H^{K-1} \g, \d_{\mathrm{prev}} \}$, which is the Krylov
subspace of dimension $K$, augmented with the previous search direction. We then optimize the objective function over this subspace using BFGS, approximating
the objective function using a subset of samples.
\section{Krylov Subspace Descent in detail}
In this section we describe the details of the KSD algorithm, including the
preconditioning.
For notation purposes: on iteration $n$ of the overall optimization we will write
the training data set used to obtain the gradient as ${\cal A}_n$ (which is
always the entire dataset in our experiments); the set used to compute the Hessian
or Hessian substitute as ${\cal B}_n$; and the set used for BFGS
optimization over the subspace, as ${\cal C}_n$.
For clarity when dealing with multiple subset sizes, we will typically normalize all
quantities by the number of samples: that is, objective function values, gradients,
Hessians and the like will always be divided by the number of samples in the
set over which they were computed.
On each iteration we will compute a diagonal preconditioning matrix $\D$ (we omit
the subscript $n$). $\D$ is expected to be a rough approximation to the Hessian.
In our experiments, following~\cite{martens2010deep}, we set $\D$ to the diagonal
of the Fisher matrix computed over ${\cal A}_n$.
To precondition, we define a new variable $\tilde{\btheta} = \D^{1/2} \btheta$, compute the Krylov
subspace in terms of this variable, and convert back to the ``canonical'' co-ordinates.
The result is the subspace spanned by the vectors
\begin{equation}
\left\{ (\D^{-1} \H)^k \D^{-1} \g, 0\leq k < K \right\} \label{eqn:subs}
\end{equation}
We adjoin the previous step direction $\d_\mathrm{prev}$ to this, and it becomes the
subspace we optimize over with BFGS. The algorithm to compute an orthogonal
basis for the subspace, and the Hessian (or Hessian substitute) within it, is given as
Algorithm~\ref{alg:proj}.
\begin{algorithm}[h]
\caption{Construct basis $\V = \left[\v_1,\ldots,\v_{K+1}\right]$ for the subspace, and the Hessian (or substitute) $\bar{\H}$ in the co-ordinates of the subspace.}
\label{alg:proj}
\begin{algorithmic}[1]
\STATE $\v_1 \gets \D^{-1} \g$
\STATE $\v_1 \gets \frac{1}{\sqrt{\v_1^T \v_1}} \v_1$
\FOR { $k = 1 \ldots K+1$ }
\STATE $\w \gets \H \v_k$ \algorithmiccomment{If Gauss-Newton matrix, computed with Algorithm~\ref{alg:gn}.}
\IF { $k < K$ }
\STATE $\u \gets \D^{-1} \w$ \algorithmiccomment{$\u$ will be $\v_{k+1}$}
\ELSIF { $k = K$ }
\STATE $\u \gets \d_\mathrm{prev}$ \algorithmiccomment{Previous search direction; use arbitrary nonzero vector if 1st iter}
\ENDIF
\FOR { $j = 1 \ldots k$ }
\STATE $\bar{h}_{k,j} \gets \w^T \v_j$ \algorithmiccomment{Compute element of reduced-dimension Hessian}
\STATE $\u \gets \u - (\u^T \v_j) \v_j$ \algorithmiccomment{Orthogonalize $\u$}
\ENDFOR
\IF { $k \leq K$}
\STATE $\v_{k+1} \gets \frac{1}{\sqrt{\u^T \u}} \u$ \algorithmiccomment{Normalize length and set next direction.}
\ENDIF
\ENDFOR
\STATE \algorithmiccomment{Now set upper triangle of $\bar{\H}$ to lower triangle.}
\end{algorithmic}
\end{algorithm}
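For concreteness, a dense-algebra sketch of Algorithm~\ref{alg:proj} is shown below; \texttt{Hv} stands for any routine returning $\H \v$ (e.g. via Algorithm~\ref{alg:gn}), \texttt{d\_inv} is the inverse diagonal preconditioner stored as a vector, and the translation from the 1-indexed pseudocode to 0-indexed Python is ours.
\begin{verbatim}
import numpy as np

def krylov_basis(Hv, d_inv, g, d_prev, K):
    dim = g.size
    V = np.zeros((dim, K + 1))
    Hbar = np.zeros((K + 1, K + 1))
    v = d_inv * g
    V[:, 0] = v / np.sqrt(v @ v)
    for k in range(K + 1):
        w = Hv(V[:, k])
        if k < K - 1:
            u = d_inv * w                 # u will be v_{k+1}
        elif k == K - 1:
            u = d_prev.copy()             # previous search direction
        else:
            u = None                      # last row: only Hbar entries
        for j in range(k + 1):
            Hbar[k, j] = w @ V[:, j]      # reduced-dimension Hessian
            if u is not None:
                u = u - (u @ V[:, j]) * V[:, j]   # orthogonalize u
        if k < K:
            V[:, k + 1] = u / np.sqrt(u @ u)      # normalize
    Hbar = np.tril(Hbar) + np.tril(Hbar, -1).T    # symmetrize
    return V, Hbar
\end{verbatim}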
On each iteration of optimization, after computing the basis $\V$ with Algorithm~\ref{alg:proj}
we do a further preconditioning step within the subspace, which gives us a new, non-orthogonal
basis $\hat{\V}$ for the subspace. This step is done to help the BFGS converge faster.
\begin{algorithm}[h]
\caption{Krylov Subspace Descent}
\label{alg:ksd}
\begin{algorithmic}[1]
\STATE $\d_\mathrm{prev} \gets \e_1$ \algorithmiccomment{or any arbitrary nonzero vector}
\FOR { $n = 1, 2 \ldots$ }
\STATE \algorithmiccomment{Sample three sets from training data, ${\cal A}_n$, ${\cal B}_n$ and ${\cal C}_n$.}
\STATE $\g \gets \frac{1}{ |{\cal A}_n| } \sum_{i \in {\cal A}_n} \g_i(\btheta)$ \algorithmiccomment{ Get average function gradient over this batch. }
\STATE Set $\D$ to diagonal of Fisher matrix on ${\cal A}_n$, floored to $\epsilon$ times its maximum.
\STATE Run Algorithm~\ref{alg:proj} to find $\V$ and $\bar{\H}$ on subset ${\cal B}_n$
\STATE Let $\hat{\H}$ be the result of flooring the eigenvalues of $\bar{\H}$ to $\epsilon$ times the maximum.
\STATE Do the Cholesky decomposition $\hat{\H} = \C \C^T$
\STATE Let $\bar{\V} = \V \C^{-T}$ (do this in-place; $\C^{-T}$ is upper triangular)
\STATE $\a \gets 0 \in \Re^{K+1}$
\STATE Find the optimum $\a^*$ with BFGS for about $K$ iterations using the subset ${\cal C}_n$, with objective function measured at $\btheta + \bar{\V}\a$ and gradient $\bar{\V}^T \g$ (where $\g$ is the gradient w.r.t. $\btheta$).
\STATE $\d_\mathrm{prev} \gets \bar{\V}\a^*$
\STATE $\btheta \gets \btheta + \d_\mathrm{prev}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
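The eigenvalue flooring, Cholesky factorization and basis transformation steps of Algorithm~\ref{alg:ksd} reduce to a few lines of dense linear algebra, sketched below; the function name is illustrative.
\begin{verbatim}
import numpy as np

def precondition_subspace(V, Hbar, eps=1e-4):
    # Floor the eigenvalues of Hbar to eps times the maximum,
    # Cholesky-factorize, and return Vbar = V C^{-T}.
    lam, U = np.linalg.eigh(Hbar)
    lam = np.maximum(lam, eps * lam.max())
    H_hat = (U * lam) @ U.T               # H_hat = U diag(lam) U^T
    C = np.linalg.cholesky(H_hat)         # H_hat = C C^T
    return np.linalg.solve(C, V.T).T      # V C^{-T}
\end{verbatim}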
The complete algorithm is given as Algorithm~\ref{alg:ksd}. The most important
parameter is $K$, the dimension of the Krylov subspace (e.g. 20). The flooring
constant $\epsilon$ is an unimportant parameter; we used $10^{-4}$. The subset sizes may be important; we recommend that ${\cal A}_n $ should be all of the training data, and ${\cal B}_n$ and ${\cal C}_n$ should each be about $1/K$ of the training data, and disjoint from each other but not from ${\cal A}_n$. This is the subset size
that keeps the computation approximately balanced between the gradient computation, subspace
construction and subspace optimization. Implementations of the BFGS algorithm would typically also have parameters: for instance,
parameters of the line-search algorithm and stopping criteria; however, we expect that
in practice these would not have too much effect on performance because
the algorithm is likely to converge almost exactly (since the subspace dimension and the
number of iterations are about the same).
\section{Experiments}
\label{sec:exp}
To evaluate KSD, we performed several experiments to compare it with SGD and with
other second order optimization methods, namely L-BFGS and HF. We report both
training and cross validation errors, and running time (we terminated the algorithms
with an early stopping rule using held-out validation data). Our
implementations of both KSD and HF are based on Matlab using
Jacket\footnote{www.accelereyes.com} to perform the expensive matrix operations
on a Geforce GTX580 GPU with 1.5GB of memory.
\subsection{Datasets and models}
Here we describe the datasets that we used to compare KSD to other methods.
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\hline
Dataset&Train smp.&Test smp.&Input&Output&Model&Task\\
\hline
CURVES&20K&10K&784 (bin.)&784 (bin.)&400-200-100-50-25-5&AE\\
MNIST$_{AE}$&60K&10K&784 (bin.)&784 (bin.)&1000-500-250-30&AE\\
MNIST$_{CL}$&60K&10K&784 (bin.)&10 (class)&500-500-2000&Class\\
MNIST$_{CL,PT}\footnotemark[1]$&60K&10K&784 (bin.)&10 (class)&500-500-2000&Class\\
Aurora&1.2M&100K\footnotemark[2]&352 (real)&56 (class)&512-1024-1536&Class\\
Starcraft&900&100&5077 (mix)&8 (class)&10&Class\\
\hline
\end{tabular}
\caption{Datasets and models used in our setup.}
\label{tab:models}
\end{table}
\begin{itemize}
\item CURVES: Artificial dataset consisting of curves at $28\times28$ resolution. The dataset consists of 20K training samples, and 10K testing samples. We considered an autoencoder network, as in \cite{HinSal06}.
\item MNIST: Single digit vision classification task. The digits are $28\times28$ pixels, with a 60K training, and 10K testing samples. We considered both an autoencoder network, and classification \cite{HinSal06}.
\item Aurora: Spoken digits dataset, with different levels of real noise (airport, train station, ...). We used PLP features and performed classification of 56 English phones. These frame level phone error rates are the ones reported in Table~\ref{tab:results}. Also reported in the text are Word Error Rates, which were produced by using the phone posteriors in a Tandem system, concatenated with standard MFCC to train a Hidden Markov Model with Gaussian Mixture Model emissions. Further details on the setup can be found in \cite{VinyalsICASSP11}.
\item Starcraft: The dataset consists of real-time strategy video game sequences from 1000 games. The goal is to predict the strategy the opponent chose based on a fully observed game sequence after five minutes, and features contain orderings between buildings, presence/absence features, or times that certain buildings were built.
\end{itemize}
The models (i.e. network architectures) for each dataset are summarized in Table~\ref{tab:models}. We tried to explore a wide variety of models covering different sizes, input and output characteristics, and tasks. Note that the error reported for the autoencoder (AE) task is the L2 norm squared between input and output, and for the classification (Class) task is the classification error (i.e. 100-accuracy). The nonlinearities considered were logistic functions for all the hidden layers except for the ``coding'' layer (i.e. middle layer) in the autoencoders, which was linear, and the visible layer for classification, which was softmax.
\footnotetext[1]{For MNIST$_{CL,PT}$ we initialize the weights using pretraining RBMs as in \cite{HinSal06}. In the other experiments, we did not find a significant difference between pretraining and random initialization as in \cite{martens2010deep}.}
\footnotetext[2]{We report both classification error rate on a 100K CV set, and word error rate on a 5M testing set with different levels of noise}
\subsection{Results and discussion}
Table~\ref{tab:results} summarizes our results. We observe that KSD converges faster than HF, and tends to lead to lower generalization error. Our implementation for the two methods is almost identical; the steps that dominate the computation (computing objective functions, gradients and Hessian or Gauss-Newton products) are shared between both and are computed on a GPU.
For all the experiments we used the Gauss-Newton matrix (unless otherwise specified). The dimensionality of the Krylov subspace was set to 20, the number of BFGS iterations was set to 30 (although in many cases the optimization on the projected gradients converged before reaching 30), and an L2 regularization term was added to the objective function. However, motivated by the observation that on CURVES, HF tends to use a large number of iterations, we experimented with a larger subspace dimension of $K=80$ and these are the numbers we report in Table~\ref{tab:results}.
For compatibility in memory usage with KSD, we used a moving window of size 10 for the L-BFGS methods.
We do not report SGD performance in Figures~\ref{fig:aurora} and~\ref{fig:curves} as it was worse
than L-BFGS.
When using HF or KSD, pre-training helped significantly in the MNIST classification task, but not for the other tasks (we do not show the results with pre-training in the other cases; there was no significant difference). However, when using SGD or CG for optimization (results not shown), pre-training helped on all tasks except Starcraft (which is not a deep network). This is consistent with the notion put forward in~\cite{martens2010deep} that it might be possible to do away with the need for pre-training if we use powerful second-order optimization methods. The one exception to this, MNIST, has zero training error when using HF and KSD, which is consistent with a regularization interpretation of pre-training. This is opposite to the conclusions reached in~\cite{ErhanAISTATS2009} (their conclusion was that pre-training helps by finding a better ``basin of attraction''), but that paper was not using these types of optimization methods. Our experiments support the notion that when using advanced second-order optimization methods and when overfitting is not a major issue, pre-training is not necessary. We are not giving this issue the attention it deserves, since the primary focus of this paper is on our optimization method; we may try to support these conclusions more convincingly in future work.
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\hline
&\multicolumn{3}{|c|}{HF}&\multicolumn{3}{|c}{KSD}\\
\hline
Dataset&Tr. err.&CV err.&Time&Tr. err.&CV err.&Time\\
\hline
CURVES&0.13& \textbf{0.19}&1 &0.17 &0.25 &0.2 \\
MNIST$_{AE}$&1.7& 2.7&1 &1.8 &\textbf{2.5} &0.2 \\
MNIST$_{CL}$&0\%& 2.01\%&1 &0\% &\textbf{1.70\%} &0.6 \\
MNIST$_{CL,PT}$&0\%& 1.40\%&1 &0\% &\textbf{1.29\%} &0.6 \\
Aurora&5.1\%& 8.7\%&1 &4.5\% &\textbf{8.1\%} &0.3 \\
Starcraft&0\%& 11\%&1 &0\% &\textbf{5\%} &0.7 \\
\hline
\end{tabular}
\caption{Results comparing two second order methods: Hessian Free and Krylov Subspace Descent. Time reported is relative to the running time of HF (lower than 1 means faster).}
\label{tab:results}
\end{table}
In Figures~\ref{fig:aurora}~and~\ref{fig:curves}, we show the convergence of KSD and HF with both the Hessian and Gauss-Newton matrices. HF eventually ``gets stuck'' when using the Hessian; the algorithm was not designed to be used for non-positive definite matrices. Even before getting stuck, it is clear that it does not work well with the actual Hessian. Our method also works better with the Gauss-Newton matrix than with the Hessian, although the difference is smaller. Our method is always faster than HF and L-BFGS.
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{aurora2.eps}
\caption{Aurora convergence curves for various algorithms.}
\label{fig:aurora}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{curves2.eps}
\caption{CURVES convergence curves for various algorithms.}
\label{fig:curves}
\end{minipage}
\end{figure}
\section{Conclusion and future work}
In this paper, we proposed a new second order optimization method. Our approach relies on efficiently computing the matrix-vector product between the Hessian (or a PSD approximation to it), and a vector. Unlike Hessian Free (HF) optimization, we do not require the approximation of the Hessian to be PSD, and our method requires fewer heuristics; however, it requires more memory.
Our planned future work in this direction includes investigating the circumstances under which pre-training is necessary: that is, we would like to confirm our statement that pre-training is not necessary when using sufficiently advanced optimization methods, as long as overfitting is not the main issue. Current work shows that the presented method is also able to efficiently train recurrent neural networks, with no need to use the structural damping of the Gauss-Newton matrix proposed in~\cite{MartensRNN}.
\bibliographystyle{plain}
\section{Conclusion}
This short article presented an overview of a set of video-based techniques designed for real-world crowd insights and the management of dense crowds in urban environments, and described results from actual deployments.
\input{mybib.bbl}
\end{document}
\section{Introduction}
The interaction between matter and strong magnetic fields is a subject of ongoing research \cite{LAI,MIRANSKY}. In particular
the combination of magnetic fields and the strong interaction has
been intensively debated in the low \cite{CHAKRABARTY,
BRODERICK,DONG,RABHI,CHANDRA,BANDY,HABER, MUKHERJEE,GRASSO,SUH}
and medium energy regimes
\cite{KLEVANSKY,CHAKRABARTY2,EBERT,AVANCINI,CHAUDHURI}. In the
first case, the use of hadronic degrees of freedom is
indispensable. Among the most widely used models of the hadronic interaction, Quantum Hadro-Dynamics (QHD) has remarkable versatility in describing a variety of phenomena, and its results achieve satisfactory accuracy when required.\\
QHD has been used to study the interaction of hadrons and magnetic fields, for instance in the structure and composition of neutron stars \cite{CHAKRABARTY, BRODERICK,DONG}, the liquid-gas
phase transition \cite{RABHI}, the neutrino propagation in nuclear
matter \cite{CHANDRA}, the deconfinement phase transition
\cite{BANDY}, magnetic catalysis \cite{HABER, MUKHERJEE}, the
modification of nuclear structure \cite{GRASSO}, and the
formation of magnetic domains \cite{SUH}. \\
Within this description there is a general agreement that the
anomalous magnetic moments (AMM) of the hadrons play a significant role when the magnetic energy approaches the QCD
scale, i.e. $q B \approx (220 MeV)^2$
\cite{BRODERICK,DONG,CHANDRA,MUKHERJEE}.\\
One of the features of the QHD models is the simplicity of
conceptual resources and procedures. The crucial point for these
models is the Mean Field Approximation (MFA) where the meson
fields are replaced by their in medium-mean values. In addition,
the bilinear products of fermion fields are replaced by their
expectation values. In the last case the contributions coming from
the Dirac sea of fermions are usually disregarded. The procedure
is completed with the requirement of self-consistency of the
scalar meson fields, which are not directly related to conserved
charges.\\
The same procedure was adopted for a model based on the chiral
SU(3) symmetry of the strong interaction \cite{PAPAZOGLOU}, which
was used to study different aspects of hadronic matter subject
to an external magnetic field \cite{MISHRA,AGUIRR3,MISHRA1}.\\
Some attempts have been made to incorporate the vacuum contribution
within this scheme \cite{HABER, MUKHERJEE}. However, in
\cite{HABER} the AMM of the nucleons are neglected, although very
strong magnetic intensities are considered ($q\, B \approx (500
MeV)^2$). Furthermore, there is no
contribution of the neutron.\\
On the other hand, in \cite{MUKHERJEE} a low magnetic intensity
expansion is proposed for the nucleon propagator, where the
discrete energy spectrum of the protons due to the Landau
quantization is not taken into account.\\
The technical difficulties that arise when vacuum contributions are included in the presence of an external magnetic field have recently been considered within the Nambu and Jona-Lasinio model
of the quark interaction \cite{AVANCINI}.
An analysis of the magnitude of the vacuum effects under the
influence of strong magnetic fields, taking into account all the
physical ingredients in a coherent manner, is necessary to discuss
the validity of the usual MFA.\\
This is precisely the aim of the present work. Here a version of
the QHD model with polynomial meson interactions is used; it is
known as FSUGold \cite{TODD}. Contributions of the vacuum are
evaluated by using a nucleon propagator which includes the
anomalous magnetic moments and the full interaction with the
external magnetic field \cite{AGUIRRE1,AGUIRRE2}. This propagator
has been used to evaluate meson properties \cite{AGUIRR3,
AGUIRRE2} and the effect of the AMM within the Nambu and
Jona-Lasinio model \cite{CHAUDHURI}.\\
Within this scheme I evaluate the effective nucleon mass and
statistical properties such as the grand canonical potential and
the magnetization as functions of the baryonic density and the
magnetic intensity at zero temperature.
This work is organized as follows. In the next section the QHD
prescriptions for the MFA as well as its extension to include the
vacuum contributions are presented. Some numerical results are
discussed in Sec. III, and the last section is devoted to drawing
the conclusions of this work.
\section{Vacuum corrections to the MFA within the QHD model}
The field equations for the QHD model supplemented with the
couplings of an external magnetic field to the charge of the
proton, as well as to the anomalous magnetic moments of both
protons and neutrons, are \cite{BRODERICK}
\begin{eqnarray}
\left(i\, \not \! \partial- m + g_s \sigma + q_b \not \!\! A- g_w
\not \! \omega- g_r \bm{\tau \cdot} \not \!\! \bm{\rho} -\kappa_b
\, \sigma_{\mu\nu} F^{\mu \nu}\right)\Psi_b=0 \label{NuclEq}
\end{eqnarray}
\begin{eqnarray}
\left(\square + m_s^2 + g_{s2}\, \sigma + g_{s3}\,
\sigma^2\right)\sigma=g_s \sum_b \bar{\Psi}_b \Psi_b\label{Sigma}
\end{eqnarray}
\begin{eqnarray}
\partial_\mu \Omega^{\mu \nu}+\left(m_w^2+ G_w \omega_\mu \omega^\mu +
G_{r w} \bm{\rho}_\mu {\bm \cdot} \bm{\rho}^\mu\right)\omega^\nu=
g_w \sum_b \bar{\Psi}_b \gamma^\nu\Psi_b \label{Omega}
\end{eqnarray}
\begin{eqnarray}
D_\mu R_a^{\mu \nu}+\left(m_r^2+ G_{r w} \omega_\mu
\omega^\mu\right)\rho_a^\nu=g_r \sum_{b c} \bar{\Psi}_b
\tau^{(a)}_{b c} \gamma^\nu \Psi_c \label{Rho}
\end{eqnarray}
where the index $b$ in Eq. (\ref{NuclEq}) indicates proton or
neutron, $\Omega_{\mu \nu}$ and $R_{\mu \nu}$ are the field
tensors for the $\omega$ and $\rho$ fields, and the couplings
constants are related to the notation of \cite{TODD} by
$g_s=g_{\sigma N}$, $g_{s2}=\kappa\, g_{\sigma N}^3/2$,
$g_{s3}=\lambda \,g_{\sigma N}^4/6$, $G_w=\zeta\, g_{\omega
N}^4/6$, and $G_{r w}=2\, \Lambda_w g_{r N}^2 g_{w N}^2$.
Assuming uniform matter distribution, the meson fields in these
equations are replaced by functions depending only on the bulk
properties of the system. Furthermore, the products of fermionic
fields on the right-hand side of Eqs. (\ref{Sigma}),
(\ref{Omega}), and (\ref{Rho}) are replaced by their expectation
values. Under such conditions, and adopting the rest frame of matter, only the cases with $\nu=0$ give non-zero values in Eqs. (\ref{Omega}) and (\ref{Rho}). Finally, as weak decay is not included in the interaction, only the case $a=3$ in Eq.
(\ref{Rho}) gives a non-zero contribution.\\
The above mentioned expectation values can be evaluated by using
the appropriate fermion propagators
\begin{eqnarray}
{\cal N}_{s\,b}=&<\bar{\Psi}_b\,\Psi_b>&= -i\;\lim_{t'\rightarrow
t^+}\,\text{Tr}\{G_b(t,\vec{r},t',\vec{r})\}
\\
{\cal N}^\nu_b=&<\bar{\Psi}_b\,
\gamma^\nu\,\Psi_b>&=-i\;\lim_{t'\rightarrow
t^+}\,\text{Tr}\{\gamma^\nu\,G_b(t,\vec{r},t',\vec{r})\}
\end{eqnarray}
In the momentum representation they can be rewritten as
\begin{eqnarray}
{\cal N}_{s\,b}&=& -i\;\lim_{\epsilon \rightarrow 0^+}\,\int
\frac{d^4p}{(2\pi)^4} e^{-i p_0 \epsilon}\;\text{Tr}\{G_b(p)\}
\label{SDen} \\
{\cal N}^\nu_b&=&-i\;\lim_{\epsilon \rightarrow 0^+}\,\int
\frac{d^4p}{(2\pi)^4} e^{-i p_0 \epsilon}\;\text{Tr}\{\gamma^\nu
G_b(p)\} \label{0Den}
\end{eqnarray}
In \cite{AGUIRRE1} a fermion propagator that includes the full interaction with the external magnetic field, through its coupling both to the proton charge and to the AMM, was used for this purpose. For the sake of completeness the explicit form of
the neutron propagator is
\begin{eqnarray}
G_n(x',x)= \sum_s \int \frac{d^4p}{(2 \pi)^4} e^{-i p^\mu\,(x_\mu
'-x_\mu)} \Lambda_s \; \Xi_p \label{PropN}\end{eqnarray}
where
\begin{eqnarray}
\Lambda_s&=&\frac{ s}{2 \Delta}i\; \gamma^1 \gamma^2\left[ \not \!
u+ i \gamma^1 \gamma^2 (s \Delta-\kappa B)\right] \left( \not \!
v+m+ i s \Delta \gamma^1 \gamma^2\right) \\
\Xi_p&=&\frac{1}{p_0^2-E_s^2+i\epsilon}+ 2
\pi\,i\,n_F(p_0)\,\delta(p_0^2-E_s^2) \label{DecomN}
\end{eqnarray}
whereas for the proton one has
\begin{equation}
G_p(x',x)=e^{i \Phi } \int \frac{d^4 p}{(2 \pi)^4} e^{-i
p^\mu\,(x'_\mu-x_\mu)} \left[ G_0(p)+e^{-p_\bot^2/\beta}
\sum_{n,s}(-1)^n G_{n,s} (p)\right] \label{PropP}
\end{equation}
where
\begin{eqnarray}
G_0(p)&=&2 e^{-p_\bot^2/\beta}\Lambda^0 \; \Xi_{0\, 1}
\end{eqnarray}
\begin{eqnarray}
G_{n s}(p)&=&\frac{\Delta_n+s m}{2 \Delta_n}\Big\{( \not \!
u-\kappa_p B+s \Delta_n) \left(1+i \gamma^1 \gamma^2\right) L_n(2
p_\bot^2/\beta)-( \not \! u+\kappa_p B-s \Delta_n)
\nonumber\\
&&\times \left(1-i \gamma^1 \gamma^2\right) \frac{s \Delta_n-m}{s
\Delta_n+m} L_{n-1}(2 p_\bot^2/\beta)+ \left( \not \! u\, i
\gamma^1 \gamma^2+ s \Delta_n-\kappa_p B\right) \not \! v \frac{s
\Delta_n- m}{p_\bot^2}
\nonumber \\
&&\times \left[ L_n(2 p_\bot^2/\beta)-L_{n-1}(2
p_\bot^2/\beta)\right]\Big\}
\; \Xi_{n s} \\
\Xi_{n s}&=&\frac{1}{p_0^2-E_{n s}^2+i\epsilon}+2
\pi\,i\,n_F(p_0)\,\delta(p_0^2-E_{n s}^2) \label{DecomP}
\end{eqnarray}
In these expressions the index $s=\pm 1$ corresponds to the
projection of the spin in the direction of the uniform magnetic
field, the index $n\geq 1$ takes account of the discrete Landau
levels, and the following notation is used: $\beta=q B$, $\not \!
u=p_0 \gamma^0-p_z \gamma^3$, $\not \! v=-p_x \, \gamma^1-p_y\,
\gamma^2$, $p_\bot^2=p_x^2+p_y^2$, $L_m$ stands for the Laguerre
polynomial of order $m$, and
\begin{eqnarray}
E_s&=&\sqrt{p_z^2+(\Delta-s\,\kappa_n B)^2}\nonumber\\
\Delta&=&\sqrt{m^2+p^2_x+p^2_y} \nonumber \\
E_{n s}&=&\sqrt{p_z^2+(\Delta_n-s\,\kappa_p B)^2}\nonumber \\
\Delta_n&=&\sqrt{m^2+2 n q B} \nonumber
\end{eqnarray}
Finally, the phase factor $\Phi=q B(x+x')(y'-y)/2$ embodies the
gauge fixing.
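As a quick numerical aid, the single-particle spectra above can be coded directly; the sketch below uses natural units with all energies in MeV and takes $qB$ and $\kappa_b B$ as pre-multiplied inputs, so the normalization of the AMM term is left to the caller (the default mass value is merely illustrative).
\begin{verbatim}
import numpy as np

def proton_energy(pz, n, s, qB, kB, m=939.0):
    # E_{ns} for Landau level n and spin projection s = +/-1;
    # qB is in MeV^2 and kB = kappa_p * B in MeV when m is in MeV.
    delta_n = np.sqrt(m**2 + 2.0 * n * qB)
    return np.sqrt(pz**2 + (delta_n - s * kB)**2)

def neutron_energy(pz, pperp2, s, kB, m=939.0):
    # E_s with transverse momentum squared pperp2 = px^2 + py^2
    # and kB = kappa_n * B in MeV.
    delta = np.sqrt(m**2 + pperp2)
    return np.sqrt(pz**2 + (delta - s * kB)**2)
\end{verbatim}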
If these propagators are used in Eqs. (\ref{SDen}) and
(\ref{0Den}), but keeping only the second terms of Eqs.
(\ref{DecomN}) and (\ref{DecomP}), then the MFA is obtained
\cite{AGUIRRE1}. The correction to the densities coming from the
Dirac sea of nucleons can be evaluated by using Eqs. (\ref{SDen})
and (\ref{0Den}) but retaining only the first terms of Eqs.
(\ref{DecomN}) and (\ref{DecomP}). The expressions thus obtained
are divergent and must be renormalized. Since the main residue in
a Lorenz expansion depends on the magnetic intensity, a
regularization procedure must be defined to extract relevant
contributions. The details of this calculations are left for the
Appendix, and here the final results are shown. ${\cal
N}_\nu^{\text{vac}}=0$ for protons and neutrons,
\begin{eqnarray}
{\cal N}_{s\,p}^{\text{reg}}&=&\frac{1}{4 \pi^2}\Big[ 2 \beta
\kappa_p B+ \frac{\beta^2}{3 m}-m^3 + 2 \beta (m+\kappa_p B)
\ln\left(\frac{m}{m+\kappa_p B}\right)\nonumber \\
&&+m\left(\beta-m^2\right) \ln\left(\frac{2 \beta}{m^2}\right)-2 m
\beta \ln\left(\frac{\Gamma(m^2/2\beta)}{\sqrt{2 \pi}}\right)\Big]
\label{VacP}
\end{eqnarray}
for protons, and
\begin{eqnarray}
{\cal N}_{s\,n}^{\text{reg}}=\frac{m}{4 \pi^2}\left[6 (\kappa_n
B)^2+(m-\kappa_n B)^2 \ln\left(\frac{m}{m-\kappa_n
B}\right)+(m+\kappa_n B)^2 \ln\left(\frac{m}{m+\kappa_n B}\right)
\right] \label{VacN}
\end{eqnarray}
for neutrons.\\
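Both regularized densities are straightforward to evaluate numerically; a sketch using SciPy's log-gamma function is given below, where \texttt{beta} $= qB$ and \texttt{kB} $= \kappa_b B$ are passed in consistent natural units. The code merely transcribes Eqs. (\ref{VacP}) and (\ref{VacN}) and is offered only as an illustration.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def ns_proton_reg(m, beta, kB):
    # Eq. (VacP); ln(Gamma(x)/sqrt(2 pi)) = gammaln(x) - ln(2 pi)/2.
    x = m**2 / (2.0 * beta)
    return (2.0 * beta * kB + beta**2 / (3.0 * m) - m**3
            + 2.0 * beta * (m + kB) * np.log(m / (m + kB))
            + m * (beta - m**2) * np.log(2.0 * beta / m**2)
            - 2.0 * m * beta * (gammaln(x) - 0.5 * np.log(2.0 * np.pi))
            ) / (4.0 * np.pi**2)

def ns_neutron_reg(m, kB):
    # Eq. (VacN).
    return m * (6.0 * kB**2
                + (m - kB)**2 * np.log(m / (m - kB))
                + (m + kB)**2 * np.log(m / (m + kB))
                ) / (4.0 * np.pi**2)
\end{verbatim}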
It must be pointed out that the first three terms on the
right-hand side of Eq. (\ref{VacP}) come from the subtraction
proposed in the regularization procedure. It is interesting to
note that Eq.(\ref{VacN}) becomes zero if $\kappa_n=0$ is taken,
while, taking $\kappa_p=0$ in Eq. (\ref{VacP}) and writing
$x=m^2/2\beta$, this equation reduces to
\begin{eqnarray}
-\frac{m \beta}{2 \pi^2}\Big[ -\frac{1}{12 x}+x -x \ln(x)
+\frac{1}{2} \ln\left(\frac{x}{2 \pi}\right)+
\ln\left(\Gamma(x)\right)\Big]. \label{Haber}
\end{eqnarray}
Here the first two terms correspond to the second and third terms
of (\ref{VacP}). With exception of the first term between square
brackets in Eq.(\ref{Haber}), this expression can be recognized as
the vacuum correction term on the right-hand side of Eq. (28a) of
Ref. \cite{HABER}. As explained before, the discrepant term is
justified by the subtraction prescription used here.
With these results, one can extend the standard definition of the
effective nucleon mass in QHD models $m=m_0-g_s \, \bar{\sigma}$,
where $m_0=939$ MeV and the uniform mean value $\bar{\sigma}$ is
obtained from Eq.(\ref{Sigma}) by neglecting the coordinate
dependence and replacing $\bar{\Psi}_b \Psi_b \rightarrow {\cal
N}_{s \, b}$. And the scalar baryonic density is the sum of the
MFA result and the vacuum correction given by Eqs. (\ref{VacP})
and (\ref{VacN}). The self-consistency is imposed by evaluating
${\cal N}_{s \, b}$ in the unknown mass $m$.
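The self-consistency condition can be solved with any one-dimensional root finder. The sketch below is purely schematic: it keeps only the linear part of the $\sigma$ equation (the self-couplings $g_{s2}$, $g_{s3}$ are dropped for brevity), uses placeholder coupling values rather than the FSU parameters, and assumes the caller supplies a routine \texttt{ns\_total} returning the full scalar density (MFA part plus the vacuum corrections above) at a trial mass.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def effective_mass(ns_total, B, m0=939.0, g_s=10.0, m_s=500.0):
    # Solve m = m0 - g_s*sigma_bar with g_s*ns = m_s^2*sigma_bar;
    # coupling values are placeholders, not the FSU parameters.
    def gap(m):
        sigma_bar = g_s * ns_total(m, B) / m_s**2
        return m - (m0 - g_s * sigma_bar)
    # Bracket between ~0 and the free mass; may need adjusting
    # for a given model.
    return brentq(gap, 1.0, m0)
\end{verbatim}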
For further applications it is useful to obtain the vacuum
correction to the energy density. The baryonic contribution to the
energy density arises from the mean field value of the Hamiltonian
density operator \cite{AGUIRRE1}
\begin{eqnarray}
{\cal E}_b=<{\cal H}_b>=-i\;\lim_{t'\rightarrow
t^+}\,\text{Tr}\left\{i \gamma^0 \frac{\partial}{\partial
t}\,G_b(t,\vec{r},t',\vec{r})\right\}. \label{EDen}
\end{eqnarray}
By using the method described in the Appendix, the following
results are obtained:
\begin{eqnarray}
{\cal E}_n^{\text{reg}}&=&\frac{1}{48 \pi^2}\Big\{\left[(\kappa_n
B)^4-6 m^2(\kappa_n B)^2-3 m^4 \right]\ln\left[\frac{m^2-(\kappa_n
B)^2}{m^2}\right]-4 \kappa_n B m^3 \ln\left(\frac{m+\kappa_n
B}{m-\kappa_n B}\right)\nonumber \\
&&+\frac{13}{6}(\kappa_n B)^2\left[6 m^2-(\kappa_n
B)^2\right]\Big\} \label{Evacn}\\
{\cal E}_p^{\text{reg}}&=&\frac{1}{8 \pi^2}\Big\{-4 \beta^2
\zeta'(-1,\lambda)-\frac{1}{2}\ln\left(\frac{m^2}{2\beta}\right)\left(\mu^2-2
\beta
\mu+\frac{2}{3}\beta^2\right)-\frac{1}{4}m^4+\left[\beta+(\kappa_p
B)^2\right]^2\nonumber \\
&&-2 \beta (m+\kappa_p B)^2 \ln\left(\frac{m+\kappa_p
B}{m}\right)+\frac{1}{3}\beta(\beta+2 m \kappa_p
B)\left(\frac{\kappa_p B}{m}\right)^2 -\frac{2}{3}\beta^2+2 m
\beta \kappa_p B \nonumber \\
&&+\frac{1}{45}\left(\frac{\beta}{m}\right)^4\Big\} \label{Evacp}
\end{eqnarray}
where $m$ stands for the vacuum value, i.e., $m=m_0$,
$\mu=m_0^2+(\kappa_p B)^2$, $\lambda=\mu/2\beta$, and $\zeta'$
indicates the derivative of the Hurwitz zeta function with respect to its first argument.\\
The magnetization of the system can be evaluated as ${\cal
M}=\left(\partial {\cal E}/\partial B\right)_{N_b}$, where ${\cal
E}$ is the hadronic contribution to the total energy
\cite{BRODERICK}. Finally, using the chemical potentials
associated with the conservation of the baryonic number of protons
and neutrons, the pressure at zero temperature can be evaluated as
$P=\sum_b \mu_b n_b-{\cal E}$.
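In practice the magnetization can be obtained by numerical differentiation of the energy density at fixed baryon density; a minimal central-difference sketch (step size illustrative):
\begin{verbatim}
def magnetization(energy, B, n_b, rel_step=1e-4):
    # M = dE/dB at fixed baryon density n_b;
    # energy(B, n_b) is the hadronic energy density.
    h = rel_step * B
    return (energy(B + h, n_b) - energy(B - h, n_b)) / (2.0 * h)
\end{verbatim}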
\section{Results and discussion}
In this section several properties of dense nuclear matter are
analyzed, considering baryonic densities less than three times
the normal nuclear density and magnetic intensities between
$10^{14}$ and $10^{19}$ G. The isospin composition of matter has
also been taken as a relevant variable to be examined. However,
the main conclusions of this work are basically independent of the
isospin asymmetry, so I examine in what follows the
symmetric nuclear matter case. \\
The parameters are taken from the FSU model \cite{TODD}.
First the effective nucleon mass is analyzed. It is directly
affected by the vacuum corrections to the scalar densities given
by Eq. (\ref{SDen}). In Fig. 1 the dependence of $m$ on the
magnetic intensity is displayed at constant baryonic density. For
this purpose I take $n/n_0=0, \, 1, \, 2$, where $n_0$ stands for
the normal nuclear density. In each case the results including
vacuum correction (CC) and without it (NC) are
compared. For intensities below $5 \times 10^{18}$ G, both cases yields
very similar results. Up to this point, the inclusion of
corrections produces higher values of the effective mass. This
effect is more pronounced at lower densities, for instance at
$B=10^{19}$ G the differences between the two cases are 2.7, 2.2,
and 1.9 MeV for the densities $n/n_0=0, \,1, \, 2$ respectively.
This can be understood because the vacuum effect at fixed magnetic
intensity reduces to a constant which dominates at very low
densities. But, as the density increases, the MFA provides
a growing contribution.\\
In Fig. 2 a wider range of magnetic intensities is considered in
order to compare with previous results. For instance in
\cite{HABER} the difference for the nucleon mass between the CC
and NC cases at zero density is approximately $10$ and $20$ MeV
for $B=10^{19}$ and $1.6 \times 10^{19}$ G, respectively, whereas
in these calculations I have obtained $3$ MeV and $9$ MeV for the
same values of the magnetic intensity. This fact is illustrated in
Fig. 2a, where the results of the present work (solid line) are
contrasted with those obtained by following the procedure and
model parameters used in \cite{HABER} (dotted line). In order to
expand the analysis, I also show the results obtained with the FSU
model including the AMM but adopting the regularization procedure
of Ref.\cite{HABER} (dash-dotted line). I conclude that the
numerical discrepancy comes mainly from the different
regularization prescriptions and in a minor degree can be ascribed
to the model parameters and to the presence of the AMM.
Notwithstanding, even for the extreme intensity $B=10^{19}$ G, all
the approaches predict an increment of the effective mass not
greater than $2 \%$ of the experimental value $m_0$.\\
The role of the anomalous magnetic moments is analyzed throughout
the three panels of Fig. 2. The outcome for the present
calculations with $\kappa_p=\kappa_n=0$ at fixed density is
represented by the curves with dashed lines. Neglecting the AMM
yields a decreasing effective mass, that increasingly differs from
the full calculations as the magnetic intensity and the baryonic
density are increased. For a given density and low intensities ($B
< 10^{18}$ G) the results with or without AMM are almost
identical, whereas for the greatest intensity examined here
($B=1.6\times 10^{19}$ G)
the difference grows from $10$ to $25$ MeV.
As the next step, the pressure at zero temperature is studied. In
Fig.3 the pressure as a function of the particle density is shown
for a constant magnetic intensity, for the specific values
$B=10^{18}, \, 5\times 10^{18}$ and $10^{19}$ G. Again a
comparison between the CC and NC cases is made. For each pair of
curves the CC case presents higher values for the whole range of
densities. Furthermore the separation between each pair does not
vary significatively with the density. This can be explained
because the vacuum correction does not depend on the density, and
the changes induced in the meson mean values are so weak that the
CC curve practically copies the same features of the NC one.
However the shift between the twin curves increases appreciably
with the magnitude of $B$. This behavior remains when the
isospin composition is varied.\\
An interesting consequence, which can be appreciated in Fig. 3, is
that the vacuum correction preserves the thermodynamical
instabilities of the MFA. Therefore a spinodal decomposition
similar to that shown in \cite{RABHI} must be expected, with the
same range of densities but extending to higher pressures.\\
It must be said that in order to allow an easier comparison within
the same figure, the constant contribution of the magnetic field
to the total energy density is not shown in Fig. 3.
As the last statistical subject to be analyzed, the magnetization
${\cal M}$ induced by the external magnetic field is considered.
It must be mentioned that within the scheme of regularization
presented here, one can obtain finite vacuum contributions to
${\cal M}$, as would also be
the case for the magnetic susceptibility $\chi=\partial{\cal M}/\partial
B$.\\
It is known
that ${\cal M}$ is a very weak quantity, since it is proportional
to the electric charge. The correction to the energy density,
given by Eqs. (\ref{Evacn}) and (\ref{Evacp}), does not depend on
the matter density, nor does its contribution to the
magnetization. Hence I analyze the dependence on $B$ of the
difference $\Delta {\cal M}$ between the full magnetization and
the corresponding result without vacuum effect. To compare with
previous calculations \cite{DONG, RABHI2} which used the same
hadronic interaction, the adimensional ratio $\Delta{\cal M}/q^2B$
as a function of the magnetic intensity is shown in Fig. 4,
separately for the proton and neutron contributions within the
range $10^{17}$ $< B < 10^{19}$ G. By considering the
results of \cite{RABHI2} for $B>10^{18}$ G, an almost general
trend is that this quantity decreases by increasing the magnetic
intensity and decreasing the matter density. Due to the tiny
results I obtained for the neutron component, one can expect
that the vacuum corrections to this component could have
significative effects only in the very low density regime.
However, in this regime the assumption of homogeneous matter is
not valid. So, one can conclude that the vacuum correction to the
neutron component is negligible, with the possible exception of
certain special configurations, for instance, in the case of the
neutron gas surrounding the nuclear clusters in the inner crust of
a neutron star.\\
In regard to the proton component, a growing magnitude is obtained
as $B$ is increased, reaching at $B=10^{19}$ G the value
$10^{-5}$. This represents approximately $10 \%$ of the result at
a density $n/n_0 = 0.2$ (see Fig. 8 of Ref. \cite{RABHI2}) and $1
\%$ at $n/n_0 = 1.2$. In conclusion, the vacuum correction for the
proton component starts to be significant for intensities $B
\approx 10^{19}$ G, modifying the MFA result only by a few percent
for densities below the normal nuclear density.
\section{Conclusions}
In this work I have proposed an extension of the MFA for nuclear
matter under the effect of a uniform magnetic field, by including
contributions from the Dirac sea of hadrons. I have used the
covariant FSU model of the nuclear interaction \cite{TODD} and the
calculations have been made by using a covariant propagator which
takes account of the full effect of the magnetic field as well as
the effect of the anomalous magnetic moments. Hence, several
issues left open in previous investigations \cite{HABER,
MUKHERJEE} are considered.
Since the interaction used is just an effective model of the
strong interaction, I have not considered a renormalization
scheme. In particular I did not try to renormalize the external
magnetic field since, within the model used, it is not a dynamical
variable. I have proposed instead a regularization procedure to
obtain physically
meaningful results from the divergent contributions.\\
The procedure has the advantage of yielding finite results for the
vacuum correction to the magnetization ${\cal M}=\partial {\cal
E}/\partial B$ as well as to the higher order derivatives, for
instance the magnetic susceptibility $\chi=\partial {\cal
M}/\partial B$.
Within the scheme proposed I have evaluated different nuclear
properties at zero temperature. The effective nucleon mass is
representative of the single-particle properties, and the
pressure and the magnetization correspond to bulk properties in
thermodynamical equilibrium. They have been analyzed for a range
of matter densities and magnetic intensities that ensures confidence in the model.\\
For all the cases I have obtained moderate corrections, which become significant for densities below the normal nuclear
density, and magnetic intensities above $10^{18}$ G. \\
Taking into account that QHD models use hadronic degrees of
freedom exclusively and that their parameters are adjusted to the
low energy phenomenology, it is consistent that vacuum corrections
do not reveal high energy manifestations. These conclusions
support the validity of the MFA for the regime of parameters
studied.
\section{Acknowledgements} This work has been partially supported
by a grant from the Consejo Nacional de Investigaciones
Cientificas y Tecnicas, Argentina.
\section{Introduction}
\label{motivation}
Today's cars, self-driving or not, can exchange information with the outside world over a standardised interface: the On-Board-Diagnosis (OBD) interface, which has existed since 1988. OBD-II is a vehicle diagnostic system that makes important electronic control units (ECUs) of the car and their data accessible. This allows reading the current speed, rpm and other information from the car. In 1996 the USA made it mandatory for every new car sold to have this interface.
According to Regulation (EC) No. 715/2007, all new passenger car registrations in the EU since 2001 for gasoline engines and since 2004 also for diesel engines must be equipped with an OBD interface.
This means that nowadays a huge amount of cars are equipped with this On-Board-Diagnosis system.
This also means that drivers who want to use third-party services are strongly encouraged, or even forced, to connect external devices (dongles) to this port, which can then read out the car's data via the OBD interface.
In this paper we show that the OBD interface lacks basic data authentication mechanisms making it possible to place our firewall as a man-in-the-middle.
In the remainder of this paper we will use the term \textit{dongle} to refer to the third-party device that will be connected to the car over the OBD-II interface.
Dongles might offer all functionality on their own (standalone) or consist of a hardware connector combined with a mobile phone connected via Bluetooth.
As mentioned above, the On-Board-Diagnostic interface allows for interaction with a variety of ECUs and to obtain valuable data for an overall insight into the current state of a vehicle.
In particular, telecommunications service providers such as Vodafone \cite{vodafone}, Telekom \cite{telekom} or Telefonica \cite{telefonica} offer OBD-II connectors in combination with a mobile data connection.
Note that this also broadens the attack surface, as it could provide remote access to the CAN (Controller Area Network) bus, transforming a perceived internal attack surface (needing physical access to the OBD-II port) into an external threat \cite{7030108}, \cite{7952095}.
But apart from that, these service providers gain insights into an enormous amount of highly private data about the drivers' everyday life \cite{website:newscardata}.
Among other things, this includes the driving style, routes and accurate driving times. Manufacturers and insurance companies as well as commercial service providers are already using the information provided as a basis for monetizing the individual journeys of users.
An example would be a reduction of the car-insurance premium for a particularly restrained and safe driving style.
This monetization can also work in the opposite direction.
This means that drivers who drive very aggressively, for example, are punished with higher rates.
In summary, OBD-II communication creates two new threat categories:
\begin{enumerate}
\item Most obvious is the problem of malicious inbound flow from an adversarial dongle to the car.
\item Less obvious --judging from the limited amount of research so far-- is the problem of personal data leakage in outbound flow from the car to the dongle.
\end{enumerate}
As there is no encryption or authentication on the CAN bus by default \cite{Groza2018SecuritySF}, not only are inbound threats very real and far from difficult to implement \cite{Groza2018SecuritySF}, but tampering with data on its way from the car to the dongle is also possible due to the, by default, missing data-origin authentication.
\subsection{Goals and Contributions}
All this raises the question of whether it is possible to develop an effective and modular firewall-like filtering approach for data flows in both directions via the given interface.
We therefore propose a rule-based approach suitable for the OBD-II interface and all affected protocols to protect the driver and his car from possible malicious dongles by modifying or rejecting data flowing in both directions.
The idea of something like a firewall in a car was proposed by Rizvi et al., who presented a distributed firewall approach that puts an inbound filter on each ECU in a vehicle \cite{rizvi:firewall}.
However, our approach is specifically designed to manipulate traffic, especially the content of traffic from the car to the dongle.
To the best of the authors' knowledge this has not been done before, and our approach still allows full or reduced operation of the dongle, unlike current commercially available blockers, which only offer an all-blocked or all-allowed mode.
Let us stress that being able to manipulate traffic on OBD-II is an attack vector on its own, highlighting an absence of crucial security mechanisms for this communication channel.
We exemplify the attack by building a Man-in-the-OBD-II interface to highlight how the data flow accessible via the OBD-II interface can be modified or restricted. One of the hurdles to be considered is the deployment of the firewall for the end user. The objective is to make it as easy as possible for users to get started with the firewall, without any major entry barriers. Therefore, in Section \ref{chap_spefification:section:policymanagement} we introduce a configurable policy language in the well-known JSON format, which eliminates this problem.
Our modular design works on the standardised OBD-II stack and we have chosen to implement both inbound and outbound filtering for Controller Area Network protocol messages. The CAN 2.0 specification was published in 1991 by Robert Bosch GmbH \cite{CANBOSCH}, first standardized by ISO 11898-1 \cite{ISO11898-1} in 2003, and is one of the most commonly used protocols for vehicular information exchange through dongles today.
However, since there are many different communication mechanisms and protocols in the OBD-II stack, our general approach abstracts from the protocol itself.
For brevity we focus in this work on CAN message filtering and rewriting in both directions, which can be seen as one module in our general OBD-II man-in-the-middle firewall concept. This makes it possible to easily add other protocols to our basic system with a functioning implementation for CAN. It means that our protocol-agnostic approach also enables manufacturer-specific solutions. The only hurdle that remains for anyone who wants to use the firewall is to configure the correct rules using our rule language. For this, the user must be able to understand the relevant important data, e.g. the message format within CAN, and know exactly which commands are sent. In the case of CAN, however, there is already a lot of current research that helps to reverse engineer messages (\cite{libreCAN}, \cite{automatic_reverse_CAN} and \cite{READ_reverse}).
\subsection{Outline}
In this work
we
focus on two aspects:
\begin{enumerate}[I:]
\item Identify the threats a security layer in between the OBD-II dongle and the car would solve
\item Implement such a security layer as a Man-in-the-Middle for the OBD-II and show how it can fool dongles to protect the car driver's privacy
\end{enumerate}
We present an overview of related work in Section 2.
In Section 3 we employ the standard threat modeling tools STRIDE and DREAD to systematically analyze the possible threats, which in the best case can all be avoided by implementing our approach. We highlight the most important threats by means of this procedure.
In Section 4 we present the architecture of our solution that abstracts the dongle from the actual vehicle network and shields the car as best as possible.
In Section 5 we discuss briefly the implementation of our approach
for the case of CAN messages.
We provide a rule-based policy language; by means of different options and types within the rules, it is possible to configure the firewall filters for inbound and outbound data flows.
We evaluate the impact on the identified threats and on selected dongles in Section 6.
Finally, we conclude in Section 7.
\section{Related Work}
\label{related_work}
Work has been done on the vulnerabilities added by the dongles itself:
The idea of analysing dongles' behaviour
was discussed by
Wen et al. \cite{247700}; they provide a comprehensive vulnerability analysis of 77 OBD-II dongles. In the paper the authors propose an automated tool called DongleScope to perform an analysis and to test the dongles.
In the paper published by Yadav et al. \cite{yadav2016} the authors give an overview of various security vulnerabilities and points of entry for malicious entities in vehicular systems.
The trove of information that can be gained from a car is shown in several works (\cite{website:newscardata}, \cite{8252037}, \cite{8300417} and \cite{8492706}); they show how to monitor automobiles, predict the condition of the internal hardware, detect driving behaviors and discover different anomalies.
In the remainder of the related work section we grouped the works by their view point on the information flow.
\subsection{General vehicular security concepts describe the threat of unwanted information flow}
In the paper by Bernardini et al. \cite{BERNARDINI201713}, eight security requirements and five safety requirements for vehicle systems are defined and explained. They also describe how existing systems and solutions such as the AUTOSAR architecture, LIN, FlexRay, MOST and Ethernet/BroadR-Reach are aligned and can be used to fulfil these requirements. Furthermore, the authors explain in detail which safety concepts are to be pursued in vehicle systems and which possible problems or limitations may arise.
Hoppe et al. \cite{10.1007hoppe} show that it is necessary to examine and possibly modify existing vehicle security systems. Among other things, the authors show that intrusion detection systems do not fully protect cars from intruders. Even though intrusion detection is one of the newer mechanisms in vehicle security, according to Hoppe et al. some improvements need to be made to minimise the risk of attacks. This shows that even current security concepts are often not fully developed and cannot offer complete protection. Therefore, our approach is to enable a security layer as simply as possible, following a plug-and-play concept.
Studnia et al. \cite{studnia:hal-00848234} have looked at fundamental problems related to car security. Among other things, they found that the computing power of a car is very limited, which can lead to problems when using strong cryptography within certain protocols. In addition, they found that car manufacturers must validate the software running on an ECU embedded within a vehicle and test it periodically to guarantee its integrity. An entire vehicle can become vulnerable if bugs remain in the vehicle system, with effects depending on the severity of the respective bug. In the event that a security flaw is exploited, it can take anywhere from several months to years for a patch to be installed on all of the affected cars already on the road. This implies that it is extremely important to prevent malicious code from entering the vehicle in the first place. Therefore, our solution is to safeguard the OBD-II interface and thus exclude possible attackers.
\subsection{Filtering of inbound traffic towards the car's ECUs exists}
\label{sub:filtering_inbound_traffic}
Wolf et al. \cite{Wolf2004SecurityIA} examined the prevailing architecture as well as the threats that are prevalent in contemporary vehicles. They discovered that the gateways built into the automotive network require the use of powerful firewalls. In addition, they stated that the firewall implemented in the gateways also needs to possess rules that control access on the basis of the security relevance of the particular network.
While filtering approaches or firewall concepts for networks inside the vehicle do exist and are nothing completely unknown, the range of available research is very limited, especially compared to works on inter-vehicle networks like VANETS. Even less information is available about existing solutions for cars being deployed.
For example NXP describes the need to protect the car's networked devices from unwanted outside traffic by a gateway for
``filtering inbound and outbound network traffic based on rules, disallowing data transfers from unauthorized sources''~\cite{nxp_gateway_whitepaper}.
NXP further states that a more fine-granular approach ``[...] may include context-aware filtering'' \cite{nxp_gateway_whitepaper}.
But often the exact mechanisms used as well as the security functions in real vehicles are not publicly published.
Another manufacturer's solution is the ``Central Gateway'' for central in-vehicle communication from Bosch, which lists a firewall and an intrusion detection system on its product page \cite{bosch_gateway}. However, neither the info PDFs nor the actual page list more precise details. Even when specifically asked at the responsible department, we were unable to get any further information about the security features mentioned.
The company Karamba Security \cite{karamba_security} in 2016 released a security architecture that acts as a gateway between a car's access points and critical networks/modules. Karamba calls it ECU Endpoint Security: Dropper Detection and Malware Prevention. To define factory policies, the developers had the idea of embedding a system directly in the firmware. This is to prevent malicious code from infiltrating the system. Each ECU specifies its own policy and generates a so-called whitelist of permitted program binaries, processes, scripts and network behaviour.
In the academic literature, Rizvi et al. present a distributed approach for a firewall system in automobile networks \cite{rizvi:firewall}: their system is focused on letting only authorised packets reach an internal device, using a Hybrid Security System (HSS) that places many individual firewalls in front of each module and at each electronic unit.
\subsection{Commercial available approaches towards filtering and securing the OBD-II Interface}
Practical OBD devices which do offer such inbound filtering or just blocking all access are available on the market.
The most critical feature touted as blockable by most entry filters is the use of ``key duplicators and an accessible OBD-II socket'', with which a car thief can easily generate new access codes, outflanking the existing car alarm system in a few moves and obsoleting the original keys. Blocking the port creates a safety barrier between external devices and the data bus, protecting vehicle functions against unauthorized access and manipulation. As can be clearly seen in Table \ref{tab:overview_commercial_products}, almost all approaches are delivered with only two modes: always-deny (Off) or always-allow (On).
Their focus is on preventing malicious senders' packets from reaching important devices in the vehicle by turning CAN bus access off.
Of course, that also completely blocks data that might travel in the other direction, but this privacy impact is not advertised.
Furthermore, with the existing approaches, the use of an OBD-II dongle is not possible while blocking is activated, as absolutely no data is available for processing. Our approach closes this gap.
\begin{table}[]
\caption{Commercially available products' capabilities compared to our Man-in-the-OBD-II approach}
\label{tab:overview_commercial_products}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Product}} & \textbf{\begin{tabular}[c]{@{}c@{}}Operation \\ Types\end{tabular}} & \textbf{Modes} & \textbf{Filtering} & \textbf{Method} \\ \hline
Diagnostic BOX - OBD Blocker \cite{diagnostic_box_obd_blocker} & None & On/Off & None & MiM \\ \hline
Ampire CAN-BUS Firewall \cite{ampire_can_firewall} & None & On/Off & None & MiM \\ \hline
Ampire OBD-Firewall \cite{ampire_obd_firewall} & None & On/Off & None & MiM \\ \hline
Paser Firewall OBD2 \cite{paser_firewall_obd} & None & On/Off & None & MiM \\ \hline
Electronic anti theft OBD plug \cite{electronic_anti_theft_obd} & None & On/Off & None & MiM \\ \hline
AutoCYB \cite{auto_cyb_lock} & None & \begin{tabular}[c]{@{}c@{}}Mounted/\\ Unmounted\end{tabular} & None & Lock \\ \hline
CAN Hacker Diagnostic Firewall \cite{can_hacker_firewall} & Unknown & Unknown & SIDs/PIDs & Unknown \\ \hline \hline
\textit{Man-in-the-OBD-II} & \textit{\begin{tabular}[c]{@{}c@{}}reject, limit, \\ replace\end{tabular}} & \textit{\begin{tabular}[c]{@{}c@{}}delay, pub\_once,\\ id\_range, val\_range\end{tabular}} & \textit{Individual} & \textit{MiM} \\ \hline
\end{tabular}
}
\end{table}
\section{Threat Modelling for OBD-II}
\label{sec:threat_identification}
We conducted a threat analysis for a commercial passenger car with an OBD-II dongle.
Threats were categorized according to the STRIDE method \cite{STRIDE}.
We then used the DREAD method \cite{DREAD} to rate the seriousness of those threats.
\subsection{Threats following STRIDE}
For brevity we limit this to the eight most relevant threats, which are used in Section \ref{threat_impacts_with_firewall} to draw a conclusion about the effectiveness and usability of our approach.
\paragraph{\textbf{$T_\alpha$}}
\textbf{Malicious Device plugs directly into OBD-II}
This threat is virtually impossible to prevent. As soon as the attacker has physical access to the interface, he can, for example, bypass all upstream hardware security modules (such as our firewall approach) or simply unplug them. $T_\alpha$ could only be effectively prevented by additional physical security mechanisms. One possibility would be to physically separate the hardware security modules from the accessible OBD-II interface in such a way that it is no longer possible for an attacker to either:
\begin{itemize}
\item separate the module from the vehicle, or
\item use the OBD-II interface once the module has been disconnected.
\end{itemize}
However, this would require some modification of the OBD-II standard and is not really feasible.
\paragraph{\textbf{$T_\beta$}}
\textbf{Attacker compromises a running service on dongle}
The threat class $T_\beta$ includes threats whose frequency depends on the security standards applied in the development of the software running inside the dongle.
The attack surface is very large due to the use of cloud services, an internet interface (e.g. by means of a SIM module in the dongle) or a Wi-Fi interface. The very fact that the service communicates with a cloud service via an API means that, in addition to the service inside the dongle, the cloud service's endpoints must be regularly maintained and updated. If this is not done, the risk of a possible attack increases further.
In the following, there are two subcategories in the $T_\beta$ class, namely:
\begin{enumerate}[I:]
\item Dongle refers to the hardware that our firewall runs on
\item Dongle refers to the third-party hardware that plugs into our firewall
\end{enumerate}
\paragraph{\textbf{$T_\gamma$}}
\textbf{Attacker can send arbitrary CAN commands}
This threat is enabled by either $T_\alpha$ or $T_\beta$. The attacker is thus able to control, for example, displays in the instrument cluster or the opening and closing of the electric windows. Generally speaking, it is possible for him to contact all ECUs that communicate via CAN and can be reached via the OBD-II interface. This enables a wide range of attack vectors.
\paragraph{\textbf{$T_\delta$}}
\textbf{Attacker can read communication on CAN-Bus}
No direct physical damage can be caused by this threat, as it only concerns read access to the CAN bus. Nevertheless, the privacy of the respective user can be violated. It is possible to listen to all communication on the CAN bus that is accessible via OBD-II. By analysing the traffic, a variety of conclusions can be drawn. For example, statements can be made about the individual driving behaviour of each driver in road traffic.
\paragraph{\textbf{$T_\epsilon$}}
\textbf{Damaging ECUs by executing specific commands}
In order to damage individual ECUs by means of specific messages, the attacker needs a profound understanding of the respective vehicle system as well as of the ECU to be damaged. In most cases, such an attack is not possible without extensive prior testing and analysis of the hardware. Of course, one can actively try to damage the hardware by sending harmful and unwanted CAN messages. However, a trial-and-error approach offers little chance of success.
\paragraph{\textbf{$T_\zeta$}}
\textbf{Person endangerment by deactivating safety functions}
This threat is a very dangerous one, as it may actively cause physical harm to people. On the one hand, an attacker could succeed in deactivating critical safety systems such as the airbag, the ABS (anti-lock braking system) or the ESP (electronic stability program). In the event of a driving manoeuvre where the respective system is required, an accident could be the result. Furthermore, an attacker could manipulate the displayed speed; for instance, he could simply display a much lower speed than is actually being driven at the moment. The driver could thus be harmed if he is at that time on a road passage where maintaining a certain minimum speed is critical to safety.
\paragraph{\textbf{$T_\eta$}}
\textbf{Permanent infiltration of the vehicle system by uploading malware}
One
goal
during an attack or after a successful intrusion into a system is to make the access persistent.
$T_\eta$ describes the maintenance of unrestricted access even after possible malicious OBD-II devices have been removed. This requires in-depth knowledge of the ECUs available in the target vehicle. Possible targets are especially ECUs that have a memory module or dedicated firmware that can be overwritten.
\begin{figure}
\vspace{-1em}
\begin{floatrow}
\vspace{-1em}
\ffigbox{%
\hskip-0.2cm\includegraphics[width=0.5\textwidth]{Figures/threats_attack_tree.pdf}
}{%
\caption{Attack tree of specified threats}
\label{fig:threats_attack_tree}
}
\capbtabbox{%
\hskip-0.6cm\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & $T_\alpha$ & $T_{\beta_I}$ & $T_{\beta_{II}}$ & $T_\gamma$ & $T_\delta$ & $T_\epsilon$ & $T_\zeta$ & $T_\eta$ \\ \hline
\textbf{D} & 3 & 3 & 3 & 3 & 2 & 3 & 3 & 3 \\ \hline
\textbf{R} & 3 & 2 & 2 & 2 & 2 & 2 & 2 & 1 \\ \hline
\textbf{E} & 3 & 2 & 2 & 2 & 2 & 1 & 2 & 1 \\ \hline
\textbf{A} & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\ \hline
\textbf{D} & 3 & 2 & 2 & 3 & 3 & 2 & 3 & 2 \\ \hline
\multicolumn{1}{|l|}{DREAD Risk} & 3.0 & 2.4 & 2.4 & 2.6 & 2.4 & 2.2 & 2.6 & 2.0 \\ \hline
\end{tabular}
}{%
\caption{DREAD rating of selected threats}
\label{tab:obd_dread_table}
}
\vspace{-1cm}
\end{floatrow}
\end{figure}
\subsection{Results of OBD-II Threat Modeling}
Based on the seven overall threat classes defined and explained above, we used the DREAD rating model \cite{DREAD} to roughly classify how serious the individual threats are.
The individual values for each threat
can be seen in Table \ref{tab:obd_dread_table}.
Levels range from low (1) to high (3) risk.
When comparing the individual result values, it becomes clear that $T_\alpha$ is the highest rated threat with a risk rating of 3.0. The main reason for this is that $T_\alpha$ is a physical threat. With physical threats, the possibilities of an attacker are always greater than, for example, in the case of a remote-only attack. The lowest rated threat is $T_\eta$ with a risk rating of 2.0, because this type of attack requires a tremendous amount of knowledge and skill to execute. Usually a lot of research and testing needs to be done on the real physical devices/ECUs to even find vulnerabilities that make it possible to carry out the attack. It is also interesting that, with the exception of threat $T_\delta$, the damage for all other threats is always rated at the maximum of three (\textit{high}). The reason for this lies in the effect of the respective threats. $T_\delta$ describes the possibility of reading messages off the CAN bus. This means that a possible attack can also be described as passive, since it never actively changes anything on the bus or writes anything to it. Nevertheless, sensitive data (e.g. regarding the driver's privacy) can be collected by reading and possibly decoding some messages. Therefore, $T_\delta$ still has a rating of two, which means medium in terms of the DREAD rating system.
Since some threats enable other threats, we have created an attack tree of the seven selected threat classes. An analysis via attack trees provides a graphical, easy-to-understand modelling of threats. It helps us to classify the different attack possibilities of the OBD-II interface more precisely and to develop possible countermeasures to prevent such attacks. In Figure \ref{fig:threats_attack_tree} you can see this tree, which can also be divided into three hierarchies named $L_{I}$, $L_{II}$ and $L_{III}$.
The first level is the basic threat hierarchy. This means that threats from higher levels are always based on threats from the basic level. The threats $T_\alpha$ and $T_{\beta_I}$ are therefore needed to realise attacks with the threats from the layers below. Therefore, we also define these two threats as entry threats. In the best case, all entry threats can be eliminated or prevented or mitigated so that all further threats become obsolete or not quite as severe. The threats in our last level $L_{III}$ are the most specific in terms of the knowledge required or the techniques used.
\section{Architecture of the Man-in-the-OBD}
In this section we describe the architecture and the individual abstract components, which we then implement in Section \ref{chap:implementation_and_evaluation}.
For brevity we only briefly explain the basic ideas of the respective components theoretically and show why we choose exactly this approach.
\subsection{Producer/Consumer Scheme}
\label{sub:producer_consumer_scheme}
The producer-consumer problem (also known as the bounded buffer problem) is a classic example of a multiprocess synchronisation issue, the first version of which dates back to Edsger W. Dijkstra in 1965. Nevertheless, there are now promising approaches in software development to efficiently eliminate this problem \cite{10.1145/3335772.3335782}. We have decided on such an approach (more details can be found in Section \ref{sub:producer_consumer_solution}). A filtering approach is best realised with a buffer in which incoming messages are accumulated. Afterwards, they can be processed one after the other, depending on the queue. This model is ideal if you want to be as unrestricted as possible in the processing phase. Depending on the respective computing power, several producers or consumers can be started. In this way, load peaks can be easily absorbed. The modular approach can also be applied by means of differently implemented producers.
\subsection{Modular Approach for Protocol Bindings}
Since the producer/consumer scheme allows us to easily create several differently implemented producers, a uniform interface must be defined. This interface ensures that the responsible consumer can correctly process and forward the incoming messages. By means of this approach it is possible to support incoming messages of all protocols. With this method, a high-performance and efficient filtering is possible.
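As an illustration, such a uniform interface can be expressed as an Elixir behaviour that every protocol binding implements; the module and callback names below are illustrative sketches rather than the literal interface of our implementation.
\begin{verbatim}
defmodule ObdFirewall.ProtocolBinding do
  @moduledoc """
  Uniform interface each protocol producer (CAN, ISO 9141, ...)
  implements, so consumers can process messages independently
  of the wire protocol.
  """

  # Decode a raw frame from the bus into a protocol-independent map,
  # e.g. %{identifier: ..., payload: ..., protocol: :can}.
  @callback decode(raw :: binary()) :: {:ok, map()} | {:error, term()}

  # Encode a (possibly rewritten) message back into its wire format.
  @callback encode(message :: map()) :: binary()
end
\end{verbatim}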
\subsection{CAN-Bus Binding}
In order to support the CAN protocol in our implementation, a connector is needed to receive messages as well as to send filtered or processed messages again. For this purpose, already widely used libraries (e.g. \cite{cantools}, \cite{porcelain}) as well as the common syntax for encoding and decoding are used. Furthermore, it is desirable that the binding understands the so-called DBC format. DBC stands for CAN Database and is a proprietary format that describes the data structure on a CAN bus. A CAN~DBC file is a text file that provides all the necessary information for decoding CAN bus raw data into physical values. If a DBC file is available for the respective manufacturer, the user can easily decode the CAN data streams and thus analyse them in order to manipulate or block them as desired in our firewall approach. This kind of provision makes it possible to support as many manufacturers as possible. Analogous to this binding, a binding for ISO~9141 or SAE~J1850 could also be written and integrated into our pipeline as a producer in order to support these protocols.
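To give an impression of how such a binding decodes raw frames, the following minimal sketch uses Elixir binary pattern matching on the single-frame OBD-II response layout (length byte, positive-response service byte \texttt{0x41}, PID, data bytes); the PIDs and value ranges follow the conventions used in Tables \ref{tab:blocked_can_messages} and \ref{tab:manipulated_can_messages}, and the module name is hypothetical.
\begin{verbatim}
defmodule ObdFirewall.Can.Decode do
  # Vehicle speed (PID 0x0C in our rule tables): one data byte, 0..255.
  def decode(<<_len, 0x41, 0x0C, a, _pad::binary>>),
    do: {:ok, %{pid: 0x0C, value: a}}

  # Odometer (PID 0xA6): four data bytes, value in units of 0.1 km.
  def decode(<<_len, 0x41, 0xA6, a, b, c, d, _pad::binary>>),
    do: {:ok,
         %{pid: 0xA6,
           value: (a * 0x1000000 + b * 0x10000 + c * 0x100 + d) / 10}}

  # Everything else is passed through undecoded.
  def decode(_other), do: {:error, :unknown_frame}
end
\end{verbatim}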
\subsection{Processing Pipeline}
A concurrent and multi-level data input and data processing pipeline is to be realised for the processing pipeline. It must also be possible to efficiently consume different sources, the so-called producers. It would also be desirable to be able to configure the processing pipeline with regard to the resources to be used; more precisely, the number of producer and consumer processes as well as the concurrency and the batch size to be used. In Figure \ref{fig:software_architecture_data_generation} you can see a schematic overview of the pipeline. The blue circles in the figure symbolise the producers. As already mentioned, it will be possible to develop a producer for each protocol. So in the future there may be a producer for CAN, one for ISO~9141 and so on. There is a uniform interface to adhere to. The producers then send their messages to the concurrent and multi-stage data ingestion service. There, the incoming messages are analysed and then asynchronously serialised and inspected. Here, serialisation refers to the application of the active rules and not to the quantity of messages.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Figures/new_software_architecture_data_generation.pdf}
\caption{Simplified representation of the processing pipeline}
\label{fig:software_architecture_data_generation}
\end{figure}
\vspace{-2.5em}
\subsection{Serialization}
After the incoming messages have been serialised and bundled into batches, the messages are checked against the active rules as efficiently as possible. Since the behaviours of a rule can be ordered by the strictness of their restriction, the strictest applicable behaviour takes precedence. This allows the individual behaviours to carry out their checks in parallel.
\vspace{-1em}
\subsection{Data Storage}
It should be possible to record the data as well. In the best case, data should be stored in a database in a uniform, reusable format. This ensures that the logged CAN messages can be easily searched or filtered for various purposes. Since the amount of CAN messages can be immense, there should be an option to deactivate or activate the permanent logging.
\vspace{-1em}
\subsection{Policy management}
\label{chap_spefification:section:policymanagement}
Policy management systems can be implemented either as specialised hardware or as software on general-purpose operating systems. However, the underlying idea is always the same: there is a set of defined rules that determines which packets the separated network can receive and how those packets are modified if necessary \cite{8406593}. The best-known firewall tools under Linux and Unix are \textit{iptables} and \textit{ipfw}. However, since these are far too extensive and complex for our current needs, we have opted for a simple implementation of our own. Other well-known policy languages such as RPSL \cite{rfc7909}, SRL \cite{rfc2723}, PAX \cite{nossik-pax-pdl-00} and PFDL \cite{ietf-policy-framework-pfdl-00} are also not well suited to our application, and their entry hurdle would often be much higher than that of our simplified policies in the widely known JSON format. These facts are the main reasons why we do not currently use a dedicated rule framework or rule engine. As briefly touched on, our policies are managed using configurations specified in JSON format. The format is simple and adapted to the current use case, but can be extended in a modular way. The current overall structure can be seen in Tables \ref{tab:policy_properties_list} and \ref{tab:policy_property_behaviour}. In any case, a version check must be carried out at the implementation stage for the respective rules to be applied; otherwise, extensions become almost impossible and backwards compatibility cannot be guaranteed. Table \ref{tab:policy_properties_list} shows the overall wrapper structure for a rule definition. This contains general information such as a description, the protocol type to be filtered and the version of the policy language currently in use. Table \ref{tab:policy_property_behaviour} describes the structure of a so-called behaviour. A rule can theoretically have arbitrarily many behaviour capsules. In the example rule following the tables, each currently available behaviour type (namely reject, limit and replace) is applied once for demonstration purposes.
\subsection{Rule enforcement}
The enforcement of rules as well as individual behaviours is based on an assessment of importance. This means that more important behaviours and rules outweigh less important ones. For this purpose, there is a special type rating, which is specified using the type property of the behaviours. As in Section 4.5, this has the great advantage that the individual behaviours can be executed in parallel after the initial filtering of the strictest rating and thus best fit our scalable approach.
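The following minimal Elixir sketch illustrates this rating; the frame structure and the helper names are illustrative assumptions, not the literal implementation:
\begin{verbatim}
defmodule ObdFirewall.Rules do
  # Strictness rating: reject outweighs limit, which outweighs replace.
  @strictness %{"reject" => 3, "limit" => 2, "replace" => 1}

  # Apply all matching behaviours to a frame, strictest first. After
  # this initial ordering the remaining checks are independent and
  # could run in parallel.
  def enforce(frame, behaviours) do
    case behaviours
         |> Enum.filter(&matches?(&1, frame))
         |> Enum.sort_by(&Map.fetch!(@strictness, &1["type"]), :desc) do
      [] -> {:ok, frame}
      [%{"type" => "reject"} | _] -> :reject
      applicable -> {:ok, Enum.reduce(applicable, frame, &apply_one/2)}
    end
  end

  defp matches?(behaviour, frame),
    do: behaviour["identifier"] == frame.identifier

  defp apply_one(%{"type" => "limit", "value" => v}, frame),
    do: %{frame | value: min(frame.value, String.to_integer(v))}

  defp apply_one(%{"type" => "replace", "value" => v}, frame),
    do: %{frame | value: String.to_integer(v)}
end
\end{verbatim}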
\newpage
\begin{table}[H]
\centering
\caption{List of basic properties with their associated functionality}
\begin{tabular}{|p{0.13\textwidth}|p{0.18\textwidth}|p{0.69\textwidth}|}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Description} \\ \hline \hline
name & \textit{$<$String$>$} & Is just a simple naming of the individual rules for better distinction. The name does not have to be unique. \\ \hline
description & \textit{$<$String$>$} & Briefly describes the created rule in a few words. \\ \hline
version & \textit{$<$String$>$} & The version number specifies the version of the properties to be used. \\ \hline
protocol & \textit{$<$Protocol-Type$>$} & Declares the protocol type to be used for the respective rule. Currently there is only \textit{$<CAN>$} as a declarable type. \\ \hline
behaviours & \textit{$[<$Behaviour$>]$} & The behaviour field defines a list of all actions to be performed later during the execution of each rule. \\ \hline
\end{tabular}
\label{tab:policy_properties_list}
\end{table}
\vspace{-3em}
\begin{table}[H]
\centering
\caption{List of properties for a single behaviour inside a policy\\}
\begin{tabular}{|p{0.12\textwidth}|p{0.14\textwidth}|p{0.72\textwidth}|}
\hline
\textbf{Property} & \textbf{Type} & \textbf{Description} \\ \hline \hline
type & \textit{$<$String$>$} & Currently, three different behaviour types are supported: \begin{itemize}
\item \textit{reject} - Ignores all messages with the defined identifier and associated value
\item \textit{limit} - Limits all accruing values of a message from the defined identifier by means of a predefined value
\item \textit{replace} - Always exchanges all message values of a given identifier with the given value
\end{itemize} \\ \hline
identifier & \textit{$<$String$>$} & Defines the identifier of the CAN message present on the bus \\ \hline
value & \textit{$<$String$>$} & Determines the data payload to be used for the respective set type \\ \hline
\multicolumn{3}{|c|}{\textit{The following properties are optional and do not have to be set}} \\ \hline
delay & \textit{$<$Integer$>$} & If the delay property is set, all messages that fall below the specified behaviour will be delayed. The value is given as an integer value and defines the delay time in milliseconds. \\ \hline
pub\_once & \textit{$<$Boolean $>$} & Allows messages in the scope of the behaviour to be allowed only once per system start. Once the message has been read once, it is whitelisted and then not forwarded. By default, the value is set to \textit{false}. \\ \hline
id\_range & \textit{$<$String$>$} & By means of the identifier range, the behaviour value range to be enforced can be extended. \\ \hline
val\_range & \textit{$<$String$>$} & By means of the value range, the payload-value range to which the behaviour applies can be extended, analogous to id\_range. \\ \hline
\end{tabular}
\label{tab:policy_property_behaviour}
\end{table}
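The following listing shows a complete example rule in this format. The speed and odometer values mirror the test cases of Section \ref{testing_ryd_box}; the rejected PID \texttt{0x31} (distance travelled since codes cleared), the wildcard value \texttt{"*"} and the version string are illustrative choices, not normative parts of the language.
\begin{verbatim}
{
  "name": "privacy-demo",
  "description": "Fake the odometer, cap the reported speed,
                  and drop distance-since-clear requests",
  "version": "1.0",
  "protocol": "CAN",
  "behaviours": [
    { "type": "reject",  "identifier": "0x31", "value": "*" },
    { "type": "limit",   "identifier": "0x0C", "value": "100" },
    { "type": "replace", "identifier": "0xA6", "value": "200000",
      "pub_once": false }
  ]
}
\end{verbatim}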
\newpage
\section{Implementation}
\label{chap:implementation_and_evaluation}
Next we describe all the components that run our developed approach in the background. This includes the processing of incoming messages, the used producer \& consumer approach, the storage of individual messages as well as our filter module.
\subsection{Producer/Consumer Solution}
\label{sub:producer_consumer_solution}
As already described in Section \ref{sub:producer_consumer_scheme}, our approach should benefit from the so-called producer-consumer construct and thus make a modular approach more feasible. This is one of the disciplines in which Elixir can demonstrate its abilities and advantages in the best possible way. To build our solution, we use the library called \textit{Broadway} \cite{broadway}.
The library can be used to create concurrent, multi-stage data input and data processing pipelines.
It is also possible to implement your own producers, which is perfect for our use case. Depending on the use case, it can make a lot of sense to summarise the processed messages as a so-called batch before the actual publication.
While we do not need the batchers to communicate with an external API, they allow us to handle the storing of CAN messages in an encapsulated way and thus incur no runtime loss for publishing already-filtered CAN messages. This allows for increased throughput and consequently improved overall performance of our pipeline. Batches are simply defined via the configuration option. The configuration is of course adaptive and can be extended very easily if required. A schematic representation of our current testing pipeline is shown in Figure \ref{fig:broadway_pipeline}.
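A condensed sketch of such a pipeline module is shown below; the producer, rule and storage module names (\texttt{ObdFirewall.*}) are placeholders for the components described in this section rather than verbatim excerpts of our code base.
\begin{verbatim}
defmodule ObdFirewall.Pipeline do
  use Broadway
  alias Broadway.Message

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      # One producer per protocol binding; here the CAN producer.
      producer: [
        module: {ObdFirewall.CanProducer, interface: "can0"},
        concurrency: 1
      ],
      processors: [default: [concurrency: 4]],
      # Storage runs in its own batcher, decoupled from filtering.
      batchers: [
        db_storage: [concurrency: 1, batch_size: 100, batch_timeout: 1_000]
      ]
    )
  end

  @impl true
  def handle_message(_processor, %Message{data: frame} = message, _ctx) do
    case ObdFirewall.Rules.enforce(frame, ObdFirewall.Rules.active()) do
      :reject ->
        Message.failed(message, :rejected_by_policy)

      {:ok, filtered} ->
        # Forward the (possibly rewritten) frame to the dongle side,
        # then hand it to the storage batcher.
        ObdFirewall.Can.publish(filtered)

        message
        |> Message.update_data(fn _ -> filtered end)
        |> Message.put_batcher(:db_storage)
    end
  end

  @impl true
  def handle_batch(:db_storage, messages, _batch_info, _ctx) do
    ObdFirewall.Storage.insert_all(Enum.map(messages, & &1.data))
    messages
  end
end
\end{verbatim}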
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Figures/broadway_pipeline.pdf}
\caption{Representation of producer, processor and batcher pipeline}
\label{fig:broadway_pipeline}
\end{figure}
\vspace{-2em}
\subsection{Storing of CAN-Messages}
The storage of CAN messages can be set via the web user interface. By default, CAN messages are not saved. However, if the option is activated, all filtered messages are stored in a PostgreSQL database in the \texttt{batcher\_db\_storage}. Since the storage is carried out in one of the batcher processes, all storage operations can be carried out completely independently of the runtime of the actual filter pipeline. PostgreSQL was chosen because the \textit{Phoenix framework} uses it as the default database. The advantage of this, however, is that the adapter responsible for the connection to the database can simply be exchanged. This makes it easy to switch to a Time Series Database (TSDB) such as InfluxDB \cite{influx} or Riak \cite{riak}. TSDBs are suitable because CAN messages are time-distributed data. Since the storage and further processing of past CAN bus data is not relevant for our work, the use of PostgreSQL is sufficient for our purposes.
In addition to the raw data of the identifier and the payload, which are stored using the \texttt{bytea} data type, we also store the decoded data as \texttt{varchar(255)}. This has the advantage that we can display the data in a simple and user-friendly way in the front-end without having to decode a large number of data sets each time.
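The corresponding table can be described by a small Ecto schema along these lines (a sketch; table and field names are illustrative):
\begin{verbatim}
defmodule ObdFirewall.Storage.CanMessage do
  use Ecto.Schema

  schema "can_messages" do
    field :identifier, :binary  # raw CAN identifier, stored as bytea
    field :payload, :binary     # raw data payload, stored as bytea
    field :decoded, :string     # human-readable form, varchar(255)
    timestamps()                # insertion time of the filtered frame
  end
end
\end{verbatim}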
\subsection{Pipeline Benchmarks}
In Table \ref{tab:benchmark_results} you can see the measurement results for the runs of our pipeline with different numbers of behaviours.
Our firewall runs on a Raspberry Pi 4 with 8 GB RAM.
The measurements generally show that our filtering pipeline only requires a small amount of computing time and therefore has no impact on the usability of the dongles. The average runtime of the pipeline increases with the number of behaviours defined in a rule. In general, the additional execution time has no effect on correct functioning either way, as the dongles are inherently not time-critical (more on this in Section \ref{conclusion}).
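The reported columns correspond to the output format of the Benchee benchmarking library; a comparable measurement can be set up as sketched below, where \texttt{run\_pipeline/1} is a hypothetical helper that pushes a fixed set of CAN frames through the filter with the given number of behaviours loaded:
\begin{verbatim}
Benchee.run(
  %{
    "3 behaviours"     => fn -> run_pipeline(3) end,
    "30 behaviours"    => fn -> run_pipeline(30) end,
    "300 behaviours"   => fn -> run_pipeline(300) end,
    "3000 behaviours"  => fn -> run_pipeline(3_000) end,
    "30000 behaviours" => fn -> run_pipeline(30_000) end
  },
  time: 5
)
\end{verbatim}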
\vspace{-2em}
\begin{table}[H]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\textbf{Measurement} & \textbf{Iterations per Second} & \textbf{Average} & \textbf{Deviation} & \textbf{Median} & \textbf{Minimum} & \textbf{Maximum} & \textbf{Sample Size} \\ \hline
3 behaviours & 243.87 & 4.10 ms & $\pm$9.71\% & 4.08 ms & 3.05 ms & 6.79 ms & 1218 \\ \hline
30 behaviours & 225.89 & 4.43 ms & $\pm$11.89\% & 4.36 ms & 3.29 ms & 10.23 ms & 1218 \\ \hline
300 behaviours & 164.58 & 6.08 ms & $\pm$11.90\% & 6.03 ms & 4.45 ms & 11.07 ms & 823 \\ \hline
3000 behaviours & 83.22 & 12.02 ms & $\pm$7.44\% & 12.02 ms & 9.55 ms & 14.68 ms & 416 \\ \hline
30000 behaviours & 16.74 & 59.72 ms & $\pm$7.09\% & 59.10 ms & 51.62 ms & 69.53 ms & 84 \\ \hline
\end{tabular}%
}
\caption{Results of the run time comparison of the single measurements}
\label{tab:benchmark_results}
\end{table}
\vspace{-2em}
\section{Evaluation of Impact on Threats and Existing Dongles}
For the evaluation we have tested our approach with real software currently available on the market; we report the tests of the RYD-Box \cite{ryd_box} and the Volkswagen (VW) Data Plug \cite{data_plug} in this paper. We ran two different test cases, both carried out with a Volkswagen Golf Mk7: in case $T1$ messages are blocked (Table \ref{tab:blocked_can_messages} contains detailed information about the blocked messages); in case $T2$, we manipulate messages' contents (for details see Table \ref{tab:manipulated_can_messages}). In addition to the respective PID (parameter identifier) of the message, the tables also list the minimum and maximum values defined in the standard and the firewall response defined by our rule, e.g. \texttt{blocked} in case $T1$ and limiting the speed value to a maximum of \texttt{100} in $T2$. Finally, we also discuss theoretically which of the threats identified using STRIDE and rated using DREAD can be mitigated by our approach.
\subsection{Testing the RYD-Box}%
\label{testing_ryd_box}
First we ran $T1$ and blocked CAN requests for both speed and current mileage using our firewall. Then we did a test drive and checked the data displayed to us in the app. Here we were still shown the last known mileage, which no longer matched after the drive with the firewall activated. We had tested the dongle beforehand without the firewall, so the service already had access to our mileage at that time. This shows that the respective car's mileage is stored in the cloud at certain times, but also that it is possible to successfully block the transmission of the mileage with our approach and still use the remaining functions of the dongle. We then displayed the speed history of our recorded journey. There we noticed that despite blocking the messages for the current speed on the OBD-II interface, the approximate driving speed was still tracked in the app. Our first assumption was that the app calculated the speed using the GPS data of the smartphone. However, after another drive without a smartphone in the car, we found out that the RYD-Box has both a GSM module with a separate SIM card and a GPS sensor. This enables the service to approximate the speed using GPS. Alternatively, additional sensors such as an acceleration sensor or a gyroscope could be installed to collect driving statistics. However, we have not yet disassembled the dongle and can therefore only list possibilities in addition to the GSM and GPS modules.
Afterwards, we reconfigured our firewall for $T2$ so that it always returns 200000 km when the current mileage is requested. In addition, we limited the current speed to 50 km/h in a rule using a behaviour. Then we did the same test drive as for $T1$ and compared the results in the app again. As we expected, the speed limit had no effect on our tracked distance. We simply assume that the application does not use the current speed transmitted via CAN for the calculations or displays within the app for a journey. However, we could see the modified mileage, namely the value of 200000 km as defined in the behaviour of our rule, being displayed in RYD's smartphone app, without any hint that it had been modified.
\subsection{Testing the VW Data Plug}
\label{testing_vw_data_plug}
For $T1$ we blocked the CAN requests to the OBD-II gateway for the current mileage and the current speed using our firewall. After connecting our mobile phone via Bluetooth to the dongle and the app, we were able to display data from our vehicle. However, the mileage could not be displayed because it was successfully blocked by our firewall. Once again, we drove a test lap to record a journey. Again, just like with the RYD-Box, a speed could afterwards be read out in the app despite the blocking by our firewall. Here we have the same situation as in the test with the RYD-Box: the speed for the saved journeys is simply not derived from the speed that can be requested via OBD-II. In contrast to the RYD-Box, the VW Data Plug from Volkswagen does not have a GSM or GPS module, but the application must have access to the current location and the GPS services of the smartphone in order to function at all. For this reason, no test was possible without a smartphone in the vehicle. Due to the compact and smaller design of the VW Data Plug, we assume that no additional sensors are installed there, and only the sensors installed in the smartphone (such as the acceleration sensor or the gyroscope) are used to evaluate the driving behaviour.
For $T2$, our firewall was configured in the same way as above during the test of the RYD-Box. After our new round of tests, we were able to observe exactly the same behaviour as with the RYD-Box. We were able to see our manipulated vehicle mileage within the We Connect application, but the manipulation of the speed had no effect, as the speed is approximated by GPS as explained before.
\vspace{-2em}
\begin{table}[H]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{PID (hex)} & \textbf{min\_value} & \textbf{max\_value} & \textbf{Firewall response} & \textbf{Description} \\ \hline
0C & 0 & 255 & $<blocked>$ & Vehicle speed \\ \hline
A6 & 0 & 429,496,729.5 & $<blocked>$ & Odometer \\ \hline
\end{tabular}%
}
\caption{Overview of CAN-Messages to be blocked in $T1$}
\label{tab:blocked_can_messages}
\end{table}
\vspace{-3em}
\begin{table}[H]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{PID (hex)} & \textbf{min\_value} & \textbf{max\_value} & \textbf{Firewall response} & \textbf{Description} \\ \hline
0C & 0 & 255 & $<min\_value>$ to 100 & Vehicle speed \\ \hline
A6 & 0 & 429,496,729.5 & 200,000.0 & Odometer \\ \hline
\end{tabular}%
}
\caption{Overview of CAN-Messages to be manipulated in $T2$}
\label{tab:manipulated_can_messages}
\end{table}
\subsection{Evaluation of Existing Dongles }
\label{conclusion_of_testing}
\vspace{-0.2em}
These two examples in Sections \ref{testing_ryd_box} and \ref{testing_vw_data_plug} are only intended to show feasibility and by no means represent a full analysis of all car manufacturers in combination with different dongles. Because our approach is protocol-agnostic and not dependent on the underlying message format, it can be assumed that our OBD-II firewall will work with all possible combinations. Our main takeaway from testing against real existing dongles is that our approach is surprisingly easy to apply in practice.
This means that no message-origin authentication features (like digital signatures on message contents \cite{DBLP:conf/camad/PohlsP17,DBLP:conf/icics/0003SPF16}) are used on the CAN bus, as the dongle does not detect the manipulation of the message's content in test case $T2$.
Additionally we gained interesting deeper insights into the functionality of the two dongles.
We contemplate a closer look at the dongles in combination with the related smartphone application, as in the case of the Volkswagen Data Plug, because we found that this combined setup additionally used information collected from the smartphone's sensors; these might also be tampered with, albeit in different ways:
We consider it quite conceivable to trick the application corresponding to the dongle into recording false journeys with faked speeds. To do this, one would have to isolate the application and run it within an emulator where the position information and the gyroscope's readings can be influenced.
\subsection{Evaluation of Threat Mitigation}
\label{threat_impacts_with_firewall}
Finally we check whether our approach prevents or at least mitigates
the threats $T_\alpha$ to $T_\eta$ as identified in Section \ref{sec:threat_identification}.
With regard to the threats $T_\alpha$ and $T_{\beta_I}$, our approach unfortunately cannot do anything. In general, it is almost impossible to achieve one hundred percent security against physical attacks or manipulations. With regard to the $\beta$ threat class, however, one can try to make the services within the vehicle as secure as possible and close any discovered security gaps with regular updates. However, if the attacker has physical access, often even the best systems cannot be protected. Direct and unlimited access to low-level debug interfaces such as those provided by JTAG (Joint Test Action Group) and SWD (Serial Wire Debug) would make it possible to take complete control of the device, e.g. stop and change code execution, access memory and registers, or even dump the firmware.
However, $T_\gamma$ is preventable by means of our approach. With the help of the rules and the behaviours, it can be defined within the firewall which messages are allowed through and which are not. This means that it is no longer possible for an attacker to simply send arbitrary messages.
Likewise, the threat $T_\delta$ can be prevented by configuring the firewall for the desired permitted communication using the method described above.
Our approach not only allows or blocks certain messages, but also makes it possible to modify the data payload of defined messages before publishing them to the OBD-II device.
Since the threats $T_\gamma$ and $T_\delta$ can be prevented by our approach, the threats based on $T_\gamma$ can also be avoided automatically. Thus, threats $T_\epsilon$, $T_\zeta$ and $T_\eta$ are also eliminated. This means that our firewall approach successfully prevents all threats on the hierarchies $L_{II}$ and $L_{III}$ (see Figure \ref{fig:threats_attack_tree}).
\section{Conclusion}
\label{conclusion}
While it is not yet widespread to install filtering approaches such as firewalls at the vehicle's outward-facing OBD-II interface, this interface can become a crucial interconnection point between the car and third-party dongles in the future.
The need for such solutions will grow in the coming years, as more and more different approaches and products will come onto the market to make existing cars even smarter. And the OBD-II interface is clearly the standard entry point for realising such smart car approaches.
Manufacturers agree that security must be added to CAN or OBD-II, but dongles available on the market today can be easily fooled by manipulated messages.
Following the threats we have systematically defined in Section~\ref{sec:threat_identification}, the proposed Man-in-the-OBD shows that without any means to authenticate messages from the car's electronic control units, e.g. the car's current speed, dongles connected to the OBD-II interface can be fooled.
We are the first to show that a Man-in-the-OBD-II interface is able to serve the dongle manipulated values without it noticing the falsification.
We exploit the missing authentication to protect the car driver's privacy.
This is of course a benign application; used maliciously, the falsified information provided to the car's environment could have negative consequences. During the test cases, we were also able to test whether our firewall without rules has an effect on the functionality of the dongles used. This was not the case, which can easily be explained by the fact that our approach, in the normal case without active rules, is simply two physically separated CAN bus systems forwarding all messages between the buses.
$L_{I}$-threats attack the dongle directly, like $T_\alpha$ and $T_{\beta_I}$. Those can of course not be mitigated by our firewall, because our firewall itself is simply a dongle.
However, we show that threats of hierarchies $L_{II}$ and $L_{III}$ can be effectively prevented by policing the OBD-II interface.
This is already an important step to protect against external malicious devices.
We have implemented it and prototypically showed that it works for the CAN protocol.
However, our architecture is general, and our approach could be used even for other interfaces; our modular approach allows to extend it to other protocols.
For increased privacy, we showed that connected dongles providing data for cloud services to monitor the car via a mobile phone application are not affected by the small delay caused by our filtering or manipulation of the data traffic.
\subsection{Future Work}
\label{future_work}
Due to the diversity of protocols available via OBD-II, we limited ourselves to the CAN protocol. It would definitely be an enrichment for our firewall if the remaining protocols accessible via OBD-II were also covered by our firewall.
Furthermore, the OBD-II interface is the most exposed interface, but it would be valuable to research a firewall or a man-in-the-middle attack at other interfaces, for example the infotainment system.
Technologies such as AndroidAuto \cite{android_auto} and Apple CarPlay \cite{apple_carplay} are already finding their way into vehicles. But also completely independent Android systems are delivered with the so-called Android Automotive OS \cite{android_os}.
If apps had vulnerabilities or contained malicious code, it would be fatal if such apps could suddenly gain control over the vehicle. Therefore, an adapted firewall approach similar to our current one could also be very useful to isolate the car from the infotainment system. Another point that should be examined in future work is how to implement our developed approach on less expensive devices than the Raspberry Pi. If applicable, approaches should also be considered that shift the actual computations to a central processing unit provided by the car, for example.
\bibliographystyle{splncs04}
\section{Introduction}
The question we would like to answer is the following one\footnote{for the notation and the exact definition of the objects that are involved we refer to the original source.}:
\begin{question}\label{conj1}
Let $(X,X_0)$ be an henselian couple (in the sense of \textnormal{Definition \ref{definition henselian couple}}). Is it true that \textit{(ii)} and \textit{(iii)} in \textnormal{\cite[Exposé XII, Proposition 6.5]{SGA4}} hold with $\mathbb{L}=\mathbb{P}$ and for every $n$?
\end{question}
This question appears in \cite[Exposé XII, Remarks 6.13]{SGA4}.
We can restate it as follows:
\begin{question}\label{conj2}
Let $(X,X_0)$ be an henselian couple. Is it true that
\begin{enumerate}
\item[a.] the base change functor induces an equivalence between the category of étale coverings of $X$ and the category of étale coverings of $X_0$?
\item[b.]for any torsion étale sheaf $\mathscr{F}$ and for any integer $n$, the morphism
\[
H^n(X,\mathscr{F})\longrightarrow H^n(X_0,\mathscr{F}_{|X_0})
\]
is an isomorphism?
\end{enumerate}
\end{question}
\begin{remark}
\begin{enumerate}
\item When $X$ is proper and finitely presented over an henselian ring $(A,m)$ and $X_0=X\times_{Spec(A)} Spec(A/m)$, we know that the answer to \textnormal{Question \ref{conj1}} is affirmative. This is the proper base change theorem in étale cohomology.
\item The case $(X,X_0)=(Spec(A),Spec(A/I))$\footnote{an affine henselian couple is the same thing as an henselian pair. See Remark \ref{henselian pairs are affine henselian couples}.}, for $(A,I)$ an henselian pair, was studied and solved by R. Elkik in \textnormal{\cite{Elk}} and by O. Gabber in \textnormal{\cite{Gab}}.
\end{enumerate}
\end{remark}
We propose a solution in the following situation:\\
$(\dagger)$ \hspace{0.5cm}{\itshape Let $X$ be proper over a noetherian affine scheme $Spec(A)$ and $X_0=X\times_{Spec(A)} Spec(A/I)$ for some ideal $I\subseteq A$.}\\
We will see that, under these assumptions, $(X,X_0)$ is an henselian couple for which Question \ref{conj1} has a positive answer. To achieve this, we will first generalize \cite[Theorem 3.1]{Art69} to the following form:
\begin{theorem}\label{proper base change over henselian pairs}
Let $(A,I)$ be an henselian pair. Let $S=Spec(A)$ and let $f:X\longrightarrow S$ be a proper finitely presented morphism. Let $X_0=X\times_S S_0$, where $S_0=Spec(A/I)$. Then
\[
\text{É}t_f(X)\longrightarrow \text{É}t_f(X_0)
\]
\[
Z\mapsto Z\times_S S_0
\]
is an equivalence of categories.
\end{theorem}
Here \textit{É}$t_f(W)$ denotes the category of finite étale schemes over $W$. The key tools for the proof are Artin's approximation theory and \cite[Tag 0AH5]{SP}, which combined with \cite[Corollary 1.8]{Art69} yields the following theorem
\begin{theorem}\label{second answer to Artin's first question}
Let $(A,I)$ be an henselian pair with $A$ noetherian. Let $\hat{A}$ be the $I$-adic completion of $A$ and assume that one of the following hypotheses is satisfied:
\begin{enumerate}
\item $A\longrightarrow \hat{A}$ is a regular ring map;
\item $A$ is a G-ring;
\item $(A,I)$ is the henselization\footnote{here the henselization is the left adjoint to the inclusion functor \textit{Henselian Pairs}$\longrightarrow$ \textit{Pairs}} of a pair $(B,J)$, where $B$ is a noetherian G-ring.
\end{enumerate}
Let $\mathscr{F}$ be a functor which is locally of finite presentation\footnote{see \cite[Definition 1.5]{Art69} }
\[
\textit{A-algebras}\longrightarrow \textit{Sets}
\]
Given any $\hat{\xi}\in \mathscr{F}(\hat{A})$ and any $N\in \mathbb{N}$, there exists an element $\xi \in \mathscr{F}(A)$ such that
\[
\xi \equiv \hat{\xi} \pmod{I^N}
\]
i.e. $\xi$ and $\hat{\xi}$ have the same image in $\mathscr{F}(A/I^N)\cong \mathscr{F}(\hat{A}/\hat{I}^N)$.
\end{theorem}
\begin{remark}
In \textnormal{Theorem \ref{second answer to Artin's first question}} we have that $\textit{3.} \Rightarrow \textit{2.} \Rightarrow \textit{1.}$ See \textnormal{\cite[Tag 0AH5]{SP}}.
\end{remark}
\begin{remark}\label{first step in the solution}
\textnormal{Theorem \ref{proper base change over henselian pairs}}, joined with \textnormal{\cite[Corollary 1]{Gab}}, gives us a positive answer to \textnormal{Question \ref{conj1}} when $(X,X_0)$ is proper and finitely presented over an henselian pair. Moreover, we will see how we can always reduce to this case from situation $(\dagger)$.
\end{remark}
\section{Proof of Theorem \ref{proper base change over henselian pairs}}
This proof is an adaptation of the one given in the local case by Artin (see \cite[Theorem 3.1]{Art69}). This generalization is possible thanks to Popescu's characterization of regular morphisms between noetherian rings, which provides us with Theorem \ref{second answer to Artin's first question} as a corollary.\\
First we reduce to the case where $A$ is the henselization of a finitely presented $\mathbb{Z}$-algebra. In order to do this, we need the following two preliminary lemmas.
\begin{lemma}\label{finite étale covers modulo iso functor locally of finite presentation}
Let $S=Spec(A)$ and let $g:X \longrightarrow S$ be a proper morphism of finite presentation. Then the functor
\[
\mathscr{F} : \textit{A-Algebras}\longrightarrow \textit{Sets}
\]
\[
B \mapsto \{\text{finite étale coverings of } Spec(B)\times_S X\}/\text{isomorphism}
\]
is locally of finite presentation.
\end{lemma}
\begin{proof}
See the beginning of the proof of \textnormal{\cite[Theorem 3.1]{Art69}}.
\end{proof}
\begin{lemma}\label{hom locally of finite presentation}
Let $S=Spec(A)$ and let $g:X \longrightarrow S$ be a proper morphism of finite presentation. Let $Z_1\rightarrow X$ and $Z_2\rightarrow X$ be two finite étale covers of $X$. Then the functor
\[
\mathscr{G}: \textit{A-algebras}\longrightarrow \textit{Sets}
\]
\[
B\mapsto Hom_{X\times_S Spec(B)}(Z_1\times_S Spec(B),Z_2\times_S Spec(B))
\]
is locally of finite presentation.
\end{lemma}
\begin{proof}
The lemma is a straightforward consequence of \cite[Theorem 8.8.2.(i)]{EGA4.3}.
\end{proof}
Let $(A,I)$ be an henselian pair and write $A$ as a direct limit $\varinjlim A_i$, where each $A_i$ is a subalgebra of $A$ that is finitely generated over $\mathbb{Z}$. Let $(A_i^h,(I\cap A_i)^h)$ be the henselization of $(A_i,(I\cap A_i))$ for each $i$. Then by \cite[Chapter XI, Proposition 2]{Ray} $\varinjlim (A_i^h,(I\cap A_i)^h)$ is an henselian pair. It is easy to see that
\[
(A,I)=\varinjlim (A_i^h,(I\cap A_i)^h)
\]
Write $S_i=Spec(A_i^h)$ for every index $i$. Then
\[
S=\varprojlim S_i
\]
By \cite[Theorem 8.8.2.(ii)]{EGA4.3} we know that $X$ comes from a finitely presented scheme $X_{i_0}$ for some index $i_0$, i.e. $X\cong X_{i_0}\times_{S_{i_0}}S$. Moreover, by \cite[Theorem 8.10.5]{EGA4.3}, we can assume that $X_{i_0}$ is also proper over $S_{i_0}$. As the functor
\[
\mathscr{F}: A^h_{i_0}-Algebras\longrightarrow Sets
\]
\[
B \mapsto \{\text{finite étale coverings of } Spec(B)\times_{S_{i_0}} X_{i_0}\}/\text{isomorphism}
\]
is locally of finite presentation, we have that
\[
\mathscr{F}(A)=\varinjlim \mathscr{F}(A^h_i)
\]
Therefore, every finite étale cover of $X$ comes from a finite étale cover of $X_i=S_i\times_{S_{i_0}}X_{i_0}$
for a suitable index $i$.
\begin{remark}
All schemes $X_{i_0}\times_{S_{i_0}}S_i$ and $X\cong X_{i_0}\times_{S_{i_0}}S$ are quasi-compact and quasi-separated, as they are proper over affine schemes.
\end{remark}
Let $Z\rightarrow X$ and $W\rightarrow X$ be two finite étale covers of $X$. Then we can assume without loss of generality that they come from two finite étale covers $Z_{i_0}\rightarrow X_{i_0}$, $W_{i_0}\rightarrow X_{i_0}$. Then by Lemma \ref{hom locally of finite presentation} we see that
\[
\varinjlim Hom_{X_i}(Z_i,W_i)=Hom_X(Z,W)
\]
It is then clear that we can reduce the proof of Theorem \ref{proper base change over henselian pairs} to the case where $(A,I)$ is the henselization of a pair $(B,J)$, where $B$ is finitely generated over $\mathbb{Z}$. In particular, $B$ is a G-ring and Theorem \ref{second answer to Artin's first question} holds.
\begin{lemma}\label{essential surjectivity}
The functor in Theorem \ref{proper base change over henselian pairs} is essentially surjective.
\end{lemma}
\begin{proof}
Consider a finite étale morphism $X_0'\longrightarrow X_0$. Label $\hat{A}$ the completion of $A$ with respect to the ideal $I$ and let $\hat{S}=Spec(\hat{A})$, $\hat{X}=X\times_S \hat{S}$. Notice that $\hat{A}$ is a complete separated ring by Krull's theorem (see \cite[Theorem 10.17]{AM}). By \cite[Theorem 18.3.4]{EGA4.4}, we have that the functor
\[
\textit{É}t_f(\hat{X})\longrightarrow \textit{É}t_f(X_0)
\]
\[
Z\mapsto Z\times_S S_0
\]
is an equivalence of categories. Then there exists some $[\hat{X}'\longrightarrow \hat{X}]\in \mathscr{F}(\hat{A})$ such that
\[
\hat{X}'\times_{\hat{S}}S_0\cong X_0'
\]
By Theorem \ref{second answer to Artin's first question} we get that there exists some finite étale morphism $X'\longrightarrow X$ which is congruent modulo $I$ to $\hat{X}'\longrightarrow \hat{X}$, i.e.
\[
X'\times_S S_0\cong X_0'
\]
\end{proof}
It remains only to show that the functor in Theorem \ref{proper base change over henselian pairs} is fully faithful.
\begin{lemma}\label{fully faithfulness}
The functor in \textnormal{Theorem \ref{proper base change over henselian pairs}} is fully faithful.
\end{lemma}
\begin{proof}
Let $X'$ and $X''$ be two finite étale schemes over $X$ and let $\phi \in Hom_X(X',X'')$. The morphism $\phi$ corresponds uniquely to its graph $\Gamma_{\phi} : X'\longrightarrow X'\times_X X''$, which is an open immersion as both $X'$ and $X''$ are of finite type over $X$ and as $X''$ is étale over $X$ (see \cite[Corollaire 3.4]{SGA1}). Also notice that $\Gamma_{\phi}$ is a closed immersion (see \cite[Exercise 3.3.10]{L}). If we assume that $X'$ is connected and nonempty, $\phi$ corresponds uniquely to a connected component of $X'\times_X X''$ of degree one over $X'$. The degree of such a component can be measured at any point of $X'$. We conclude therefore by applying the next lemma to a component of $X'\times_X X''$.
\end{proof}
\begin{lemma}
$X$ is nonempty and connected if and only if the same is true for $X_0$.
\end{lemma}
\begin{proof}
We are given the following cartesian square
\[
\begindc{\commdiag}[20]
\obj(0,30)[1]{$X_0$}
\obj(30,30)[2]{$X$}
\obj(0,0)[3]{$S_0$}
\obj(30,0)[4]{$S$}
\mor{1}{2}{$$}
\mor{1}{3}{$$}
\mor{2}{4}{$f$}
\mor{3}{4}{$$}
\enddc
\]
If $X$ is connected and nonempty, then $f(X)\subseteq S$ is a nonempty closed subset of $S$ (as $f$ is proper). Let $J$ be an ideal of $A$ that defines the closed subset $f(X)$. Let $f(x)=p \in V(J)$ be a closed point of $S$. As $I$ is contained in the Jacobson radical of $A$, the prime ideal $p$ lies in $S_0$. Then
\[
\begindc{\commdiag}[20]
\obj(0,30)[1]{$X_0$}
\obj(30,30)[2]{$X$}
\obj(0,0)[3]{$S_0$}
\obj(30,0)[4]{$S$}
\obj(-20,50)[5]{$\{x\}$}
\mor{1}{2}{$$}
\mor{1}{3}{$$}
\mor{2}{4}{$f$}
\mor{3}{4}{$$}
\mor{5}{2}{$$}
\mor{5}{3}{$$}
\mor{5}{1}{$$}[-1,1]
\enddc
\]
In particular, $X_0$ is nonempty. Furthermore, as this argument can be used for any connected component of $X$, if $X$ is disconnected then $X_0$ is disconnected as well.\newline
Conversely, assume that $X_0$ is disconnected. Label $C_0$ a nonempty connected component of $X_0$. As the scheme $X_0$ is quasi-compact, $C_0$ is open and closed in $X_0$. Therefore, $C_0\longrightarrow X_0$ is a finite étale morphism. By Lemma \ref{essential surjectivity}, there exists a finite étale morphism $C\longrightarrow X$ which induces $C_0\longrightarrow X_0$. As $C_0$ is connected and nonempty, the same is true for $C$. The morphism $C\longrightarrow X$ is therefore of degree $1$ at every point of $C$. As it is also finite and étale, it is both an open and a closed immersion, i.e. $C$ is a connected component of $X$. If $C=X$, we would get $C_0=X_0$, a contradiction. Then $X$ is disconnected. Finally, it is clear that if $X_0$ is nonempty, $X$ is nonempty too.
\end{proof}
Theorem \ref{proper base change over henselian pairs} follows immediately from Lemma \ref{essential surjectivity} and Lemma \ref{fully faithfulness}.
\section{Henselian couples}
Recall that an henselian pair $(A,I)$ is a ring $A$ together with an ideal $I\subseteq A$ such that
\begin{enumerate}
\item $I$ is contained in the Jacobson radical of $A$;
\item for every finite $A$-algebra $B$, there is a bijection between the set of idempotent elements of $B$ and the set of idempotent elements of $B\otimes_A A/I$.
\end{enumerate}
For more details, see \cite{Ray}.\\
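For orientation, we recall a standard example: if $A$ is $I$-adically complete, then $(A,I)$ is an henselian pair (see \cite[Tag 0ALJ]{SP}); in particular, $(\mathbb{Z}_p,p\mathbb{Z}_p)$ is an henselian pair. On the other hand, $(\mathbb{Z},p\mathbb{Z})$ is not an henselian pair, as $p\mathbb{Z}$ is not contained in the Jacobson radical of $\mathbb{Z}$.\\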
Let $(A,I)$ be an henselian pair. Then for every finite morphism $Spec(B)=X\longrightarrow Spec(A)$, we have a bijection
\[
Id(B)=Of(X)=Of(X_0)=Id(B/IB) \hspace{0.5cm} \text{where } X_0=X\times_{Spec(A)} Spec(A/I)
\]
Here $Of(Z)$ denotes the set of subsets of $Z$ which are both open and closed.\\
This fact suggests the following definition (see \cite[Définition 18.5.5]{EGA4.4}), which is meant to generalize the notion of henselian pair to the non-affine setting.
\begin{definition}\label{definition henselian couple}
Let $X$ be a scheme and let $X_0$ be a closed subscheme. We say that $(X,X_0)$ is an \textit{henselian couple} if for every finite morphism $Y\longrightarrow X$ we have a bijection
\[
Of(Y)=Of(Y_0)
\]
where $Y_0=Y\times_X X_0$.
\end{definition}
\begin{remark}
If $X$ is locally noetherian, it is a consequence of \textnormal{\cite[Proposition 6.1.4]{EGA1}} and \textnormal{\cite[Corollaire 6.1.9]{EGA1}} that connected sets in $Of(X)$ (resp. $Of(X_0)$) are in bijection with $\Pi_0(X)$ (resp. $\Pi_0(X_0)$), the set of connected components of $X$ (resp. $X_0$).
\end{remark}
\begin{remark}
It is a consequence of \textnormal{\cite[Corollary 5.1.8]{EGA1}} that $(X,X_0)$ is an henselian couple if and only if $(X_{red},(X_0)_{red})$ is an henselian couple as well.
\end{remark}
\begin{remark}\label{henselian pairs are affine henselian couples}
It is immediate to observe that if $(A,I)$ is a pair and $(Spec(A),Spec(A/I))$ is an henselian couple, then $I$ is contained in the Jacobson radical of $A$. In fact, if $m\subseteq A$ is a maximal ideal, then we have a bijection
\[
Of(Spec(A/m))=Of(Spec(A/m\otimes_A A/I))
\]
In particular, $Spec(A/m\otimes_A A/I)$ cannot be the empty scheme. Therefore, as it is a closed subscheme of $Spec(A/m)$, we must have an equality $Spec(A/m)=Spec(A/m\otimes_A A/I)$, whence $I\subseteq m$. Moreover, if $Z\longrightarrow Spec(A)$ is a finite morphism, then $Z=Spec(B)$ is affine and the corresponding morphism $A\longrightarrow B$ is finite. Then we have bijections
\[
Id(B)=Of(Spec(B))=Of(Spec(B/IB))=Id(B/IB)
\]
We have just shown that an affine henselian couple is an henselian pair. The converse was observed at the beginning of this section.
\end{remark}
\begin{lemma}\label{a couple which is proper over a noetherian henselian pair is henselian}
Let $(A,I)$ be an henselian pair with $A$ noetherian and let $X$ be a proper $A$-scheme. Set $S=Spec(A)$, $S_0=Spec(A/I)$ and let $X_0=X\times_S S_0$. Then $(X,X_0)$ is an henselian couple.
\end{lemma}
\begin{proof}
This is a trivial consequence of Theorem \ref{proper base change over henselian pairs} and \cite[Exposé XII, Proposition 6.5 (i)]{SGA4}.
\end{proof}
\begin{lemma}\label{if the pair associated to a couple is henselian, the couple is henselian}
Let $X$ be a scheme and let $X_0$ be a closed subscheme. Let $A$ be a noetherian ring and assume that $X$ is proper over $Spec(A)$. Also assume that $X_0=X\times_{Spec(A)} Spec(A/I)$ for some ideal $I\subseteq A$. Put $J=ker(B=\mathscr{O}_X(X)\longrightarrow \mathscr{O}_{X_0}(X_0))$. If $(B,J)$ is an henselian pair, then $(X,X_0)$ is an henselian couple.
\end{lemma}
\begin{proof}
Let $(A^h,I^h)$ be the henselization of the couple $(A,I)$ given by \cite[Tag 0A02]{SP}. Then we have the following diagram
\[
\begindc{\commdiag}[30]
\obj(50,20)[1]{$(X,X_0)$}
\obj(0,0)[2]{$(Spec(A^h),Spec(A^h/I^h))$}
\obj(50,0)[3]{$(Spec(A),Spec(A/I))$}
\mor{1}{3}{$f$}
\mor{2}{3}{$\gamma$}
\enddc
\]
which induces the following diagram of pairs:
\[
\begindc{\commdiag}[30]
\obj(30,20)[1]{$(B,J)$}
\obj(0,0)[2]{$(A^h,I^h)$}
\obj(30,0)[3]{$(A,I)$}
\mor{3}{1}{$$}
\mor{3}{2}{$$}
\mor{2}{1}{$\psi$}[1,1]
\enddc
\]
The morphism $\psi$ is the one induced by the universal property of $(A^h,I^h)$. As
\[
Hom_{Rings}(A^h,B)=Hom_{Schemes}(X,Spec(A^h))
\]
the homomorphism $\psi$ identifies a unique morphism of schemes $\phi : X \longrightarrow Spec(A^h)$. Thus we get the following commutative diagram
\[
\begindc{\commdiag}[30]
\obj(30,20)[1]{$X$}
\obj(0,0)[2]{$Spec(A^h)$}
\obj(30,0)[3]{$Spec(A)$}
\mor{1}{2}{$\phi$}
\mor{2}{3}{$\gamma$}
\mor{1}{3}{$f$}
\enddc
\]
Moreover, by \cite[Tag 0AGU]{SP}, we get that
\[
\gamma^{-1}(Spec(A/I))=Spec(A^h\otimes_A A/I)=Spec(A^h/I^h)
\]
whence
\[
X\times_{Spec(A^h)} Spec(A^h/I^h)=X_0
\]
Therefore, the couple $(X,X_0)$ lies over the henselian couple $(Spec(A^h),Spec(A^h/I^h))$. Furthermore, $A^h$ is a noetherian ring (see \cite[Tag 0AGV]{SP}). Finally, as $f$ is a proper morphism and $\gamma$ is separated, we get that $\phi$ is proper as well by \cite[Proposition 3.3.16]{L}. Then we can conclude that $(X,X_0)$ is an henselian couple by the previous lemma.
\end{proof}
The previous lemma tells us that, under appropriate hypotheses, if the pair
\[
(\mathscr{O}_X(X),ker(\mathscr{O}_X(X)\longrightarrow \mathscr{O}_{X_0}(X_0)))
\]
is henselian, then $(X,X_0)$ is an henselian couple. It is natural to ask if the converse is true, i.e. if given an henselian couple $(X,X_0)$ the associated pair is henselian. An answer is provided by the next lemma.
\begin{lemma}\label{henselian pair associated to an henselian couple}
Let $X$ be a quasi-compact and quasi-separated scheme and let $i: X_0\longrightarrow X$ be a closed immersion such that $(X,X_0)$ is an henselian couple.\\ Then $(B,J)=(\mathscr{O}_X(X),ker(\mathscr{O}_X(X)\longrightarrow \mathscr{O}_{X_0}(X_0)))$ is an henselian pair.
\end{lemma}
\begin{proof}
By \cite[Tag 09XI]{SP}, it is sufficient to show that for every étale ring map $B\longrightarrow C$ together with a $B$-morphism $\sigma: C\longrightarrow B/J$, there exists a $B$-morphism $C\longrightarrow B$ which lifts $\sigma$. \\
Consider the cartesian diagram
\[
\begindc{\commdiag}[20]
\obj(0,30)[1]{$X_C=X\times_{Spec(B)} Spec(C)$}
\obj(50,30)[2]{$X$}
\obj(0,0)[3]{$Spec(C)$}
\obj(50,0)[4]{$Spec(B)$}
\mor{1}{2}{$$}
\mor{1}{3}{$$}
\mor{2}{4}{$$}
\mor{3}{4}{$$}
\enddc
\]
As $Spec(C)\longrightarrow Spec(B)$ is étale and separated, the morphism $X_C \longrightarrow X$ is étale and separated as well. Then, by \cite[Proposition 18.5.4]{EGA4.4}, we have a bijection
\[
\Gamma(X_C/X)\longrightarrow \Gamma(X_C\times_X X_0/X_0)
\]
between the sections of $X_C\longrightarrow X$ and those of $X_C\times_X X_0\longrightarrow X_0$.\\
\textit{Observation 1.} The universal property of $X_C\times_X X_0$ tells us that
\[
\Gamma(X_C\times_X X_0/X_0)\cong Hom_X(X_0,X_C)
\]
\textit{Observation 2.} Let $\mathscr{J}\subseteq \mathscr{O}_X$ be the sheaf of ideals associated to $X_0$. Then we have a short exact sequence of $\mathscr{O}_X$-modules
\[
0\longrightarrow \mathscr{J} \longrightarrow \mathscr{O}_X \longrightarrow i_*\mathscr{O}_{X_0}\longrightarrow 0
\]
Applying the global sections functor, we get an exact sequence
\[
0\longrightarrow J=\mathscr{J}(X)\longrightarrow \mathscr{O}_X(X)=B \longrightarrow \mathscr{O}_{X_0}(X_0)
\]
Hence, we have an homomorphism
\[
B/J\longrightarrow \mathscr{O}_{X_0}(X_0)
\]
Therefore, we get a morphism of schemes
\[
X_0 \longrightarrow Spec(\mathscr{O}_{X_0}(X_0))\longrightarrow Spec(B/J)
\]
Also notice that the diagram
\[
\begindc{\commdiag}[20]
\obj(0,30)[1]{$X_0$}
\obj(30,30)[2]{$X$}
\obj(0,0)[3]{$Spec(B/J)$}
\obj(30,0)[4]{$Spec(B)$}
\mor{1}{2}{$$}
\mor{1}{3}{$$}
\mor{2}{4}{$$}
\mor{3}{4}{$$}
\enddc
\]
is commutative.
Now consider the diagram
\[
\begindc{\commdiag}[20]
\obj(0,30)[1]{$X_C$}
\obj(30,30)[2]{$X$}
\obj(0,0)[3]{$Spec(C)$}
\obj(30,0)[4]{$Spec(B)$}
\obj(-30,50)[5]{$X_0$}
\obj(-30,-20)[6]{$Spec(B/J)$}
\mor{1}{2}{$$}
\mor{1}{3}{$$}
\mor{2}{4}{$$}
\mor{3}{4}{$$}
\mor{5}{2}{$$}
\mor{5}{3}{$$}
\mor{5}{6}{$$}
\mor(-28,48)(-2,32){$\exists ! \hspace{0.1cm}\tilde{\alpha}$}[0,1]
\mor{6}{3}{$$}
\mor{6}{4}{$$}
\enddc
\]
Label $\tilde{\alpha}: X_0\longrightarrow X_C$ the $X$-morphism provided by the universal property of $X_C$ and let $\alpha : X \longrightarrow X_C$ be the corresponding $X$-morphism in $\Gamma(X_C/X)$.\\
Consider the following commutative diagram
\[
\begindc{\commdiag}[20]
\obj(0,40)[1]{$X$}
\obj(30,40)[2]{$X_C$}
\obj(60,40)[3]{$X$}
\obj(30,20)[4]{$Spec(C)$}
\obj(60,20)[5]{$Spec(B)$}
\obj(60,0)[6]{$Spec(B/J)$}
\mor{1}{2}{$\alpha$}
\mor{2}{3}{$$}
\mor{2}{4}{$$}
\mor{3}{5}{$$}
\mor{4}{5}{$$}
\mor{6}{4}{$$}
\mor{6}{5}{$$}
\mor{1}{4}{$$}
\cmor((0,43)(2,48)(10,50)(30,50)(50,50)(58,48)(60,43)) \pdown (30,52){$id_X$}
\enddc
\]
and the corresponding commutative diagram in \textit{Rings}:
\[
\begindc{\commdiag}[20]
\obj(0,40)[1]{$B$}
\obj(30,40)[2]{$\mathscr{O}_{X_C}(X_C)$}
\obj(60,40)[3]{$B$}
\obj(30,20)[4]{$C$}
\obj(60,20)[5]{$B$}
\obj(60,0)[6]{$B/J$}
\mor{2}{1}{$\alpha$}
\mor{3}{2}{$$}
\mor{4}{2}{$$}
\mor{5}{3}{$id_B$}[\atright,\solidarrow]
\mor{5}{4}{$\phi$}
\mor{4}{6}{$\sigma$}[\atright,\solidarrow]
\mor{5}{6}{$\pi$}
\mor{4}{1}{$\psi$}
\cmor((0,43)(2,48)(10,50)(30,50)(50,50)(58,48)(60,43)) \pdown (30,52){$id_B$}
\enddc
\]
It is then clear that $\psi$ is the $B$-morphism we were looking for. This concludes the proof of the lemma.
\end{proof}
\begin{corollary}\label{an henselian couple proper over a noetherian ring is proper over an henselian pair}
Let $(X,X_0)$ be an henselian couple. Assume that $X$ is proper over a noetherian ring $A$ and that $X_0=X\times_{Spec(A)} Spec(A/I)$ for some ideal $I\subseteq A$. Then $(X,X_0)$ is proper over an henselian pair.
\end{corollary}
\begin{proof}
As $X$ is proper over $Spec(A)$, it is a quasi-compact and quasi-separated scheme. Hence, by Lemma \ref{henselian pair associated to an henselian couple}, $(\mathscr{O}_X(X),ker(\mathscr{O}_X(X)\longrightarrow \mathscr{O}_{X_0}(X_0)))$ is an henselian pair. Therefore, by the same construction described in Lemma \ref{if the pair associated to a couple is henselian, the couple is henselian}, we get that $(X,X_0)$ is proper over $(A^h,I^h)$.
\end{proof}
\begin{corollary}\label{characterization of henselian couples proper over a noetherian ring}
Let $(X,X_0)$ be a couple and assume that $X$ is proper over a noetherian ring $A$ and that $X_0=X\times_{Spec(A)} Spec(A/I)$ for some ideal $I\subseteq A$. Then $(X,X_0)$ is an henselian couple if and only if $(\mathscr{O}_X(X),ker(\mathscr{O}_X(X)\longrightarrow \mathscr{O}_{X_0}(X_0)))$ is an henselian pair.
\end{corollary}
By Remark \ref{first step in the solution} every henselian couple $(X,X_0)$ which arises as in Lemma \ref{a couple which is proper over a noetherian henselian pair is henselian} satisfies conditions \textit{(ii)} and \textit{(iii)} in \cite[Exposé XII, Proposition 6.5]{SGA4} with $\mathbb{L}=\mathbb{P}$ and for every $n$.
Then, applying Corollary \ref{an henselian couple proper over a noetherian ring is proper over an henselian pair}, we get the following result:
\begin{theorem}
Let $(X,X_0)$ be an henselian couple. Assume that $X$ is proper over a noetherian ring $A$ and that $X_0=X\times_{Spec(A)} Spec(A/I)$ for some ideal $I\subseteq A$. Then conditions \textit{(ii)} and \textit{(iii)} in \textnormal{\cite[Exposé XII, Proposition 6.5]{SGA4}} are satisfied with $\mathbb{L}=\mathbb{P}$ and for every $n$.
\end{theorem}
This gives a positive answer to Question \ref{conj1} when hypothesis $(\dagger)$ holds.
\textbf{Acknowledgments.} A special thank you to Moritz Kerz, who introduced me to the problem treated in this paper. In particular, I would like to point out that he mentioned Popescu's Theorem to me, which I did not know until then, recognizing that it could be a helpful tool for my purposes. I also wish to thank him for the time he dedicated to the review of this paper.\\
I also wish to thank Federico Binda for the many interesting discussions I had with him and for his precious advice.
|
1,108,101,566,402 | arxiv | \section{Introduction}
Atomic nuclei are among the most fascinating quantum many-body
systems that depict a rich variety of shapes and structures
\cite{BM75}. Most of the nuclei are known to have axially-symmetric,
dominantly quadrupole, deformed shape in the ground-state. However,
there are also regions in the nuclear periodic table, referred to as
transitional regions, where axial-symmetry is broken and a triaxial
mean-field description is appropriate to characterize the properties
of these nuclei \cite{JS99}. For nuclei depicting triaxial shapes,
there is a long-standing issue whether these nuclei have rigid or
soft $\gamma$-deformation (see, for example, discussions in Refs.
\cite{SW01,GS03,DJ09}). Traditionally, there are two extreme
phenomenological models that describe the triaxiality: the one with
a rigid-$\gamma$ deformation of Davydov and Filippov (DF) \cite{AS58}
and the $\gamma$-soft model of Wilets and Jean \cite{LM56}. Both
models give rise to similar level energies and $B(E2)$ transition
strengths for the ground-state bands and, therefore, it is
impossible to discriminate between the two different modes of excitation. In
fact, there have been suggestions \cite{OS87} that the two
descriptions are equivalent and intrinsic single-particle wave
functions obtained from a triaxially-deformed well are useful in
describing low-lying collective states. However, it has been
demonstrated \cite{NV91} that the phase of the odd-even staggering
(i.e. the staggering of the odd- and even-spin levels) of the
observed $\gamma$-bands could shed light on the nature of the
triaxial shape with rigid-$\gamma$ rotation exhibiting an opposite
staggering pattern to that of the $\gamma$-soft case.
Recently, using the experimental techniques of above-barrier Coulomb
excitation and inelastic scattering, $\gamma$-band energies of
$^{76}$Ge have been extended considerably \cite{YT13}. It has been
shown that the odd-even staggering of the $\gamma$-band is quite
opposite to that of all neighboring nuclei and is in conformity with
that expected for a rigid-$\gamma$ deformation \cite{YT13,EA07}.
This is one of the rare examples of atomic nuclei exhibiting
rigid-$\gamma$ deformation in the low-lying states. The observed
yrast- and excited states have been discussed \cite{YT13} using the
DF model and also the spherical shell model (SSM) approaches. In the
SSM approach \cite{NY08}, the pairing plus quadrupole-quadrupole
interaction was employed in the
$\{g_{9/2},\,p_{1/2},\,p_{3/2},\,f_{5/2}\}$ configuration space, and
it has been demonstrated that SSM provides a very good description
of the observed data for the low-lying states in $^{76}$Ge.
The purpose of the present work is to investigate the high-spin
properties of $^{76}$Ge using the multi-quasiparticle (qp) triaxial
projected shell model (TPSM) approach
\cite{GH08,JG09,JG11,GJ12,Ch12}. In the SSM analysis, the primary
emphasis was on the low-lying states and the present investigation
complements the results obtained by the SSM approach. In TPSM, apart
from 0-qp, 2- and 4-qp configurations are explicitly included in the
basis space. Therefore, in this model it is possible to investigate
the high-spin band-structures, which provides important information
on the interplay between collective and single-particle excitations,
and thus to probe single-particle structures in the neutron-rich
mass region. In the present study, we have also performed a detailed
study of the neighboring nuclei to investigate the nature of
$\gamma$-deformation in these nuclei in comparison to $^{76}$Ge.
The manuscript is organized as follows. In the next section, we
provide a few details of the TPSM model for completeness and further
details can be found in our earlier publications
\cite{GH08,JG09,JG11,GJ12,Ch12,YK00}. Section III is completely
devoted to the investigation of $^{76}$Ge and in section IV, the
results of the neighboring Ge- and Se-isotopes are presented and
discussed. Finally, in section V, we provide a summary of the work
performed in the present manuscript.
\section{Outline of the triaxial projected shell model}
In TPSM, triaxially-deformed Nilsson states are employed as a
starting basis to describe a nucleus exhibiting axial and triaxial
deformations. An explicit three-dimensional angular-momentum
projection is then performed for configurations built from the
deformed Nilsson states. A triaxial qp configuration is an admixture
of different $K$ (projection along the symmetry axis) states, and
the vacuum configuration is composed of $K=0,2,4,...$
states for an even-even system. It was shown \cite{YK00} that the
angular-momentum projection from the $K=0$, 2, and 4 states
correspond to the ground, $\gamma$- and $\gamma\gamma$- bands,
respectively. The model has recently been extended
\cite{GH08,JG09,JG11,GJ12,Ch12,GC06,SB10} to include multi-qp
configurations in the model space, which allows one to describe
states of collective $\gamma$-vibrations and qp excitations on an
equal footing. For instance, the multi-qp TPSM approach has been
used to investigate the interplay between the vibrational and the
quasi-particle excitation modes in $^{166-172}$Er \cite{JG11}. It
was demonstrated that the low-lying $K=3$ bands observed in these
nuclei, the nature of which had remained unresolved, are built on
triaxially-deformed 2-qp states. This band is observed to interact
with the $\gamma$-vibrational band and becomes favored at high
angular-momentum for some Er-nuclei. In another study \cite{Ch12},
the long-standing puzzle of the very different $E2$ decay rates from
the same 2-quasineutron $K^\pi = 6^+$ isomers in the $N = 104$
isotones was investigated. It was shown that the highly
$K$-forbidden transition from the $6^+$ isomer to the ground-state
band is sensitive to the mixing with the $6^+$ state of the
$\gamma$-vibrational band.
For even-even systems, the TPSM basis is composed of 0-qp (qp
vacuum), 2-proton, 2-neutron, and 4-qp configurations, i.e.,
\begin{eqnarray}
\{ \hat P^I_{MK}\left|\Phi\right>, ~\hat P^I_{MK}~a^\dagger_{p_1}
a^\dagger_{p_2} \left|\Phi\right>, ~\hat P^I_{MK}~a^\dagger_{n_1}
a^\dagger_{n_2} \left|\Phi\right>, \nonumber \\~\hat
P^I_{MK}~a^\dagger_{p_1} a^\dagger_{p_2} a^\dagger_{n_1}
a^\dagger_{n_2} \left|\Phi\right> \}, \label{basis}
\end{eqnarray}
where $P^I_{MK}$ is the three-dimensional
angular-momentum-projection operator \cite{RS80} and
$\left|\Phi\right>$ in (\ref{basis}) represents the triaxial qp
vacuum state. The qp basis chosen in (\ref{basis}) is adequate to
describe high-spin states up to $I\sim 20\hbar$ for even-even
systems. In the present analysis we shall, therefore, restrict our
discussion to this spin regime. It is noted that for the case of
axial symmetry, the qp vacuum state has $K=0$ \cite{KY95}, whereas
in the present case of triaxial deformation, the vacuum state is a
superposition of all possible $K$-values. Rotational bands with the
triaxial basis states, Eq. (\ref{basis}), are obtained by specifying
different values for the $K$-quantum number in the angular-momentum
projection operator \cite{RS80}.
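For completeness, we recall the explicit form of the three-dimensional angular-momentum-projection operator \cite{RS80},
\begin{equation}
\hat P^I_{MK} = \frac{2I+1}{8\pi^2}\int d\Omega\, D^{I\,*}_{MK}(\Omega)\,\hat R(\Omega),
\end{equation}
where $\Omega$ denotes the Euler angles, $D^I_{MK}$ is the Wigner $D$-function, and $\hat R(\Omega)$ is the rotation operator.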
\begin{table}
\caption{The axial deformation parameter, $\epsilon$, and triaxial
deformation parameter, $\epsilon'$, employed in the calculation for
$^{70-80}$Ge and $^{76-82}$Se. The $\gamma$ deformation is related
to the above two parameters through $\gamma = \tan^{-1}
(\epsilon'/\epsilon)$. $\epsilon$ is related to the $\beta$
deformation through $\epsilon = 0.95\times\beta$.}
\begin{tabular}{c|cccccccccc}
\hline & $^{70}$Ge &$^{72}$Ge & $^{74}$Ge & $^{76}$Ge & $^{78}$Ge & $^{80}$Ge &$^{76}$Se & $^{78}$Se & $^{80}$Se & $^{82}$Se \\
\hline $\epsilon$ & 0.235 & 0.230 & 0.220 & 0.200 & 0.210 & 0.200 & 0.260 &0.256 & 0.220 & 0.180 \\
$\epsilon'$& 0.145 & 0.150 & 0.155 & 0.160 & 0.150 & 0.145 & 0.155 & 0.150 & 0.130 &
0.130 \\
$\gamma$ & 31.68 & 33.11 & 35.17 & 38.66 &
35.54 & 35.94 & 30.81 & 30.37 & 30.58 & 35.84
\\\hline
\end{tabular}\label{TableDeforPara}
\end{table}
\begin{table}
\caption{The QQ-force strengths $\chi$ in unit of $10^{-2}$ MeV for
$^{70-80}$Ge and $^{76-82}$Se isotopes. }
\begin{tabular}{cccc}
\hline
& $\chi_{nn}$ & $\chi_{pp}$ & $\chi_{np}$ \\\hline
$^{70}$Ge & 8.9649 & 7.9944 & 8.4658 \\
$^{72}$Ge & 8.7473 & 7.5382 & 8.1202 \\
$^{74}$Ge & 8.5451 & 7.1283 & 7.8046 \\
$^{76}$Ge & 7.8172 & 6.3219 & 7.0299 \\
$^{78}$Ge & 8.0674 & 6.3338 & 7.1482 \\
$^{80}$Ge & 8.6937 & 6.6345 & 7.5947 \\
$^{76}$Se & 7.8240 & 6.7959 & 7.2918 \\
$^{78}$Se & 7.6688 & 6.4576 & 7.0372 \\
$^{80}$Se & 7.7027 & 6.2968 & 6.9643 \\
$^{82}$Se & 8.0623 & 6.4064 & 7.1868 \\
\hline
\end{tabular}\label{ChiValue}
\end{table}
As in the earlier projected shell model \cite{KY95} calculations, we
use the pairing plus quadrupole-quadrupole Hamiltonian
\begin{equation}
\hat H = \hat H_0 - {1 \over 2} \chi \sum_\mu \hat Q^\dagger_\mu
\hat Q^{}_\mu - G_M \hat P^\dagger \hat P - G_Q \sum_\mu \hat
P^\dagger_\mu\hat P^{}_\mu . \label{hamham}
\end{equation}
Here $\hat H_0$ is the spherical single-particle Hamiltonian which
contains a proper spin-orbit force described by the Nilsson
parameters \cite{Ni69}. The QQ-force strength $\chi$ is related to
the quadrupole deformation $\epsilon$ as a result of the
self-consistent HFB condition and the relation is given by
\cite{KY95}:
\begin{equation}
\chi_{\tau\tau'} =
{{{2\over3}\epsilon\hbar\omega_\tau\hbar\omega_{\tau'}}\over
{\hbar\omega_n\left<\hat Q_0\right>_n+\hbar\omega_p\left<\hat
Q_0\right>_p}},\label{chi}
\end{equation}
where $\omega_\tau = \omega_0 a_\tau$, with $\hbar\omega_0=41.4678
A^{-{1\over 3}}$ MeV, and the isospin-dependence factor $a_\tau$ is
defined as
\begin{equation}
a_\tau = \left[ 1 \pm {{N-Z}\over A}\right]^{1\over 3},\nonumber
\end{equation}
with $+$ $(-)$ for $\tau =$ neutron (proton). The harmonic
oscillation parameter is given by $b^2_\tau=b^2_0/a_\tau$ with
$b^2_0=\hbar/{(m\omega_0)}=A^{1\over 3}$ fm$^2$. With Eq.
(\ref{chi}) and the deformation parameters in Table
\ref{TableDeforPara}, the QQ-force strength $\chi$ for all nuclei
studied in the present work can then be determined and are shown in
Table \ref{ChiValue}. The monopole pairing strength $G_M$ (in MeV)
is of the standard form
\begin{equation}
G_M = {{G_1 - G_2{{N-Z}\over A}}\over A} ~{\rm for~neutrons,}~~~~
G_M = {G_1 \over A} ~{\rm for~protons.} \label{pairing}
\end{equation}
In the present calculation, we take $G_1=20.82$ and $G_2=13.58$,
which approximately reproduce the observed odd-even mass difference
in the mass region. This choice of $G_M$ is appropriate for the
single-particle space employed in the model, where three major
shells are used for each type of nucleons ($N=3,4,5$ for both
neutrons and protons). The quadrupole pairing strength $G_Q$ is
assumed to be proportional to $G_M$, with the proportionality
constant fixed as 0.18. These interaction strengths are
consistent with those used earlier for the same mass region
\cite{js01,rp01}.
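To make the above parameter choices concrete, the following minimal Python sketch (an illustration only, not part of the TPSM code) evaluates the isospin factors $a_\tau$, the oscillator energy $\hbar\omega_0$, the pairing strengths, and the triaxiality angle $\gamma=\tan^{-1}(\epsilon'/\epsilon)$ for $^{76}$Ge ($N=44$, $Z=32$):
\begin{verbatim}
import math

N, Z = 44, 32                          # 76Ge
A = N + Z

# isospin-dependence factors a_tau = [1 +/- (N-Z)/A]^(1/3)
a_n = (1 + (N - Z) / A) ** (1 / 3)     # neutrons, ~1.050
a_p = (1 - (N - Z) / A) ** (1 / 3)     # protons,  ~0.944

# oscillator energy hbar*omega_0 = 41.4678 * A^(-1/3) MeV
hw0 = 41.4678 * A ** (-1 / 3)          # ~9.79 MeV

# monopole pairing strengths with G1 = 20.82, G2 = 13.58
G1, G2 = 20.82, 13.58
GM_n = (G1 - G2 * (N - Z) / A) / A     # ~0.246 MeV (neutrons)
GM_p = G1 / A                          # ~0.274 MeV (protons)

# quadrupole pairing strengths, G_Q = 0.18 * G_M
GQ_n, GQ_p = 0.18 * GM_n, 0.18 * GM_p

# triaxiality angle gamma = atan(eps'/eps)
eps, eps_prime = 0.20, 0.16
gamma = math.degrees(math.atan(eps_prime / eps))  # ~38.66 deg
\end{verbatim}
The value of $\gamma$ obtained in this way reproduces the corresponding entry for $^{76}$Ge in Table \ref{TableDeforPara}.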
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{band_ver2.eps}} \caption{(Color
online) Theoretical band diagram for $^{76}$Ge. The labels $(K,\#)$
characterize the states, with $K$ denoting the $K$ quantum number
and $\#$ the number of quasiparticles. For example, (0,0), (2,0),
and (4,0) correspond to the $K=0$ ground-, $K=2$ $\gamma$-, and
$K=4$ $\gamma$$\gamma$-band, respectively, projected from the 0-qp
state. (1,2n) and (3,2n) correspond to the projected
2-neutron-aligned states, (1,2p) and (3,2p) to the 2-proton-aligned
states, and (2,4) and (4,4) to the 2-neutron-plus-2-proton aligned
states, with different $K$ quantum numbers.} \label{fig1}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{energy_ver2.eps}}
\caption{(Color online) Comparison of the calculated band
energies with available experimental data for $^{76}$Ge. Data are
taken from Ref. \cite{YT13}.} \label{fig2}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{stag_ver2df.eps}} \caption{(Color
online) Comparison of the TPSM calculation with experimental data
\cite{YT13} for $^{76}$Ge. Results of the spherical shell model
(SSM) calculations are also shown. (a) Staggering parameter S(I) for
the $\gamma$ band, and (b) B(E2) values for the yrast band. The
B(E2) data are taken from Ref. \cite{BE2}. } \label{fig3}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{76ge_wavef.eps}} \caption{(Color online)
Probabilities of the projected configurations in the yrast-, 1st-,
and 2nd-excited bands.} \label{fig4}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.35\textwidth,clip]{MI_76ge.eps}} \caption{(Color online)
Comparison of the TPSM calculation with experimental data
\cite{YT13} for $^{76}$Ge for the relation between spin $I$ and
transition energy $E_\gamma$. Results of the spherical shell model
(SSM) calculations \cite{NY08} are also shown. } \label{fig5}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{stag_ver55df.eps}} \caption{(Color
online) (a) Calculated B(E2) values and (b) transition energies
$E_\gamma$ for the yrast-band of $^{76}$Ge with varying
$\epsilon'$.} \label{fig6}
\end{figure}
\section{Results of $^{76}$Ge and rigid $\gamma$-deformation}
TPSM calculations proceed in several stages. In the first stage, the
deformed basis space is constructed by solving the
triaxially-deformed Nilsson potential. In the present work, we have
employed $\epsilon=0.20$ and $\epsilon'=0.16$ (see Table
\ref{TableDeforPara}) in the Nilsson potential to generate the
deformed basis for $^{76}$Ge. The value of $\epsilon$ has been
adopted from the earlier study \cite{LU07} and the value of
$\epsilon'$ has been chosen so that the behavior of the $\gamma$
band is properly described. We shall discuss later the dependence of
the calculation on the triaxial parameter. Pairing is described by
performing a BCS calculation for the single-particle states
generated by the triaxially-deformed Nilsson potential. In the
present work, no particle-number projection is included, and
therefore, this quantum number is conserved only on the average at
the BCS level. In the second step, the good angular-momentum states
are obtained from the deformed basis by employing the
three-dimensional angular-momentum projection technique. The
projected bands obtained from 0-, 2-, and 4-qp states close to the
Fermi surface are displayed in Fig.~\ref{fig1} (the so-called band
diagram, see Ref. \cite{KY95}). The projection from the 0-qp
configuration gives rise to band structures with $K=0,2,4$,
corresponding to the ground-, $\gamma$- and $\gamma\gamma$-band
\cite{YK00}. The calculated band-head energies of the $\gamma$- and
$\gamma\gamma$-bands are about 1.21 MeV and 3.03 MeV, respectively,
above the ground state.
It is observed from Fig.~\ref{fig1} that the projected bands from
2-quasineutron state having $K=1$ and 3 cross the ground-state band
at $I=8$. These bands are the $\gamma$-bands built on the
2-quasineutron-aligned configurations.
at higher excitation energies as compared to the 2-neutron states,
and therefore, do not cross the ground-state band. Further, at
$I=18$, the 4-qp structures (2-quasineutron plus 2-quasiproton)
having $K=2$ and 4 cross the yrast-configuration. We stress that in
Fig.~\ref{fig1}, only the lowest bands are displayed for clarity. In
the actual analysis, we use more than thirty-five configurations in
the mixing for each spin-state.
In the third and final stage, the projected basis states are used to
diagonalize the shell model Hamiltonian, Eq.~(\ref{hamham}). The
band energies, obtained after diagonalization, are shown in
Fig.~\ref{fig2} with the available experimental data. It is evident
from the figure that TPSM results are in excellent agreement with
the known experimental energies. In Fig.~\ref{fig2}, the excitation
spectrum is predicted for the $\gamma\gamma$-band, and we hope that
this well-developed band will be populated in future experimental
studies.
In order to understand the nature of the triaxial shape in
$^{76}$Ge, the staggering parameter, defined as,
\begin{equation}
S(I) = \frac{[E(I)-E(I-1)]-[E(I-1)-E(I-2)]}{E(2^{+}_1)} \label{SI}
\end{equation}
is plotted for the $\gamma$-band in Fig.~\ref{fig3}(a). In the same
figure we also provide the existing results of the SSM approach
\cite{NY08}. It is evident from the figure that the experimental
staggering parameter for the known energy levels is reproduced quite
accurately by the TPSM calculations and also by the SSM study. The
TPSM results indicate that above spin $I=10$, the staggering
amplitudes become smaller, and the reason for this is due to a
considerable mixing of the 2-qp configurations with the
$\gamma$-band at higher spins. In order to probe the mixing, the
probabilities of various projected configurations are plotted in
Fig.~\ref{fig4} for the yrast, the 1st-, and the 2nd-excited bands.
The yrast band up to $I=8$ is dominated by the 0-qp configuration
with $K=0$, and above this spin the 2-neutron-aligned band is the
dominant configuration. Above $I=16$, the yrast band is primarily
composed of 4-qp configurations. The 1st-excited band has the
dominant $K=2$ 0-qp configuration until $I=7$ and, therefore, is the
$\gamma$-band. However, above $I=7$, the 1st-excited band has $K=0$
dominant component. The 2nd-excited band has a dominant $K=4$ 0-qp
configuration, referred to as $\gamma\gamma$-band, up to $I=7$.
Above this spin value, mixed structures are obtained. The $K=2$
state from the 0-qp configuration seems to become important along
with some 2-qp configurations.
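For reference, the staggering parameter of Eq. (\ref{SI}) can be evaluated directly from a list of $\gamma$-band level energies. The short Python sketch below illustrates the definition and its sign convention; the energies entered here are hypothetical placeholders, not the measured values:
\begin{verbatim}
def staggering(E, E_2plus):
    """S(I) = ([E(I)-E(I-1)] - [E(I-1)-E(I-2)]) / E(2_1+).

    E: dict mapping spin I (in hbar) to gamma-band energy (MeV);
    E_2plus: energy of the 2_1+ ground-band state (MeV).
    """
    return {I: ((E[I] - E[I-1]) - (E[I-1] - E[I-2])) / E_2plus
            for I in sorted(E) if I-1 in E and I-2 in E}

# hypothetical gamma-band energies (MeV), for illustration only
E_gamma = {2: 1.1, 3: 1.4, 4: 1.9, 5: 2.3, 6: 2.9, 7: 3.4, 8: 4.1}
print(staggering(E_gamma, E_2plus=0.56))
\end{verbatim}
In this convention, the relative phase of $S(I)$ at even and odd spins distinguishes the two staggering patterns discussed above.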
We have also evaluated quadrupole transition probabilities along the
yrast band in the framework of TPSM \cite{JY01}. The standard
effective charges ($e_\pi=1.5e$ and $e_\nu=0.5e$) are used in the
calculation for $^{76}$Ge, and later for all other nuclei studied in
the present work. Experimentally, data for the lowest two
transitions in the yrast band of $^{76}$Ge are available \cite{BE2}.
In the lower panel of Fig.~\ref{fig3}, the $B(E2)$ transition
probabilities are plotted as a function of spin. The calculated
transitions from the SSM approach \cite{NY08} are also displayed in
the figure for comparison. It is seen from the figure that the TPSM
results reproduce the lowest two known transitions quite well while
the SSM values \cite{NY08} are somewhat under-predicted. The
calculated transitions using the TPSM approach predict a drop at
$I=8$ due to the crossing of the 2-quasineutron-aligned band at this
spin value. Above $I=8$, the $B(E2)$ transitions are predicted to
increase rapidly with spin and then drop again at $I=18$ due to the
alignment of two more quasiprotons. On the other hand, the
SSM-predicted transitions depict an increase for the $I=4
\rightarrow$ 2 transition, but above this spin value the SSM
transitions show almost a constant behavior. Thus, there appears to
be a discrepancy between the TPSM and SSM results for the transition
probabilities and it is highly desirable to perform the lifetime
measurements for the high-spin states in $^{76}$Ge. As quadrupole
transition probabilities measure the collective behavior of a
system, a correct description of them usually requires a sufficiently
large model space \cite{YS09}.
\begin{table}
\caption{Ratios of B(E2) rates between states with initial
spin-parity $I^\pi_i$ and final $I^\pi_{f1}$ and $I^\pi_{f2}$, given
by R = B(E2; $I^\pi_i$ $\rightarrow$ $I^\pi_{f1}$)/ B(E2; $I^\pi_i$
$\rightarrow$ $I^\pi_{f2}$). Experimental values \cite{YT13} are
compared with those calculated by the TPSM, the Davydov and Filippov
model (DF), and the spherical shell model (SSM) \cite{NY08}.}
\begin{tabular}{ccccccc}\hline
\hline $I^\pi_{i}$ & $I^\pi_{f1}$ & $I^\pi_{f2}$ &R$_{\rm{Expt.}}$ &R$_{\rm{TPSM}}$ &R$_{\rm{DF}}$ &R$_{\rm{SSM}}$\\
\hline
$2_2^+$ & $0_1^+$ & $2_1^+$ & 0.027(\textit{3}) & 0.05 & 0 & 0.04 \\
$3_1^+$ & $2_1^+$ & $2_2^+$ & 0.029(\textit{$^{+6}_{-4}$}) & 0.04 & 0 & 0.06 \\
$4_2^+$ & $4_1^+$ & $2_2^+$ & 1.34(\textit{4}) & 1.62 & 0.46 & 0.93 \\
$5_1^+$ & $4_2^+$ & $3_1^+$ & $<$6.3 & 1.35 & 1.0 & 1.29 \\
$6_2^+$ & $4_1^+$ & $4_2^+$ & 0.038(\textit{14}) & 0.19 & 0 &
0.48 \\\hline\hline
\end{tabular}\label{TableBE2}
\end{table}
In Table \ref{TableBE2}, a comparison is provided for the measured
ratios of the $B(E2)$ transition strengths with TPSM predictions and
also with results obtained using the SSM and DF model approaches
\cite{YT13}. It is noted that both TPSM and SSM provide a reasonable
description of the known transitions.
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.47\textwidth,clip]{theexpt_ge.eps}}
\caption{(Color online) Comparison of the calculated band
energies with available experimental data for $^{70,72,74,78,80}$Ge.
Data are taken from Ref. \cite{DA70,DA72,DA74,DA78,DA80}.} \label{fig7}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.47\textwidth,clip]{theexpt_se.eps}}
\caption{(Color online) Comparison of the calculated band
energies with available experimental data for $^{76-82}$Se. Data are
taken from Ref. \cite{DA78,DA80,DA76,DA82}.} \label{fig8}
\end{figure}
Aligned quasiparticles carry valuable information about the single
particle structures in the neutron-rich mass region. To explore the
alignment behavior in $^{76}$Ge, angular-momentum is displayed in
Fig.~\ref{fig5} as a function of transition energy $E_{\gamma}$ for
the measured data, which is compared with the present TPSM results
and the corresponding SSM ones. It is clearly seen that the three
curves coincide with each other at low-spins, indicating an
excellent agreement of both the calculations with experiment.
However, it is noted that after $I=8$, the SSM results deviate from
the experimental ones for higher $E_{\gamma}$. The TPSM results, on
the other hand, appear to give a better description of the data,
although it also cannot reproduce the data point at $I=12$. For
high spin states, TPSM predicts smaller $E_{\gamma}$, thus larger
moments of inertia for this nucleus. The predicted TPSM behavior can
be understood as the results of mixing of multi-qp states at high
spins (see Fig. \ref{fig4} and discussions), which continuously
supplies angular momentum to the system as spin increases. There
could be several reasons for the discrepancy noted at $I=12$ in
Fig.~\ref{fig5}. We consider that the major reason could be the
constant pairing approximation used in the present TPSM approach.
The BCS equations are solved for the ground-state and the same
pairing solution is employed for all the states. This is clearly a
crude approximation for high-spin states, as it is known that pairing
correlations are reduced for these states.
In order to investigate the importance of the triaxiality on the
high-spin properties in $^{76}$Ge, the spin-dependence of $B(E2)$
transition probabilities and the transition energies are plotted in
Fig.~\ref{fig6} for varying values of $\epsilon'$. In the upper
panel, for all values of $\epsilon'$, $B(E2)$ show drops at about
$I=8$ and 16 corresponding to band-mixings. However, for lower
values of $\epsilon'$, substantial drops indicate more sudden
changes in the wave functions as compared to the case of
$\epsilon'=0.16$. The angular-momentum plot against $E_{\gamma}$ in
the lower panel of Fig.~\ref{fig6} depicts sharp backbends for lower
values of $\epsilon'$, again due to sharper band-crossings. For
higher values of $\epsilon'$, angular-momentum plot shows a smooth
upward trend and for $\epsilon'=0.16$ the behavior agrees with the
experimental data, corresponding to the triaxiality parameter
$\gamma \approx 30^\circ$.
We would like to add that the successful application of the DF model for
$^{76}$Ge to describe the observed $\gamma$ band \cite{YT13} favors
the picture of a rigid-$\gamma$ deformation for this system.
Nevertheless, this model is clearly an over-simplified approach. It
has been pointed out \cite{YK00,YS08} that the underlying physical
picture of generating $\gamma$-vibration in deformed nuclei,
suggested in the framework of TPSM, is analogous to the classical
picture of Davydov and Filippov \cite{AS58}, yet TPSM is a fully
microscopic method. It is interesting to see that both shell models
(SSM and TPSM), though starting from quite different bases
(spherically symmetric vs. triaxially deformed) give nearly
identical results for the low-lying states of $^{76}$Ge, as seen in
Figs. \ref{fig3} and \ref{fig5}, as well as Table \ref{TableBE2}.
Deviations of the results of TPSM from SSM are predicted for
high-spin states (Fig. \ref{fig5}). The extension of measurements to
higher spin is highly desirable as this will shed light on the
limitations of the SSM and the TPSM approaches.
\section{Results and discussions of the neighboring nuclei}
It has been pointed out in Ref. \cite{YT13} that $^{76}$Ge is a
unique example in this mass region that depicts a rigid
$\gamma$-deformation with the staggering phase of the $\gamma$-band
in conformity with the DF model. All other nuclei in the
neighborhood have a staggering phase opposite to that of $^{76}$Ge and
are categorized as $\gamma$-soft nuclei. It is, therefore, quite
interesting to study neighboring nuclei, as well, in order to probe
the mechanisms behind the opposing staggering phase of $^{76}$Ge in
relation to its neighbors. To have a complete overview for the mass
region, we have performed extensive calculations for other even-even
Ge-isotopes, $^{70,72,74,78,80}$Ge, as well as for some
Se-isotopes, $^{76,78,80,82}$Se. For these calculations, the axial
deformations $\epsilon$ are taken from Ref. \cite{Raman} (converted
from $\beta$ to $\epsilon$ by multiplying by a factor of 0.95) and the
values are listed in Table \ref{TableDeforPara}. The values for
$\epsilon'$, given also in Table \ref{TableDeforPara}, are chosen in
such a way that the observed band head of the $\gamma-$band is
reproduced. In a few cases where the $\gamma$-band has not been
observed, the $\epsilon'$ of the neighboring nucleus is adopted. The
interaction strengths in Eqs. (\ref{hamham}) and (\ref{pairing}) are
kept the same as in the $^{76}$Ge calculation. In Figs. \ref{fig7}
and \ref{fig8}, the calculated band energies for these nuclei are
compared with the available experimental data. The results clearly
indicate that the TPSM approach also provides a good description for
these nuclei, in addition to $^{76}$Ge.
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{stagg_Ge_v2.eps}} \caption{(Color
online) Comparison of calculated staggering parameter S(I) for the
$\gamma$ band before and after configuration mixing for
$^{70-80}$Ge. The S(I) values before mixing are divided by a factor of three
so that they fit in the figure.} \label{fig9}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{stagg_Se_v1.eps}} \caption{(Color
online) Comparison of calculated staggering parameter S(I) for the
$\gamma$ band before and after configuration mixing for
$^{76-82}$Se. The S(I) values before mixing are divided by a factor of three
so that they fit in the figure. } \label{fig10}
\end{figure}
We shall now turn to the discussion of the staggering phase of the
nuclei depicted in Figs. \ref{fig7} and \ref{fig8} in relation to
$^{76}$Ge. First of all we would like to mention that the model
space in TPSM is constructed from a triaxially-deformed basis with a
given set of deformation parameters $(\epsilon, \epsilon')$ shown in
Table \ref{TableDeforPara}. There are no explicit phonon or
vibrational degrees of freedom in the model. Naively, a model based
on a fixed triaxial deformation is of the kind of the Davydov and
Filippov model \cite{AS58}. However, the TPSM is a fully microscopic
theory, and fixed deformations are only used for construction of
basis states. It is important to note that unlike the
phenomenological asymmetric rotor model \cite{AS58}, our results
depend not only on the deformation parameters but also
on the detailed microscopic isotope-dependent shell filling, and
more importantly, on the configuration mixing of the various
quasiparticle states \cite{YK00,YS08}. We would also like to remark
here that the spherical shell model approach, although starting from
a bare spherical basis, can equally well describe deformed nuclei.
The theoretical results of staggering parameter $S(I)$ (see Eq.
(\ref{SI})) for Ge- and Se-isotopes are plotted in Figures
\ref{fig9} and \ref{fig10} before and after mixing of
configurations. What is plotted in Figs. \ref{fig9} and \ref{fig10}
are the full TPSM results after mixing, as shown in Figs. \ref{fig7}
and \ref{fig8}, and those for the projected 0-qp state with $K=2$
[labeled in Fig. \ref{fig1} as (2,0)] only. The latter represents
the major component of the $\gamma$ band \cite{YK00}. The comparison
is made systematically for the Ge and Se isotopes, and therefore,
one may see the effect of isotope-dependent shell-filling.
It is noted from Figs. \ref{fig9} and \ref{fig10} that before
configuration mixing, the calculated $S(I)$ (in black diamonds) show
a rather similar spin-dependent behavior for all the ten nuclei
under consideration. In particular, all of them have the same
staggering phase in $S(I)$. However, the results turn out to be
extremely interesting after a full mixing of quasiparticle
configurations shown in Eq. (\ref{basis}). After the configuration
mixing, only the staggering phase of the $S(I)$ (in blue squares)
for $^{76}$Ge remains unchanged while all other nuclei depict an
opposite phase as compared to $^{76}$Ge. We may thus conclude that
the staggering pattern of $S(I)$ is determined by the configuration
mixing, which is isotope-dependent. A strong mixing of the
configurations in the TPSM basis (\ref{basis}) can lead to
modifications in the nuclear shape, as shown in Figs. \ref{fig9} and
\ref{fig10}, from a rigid triaxial rotor to one that is soft in
$\gamma$ deformation, when interpreted in terms of the two extreme
phenomenological models: the $\gamma$-rigid model of Davydov and
Filippov and the $\gamma$-soft model of Wilets and Jean.
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{STAGGERING_76Ge_78Se-ver1.eps}}
\caption{(Color online) Comparison of calculated staggering
parameter S(I) for the $\gamma$ band (results after configuration
mixing) with different triaxial deformation parameters $\epsilon'$
for $^{76}$Ge and $^{78}$Se. } \label{fig10'}
\end{figure}
In order to gain further insight on the above results, we have
calculated the staggering parameter $S(I)$ as a function of
$\epsilon'$, with the results displayed in Fig. \ref{fig10'}. These
results are obtained after the configuration mixing with varying
triaxial deformation in the Nilsson Hamiltonian that generate the
intrinsic basis. It is seen that for $^{76}$Ge, the
experimentally-observed phase of the staggering is reproduced only
for a large value of $\epsilon'$. In contrast, for all other
isotopes the phase is independent of $\epsilon'$, with $^{78}$Se as
an illustrative example in Fig. \ref{fig10'}.
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{be2_Ge_ver2.eps}} \caption{(Color
online) Comparison of the TPSM calculation of B(E2) values for the
yrast band with experimental data
\cite{DA70,DA72,DA74,DA78,DA80,TH78,RS78,AM807,AM802} for
$^{70-80}$Ge. Results of the spherical shell model (SSM)
calculations \cite{NY08} are also shown for the available nuclei. }
\label{fig11}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{be2_Se_ver1.eps}} \caption{(Color
online) Comparison of the TPSM calculation of B(E2) values for the
yrast band with experimental data \cite{DA78,DA80,DA82,DA76,76RL,TH78,RS78,AM807,AM802,KH820} for $^{76-82}$Se.
Results of the spherical shell model (SSM) calculations \cite{NY08}
are also shown for the available nuclei. } \label{fig12}
\end{figure}
We have also calculated the $B(E2)$ values along the yrast band for
$^{70,72,74,78,80}$Ge and $^{76,78,80,82}$Se, and compared them with
available experimental data in Figs. \ref{fig11} and \ref{fig12}.
The calculated $B(E2)$s from the SSM approach \cite{NY08} are also
displayed in the figures for comparison. As is evident from these
figures, the TPSM calculations describe the known experimental
$B(E2)$ values quite nicely. The SSM calculations \cite{NY08} for
$^{78,80}$Ge and $^{78,80,82}$Se reproduce the existing experimental
data well; however, as in the $^{76}$Ge case, the SSM transitions
depict an increase at low spins but drop significantly at high spins.
In particular, above $I=8$ the SSM transitions show a completely
different behavior as compared to the TPSM calculation which, in
general, shows an increasing trend toward
higher spins. There appears to be a major discrepancy between the
TPSM and SSM results for the transition probabilities in high-spin
states for all the nuclei studied in the present work.
\begin{table}
\caption{Calculated inter-band $B(E2)$ values (in W.u.) from
$\gamma$ band to ground band for $^{76}$Ge and $^{78}$Se.}
\begin{tabular}{ccc}\hline
\hline $(I,K)_i\rightarrow (I,K)_f$ & $^{76}$Ge & $^{78}$Se\\
\hline
$(2,2)\rightarrow (0,0)$ & 5.39 & 3.59 \\
$(4,2)\rightarrow (2,0)$ & 5.78 & 0.70 \\
$(6,2)\rightarrow (4,0)$ & 4.55 & 1.84\\
$(8,2)\rightarrow (6,0)$ & 13.38 & 36.54 \\
$(10,2)\rightarrow (8,0)$ & 8.60 & 5.04\\
$(12,2)\rightarrow (10,0)$ & 1.66 & 0.26 \\
$(14,2)\rightarrow (12,0)$ & 0.21 & 0.07 \\
$(16,2)\rightarrow (14,0)$ & 0.09 & 0.34 \\
$(18,2)\rightarrow (16,0)$ & 5.76 & 0.50 \\
$(20,2)\rightarrow (18,0)$ & 2.35 & 0.65 \\
\hline
$(2,2)\rightarrow (2,0)$ & 33.26 &26.29 \\
$(3,2)\rightarrow (2,0)$ & 9.15 & 6.14 \\
$(3,2)\rightarrow (4,0)$ &0.23 & 0.51 \\
$(4,2)\rightarrow (4,0)$ & 20.90 & 16.76 \\
$(5,2)\rightarrow (4,0)$ & 9.27 & 3.13 \\
$(5,2)\rightarrow (6,0)$ &7.05 & 5.74 \\
$(6,2)\rightarrow (6,0)$ & 10.41 & 9.24 \\
$(7,2)\rightarrow (6,0)$ & 7.65 & 2.52 \\
$(7,2)\rightarrow (8,0)$ &6.84 & 7.62 \\
$(8,2)\rightarrow (8,0)$ & 7.87 & 4.82 \\
$(9,2)\rightarrow (8,0)$ & 6.50 & 9.48\\
$(9,2)\rightarrow (10,0)$ &3.47 & 6.56 \\
$(10,2)\rightarrow (10,0)$ &4.14 &7.84 \\
$(11,2)\rightarrow (10,0)$ & 5.13 & 9.67 \\
$(11,2)\rightarrow (12,0)$ & 0.11 & 2.39 \\
$(12,2)\rightarrow (12,0)$ &2.27 &7.71 \\
$(13,2)\rightarrow (12,0)$& 4.15 & 6.62\\
$(13,2)\rightarrow (14,0)$ &0.19 & 1.04 \\
$(14,2)\rightarrow (14,0)$ &0.29 & 4.91 \\
$(15,2)\rightarrow (14,0)$& 0.09 & 4.96\\
$(15,2)\rightarrow (16,0)$ &0.81 & 0.69 \\
$(16,2)\rightarrow (16,0)$ &0.36 &2.56 \\
$(17,2)\rightarrow (16,0)$& 1.25 & 4.86\\
$(17,2)\rightarrow (18,0)$ &1.73 & 1.25 \\
$(18,2)\rightarrow (18,0)$ &0.19 & 0.94 \\
$(19,2)\rightarrow (18,0)$& 1.88 & 5.30\\
$(20,2)\rightarrow (18,0)$ &2.57 & 1.24 \\
\hline \hline
\end{tabular}\label{LinkingBE2}
\end{table}
In Table \ref{LinkingBE2}, we present the calculated inter-band
B(E2) values that link the $\gamma$ band to the ground band. An
early example of a similar TPSM calculation can be found in Ref.
\cite{Pla02}. We give all the possible linking transitions for the
low-lying states in $^{76}$Ge, together with those for $^{78}$Se as
an illustrative example. It will be quite interesting to compare
these values with the results from other models, for instance, the
O(6) limit of the Interacting Boson Model \cite{CB85,SOG89}.
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{MI_ge_v1.eps}} \caption{(Color online)
Comparison of the TPSM calculation with experimental data \cite{DA70,DA72,DA74,DA78,DA80}
for $^{70-80}$Ge for the relation between spin $I$ and transition
energy $E_\gamma$. Results of the spherical shell model (SSM)
calculations \cite{NY08} are also shown for the available nuclei. }
\label{fig13}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[trim=0cm 0cm 0cm
0cm,width=0.45\textwidth,clip]{MI_se_v1.eps}} \caption{(Color online)
Comparison of the TPSM calculation with experimental data \cite{DA78,DA80,DA82,DA76}
for $^{76-82}$Se for the relation between spin $I$ and transition
energy $E_\gamma$. Results of the spherical shell model (SSM)
calculations \cite{NY08} are also shown for the available nuclei. }
\label{fig14}
\end{figure}
Finally, in Figs. \ref{fig13} and \ref{fig14}, experimentally-known
angular-momenta are displayed as functions of transition energy
$E_\gamma$ for $^{70,72,74,78,80}$Ge and $^{76,78,80,82}$Se, which
are compared with the present TPSM results and the corresponding SSM
ones \cite{NY08}. It is clearly seen that both theoretical
calculations describe the known data very well. Nevertheless, it is
observed, as in the case of $^{76}$Ge discussed earlier, that roughly above
$I = 8$ the TPSM and SSM results deviate from each other for higher
spin states. The predicted SSM values show a pronounced zigzag pattern in the
curves, while the TPSM results appear smoother.
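For orientation, we recall the standard relation (not restated
explicitly in the text above) by which such spin-versus-transition-energy
plots are usually read: they encode the kinematic moment of inertia
\begin{equation*}
\mathcal{J}^{(1)}(I)=\hbar^2\,\frac{2I-1}{E_\gamma(I\rightarrow I-2)},
\end{equation*}
so that a smooth $I$ versus $E_\gamma$ curve corresponds to a slowly
varying moment of inertia, whereas the zigzag of the SSM values
translates into an irregular $\mathcal{J}^{(1)}$.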
\section{Summary}
To summarize, the recently reported experimental measurement for
$^{76}$Ge \cite{YT13} suggested that this nucleus may be a rare
example of a nucleus exhibiting a rigid $\gamma$ deformation in its
low-lying states. Our microscopic calculations using the
multi-quasiparticle triaxial projected shell model support this
inference. By studying various physical quantities, it is shown that
in order to describe the data accurately for both the yrast and
$\gamma$-vibrational bands in $^{76}$Ge, a fixed triaxial
deformation parameter $\gamma\approx 30^\circ$ is required for the
TPSM calculation, which is consistent with that of the DF model
\cite{YT13}. The TPSM results are discussed in close comparison with the
experimental observations and are also compared with the previous
spherical shell model calculations \cite{NY08}. Furthermore,
experimental identification of the $\gamma\gamma$- band, predicted
in the present work for this $\gamma$-rigid nucleus, would be very
interesting.
To further demonstrate that the TPSM model with the same parameters
as those of $^{76}$Ge is also applicable to the neighboring nuclei,
we have made a systematic investigation for $^{70,72,74,78,80}$Ge
and $^{76,78,80,82}$Se, and discussed the results. It has been
demonstrated that configuration mixing of various quasiparticle
states can result in a dynamical change of a nucleus from a
$\gamma$-rigid to a $\gamma$-soft type when interpreted in
terms of the two phenomenological models: the $\gamma$-rigid model of
Davydov and Filippov and the $\gamma$-soft model of Wilets and Jean. The
odd-even staggering phase of the $\gamma$-band is quite opposite in
these two models and has been proposed to be an indicator of the
nature of the $\gamma$-deformation. What we have shown using the
microscopic TPSM approach is that configuration mixing can lead to a
transition from the $\gamma$-rigid to the $\gamma$-soft phase, at least
for the nuclei studied in the present work. It remains to be explored
whether a similar observation holds for other regions as well.
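For concreteness, the staggering referred to above is commonly
quantified (this normalisation is one standard choice; the precise
convention used in the figures may differ) by
\begin{equation*}
S(I)=\frac{\big[E(I)-E(I-1)\big]-\big[E(I-1)-E(I-2)\big]}{E(2^+_1)}
\end{equation*}
for the levels of the $\gamma$ band: in the $\gamma$-rigid
(Davydov--Filippov) limit $S(I)$ is positive at even and negative at
odd spins, while the $\gamma$-soft (Wilets--Jean) limit shows the
opposite phase.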
The $^{76}$Ge nucleus belongs to the group of a few candidates where
neutrinoless double-$\beta$ decay may be observed. In this context,
we note that the recent beyond-mean-field calculations of nuclear
matrix elements for neutrinoless double-$\beta$ decay, based on the
energy density functional method using the Gogny D1S functional,
assumed axial symmetry for the $^{76}$Ge shape \cite{RM10}. As the
nuclear matrix elements serve as an important link between
$\beta$-decay observations and the neutrino mass \cite{EV02}, it
remains to be demonstrated what modifications triaxial mean-field
deformation will make in the evaluation of the nuclear
matrix-elements.
\section{Acknowledgement}
Research at Shanghai Jiao Tong University was supported by the
National Natural Science Foundation of China (No. 11135005) and by
the 973 Program of China (No. 2013CB834401).
\section{Introduction}
A recent paper of Livingston and Meier raises an interesting question about {\em superslice knots}. Recall~\cite{brakes:superslice} that a knot $K$ in $S^3$ is said to be superslice if there is a slice disk $D$ for $K$ such that the double of $D$ along $K$ is the unknotted 2-sphere $S$ in $S^4$. We will refer to such a disk as a {\em superslicing disk}. In particular, a superslice knot is slice and also doubly slice, that is, a slice of an unknotted $2$-sphere in $S^4$. Livingston and Meier ask about the converse in the smooth category.
\begin{lmprob}[Livingston-Meier \cite{livingston-meier:ds}]\label{prob:lm}
Find a smoothly slice knot $K$ with $\Delta_K(t) = 1$ that is not smoothly superslice.
\end{lmprob}
The corresponding question in the topological (locally flat) category is completely understood~\cite{livingston-meier:ds,meier:ds}, since any knot $K$ with $\Delta_K(t) = 1$ is topologically superslice.
In this note we give a simple solution to Problem~\ref{prob:lm}, making use of Taubes' proof~\cite{taubes:periodic} that Donaldson's diagonalization theorem~\cite{donaldson} holds for certain non-compact manifolds. For $K$ a knot in $S^3$, we write $\Sigma_k(K)$ for the $k$-fold cyclic branched cover of $S^3$ branched along $K$. The same notation will be used for the corresponding branched cover along an embedded disk in $B^4$ or sphere in $S^4$.
\begin{theorem}\label{T:ss}
Suppose that $J$ is a knot with Alexander polynomial $1$ such that $\Sigma_k(J) = \partial W$, where $W$ is simply connected and the intersection form on $W$ is definite and not diagonalizable. Then the knot $K = J \# -J$ is smoothly doubly slice, but is not smoothly superslice.
\end{theorem}
An unpublished argument of Akbulut says that the positive Whitehead double of the trefoil is a knot $J$ satisfying the hypotheses of the theorem, with $k=2$. The construction is given as~\cite[Exercise 11.4]{akbulut:book} and is also documented, along with some generalizations, in the paper~\cite{cochran-gompf:donaldson}. Hence $J$ gives an answer to Problem~\ref{prob:lm}. We remark that for the purposes of the argument, it doesn't matter if $W$ is positive or negative definite, as one could replace $J$ by $-J$ and change all the signs.
We need a simple and presumably well-known algebraic lemma.
\begin{lemma}\label{L:pushout}
Suppose that
$$
\xymatrix{
& B \ar[dr]^{j_1} &\\
A \ar[ur]^{i_1} \ar[dr]_{i_2}&& C\\
& B \ar[ur]_{j_2} &
}
$$
is a pushout of groups, and that $i_1 = i_2$. Then $C$ surjects onto $B$. \end{lemma}
\begin{proof}
This follows from the universal property of pushouts: since $i_1 = i_2$, the identity map $\mathrm{id}_B$ satisfies $\mathrm{id}_B \circ i_1 = \mathrm{id}_B \circ i_2$, and hence induces a homomorphism $\varphi \colon C \to B$ with $\varphi \circ j_1 = \varphi \circ j_2 = \mathrm{id}_B$; in particular $\varphi$ is surjective.
\end{proof}
Applying \lemref{pushout} to the decomposition of the complement of the unknot in $S^4$ into two disk complements, we obtain the following useful facts. (The first of these was presumably known to Kirby and Melvin; compare~\cite[Addendum, p. 58]{kirby-melvin:R}, and the second is due to Gordon and Sumners~\cite{gordon-sumners:ball-pairs}.)
\begin{corollary}\label{C:disk}
If $K$ is superslice and $D$ is a superslicing disk, then
$$
\pi_1(B^4 -D) \cong \mathbb Z\ \text{and}\ \Delta_K(t)=1.
$$
\end{corollary}
\begin{proof}
The lemma says that there is a surjection $\mathbb Z \cong \pi_1(S^4 - S) \to \pi_1(B^4 -D)$. Hence $\pi_1(B^4 -D)$ is abelian and so must be isomorphic to $\mathbb Z$. This condition implies, using Milnor duality~\cite{milnor:covering} in the infinite cyclic covering, that the homology of the infinite cyclic covering of $S^3-K$ vanishes, which is equivalent to saying that $\Delta_K(t)=1$.
\end{proof}
\begin{proof}[Proof of \thmref{ss}]
It is standard~\cite{sumners:inv2} that any knot of the form $J \mathbin{\#} -J$ is doubly slice. In fact, it is a slice of the $1$-twist spin of $J$, which was shown by Zeeman~\cite{zeeman:twist} to be unknotted.
Suppose that $K$ is superslice and let $D$ be a superslicing disk, so $D \cup_K D = S$, an unknotted sphere. Then $S^4 = \Sigma_k(S) = V \cup_Y V$, where we have written $Y= \Sigma_k(K)$ and $V = \Sigma_k(D)$. By \clref{disk}, the $k$-fold cover of $B^4 - D$ has $\pi_1 \cong \mathbb Z$, so the branched cover $V$ is simply connected.
Note that $\Sigma_k(K) = \Sigma_k(J) \mathbin{\#} -\Sigma_k(J)$. Since $\Delta_J(t) =1$, the same is true for $\Delta_K(t)$; moreover this implies that both $\Sigma_k(J)$ and $\Sigma_k(K)$ are homology spheres. An easy Mayer-Vietoris argument says that $V = \Sigma_k(D)$ is a homology ball; in fact \clref{disk} implies that it is contractible. Adding a $3$-handle to $V$, we obtain a simply-connected homology cobordism $V'$ from $\Sigma_k(J)$ to itself. By hypothesis, there is a manifold $W$ with boundary $\Sigma_k(J)$ and non-diagonalizable intersection form. Stack up infinitely many copies of $V'$, and glue them to $W$ to make a definite periodic-end manifold $M$, in the sense of Taubes~\cite{taubes:periodic}. Since $\pi_1(V)$ is trivial, $M$ is {\em admissible} (see ~\cite[Definition 1.3]{taubes:periodic}), and Taubes shows that its intersection form (which is the same as that of $W$) is diagonalizable. This contradiction proves the theorem.
\end{proof}
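For completeness, we spell out the Mayer--Vietoris argument invoked in the proof. Writing $S^4=V\cup_Y V$ with $Y=\Sigma_k(K)$ a homology sphere, the sequence reads
$$
\cdots\to H_{i+1}(S^4)\to H_i(Y)\to H_i(V)\oplus H_i(V)\to H_i(S^4)\to\cdots.
$$
For $i=1,2$ both $H_i(Y)$ and $H_i(S^4)$ vanish, so $H_i(V)=0$; for $i=3$ the connecting homomorphism $H_4(S^4)\to H_3(Y)$ sends the fundamental class to $[Y]$ and is therefore onto, so again $H_3(V)=0$. Hence $V$ is a homology ball.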
The fact that $\pi_1(B^4-D) \cong \mathbb Z$ for a superslicing disk leads to a second obstruction to supersliceness, based on the Ozsv{\'a}th--Szab{\'o}\ $d$-invariant~\cite{oz:boundary}. Recall from~\cite{manolescu-owens:delta} (for degree $2$ covers) and~\cite{jabuka:delta} (in general) that for a knot $K$ and prime $p$, one denotes by $\delta_{p^n}(K)$ the $d$-invariant of a particular spin structure $\mathfrak s$ on $\Sigma_{p^n}(K)$ pulled back from the $3$-sphere. The fact that the $p^n$-fold branched cover of a slicing disk is a rational homology ball implies that if $K$ is slice then $\delta_{p^n}(K)= 0$. For a non-prime-power degree $k$, the invariant $\delta_k(K)$ might not be defined, because $\Sigma_k(K)$ is not a rational homology sphere. (One might define such an invariant using Floer homology with twisted coefficients as in~\cite{behrens-golla:twist,levine-ruberman:correction}, but there's no good reason that it would be a concordance invariant.)
\begin{theorem}\label{T:dss}
If $K$ is superslice, then for any $k$, the $d$-invariant $d(\Sigma_k(K),\mathfrak s_0)$ is defined and vanishes.
\end{theorem}
\begin{proof}
Since by \clref{disk} the Alexander polynomial is trivial, $\Sigma_k(K)$ is a homology sphere, and hence $d(\Sigma_k(K),\mathfrak s_0)$ is defined. (There is only the one spin structure.) As in the proof of \thmref{ss}, the branched cover $\Sigma_k(D)$ is contractible, and
hence~\cite[Theorem 1.12]{oz:boundary}, $d(\Sigma_k(K),\mathfrak s_0) = 0$.
\end{proof}
Sadly, we do not know any examples of a slice knot where \thmref{dss} provides an obstruction to it being superslice. For such a knot would not be ribbon, so we would also have a counterexample to the slice-ribbon conjecture!
\begin{ack}
Thanks to Hee Jung Kim for an interesting conversation that led to this paper, and to Chuck Livingston, Paul Melvin, and Nikolai Saveliev for comments on an initial draft.
\end{ack}
\vspace*{-2ex}
\def$'${$'$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\subsection*{Introduction} Atomic and molecular physics are
certainly the oldest subfields of quantum physics and everybody
knows the major role that they played in the early 1920s when the
first principles of quantum mechanics were elaborated. After this
{\it golden era}, many physicists considered research in these
subfields essentially complete and in fact, for several decades,
activity in atomic and molecular physics decreased steadily in
comparison with the large effort directed towards nuclear and
high-energy physics. The advent of lasers (LASER: Light
Amplification by Stimulated Emission of Radiation), and more
precisely of tunable lasers, changed this situation, and since the
early 1970s, atomic and molecular physics, gradually intermingling
with laser physics, has become an area where more and more
activity is contributing substantially to the understanding of
many phenomena.
\newline
There has been a gradual change in the use of lasers in the
teaching as well as the research laboratory. Increasingly, the laser
is used not for spectroscopy, but as a tool [1]. However,
the art of making a laser operate is to work out how to get
population inversion for the relevant transition. Once we have
population inversion, we have a mechanism for generating gain in
the laser medium. \newline
Thermodynamic arguments tell us, in addition to the black-body law
of radiation, that the interaction between electromagnetic waves
and matter in thermal equilibrium cannot produce amplification at any
temperature, for radiation at the temperature of the matter cannot be
made more intense by the interaction of the two without violating the
second law [2].
\subsection*{Two-Level System}
We now wish to study how we can use stimulated emission to make a
light amplifier. In a gas of atoms in thermal equilibrium, the
population of the lower level ($N_1$) will always be greater than
the population of the upper level ($N_2$). (Please see Eq. (2)
below). Therefore, if a light beam is incident on the medium (cf.
Fig. 1), there will always be more upward transitions due to
absorption than downward transitions due to stimulated emission
(as the absorption is proportional to $N_1$ and the stimulated
emission is proportional to $N_2$). Hence there will be net
absorption, and the intensity of the beam will diminish on
progressing through the medium. To amplify the beam, we require
that the rate of stimulated emission transitions exceeds the rate
of absorption. If the light beam is sufficiently intense that we
can ignore spontaneous emission and the levels are
non-degenerate, this implies that $N_2$ must exceed $N_1$. This is
a highly non-equilibrium situation, and is called {\it population
inversion}.
\newline Inspection of Eq. (2) below implies that population inversion
corresponds to negative temperatures! This is not as ridiculous as
it sounds, because the atoms are not in thermal equilibrium. Once
we have population inversion, we have a mechanism for generating
gain in the laser medium. The art of making a laser operate is to
work out how to get population inversion for the relevant
transition. \newline
The rate of change of electromagnetic energy confined in a region
where it interacts with a group of particles must, from Einstein's
work, have the form
\begin{equation}
B_{12} N_1 u(\nu)=A_{21} N_2+ B_{21} N_2 u(\nu)
\end{equation}
where $N_1$ and $N_2$ are the numbers of molecules in the lower
and upper of the two quantum states. In thermal equilibrium the ratio
of $N_2$ to $N_1$ at temperature $T$ is given by Boltzmann's law:
\begin{equation}
\frac{N_2}{N_1}=\frac{g_2}{g_1} e^{-h \nu/k_B T}
\end{equation}
where $g_2$, $g_1$ are the degeneracies of levels 2 and 1,
respectively, and $h \nu=E_2-E_1$ (if the energy of thermal motion is
sufficient, $k_B T > E_2 -E_1$, a fraction of the particles is
thrown into the upper level). \newline In fact, we may have the
following case: a light quantum may be absorbed by the
medium, raising a particle from the lower to the upper level; the
difference in energy between the two levels is equal
to the quantum energy. This process is connected with a decrease
in energy of the electromagnetic field and is called {\it
resonance absorption} [3].\newline On the other hand, we may also
have the following: under the influence of a quantum, a particle may be
transferred from the upper level to the lower level. Such a
transfer will be accompanied by the emission of a light quantum
identical in frequency, direction of propagation and polarization
to the quantum which produced the emission. This process is
connected with an increase of the field energy and is called {\it
stimulated emission}. \newline
From the
above expression we readily see that, if both
levels are non-degenerate,
\begin{equation}
\frac{N_2}{N_1} \le 1 \hspace*{12mm} \mbox{if} \hspace*{6mm} T
\ge 0.
\end{equation}
It is just this property of high ordering of a system with
negative temperature which makes it possible to produce
highly coherent emission in quantum oscillators, to build highly
sensitive quantum amplifiers, and to extract the energy stored in
the state with negative temperature in a very short time, of the
order of the reciprocal of the emission frequency [2].\newline
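To make the connection with negative temperatures explicit, one may
formally invert Eq. (2) and assign a temperature to the pair of
levels:
\begin{equation*}
T=\frac{E_2-E_1}{k_B \ln\left( g_2 N_1/g_1 N_2 \right)},
\end{equation*}
which is negative precisely when $N_2/N_1 > g_2/g_1$, i.e. in the
inverted regime.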
Meanwhile, Eq. (2) shows that if the two
levels are degenerate and $e^{-h \nu/k_B T}$ is not the
dominant factor on the right-hand side of Eq. (2), we can still
obtain population inversion (i.e., $N_2/N_1 >1$) by tuning
the degeneracy ratio $g_2/g_1$ of levels 2 and 1. Equation (2) is
illustrated in Fig. 2 for selected values
of $g_2/g_1$ and $h \nu/k_B T$. Note that there
is no population inversion ($N_2/N_1 \le 1$) for $g_2/g_1 \sim
1$ and $h \nu/k_B T >0.1$.
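The curves of Fig. 2 are easy to reproduce. The following minimal
Python sketch (the parameter values below are illustrative choices,
not necessarily those used for the actual figure) evaluates Eq. (2):
\begin{verbatim}
import numpy as np

# Eq. (2): N2/N1 = (g2/g1) * exp(-h*nu/(kB*T)).
# Work with the dimensionless variable x = h*nu/(kB*T).
def population_ratio(g2_over_g1, x):
    return g2_over_g1 * np.exp(-x)

x = np.linspace(0.01, 2.0, 5)      # illustrative range of h*nu/(kB*T)
for g in (0.5, 1.0, 3.0):          # illustrative degeneracy ratios g2/g1
    r = population_ratio(g, x)
    print("g2/g1 =", g, "N2/N1 =", np.round(r, 3),
          "inversion:", r > 1.0)   # inversion wherever N2/N1 > 1
\end{verbatim}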
\subsection*{Example : Argon Ion Laser}
To give an example for the latter situation, we shall consider
argon, which has 18 electrons with the configuration
1s$^2$2s$^2$2p$^6$3s$^2$3p$^6$. Argon atoms incorporated into a
discharge tube can be ionized and excited by collisions with the electrons.
Since there are six 4p levels as compared to only two 4s levels,
the statistics of the collisional process leaves three times as
many electrons in the 4p levels as in the 4s levels ($g_2/g_1=3$).
Hence we have population inversion. Moreover, cascade transitions
from higher excited states also facilitate the population
inversion mechanism. The lifetime of the 4p level is 10 ns, which
compares to the 1 ns lifetime of the 4s level. The
application of the above reasoning leads to the argon-ion
(Ar$^+$) laser.
\subsection*{Higher-Level Systems}
Some lasers are classified as being three-level systems. The
standard example is ruby, which was the first laser ever produced.
The key difference between a three-level laser and a four-level
laser is that in the former the lower laser level is the ground state. It is
much more difficult to obtain population inversion in three-level
lasers because the lower laser level initially has a very large
population. Let this population be $N_g$. By turning on the pump,
we excite $dN$ atoms to level 1, which then decay to level 2. Thus
the population of Level 2 will be $dN$, and the population of the
ground state will be ($N_g-dN$). Hence for population inversion we
require $dN > (N_g-dN)$, that is $dN
> N_g/2$. Therefore, in order to obtain population inversion we have
to pump more than half the atoms out of the ground state into the
upper laser level. This obviously requires a very large amount of
energy. This contrasts with the four-level lasers in which the
lower laser level is empty before the pumping process starts, and
much less energy is required to reach threshold. Despite the fact
that the threshold for population inversion is very high in a
three-level system, such systems can be quite efficient once this
threshold is overcome. Ruby lasers pumped by bright flash lamps
actually give very high output pulse energies. However, they only
work in pulsed mode. Continuous lasers tend to be made using
four-level systems.
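The threshold bookkeeping of the preceding paragraph can be condensed
into a few lines of Python (a toy sketch; the numbers are placeholders
chosen only to illustrate the inequality $dN > N_g/2$):
\begin{verbatim}
def three_level_inverted(N_g, dN):
    # upper laser level: dN atoms; lower laser level (= ground): N_g - dN
    return dN > N_g - dN          # equivalent to dN > N_g/2

def four_level_inverted(dN):
    # lower laser level starts empty, so any pumped population inverts
    return dN > 0

N_g = 1.0e19                      # illustrative ground-state population
for frac in (0.3, 0.5, 0.7):      # pumped fraction dN/N_g
    dN = frac * N_g
    print(frac, three_level_inverted(N_g, dN), four_level_inverted(dN))
\end{verbatim}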
\subsection*{Conclusion}
In this short paper, using the energy-level diagram for atoms
with two energy levels, we revisit population inversion, the
fundamental principle of lasing, and show that it is closely
related to the negative-temperature state in non-equilibrium
thermodynamics. We also illustrate the role of quantum
degeneracies and their relationship to lasing via the tuning
of the population inversion.
\section{Introduction\label{sec:introduction}}
Let $\Omega$ be a bounded polyhedral domain in $\ensuremath{\mathbb{R}}^d$, $d\in\ensuremath{\mathbb{N}}$.
We consider the linear parabolic partial differential equation
\begin{equation}
\label{eq:strong}
\begin{aligned}
\ensuremath{\partial _t} u + \L u &= f \qquad
&&\text{in } \Omega\times (0,T) \\
u &= 0 &&\text{on } \partial\Omega\times (0,T)\\
u(\cdot,0) &= u_0 &&\text{in } \Omega.
\end{aligned}
\end{equation}
Hereafter, $\L u=-\divo{\Am\nabla u} + cu$ is a second order
elliptic operator with respect to space and $\ensuremath{\partial _t} u=\frac{\partial
u}{\partial t}$ denotes the partial derivative with respect to
time. In the simplest setting $\mathcal{L}=-\Delta$, whence
\eqref{eq:strong} is the heat equation. Precise
assumptions on data are provided in Section~\ref{ss:weak_formulation}.
The objective of this paper is the design and a detailed convergence
analysis of an efficient adaptive finite element method for solving
\eqref{eq:strong} numerically. To this end, we resort
to adaptive finite elements in space combined with a discontinuous
Galerkin \dGs time-stepping scheme in Section~\ref{ss:fem}. The conforming
finite element spaces are continuous piecewise polynomials of fixed degree over a
simplicial triangulation of the domain $\Omega$.
In each single time-step, we reduce or enlarge the local time-step size
and refine and coarsen the underlying triangulation.
The adaptive decisions are based on a posteriori
error indicators.
Numerous such estimators for various error notions are available in
the literature. Error bounds in $L^\infty(L^2)$ can e.g. be found in
\cite{ErikssonJohnson:91} or \cite{LakkisMakridakis:06}, where the
latter result is based on the elliptic reconstruction technique, which was introduced in
\cite{MakridakisNochetto:03} in the semi-discrete context.
The $L^2(H^1)$ respectively $L^2(H^1)\cap H^1(H^{-1})$ error bounds
in~\cite{Picasso:98,Verfurth:03}
are based on energy techniques and
have been used with a \dGs[0] time-stepping scheme in the adaptive
methods and convergence analysis presented in \cite{ChenFeng:04,KrMoScSi:12}.
For our purpose, we generalise the residual based
estimator \cite{Verfurth:03} to higher order \dGs[s] schemes in Section~\ref{s:errest}.
The estimator is built from five indicators:
an indicator for the initial error, indicators for the temporal and
spatial errors, a coarsening error indicator, and an indicator
controlling the so-called consistency error.
It is important to notice that, besides the first indicator, all other indicators
accumulate in $L^2$ in time.
The adaptation of the time-step size uses information from the indicators
for the temporal error and the consistency error. The adaptation of the
spatial triangulation is based on refinement by bisection using
information from the indicators for the spatial error and for the coarsening
error. Very recently, an independently developed guaranteed a
posteriori estimator for higher-order
$\dGs$ schemes was provided in~\cite{ErnSmearsVohralik:16}, using
equilibrated-flux-based bounds for the spatial error.
By now the convergence and optimality of adaptive methods for
stationary inf-sup stable respectively coercive
problems is well-estab\-lished
\cite{BiDaDe:04,CaKrNoSi:08,ChenFeng:04,DiKrSt:16,
Dorfler:96,DieningKreuzer:08,KreuzerSiebert:11,
MoSiVe:08,MekchayNochetto:05,MoNoSi:00,MoNoSi:02,
Siebert:11,Stevenson:07};
compare also with the overview article \cite{NoSiVe:09}.
The essential design principle motivating the adaptive strategies in
most of the above methods is the equal distribution of the error.
The importance of this principle is highlighted by the near
characterisations of nonlinear approximation classes with the help
of a thresholding algorithm in \cite{BiDaDePe:02,GaspozMorin:14}.
In contrast to the situation for above mentioned problems, the
convergence analysis of adaptive approximation of
time-dependent problems is still in its infancy. In
\cite{SchwabStevenson:09} optimal computational complexity of
an adaptive wavelet method for parabolic problems is proved using a
symmetric and coercive discretisation based on a least squares formulation.
To the best of our
knowledge, there exist only two results \cite{ChenFeng:04,KrMoScSi:12}
concerned with a rigorous convergence analysis of time-space adaptive
finite element methods. In \cite{ChenFeng:04}, it is
proved for the \dGs[0] time-stepping scheme that each single
time-step terminates and that the error of the computed approximation
is below a prescribed tolerance when the final time is
reached. Reaching the final time, however, is not guaranteed: theoretically, the adaptively generated
sequence of time instances $\{t_n\}_{n\ge 0}$ may be infinite with
$t_n\to t_\star<T$ as $n\to\infty$. This drawback has been overcome
in~\cite{KrMoScSi:12} with the help of an a priori computed minimal
time-step size
in terms of the
right-hand side $f$ and the discrete initial value $U_0$.
However, neither of the two methods is designed to heed the principle of
equally distributing the error. Let us shed some light on this fact
with the help of the initial value problem
\begin{align*}
\ensuremath{\partial _t} u + u = f\quad\text{in}~(0,T) \qquad\text{and}\qquad u(0)=u_0.
\end{align*}
Let $0=t_0<t_1<\ldots<t_N=T$ be some partition of $(0,T)$. Using the
\dGs[0] time-stepping scheme we obtain $\{U_n\}_{n=0}^N$, such that
\begin{align*}
\frac{U_n-U_{n-1}}{\tau_n}+U_n=f_n:=\frac1{\tau_n}\int_{t_{n-1}}^{t_n}f\,{\rm d}t,\quad
n=1,\ldots, N, \qquad U_0=u_0,
\end{align*}
where $\tau_n=t_n-t_{n-1}$. Let $\ensuremath{\mathcal{U}}$ be the piecewise affine
interpolation of the nodal values $\{U_n\}_{n=0}^N$. Then we
have with
Young's inequality, that
\begin{align*}
\int_0^T\frac12 \ensuremath{\partial _t}|u-\ensuremath{\mathcal{U}}|^2+|u-\ensuremath{\mathcal{U}}|^2\,{\rm d}t&=
\sum_{n=1}^N\int_{t_{n-1}}^{t_n}
    (f-f_n)(u-\ensuremath{\mathcal{U}})+(U_n-\ensuremath{\mathcal{U}})(u-\ensuremath{\mathcal{U}})\,{\rm d}t
\\
&\le \sum_{n=1}^N\int_{t_{n-1}}^{t_n} |f-f_n|^2 +
|U_n-\ensuremath{\mathcal{U}}|^2+ \frac12 |u-\ensuremath{\mathcal{U}}|^2\,{\rm d}t.
\end{align*}
A simple computation reveals
$\int_{t_{n-1}}^{t_n}|U_n-\ensuremath{\mathcal{U}}|^2\,{\rm d}t= \frac13 \tau_n
|U_n-U_{n-1}|^2$. This term and $\int_{t_{n-1}}^{t_n}|f-f_n|^2\,{\rm d}t$
are the so-called time and consistency a posteriori indicators.
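Indeed, on $I_n$ the affine interpolation is
$\ensuremath{\mathcal{U}}(t)=U_{n-1}+\frac{t-t_{n-1}}{\tau_n}(U_n-U_{n-1})$, whence
\begin{align*}
  \int_{t_{n-1}}^{t_n}|U_n-\ensuremath{\mathcal{U}}|^2\,{\rm d}t
  =|U_n-U_{n-1}|^2\,\tau_n\int_0^1(1-s)^2\,{\rm d}s
  =\frac{\tau_n}{3}\,|U_n-U_{n-1}|^2.
\end{align*}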
In order to illustrate the basic differences in the design of the
adaptive schemes, we shall concentrate
on the time indicator.
In~\cite{ChenFeng:04,KrMoScSi:12} the partition is constructed such that
\begin{align*}
|U_n-U_{n-1}|^2 \le \frac{\ensuremath{\text{\texttt{TOL}}}^2}{T},\quad\text{which implies}
\quad \sum_{n=1}^N \tau_n |U_n-U_{n-1}|^2\le
\sum_{n=1}^N \tau_n \frac{\ensuremath{\text{\texttt{TOL}}}^2}{T}=\ensuremath{\text{\texttt{TOL}}}^2,
\end{align*}
i.e. the accumulated indicator is below the
prescribed tolerance $\ensuremath{\text{\texttt{TOL}}}$.
We call this the $L^\infty$-strategy and remark that it does not aim
at equally distributing the local
indicators.
In contrast to this, we shall use the $L^2$-strategy
\begin{align*}
\tau_n|U_n-U_{n-1}|^2\le\ensuremath{\text{\texttt{tol}}}^2.
\end{align*}
Thanks to the uniform energy bound
\begin{align}\label{eq:energy-ode}
\sum_{n=1}^N|U_n-U_{n-1}|^2\le \int_0^T|f|^2\,{\rm d}t +|U_0|^2
\end{align}
(see Corollary~\ref{cor:uniform_bound} below) we then conclude that
\begin{multline*}
  \sum_{n=1}^N \tau_n |U_n-U_{n-1}|^2=\sum_{\tau_n\le\delta} \tau_n
  |U_n-U_{n-1}|^2+\sum_{\tau_n>\delta} \tau_n |U_n-U_{n-1}|^2
  \\
  \le \delta\, \Big(\int_0^T|f|^2\,{\rm d}t +|U_0|^2\Big)+\frac{T}{\delta}\,
  \ensuremath{\text{\texttt{tol}}}^2
  =2\,\ensuremath{\text{\texttt{tol}}}\,\Big(T\int_0^T|f|^2\,{\rm d}t +T\,|U_0|^2\Big)^{\frac12},
\end{multline*}
where the two terms on the right-hand side are balanced by choosing
$\delta=\ensuremath{\text{\texttt{tol}}}\,\big(T/(\int_0^T|f|^2\,{\rm d}t +|U_0|^2)\big)^{1/2}$. Taking
$\ensuremath{\text{\texttt{tol}}}=\ensuremath{\text{\texttt{TOL}}}^2\,\big(4T\int_0^T|f|^2\,{\rm d}t +4T\,|U_0|^2\big)^{-1/2}$
guarantees that the accumulated indicator is again below $\ensuremath{\text{\texttt{TOL}}}^2$.
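A minimal sketch of the resulting $L^2$-strategy for the scalar model
problem reads as follows (plain Python; the halving/doubling rule and
the starting step size are illustrative choices, not the rules used by
the \text{TAFEM}\xspace presented below):
\begin{verbatim}
import math

def dg0_adaptive(f, U0, T, tol, tau0=0.1):
    # dG(0) (implicit Euler) for u' + u = f with the L^2 criterion
    # tau_n * |U_n - U_{n-1}|^2 <= tol^2 on every accepted step.
    t, U, steps = 0.0, U0, []
    tau = tau0
    while t < T:
        tau = min(tau, T - t)
        while True:
            fn = f(t + 0.5 * tau)            # midpoint as stand-in for f_n
            U_new = (U + tau * fn) / (1.0 + tau)
            if tau * (U_new - U) ** 2 <= tol ** 2:
                break                        # accept the step
            tau *= 0.5                       # reject: refine the step size
        t, U = t + tau, U_new
        steps.append((t, U))
        tau *= 2.0                           # tentative enlargement
    return steps

# usage: u' + u = sin(t), u(0) = 1 on (0, 5)
print(dg0_adaptive(math.sin, 1.0, 5.0, tol=1e-2)[-1])
\end{verbatim}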
These arguments directly generalise to
semi-discretisations of~\eqref{eq:strong} in time. In the case of a full
space-time discretisation of~\eqref{eq:strong} additional
indicators are involved, for which a control similar
to~\eqref{eq:energy-ode} is not available.
We therefore enforce that these indicators are bounded by the
time or the consistency indicator. If these indicators are
equally distributed in time, then this results also in an
equal distribution of the other indicators. Otherwise, we shall use the
$L^\infty$-strategy from \cite{ChenFeng:04,KrMoScSi:12} as a backup strategy.
The detailed algorithm \text{TAFEM}\xspace for \eqref{eq:strong} is
presented in Section~\ref{sec:tafem} and its
convergence analysis is given in Section~\ref{sec:convergence}.
The
advantage of our new approach over the algorithms in
\cite{ChenFeng:04,KrMoScSi:12} is twofold. First,
from the fact that the \text{TAFEM}\xspace aims in an equal distribution of the
error,
we expect an improved performance.
Second, we use an $L^2$-strategy for the consistency error, which
requires only $L^2$-regularity of $f$ in time instead of the
$H^1$-regularity needed for the $L^\infty$-strategy in
\cite{ChenFeng:04,KrMoScSi:12}.
This makes the proposed method suitable for problems where
the existing approaches may even fail completely.
We conclude the paper in Section~\ref{sec:numerics} by comments on the
implementation in \textsf{DUNE}\xspace~\cite{DUNE:16} and some numerical experiments. The experiments
confirm the expectations and show a more than
competitive performance of our algorithm~\text{TAFEM}\xspace.
\section{The Continuous and Discrete Problems\label{s:prob+disc}}
In this section, we state the weak formulation of the continuous problem together with the assumptions on
data. Then the discretisation by adaptive finite
elements in space combined with the \dGs scheme in time is introduced.
\subsection{The Weak Formulation\label{ss:weak_formulation}}
For $d\in\ensuremath{\mathbb{N}}$, let $\Omega\subset\ensuremath{\mathbb{R}}^d$ be a bounded, polyhedral domain
that is meshed by some conforming simplicial mesh $\ensuremath{\grid_\textrm{init}}$.
We denote by $H^1(\Omega)$ the Sobolev space of square integrable
functions $L^2(\Omega)$ whose first derivatives are in
$L^2(\Omega)$ and we let $\ensuremath{\mathbb{V}}:=H_0^1(\Omega)$ be the space of functions in
$H^1(\Omega)$ with vanishing trace on $\partial\Omega$. For
any measurable set $\omega$ and $k\in\ensuremath{\mathbb{N}}$, we denote by $\norm[\omega]{\cdot}$ the
$L^2(\omega;\ensuremath{\mathbb{R}}^k)$ norm, whence
\begin{math}
\norm[H^1(\Omega)]{v}^2=\norm[\Omega]{v}^2 + \norm[\Omega]{\nabla v}^2.
\end{math}
We suppose that the data of \eqref{eq:strong} has the following properties:
$\Am:\Omega\rightarrow \ensuremath{\mathbb{R}}^{d\times d}$ is piecewise Lipschitz over
$\ensuremath{\grid_\textrm{init}}$ and is symmetric positive definite with eigenvalues $0<
a_*\leq a^*<\infty$, \hbox{i.\,e.},\xspace
\begin{equation}\label{A-bounds}
a_*|\boldsymbol{\xi}|^2\leq\Am(x)\boldsymbol{\xi}\cdot\boldsymbol{\xi}\leq
a^*|\boldsymbol{\xi}|^2,
\qquad \text{for all } \boldsymbol{\xi}\in\ensuremath{\mathbb{R}}^d,\; x\in\Omega,
\end{equation}
$c\in L^\infty(\Omega)$ is non-negative, \hbox{i.\,e.},\xspace $c\geq0$ in $\Omega$,
$f\in L^2((0,T);L^2(\Omega))= L^2(\Omega\times(0,T))$, and $u_0\in L^2(\Omega)$.
We next turn to the weak formulation of \eqref{eq:strong}; compare with
\cite[Chap.~7]{Evans:10}. We let $\mathcal{B}:\ensuremath{\mathbb{V}}\times
\ensuremath{\mathbb{V}}\rightarrow\ensuremath{\mathbb{R}}$ be
the symmetric bilinear form associated to the weak form
of elliptic operator $\mathcal{L}$, \hbox{i.\,e.},\xspace
\begin{align*}
\bilin{w}{v} \mathrel{:=} \int_\Omega \Am\nabla v \cdot \nabla w
+c\,vw \,dV \qquad\text{for all } v,\,w\in\ensuremath{\mathbb{V}}.
\end{align*}
Recalling the Poincar\'e-Friedrichs inequality
\begin{math}
\norm[\Omega]{v}\le C(d,\Omega)\norm[\Omega]{\nabla v}
\end{math}
for all $v\in \ensuremath{\mathbb{V}}$ \cite[p.~158]{GilbargTrudinger:01} we deduce from
\eqref{A-bounds} that $\mathcal{B}$ is a scalar product on $\ensuremath{\mathbb{V}}$ with
induced norm
\begin{align*}
\Enorm{v}^2:=\bilin{v}{v}= \int_\Omega \Am \nabla v\cdot\nabla v +
cv^2\,dV \qquad \text{for all } v\in H_0^1(\Omega).
\end{align*}
This \emph{energy norm} is equivalent to the
$H^1$-norm $\norm[H^1(\Omega)]{\cdot}$ and we shall use the energy
norm in the subsequent analysis. We denote the
restriction of the energy norm to some subset $\omega\subset\Omega$
by $\Enorm[\omega]{\cdot}$ and let $\ensuremath{\mathbb{V}}^*\mathrel{:=} H^{-1}(\Omega)$
be the dual space of $H^1_0(\Omega)$ equipped with the operator norm
\begin{math}
\Enorm[*]{g}\mathrel{:=} \sup_{v\in\ensuremath{\mathbb{V}}} \frac{\dual{g}{v}}{\Enorm{v}}.
\end{math}
The weak solution space
\begin{align*}
\W\mathrel{:=} \left\lbrace u\in L^2(0,T; \ensuremath{\mathbb{V}}) \mid
\ensuremath{\partial _t} u\in L^2(0,T;\ensuremath{\mathbb{V}}^*)
\right\rbrace.
\end{align*}
is a Banach space endowed with the norm
\begin{align*}
\Wnorm{v}^2=\int_{0}^{T}\Enorm[*]{\ensuremath{\partial _t} v}^2 +
\Enorm{v}^2\,{\rm d}t+\norm{v(T)}^2,\qquad v\in\W[0,T].
\end{align*}
Moreover, it is continuously embedded into $C^0([0,T];L^2(\Omega))$; see e.g. \cite[Chap.~5]{Evans:10}.
After these preparations, we are in the position to state the weak
formulation of \eqref{eq:strong}: A function $u\in\W$ is a weak
solution to \eqref{eq:strong} if it satisfies
\begin{subequations}\label{eq:weak}
\begin{alignat}{2}\label{eq:weak.a}
\dual{\ensuremath{\partial _t} u(t)}{v} + \bilin{u(t)}{ v} &= \scp{f(t)}{v}
\quad&&\text{for all } v\in \ensuremath{\mathbb{V}},\; \text{a.e. } t \in(0,T),\\
\label{eq:weak.b}
u(0) &= u_0.
\end{alignat}
\end{subequations}
Hereafter, $\scp{\cdot}{\!\cdot}$ denotes the $L^2(\Omega)$ scalar
product. Since the operator $\mathcal{L}$ is elliptic, problem
\eqref{eq:weak} admits for any $f\in L^2(0,T;L^2(\Omega))$ and $u_0\in
L^2(\Omega)$ a unique weak solution; compare e.g. with \cite[Chap.~7]{Evans:10}.
\subsection{The Discrete Problem\label{ss:fem}}
For the discretization of \eqref{eq:weak} we use adaptive finite elements in
space and a \dGs scheme with adaptive time-step-size control.
\paragraph{Adaptive Grids and Time Steps} For the adaptive space
discretization we restrict ourselves to simplicial grids and
local refinement by bisection; compare with
\cite{Bansch:91,Kossaczky:94,Maubach:95,Traxler:97} as well as
\cite{NoSiVe:09,SchmidtSiebert:05} and the references therein. To be
more precise, refinement is based on the initial conforming
triangulation $\ensuremath{\grid_\textrm{init}}$ of $\Omega$ and a procedure $\texttt{REFINE}$ with the
following properties: given a conforming triangulation $\mathcal{G}$ and a
subset $\mathcal{M}\subset\mathcal{G}$ of \emph{marked elements}, the call
\begin{displaymath}
\texttt{REFINE}(\mathcal{G},\mathcal{M})
\end{displaymath}
outputs a conforming refinement $\mathcal{G}_+$ of $\mathcal{G}$ such that all elements in
$\mathcal{M}$ are bisected at least once. In general, additional elements are refined in
order to ensure conformity. The input $\mathcal{G}$ can either be
$\ensuremath{\grid_\textrm{init}}$ or the output of a previous application of $\texttt{REFINE}$. The
class of all conforming triangulations that can be produced from
$\ensuremath{\grid_\textrm{init}}$ by finite many applications of $\texttt{REFINE}$, we denote by $\ensuremath{\mathbb{G}}\xspace$. For $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$
we call $\mathcal{G}_+\in\ensuremath{\mathbb{G}}\xspace$ a \emph{refinement} of $\mathcal{G}$ if $\mathcal{G}_+$ is
produced from $\mathcal{G}$ by a finite number of applications of $\texttt{REFINE}$
and we denote this by $\mathcal{G}\leq\mathcal{G}_+$ or
$\mathcal{G}_+\ge\mathcal{G}$. Conversely, we call
any $\mathcal{G}_-\in\ensuremath{\mathbb{G}}\xspace$ with $\mathcal{G}_-\le \mathcal{G}$ a
\emph{coarsening} of $\mathcal{G}$.
Throughout the discussion we only deal with conforming grids; this
means that whenever we refer to triangulations $\mathcal{G}$, $\mathcal{G}_+$, and $\mathcal{G}_-$ we
tacitly assume $\mathcal{G},\mathcal{G}_+,\mathcal{G}_-\in\ensuremath{\mathbb{G}}\xspace$.
One key property of the refinement by bisection is uniform shape
regularity for any $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$. This means that all constants depending
on the shape regularity are uniformly bounded depending on
$\ensuremath{\grid_\textrm{init}}$.
For the discretization in time we let $0=t_0<t_1<\dots<t_N=T$ be a
partition of $(0,T)$ into half open subintervals $I_n=(t_{n-1},t_n]$ with
corresponding local time-step sizes $\tau_n:=\abs{I_n} = t_n-t_{n-1}$, $n=1,\dots,N$.
\paragraph{Space-Time Discretization}
For the spatial discretization we use Lagrange finite elements. That is,
for any $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$ the finite element space $\ensuremath{\mathbb{V}}(\mathcal{G})$ consists of
all continuous, piecewise polynomials of fixed degree $\ell\ge1$ over
$\mathcal{G}$ that vanish on $\partial\Omega$. This gives a
conforming discretization of $\ensuremath{\mathbb{V}}$, \hbox{i.\,e.},\xspace $\ensuremath{\mathbb{V}}(\mathcal{G})\subset\ensuremath{\mathbb{V}}$.
Moreover, Lagrange finite elements give nested spaces, \hbox{i.\,e.},\xspace
$\ensuremath{\mathbb{V}}(\mathcal{G})\subset\ensuremath{\mathbb{V}}(\mathcal{G}_+)$ whenever $\mathcal{G}\le\mathcal{G}_+$.
We denote by $\gridn[0]$ the triangulation at $t_0=0$ and for $n\ge
1$, we denote by $\gridn$ the grid in $I_n$ and let
$\ensuremath{\mathbb{V}}_n=\VG[\gridn]$, $n=0,\ldots, N$, be
the corresponding finite element spaces. For
$\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$ we denote by $\ProG\colon L^2(\Omega)\to\ensuremath{\mathbb{V}}(\mathcal{G})$
the $L^2$ projection onto $\ensuremath{\mathbb{V}}(\mathcal{G})$ and set
$\ProG[n]\mathrel{:=}\ProG[\gridn]$.
On each time interval, the discrete approximation is polynomial in time
over the corresponding spatial finite element
space. Let $s\in\ensuremath{\mathbb{N}}_0$, for any real vector space $\ensuremath{\mathbb{U}}$ and interval
$I\subset \ensuremath{\mathbb{R}}$, we denote by
\begin{align*}
\Ps\big(I,\ensuremath{\mathbb{U}}\big):=\Big\{t\mapsto \sum_{i=0}^s t^iV_i:V_i\in\ensuremath{\mathbb{U}}\Big\}
\end{align*}
the space of all polynomials with degree less or equal $s$ over
$\ensuremath{\mathbb{U}}$. We write $\Ps(\ensuremath{\mathbb{U}}):=\Ps(\ensuremath{\mathbb{R}},\ensuremath{\mathbb{U}})$ and $\Ps:=\Ps(\ensuremath{\mathbb{R}})$.
Furthermore, for an interval $I\subset (0,T)$ we let
\begin{displaymath}
f_I\in \Ps(I,L^2(\Omega))
\end{displaymath}
be the best-approximation of $f_{|I}$ in $L^2(I,L^2(\Omega))$.
In particular we use
$f_n\mathrel{:=} f_{I_n}$ as a time-discretization of $f$ on $I_n$.
For $s=0$, $f_I=\fint_I f\,{\rm d}t$ is the mean value of $f$ on $I$.
In the following, we introduce the so-called discontinuous Galerkin
time-stepping scheme $\dGs$ of degree $s$, where $\dGs[0]$
is the well-known implicit Euler scheme.
To this end, we denote for $n\ge 1$ the actual grid on $I_n$ by $\gridn$ and
let $\Vn=\ensuremath{\mathbb{V}}(\gridn)$ be the corresponding finite element space.
We start with a suitable initial refinement
$\gridn[0]$ of $\ensuremath{\grid_\textrm{init}}$ and an approximation
$\Un[0]=\Pi_0 u_0=\Pi_{\gridn[0]} u_0\in\Vn[0]$ of the initial value
$u_0$. Note that in principle, any suitable interpolation operator can
be used instead of $\Pi_0$.
We then inductively compute for $n>0$ a
solution $\UIn\in\Ps(\Vn)$ to the problem
\begin{align}\label{eq:discrete}
\begin{split}
\int_{I_n}\scp{\ensuremath{\partial _t} \UIn}{V} + \bilin{\UIn}{V}\,{\rm d}t
&+\scp{\jump{U}_{n-1}}{V(t_{n-1})}
= \int_{I_n}\scp{f_{n}}{ V} \,{\rm d}t
\end{split}
\end{align}
for all $V\in\Ps(\Vn)$. Thereby $f_n\mathrel{:=} f_{I_n}$ and $\jump{U}_{n-1}$ denotes the jump
\begin{align*}
\jump{U}_{n-1}:=U_{n-1}^+-U_{n-1}^-,
\end{align*}
of $U$ across $t_{n-1}$,
where we used $ U_{n-1}^+:=\lim_{t\downarrow
t_{n-1}}\UIn(t)$, $U_{n}^-:=\UIn(t_{n})$,
$n=1,\ldots,N$, and
$U_0^-:=U_0$.
Note that with this definition we have
$U_{n-1}^-=U(t_{n-1})$. The solution $U$ is uniquely defined
\cite{Thomee:06}
and we will see below that \eqref{eq:discrete}
is equivalent to an $(s+1)$-dimensional second
order elliptic system. Note that $U$ is
allowed to be discontinuous across the nodal points $t_0,\ldots,t_N$ and
hence in general $U\not\in\W$.
In order to construct from $U$ a conforming function,
we recall that the \dGs schemes are
closely related to Runge--Kutta RadauIIA collocation methods; see
e.g. \cite{AkMaNo:09}. The corresponding RadauIIA quadrature
formula with abscissae $c_1, \ldots, c_{s+1}$ and weights
$b_1,\ldots,b_{s+1}$ is exact of degree $2s$. In fact, we have
\begin{align}
\label{eq:RadauIIA}
\sum_{j=1}^{s+1}b_j P(c_j) =\int_0^1P(t)\,{\rm d}t\qquad \text{for all}~P\in\Ps[2s].
\end{align}
We define $\ensuremath{\mathcal{U}}\in \W$, $\ensuremath{\mathcal{U}}_{|I_n}\in\Ps[s+1](\ensuremath{\mathbb{V}})$ as the piecewise interpolation
of $U$ at the local RadauIIA points $t_n^j:=t_{n-1}+c_j\tau_n$, \hbox{i.\,e.},\xspace
\begin{subequations}\label{eq:hU}
\begin{align}\label{eq:hUa}
\ensuremath{\mathcal{U}}(t_{n}^j) &= \UIn(t_{n}^j)\in\Vn, \qquad j =1,\ldots,s+1.
\intertext{The continuous embedding of $\W$ in $C^0([0,T];L^2(\Omega))$
additionally enforces}
\label{eq:hUb}
\ensuremath{\mathcal{U}}(t_{n-1}) &= \Un[n-1]^-\in\Vn[n-1].
\end{align}
\end{subequations}
Hence $\ensuremath{\mathcal{U}}$ is uniquely defined by
\begin{align}\label{eq:UhLagrange}
\ensuremath{\mathcal{U}}_{|I_n}:= \sum_{j=0}^{s+1} L_j\big(\tfrac{t-t_{n-1}}{\tau_n}\big)\,
U(t_n^j);
\end{align}
with the Lagrange polynomials
\begin{align}\label{eq:Lagrange}
  L_j(t):= \prod_{\atop{i=0}{i\neq j}}^{s+1}
  \frac{t-c_i}{c_j-c_i}\in\Ps[s+1],\qquad j=0,\ldots,s+1
\end{align}
and $c_0:=0$.
Using integration by parts with
respect to time, \eqref{eq:RadauIIA}, and
\eqref{eq:hU}, we observe that \eqref{eq:discrete} is equivalent to
\begin{align}\label{eq:discreteb}
\int_{I_n}\scp{\ensuremath{\partial _t} \ensuremath{\mathcal{U}}}{V} + \bilin{U}{V}\,{\rm d}t
&
= \int_{I_n}\scp{f_{n}}{ V} \,{\rm d}t
\end{align}
for all $n=1,\ldots,N$ and $V\in\Ps(\Vn)$.
We emphasize that $\ensuremath{\mathcal{U}}(t)$ is a finite element function,
since for $t\in I_n$, we have $\ensuremath{\mathcal{U}}(t)\in\ensuremath{\mathbb{V}}(\gridn[n-1]\oplus\gridn)\subset\ensuremath{\mathbb{V}}$,
where $\gridn[n-1]\oplus\gridn$
is the smallest common refinement of $\gridn[n-1]$ and $\gridn$, which we
call \emph{overlay}.
Continuity of $\ensuremath{\mathcal{U}}$ in time, in combination with $\ensuremath{\mathcal{U}}(t)\in\ensuremath{\mathbb{V}}$ for all
$t\in I$ then implies $\ensuremath{\mathcal{U}}\in\W$.
\begin{rem}
For $s=0$ we see from~\eqref{eq:discrete} that in each time-step
  $n\in\ensuremath{\mathbb{N}}$, we need to solve problems with partial differential operators of the form $-\Delta+\mu$ with
  $\mu=\frac1{\tau_n}$ in order to
  compute $U_n$. Unfortunately, for $s>0$, though still coercive,
  \eqref{eq:discrete} becomes an $(s+1)$-dimensional coupled non-symmetric
  system. Recently, in~\cite{Smears:15} a PCG method for a
symmetrisation of~\eqref{eq:discrete} is proposed, which is fully
robust with respect to the discretisation parameters
$s$ and $\tau$, provided a solver for the
operator $-\Delta+\mu$, $\mu\ge0$ is available.
\end{rem}
\section{A Posteriori Error Estimation\label{s:errest}}
One basic ingredient of adaptive methods is a posteriori error
indicators that build up a reliable upper bound for the error in terms
of the discrete solution and given data. The \dGs[0] method
corresponds to the implicit Euler scheme and residual based
estimators for the heat equation can be found in
\cite{Verfurth:03}. In this section we generalize this result
and prove reliable and efficient residual based estimators for
\dGs schemes \eqref{eq:discrete}, with arbitrary $s\in\ensuremath{\mathbb{N}}_0$.
Some arguments in this section are straightforward
generalizations of those
in \cite{Verfurth:03} and we only sketch their proofs; others are
based on new ideas and we therefore prove them in detail.
\subsection{Equivalence of Error and Residual\label{s:err=res}}
In order to prove residual based error estimators, one first has to
relate the error to the residual. To this end we
note that \eqref{eq:weak} can be taken as an operator equation in
$L^2(0,T;\ensuremath{\mathbb{V}}^*)\times L^2(\Omega)$. Its residual $\Res(\ensuremath{\mathcal{U}})$ in $\ensuremath{\mathcal{U}}\in\W$ is given by
\begin{align}\label{eq:Res}
\begin{split}
\scp[]{\Res(\ensuremath{\mathcal{U}})}{v}&=\scp[]{\ensuremath{\partial _t}(u-\ensuremath{\mathcal{U}})}{v}+\bilin{u-\ensuremath{\mathcal{U}}}{v}
\\
&=\scp{f-\ensuremath{\partial _t} \ensuremath{\mathcal{U}}}{v}-\bilin{\ensuremath{\mathcal{U}}}{v}\qquad\qquad\text{for all
$v\in\ensuremath{\mathbb{V}}$.}
\end{split}
\end{align}
From \cite{TantardiniVeeser:16b}, we have the following identity
between the residual and the error.
\begin{prop}[Abstract Error Bound]\label{p:err=res}
Let $u\in\W$ be the solution of \eqref{eq:weak} and let
$\ensuremath{\mathcal{U}}\in\W$ be the approximation defined in \eqref{eq:hU} for
time instances $t_0=0,\ldots,t_N=T$ and time-step sizes
  $\tau_n:=t_{n}-t_{n-1}$, $n=1,\ldots,N$. Then it holds that
\begin{subequations}\label{eq:err=res}
\begin{align}\label{eq:err<res}
\norm[\W]{u-\ensuremath{\mathcal{U}}}^2&= \norm{u_0-U_0}^2 +
\norm[{L^2(0,T;\ensuremath{\mathbb{V}}^*)}]{\Res(\ensuremath{\mathcal{U}})}^2
\intertext{and}\label{eq:res<err}
\norm[L^2(I_n,\ensuremath{\mathbb{V}}^*)]{\Res(\ensuremath{\mathcal{U}})}^2&\le 2\big\{
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\ensuremath{\partial _t}(u-\ensuremath{\mathcal{U}})}^2+\norm[L^2(I_n;\ensuremath{\mathbb{V}})]{u-\ensuremath{\mathcal{U}}}^2\big\}.
\end{align}
\end{subequations}
\end{prop}
The rest of
this section concentrates on proving computable upper and lower bounds for
the error. We note that the initial error $\norm{u_0-U_0}$
in \eqref{eq:err=res} is already a posteriori computable, whence
it remains to estimate the dual norm of the residual. However, there is
another issue of separating the influence of the temporal and the spatial
discretization to the error.
In particular, defining the
temporal residual $\ensuremath{\Res_\tau}(\ensuremath{\mathcal{U}}) \in L^2(0,T;\ensuremath{\mathbb{V}}^*)$ as
\begin{align}
\label{df:rest}
\scp[]{\ensuremath{\Res_\tau}(\ensuremath{\mathcal{U}})}{v}&\mathrel{:=}\bilin{U-\ensuremath{\mathcal{U}}}{v}
\intertext{and the spatial residual $\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})\in L^2(0,T;\ensuremath{\mathbb{V}}^*)$
as}
\label{df:resh}
\scp[]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}{v}&\mathrel{:=}\scp[]{f_n-\ensuremath{\partial _t}\ensuremath{\mathcal{U}}}{v}-\bilin{U}{v}
\qquad\text{on}\quad I_n,
\end{align}
we obtain
\begin{align}\label{eq:resdeco}
\Res(\ensuremath{\mathcal{U}})= \ensuremath{\Res_\tau}(\ensuremath{\mathcal{U}})+\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})+f-f_n \qquad\text{on}\quad I_n.
\end{align}
In what follows we use this decomposition to prove separated time and
space error indicators, which build up a reliable and efficient bound
for the error.
\subsection{Temporal Residual\label{s:rest}}
Recalling the definition of the Lagrange polynomials
\eqref{eq:Lagrange}, we have the local unique representation
\begin{align*}
\UIn(t)= U_{n-1}^+L_0(\tfrac{t-t_{n-1}}{\tau_n})+\sum_{j=1}^{s+1}U(t_{n}^j)
\,L_j(\tfrac{t-t_{n-1}}{\tau_n})
\in\Ps(\ensuremath{\mathbb{V}}_n)
\end{align*}
for all $t\in I_n$. Hence, by \eqref{eq:UhLagrange} we get
\begin{align*}
\ensuremath{\mathcal{U}}(t)-U(t)&=(U_{n-1}^--U_{n-1}^+)\,L_0(\tfrac{t-t_{n-1}}{\tau_n})
\end{align*}
and thanks to~\eqref{eq:hU} and \eqref{eq:RadauIIA}, we obtain
\begin{align}\label{eq:rest}
\begin{split}
\int_{I_n}\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_\tau}(\ensuremath{\mathcal{U}})}^2\,{\rm d}t&=
\int_{I_n} \Enorm{U- \ensuremath{\mathcal{U}}}^2 \,{\rm d}t
=\Enorm{ U_{n-1}^--
U_{n-1}^+}^2\int_{I_n}|L_0(\tfrac{t-t_{n-1}}{\tau_n})|^2\,{\rm d}t
\\
&=\tau_n\, C_\tau\,\Enorm{U_{n-1}^-- U_{n-1}^+}^2,
\end{split}
\end{align}
where
$C_\tau=C_\tau(s):=\int_{0}^1|L_0(t)|^2\,{\rm d}t$.
\begin{rem}\label{r:Ct}
Observing that the RadauIIA abscissae are the roots of the
polynomial $\lambda_{s}(2t-1)-\lambda_{s+1}(2t-1)$ and
$\lambda_{s}(-1)=(-1)^{s}$,
with the Legendre polynomials $\lambda_n$, $n\in\ensuremath{\mathbb{N}}_0$, it follows
that we have the representation
$$L_0(t)=\frac{(-1)^s}2(\lambda_{s}(2t-1)-\lambda_{s+1}(2t-1))$$ and it
can be easily shown that
$C_\tau=\frac14(\frac{1}{2s+3}+\frac1{2s+1})$.
\end{rem}
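The constants in Remark~\ref{r:Ct} are easy to double-check
numerically; the following Python fragment (a plain sketch using only
\texttt{numpy}) evaluates $C_\tau=\int_0^1|L_0(t)|^2\,{\rm d}t$ via the
Legendre representation of $L_0$ and compares it with the closed form:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as leg

def C_tau(s):
    # L_0(t) = (-1)^s/2 * (P_s(2t-1) - P_{s+1}(2t-1)); integrate |L_0|^2
    # over (0,1) by Gauss-Legendre quadrature of sufficient order.
    x, w = leg.leggauss(2 * s + 4)        # nodes/weights on (-1,1)
    t = 0.5 * (x + 1.0)                   # map to (0,1)
    c = np.zeros(s + 2); c[s], c[s + 1] = 1.0, -1.0
    L0 = 0.5 * (-1) ** s * leg.legval(2.0 * t - 1.0, c)
    return 0.5 * np.sum(w * L0 ** 2)      # factor 1/2 from dt = dx/2

for s in range(4):
    closed = 0.25 * (1.0 / (2 * s + 3) + 1.0 / (2 * s + 1))
    print(s, C_tau(s), closed)            # the two columns agree
\end{verbatim}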
\subsection{The Spatial Residual\label{s:Resh}}
In this section we estimate the spatial residual.
\begin{lem}\label{l:UpSpat}
Let $U$ be the approximation of \eqref{eq:discrete} to the solution $u$
of \eqref{eq:weak} and let $\ensuremath{\mathcal{U}}$ be its interpolation defined by
\eqref{eq:hU}. Then there exists a constant $C_\ensuremath{\mathbb{G}}\xspace>0$, such that
\begin{align*}
\int_{I_n}\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2\,{\rm d}t&\le C_\ensuremath{\mathbb{G}}\xspace \sum_{\ensuremath{E}\xspace\in\mathcal{G}_n}\int_{I_n}
h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{\ensuremath{\partial _t} \ensuremath{\mathcal{U}}+\L U-f_n}^2+h_\ensuremath{E}\xspace\norm[\partial E]{J(U)}^2\,{\rm d}t
\end{align*}
for all $1\le n\le N$. Thereby, for $V\in\Vn$ we denote by $J(V)|_S$ for an interior side $\ensuremath{S}\xspace$
the jump of the normal flux $\Am\nabla V\cdot\ensuremath{\vec{n}}$ across $\ensuremath{S}\xspace$
and for boundary sides $\ensuremath{S}\xspace$ we set $J(V)|_S\equiv0$.
The mesh-size of an element
$\ensuremath{E}\xspace\in\mathcal{G}$ is given by $h_\ensuremath{E}\xspace\mathrel{:=} |\ensuremath{E}\xspace|^{1/d}$.
\end{lem}
\begin{proof}
Recalling \eqref{df:resh},
we first observe that
$\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2\in\Ps[2s]$,
whence by \eqref{eq:RadauIIA} we have
\begin{align*}
    \int_{I_n} \norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2\,{\rm d}t = \tau_n \sum_{j=1}^{s+1}b_j
\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})(t_{n}^j)}^2.
\end{align*}
Therefore, it suffices to estimate $\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2$
at the abscissae of the RadauIIA quadrature formula. For arbitrary
$V_j\in\ensuremath{\mathbb{V}}_n$, $j=1,\ldots,s+1$ choose
$V\in\Ps(\ensuremath{\mathbb{V}}_n)$ in \eqref{eq:discreteb} such that
  $V(t_{n-1}+c_i\tau_n)=V_j\delta_{ij}$, $1\le i\le s+1$.
Then exploiting again \eqref{eq:RadauIIA} yields the Galerkin orthogonality
\begin{align*}
\scp[]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})(t_n^j)}{V_j}&= 0 \qquad j=1,\ldots,s+1.
\end{align*}
Since $V_j\in\Vn$ was arbitrary, we have for any $v\in\ensuremath{\mathbb{V}}$, that
\begin{align*}
\scp[]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})(t_n^j)}{v}&=\scp[]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})(t^j_n)}{v-V}\qquad \text{for
all } V\in\ensuremath{\mathbb{V}}_n.
\end{align*}
Using integration by parts with respect to the space variable,
the Cauchy-Schwarz inequality, the scaled trace inequality, and choosing $V$ as
a suitable interpolation of $v$, we arrive at
\begin{align*}
\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})(t^j_n)}^2 \le C_\ensuremath{\mathbb{G}}\xspace \sum_{\ensuremath{E}\xspace\in\mathcal{G}_n}\Big\{&
h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{(\ensuremath{\partial _t} \ensuremath{\mathcal{U}}+\L U-f_n) (t^j_n)}^2
+h_\ensuremath{E}\xspace\norm[\partial E]{J(U) (t^j_n)}^2\Big\}.
\end{align*}
The right hand side is a pointwise evaluation of a polynomial of
degree $2s$ and thus the claimed upper bound follows from~\eqref{eq:RadauIIA}.
\end{proof}
The following result shows that the spatial indicators are locally
efficient as well.
\begin{lem}\label{l:LowSpat}
Under the conditions of Lemma \ref{l:UpSpat}, we have
\begin{align*}
\sum_{\ensuremath{E}\xspace\in\mathcal{G}_n}\int_{I_n}
h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{\ensuremath{\partial _t} \ensuremath{\mathcal{U}}+\L U-f_n}^2&+h_\ensuremath{E}\xspace\norm[\partial
E]{J(U)}^2\,{\rm d}t
\\
&\le C \big\{ \int_{I_n}\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2+\oscG[\mathcal{G}_n]^2(f_n,\ensuremath{\mathcal{U}})\,{\rm d}t\Big\},
\end{align*}
where
\begin{align*}
\oscG[\mathcal{G}_n]^2(f_n, \ensuremath{\mathcal{U}}):=\sum_{\ensuremath{E}\xspace\in\mathcal{G}_n}
h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{\ensuremath{\partial _t} \ensuremath{\mathcal{U}}+\L U-f_n-R_\ensuremath{E}\xspace }^2
+ h_\ensuremath{E}\xspace\norm[\partial E]{J(U)-J_E}^2
\end{align*}
  where, for each time $t\in I_n$, $R_\ensuremath{E}\xspace(t)\in \Ps[2\ell-2](E)$
  and $J_E(t)_{|\ensuremath{S}\xspace}\in \Ps[2\ell-1](\ensuremath{S}\xspace)$, for each side
  $\ensuremath{S}\xspace\subset \partial\ensuremath{E}\xspace$, denote the pointwise $L^2(\Omega)$-best
  approximations. The constant $C>0$ depends solely on
  the shape regularity of $\ensuremath{\mathbb{G}}\xspace$.
\end{lem}
\begin{proof}
With the same arguments as in the proof of Lemma \ref{l:UpSpat},
for each $1\le j\le s+1$ it suffices to
prove that
\begin{align*}
\begin{split}
C_\ensuremath{\mathbb{G}}\xspace\sum_{E\in\mathcal{G}_n}h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{(\ensuremath{\partial _t} \ensuremath{\mathcal{U}}+\L U-f_n)(t^j_n)}^2
&+h_\ensuremath{E}\xspace\norm[\partial E]{J(U)(t^j_n)}^2
\\
&\leq
C\big\{\norm[\ensuremath{\mathbb{V}}^*]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})(t^j_n)}^2+\oscG[\mathcal{G}_n]^2(f_n,U)(t^j_n)\big\}
\end{split}
\end{align*}
  This, however, follows with standard
techniques used in a posteriori estimation of elliptic second order
problems; see e.g. \cite{Verfuerth:2013,MekchayNochetto:05} and
compare with the case of the implicit Euler scheme $s=0$ in \cite{Verfurth:03}.
\end{proof}
\subsection{Estimation of the Error\label{s:err_est}}
By means of the decomposition of the residual \eqref{eq:resdeco}, we can
combine the above results to obtain a reliable and
efficient error estimator for \eqref{eq:strong}. To this end, we
introduce the following error indicators for the sake of brevity of presentation:
For $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$ and $v\in\ensuremath{\mathbb{V}}$, the estimator for the initial value is
given by
\begin{subequations}\label{eq:indicators}
\begin{align}
\label{df:inest}
\Einit:=\norm{v-\mathcal{I}_\mathcal{G} v}^2
\end{align}
For $f\in
L^2(0,T;L^2(\Omega))$, $t_\star\in (0,T)$ and $I=(t_\star,t_\star+\tau]\subset (t_\star,T]$,
  the so-called consistency
  error, which is inherited from the decomposition of the residual
  \eqref{eq:resdeco}, is defined by
\begin{align}
\label{df:fest}
\Econs[f,t_\star,\tau]&:=3\inf_{\bar f \in
\Ps(L^2(\Omega))}\int_{I}\norm{f-\bar f}^2\,{\rm d}t.
\intertext{For $v^-,v^+\in\ensuremath{\mathbb{V}}$,
$\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$, $V\in\Ps(\VG)$, $\ensuremath{E}\xspace\in\mathcal{G}$, and
$g\in\Ps(L^2(\Omega))$ the indicator
}
\label{df:test}
\Etc[v^+,v^-,\tau]&:=\tau\, 3\, C_{\tau}\, \Enorm{v^--
v^+}^2
\intertext{is motivated by
\eqref{eq:rest} and Lemma \ref{l:UpSpat} suggests to define the
spatial indicators by}
\label{df:sest}
\begin{split}
\Espace[V,v^-,t_\star,\tau,g,\mathcal{G},\ensuremath{E}\xspace]&
:=3\,C_\ensuremath{\mathbb{G}}\xspace\int_{I}
h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{\ensuremath{\partial _t} \mathcal{V}+\L V-g}^2
+h_\ensuremath{E}\xspace\norm[\partial E]{J(V)}^2\,{\rm d}t
\\
&=3\,C_\ensuremath{\mathbb{G}}\xspace\,\tau\,\sum_{j=1}^{s+1}b_j\Big\{
h_\ensuremath{E}\xspace^2\norm[\ensuremath{E}\xspace]{(\ensuremath{\partial _t} \mathcal{V}+\L V-g)(t_\star+c_j\tau)}^2
\\
&\phantom{=3\,C_\ensuremath{\mathbb{G}}\xspace\,\tau\,\sum_{j=1}^{s+1}b_j\Big\{ }+h_\ensuremath{E}\xspace\norm[\partial E]{J(V)(t_\star+c_j\tau)}^2\Big\}.
\end{split}
\end{align}
\end{subequations}
Here we have used, analogously to \eqref{eq:UhLagrange}, that
\begin{align}\label{eq:Vh}
\mathcal{V}(t):= \sum_{j=1}^{s+1} L_j\big(\tfrac{t-t_\star}{\tau}\big)\,
V(t_\star+c_j\tau)+L_0\big(\tfrac{t-t_\star}{\tau}\big) v^-\in\Ps[s+1](\ensuremath{\mathbb{V}}).
\end{align}
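For illustration, consider $s=1$: the Radau~IIA points are
$c_1=\tfrac13$ and $c_2=1$, and the reconstruction employs the
Lagrange basis with respect to the nodes $0,c_1,c_2$. The following
short \texttt{sympy} sketch (a cross-check only, not part of the
analysis) verifies the basic properties of this basis:
\begin{verbatim}
# Lagrange basis for the dG(1) reconstruction: nodes 0, 1/3, 1,
# where 0 carries v^- and 1/3, 1 are the Radau IIA points
import sympy as sp

t = sp.symbols('t')
nodes = [sp.Integer(0), sp.Rational(1, 3), sp.Integer(1)]
L = []
for n in nodes:
    Ln = sp.Integer(1)
    for c in nodes:
        if c != n:
            Ln *= (t - c) / (n - c)
    L.append(sp.expand(Ln))

assert sp.simplify(sum(L)) == 1                  # partition of unity
assert all(L[i].subs(t, nodes[j]) == (1 if i == j else 0)
           for i in range(3) for j in range(3))  # nodal property
\end{verbatim}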
\begin{prop}[Upper Bound]\label{p:upper}
Let $u\in\W$ be the solution of \eqref{eq:weak} and let
$\ensuremath{\mathcal{U}}\in\W$ be the approximation defined in \eqref{eq:hU} for
time instances $t_0=0,\ldots,t_N=T$ and time-step sizes
$\tau_n:=t_{n}-t_{n-1}$, $n=1,\ldots,N$. Then we have the estimate
\begin{align*}
\norm[\W]{u-\ensuremath{\mathcal{U}}}^2&\le \Einit[u_0,\mathcal{G}_0]+\sum_{n=1}^N\Big\{
\Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]
\\
&\phantom{\le \Einit[u_0,\mathcal{G}_0]+\sum_{n=1}^N\Big\{
} +
\Espace[U,U_{n-1}^-,\tn,\tau_n,f_n,\mathcal{G}_n]+\Econs[f,t_{n-1},\tau_n]\Big\}.
\end{align*}
\end{prop}
\begin{proof}
By the decomposition of the residual \eqref{eq:resdeco} and the triangle inequality, we
estimate on each interval $I_n$, $n=1,\ldots,N$
\begin{align*}
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\Res(\ensuremath{\mathcal{U}})}^2&\leq 3
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\ensuremath{\Res_\tau}(\ensuremath{\mathcal{U}})}^2
+3\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2
\\
&\qquad
+3 \norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{f-f_n}^2.
\end{align*}
Now the assertion follows by Proposition \ref{p:err=res},
\eqref{eq:rest}, and Lemma \ref{l:UpSpat}.
\end{proof}
\begin{prop}[Lower Bound]\label{p:lower}
Supposing the conditions of Proposition \ref{p:upper}, we have
\begin{align*}
\Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]&+
\Espace[U,U_{n-1}^-,\tn,\tau_n,f_n,\mathcal{G}_n]
\\
&\qquad\leq C\,\Big\{
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\ensuremath{\partial _t}(u-\ensuremath{\mathcal{U}})}^2+\norm[L^2(I_n;\ensuremath{\mathbb{V}})]{u-\ensuremath{\mathcal{U}}}^2
\\
&\qquad\qquad\quad+\int_{I_n}\oscG[\mathcal{G}_n]^2(f_n, \ensuremath{\mathcal{U}})\,{\rm d}t+\Econs[f,t_{n-1},\tau_n]\Big\},
\end{align*}
where the constant $C$ depends solely on the shape regularity of
$\ensuremath{\mathbb{G}}\xspace$ and on $s$.
\end{prop}
\begin{proof}
We first consider the spatial indicators. By Lemma \ref{l:LowSpat}
there exists $C>0$, such that
\begin{align*}
\Espace[U,U_{n-1}^-,\tn,\tau_n,f_n,\mathcal{G}_n]\le
C
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}^2+C\int_{I_n}\oscG[\mathcal{G}_n]^2(f_n,
\ensuremath{\mathcal{U}})\,{\rm d}t.
\end{align*}
The first term on the right hand side can be further estimated using
the decomposition of the residual, the triangle inequality, and
\eqref{eq:rest} to obtain
\begin{align}\label{eq:3}
\begin{split}
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}
&\le\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\Res(\ensuremath{\mathcal{U}})}+
\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{f-f_n}
\\
&\quad + \Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]^{1/2}.
\end{split}
\end{align}
It remains to bound the temporal estimator. To this end, we
introduce a nontrivial auxiliary function $\alpha\in\Ps[2s+2]$ such
that $\alpha\perp\Ps[2s+1]$ and
\begin{align*}
\int_0^1L_0^2(t)\,\alpha(t)\,{\rm d}t = 1,
\end{align*}
which is possible since $L_0^2\in\Ps[2s+2]\setminus \Ps[2s+1]$.
Recalling \eqref{eq:rest}, \eqref{df:rest}, and \eqref{eq:resdeco},
we have for $\alpha_n(t)\mathrel{:=}
\alpha\big(\tfrac{t-t_{n-1}}{\tau_n}\big)$ that
\begin{align*}
\Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]&
= C_\tau\Enorm{U_{n-1}^+-U_{n-1}^-}^2\int_{I_n}
L_0^2\big(\tfrac{t-t_{n-1}}{\tau_n}\big)\,\alpha_n(t)\,{\rm d}t
\\
&=C_\tau\int_{I_n}\alpha_n\,\scp[]{\Res(\ensuremath{\mathcal{U}})}{U-\ensuremath{\mathcal{U}}}
-\alpha_n\,\scp{f-f_n}{U-\ensuremath{\mathcal{U}}}\,{\rm d}t
\\
&\quad-C_\tau
\int_{I_n}\alpha_n\,\scp[]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}{U-\ensuremath{\mathcal{U}}}\,{\rm d}t.
\end{align*}
The last term vanishes since
$\Scp[]{\ensuremath{\Res_h}(\ensuremath{\mathcal{U}})}{U-\ensuremath{\mathcal{U}}}\in\Ps[2s+1]$. Using the Cauchy--Schwarz and Young inequalities,
we can hence estimate
\begin{align*}
\Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]\leq
2C_\tau\norm[L^\infty(0,1)]{\alpha}^2\,
\Big\{\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\Res(\ensuremath{\mathcal{U}})}^2+\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{f-f_n}^2\Big\}.
\end{align*}
Combining this with \eqref{eq:3}, we arrive at
\begin{align}\label{eq:4}
\begin{split}
\Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]&+
\Espace[U,U_{n-1}^-,t,\tau_n,f_n,\mathcal{G}_n]
\\
&\le \Big(C
\big(1+2\norm[L^\infty(0,1)]{\alpha}^2C_\tau\big)+2\norm[L^\infty(0,1)]{\alpha}^2C_\tau\Big)
\\
&\qquad \Big\{ \norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{\Res(\ensuremath{\mathcal{U}})}^2+\norm[L^2(I_n;\ensuremath{\mathbb{V}}^*)]{f-f_n}^2\Big\}+C\int_{I_n}\oscG[\mathcal{G}_n]^2(f_n,\ensuremath{\mathcal{U}})\,{\rm d}t.
\end{split}
\end{align}
Together with Proposition \ref{p:err=res} this is the desired estimate.
\end{proof}
\begin{rem}[Implicit Euler]
We emphasize that the proof of the lower bound Proposition
\ref{p:lower} is slightly different from the one in
\cite{Verfurth:03} and yields different constants also for
the \dGs[0] scheme. To see
this, we observe that the definition of $\alpha$ implies for $s=0$
that
\begin{align*}
\alpha(t)=30(6t^2-6t+1),\qquad\text{whence}\qquad
\norm[L^\infty(0,1)]{\alpha}= 30.
\end{align*}
Therefore,
we conclude for the constant in~\eqref{eq:4} with
$C_\tau=\frac13$ from Remark~\ref{r:Ct}, that
\begin{align*}
C
\big(1+2\norm[L^\infty(0,1)]{\alpha}^2C_\tau\big)+2\norm[L^\infty(0,1)]{\alpha}^2
C_\tau=
601\,C +600,
\end{align*}
where $C$ is the constant in the estimate of Lemma~\ref{l:LowSpat}.
In contrast to this, the techniques used in \cite{Verfurth:03} for the
implicit Euler scheme yield the
constant
\begin{align*}
\big(1+7C_\ensuremath{\mathbb{G}}\xspace^{1/2}\big)^2\,C_\ensuremath{\mathbb{G}}\xspace^{1/2}\, C^{3/2}\, 12^2.
\end{align*}
\end{rem}
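The computation in the preceding remark is easily cross-checked
symbolically; the following \texttt{sympy} snippet (a verification
aid only) confirms the stated properties of $\alpha$ for $s=0$, where
$L_0(t)=1-t$:
\begin{verbatim}
# alpha in P_2 with alpha orthogonal to P_1 on (0,1) and
# int_0^1 L_0(t)^2 alpha(t) dt = 1 for L_0(t) = 1 - t
import sympy as sp

t = sp.symbols('t')
alpha = 30 * (6*t**2 - 6*t + 1)
L0 = 1 - t

assert sp.integrate(alpha, (t, 0, 1)) == 0           # orthogonal to 1
assert sp.integrate(t * alpha, (t, 0, 1)) == 0       # orthogonal to t
assert sp.integrate(L0**2 * alpha, (t, 0, 1)) == 1   # normalisation
# the sup norm on [0,1] is attained at the endpoints
assert max(abs(alpha.subs(t, x))
           for x in (0, sp.Rational(1, 2), 1)) == 30
\end{verbatim}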
\begin{rem}[Elliptic Problem]\label{r:elliptic}
In the case of the implicit Euler scheme \dGs[0], it is well known that
in each time-step $1\le n\le N$, $\UIn\in \Ps[0](\Vn)=\Vn$ is the
Ritz approximation to a coercive elliptic problem. Moreover, the
spatial estimators \eqref{df:sest} are the standard residual based
estimators for this elliptic problem. This observation transfers to
the \dGs scheme for $s\ge1$. To see this, we observe that (after
transformation to the unit interval) \eqref{eq:discrete} is a
Galerkin approximation to the solution $u_\tau\in\Ps(\ensuremath{\mathbb{V}})$ of a
problem of the kind
\begin{align}\label{eq:elliptic}
\begin{split}
\int_0^1\frac1\tau\scp{\ensuremath{\partial _t} u_\tau}{v} + \bilin{u_\tau}{v}\,{\rm d}t
&+\frac1\tau\scp{u_\tau(0)}{v(0)}
\\
&= \int_0^1\scp{\bar f}{v} \,{\rm d}t +\frac1\tau\scp{v^-}{v(0)}
\end{split}
\end{align}
for all $v\in\Ps(\ensuremath{\mathbb{V}})$ and some data $\bar f\in\Ps(L^2(\Omega))$,
$v^-\in L^2(\Omega)$, and $\tau>0$. The mappings $v\mapsto v(0)$ and
$v\mapsto\ensuremath{\partial _t} v$ are linear and continuous on $\Ps(\ensuremath{\mathbb{V}})$, whence this
equation can be taken as a vector-valued
linear variational problem of second
order on $\ensuremath{\mathbb{V}}^{s+1}$. Testing with $v=u_\tau$
proves coercivity
\begin{align*}
\begin{split}
\int_0^1\frac1\tau\scp{\ensuremath{\partial _t} u_\tau}{u_\tau} + \bilin{u_\tau}{u_\tau}\,{\rm d}t
&+\frac1\tau\scp{u_\tau(0)}{u_\tau(0)}
\\
&=\frac1{2\tau}\norm{u_\tau(0)}^2+\frac1{2\tau}\norm{u_\tau(1)}^2+
\int_0^1\Enorm{u_\tau}^2\,{\rm d}t.
\end{split}
\end{align*}
Obviously, its residual in $V\in\Ps(\ensuremath{\mathbb{V}})$ is given by
\begin{align*}
\scp[]{\ensuremath{\Res_h}(\mathcal{V})}{v}&=\scp[]{\bar
f-\ensuremath{\partial _t}\mathcal{V}}{v}-\bilin{V}{v},\quad v\in\Ps(\ensuremath{\mathbb{V}}),
\end{align*}
where $\mathcal{V}\in\Ps[s+1](\ensuremath{\mathbb{V}})$ is such that
$\mathcal{V}(c_j)=V(c_j)$, $j=1,\ldots, s+1$, and
$\mathcal{V}(0)=v^-$; compare with~\eqref{eq:hU}.
Thanks to Lemmas \ref{l:UpSpat} and \ref{l:LowSpat}, for
$V\in\Ps(\VG)$, $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$, the standard
residual based estimator for this problem is given by
$\Espace[V,v^-,0,\tau,\bar f,\mathcal{G}]$.
\end{rem}
\paragraph{Energy Estimation}
We shall now generalise the energy estimate from \cite{KrMoScSi:12} to
higher order $\dGs$ schemes.
\begin{prop}[Uniform global energy estimate]
\label{prop:uniform_bound}
Assume $N\in\ensuremath{\mathbb{N}}\cup\{\infty\}$ and arbitrary time instances
$0=t_0<\cdots<t_N\le T$ with time-step sizes
$\tau_1,\ldots,\tau_N>0$.
Let $U_0=\ProG[0]u_0$ and for $1\le n\le N$ let $\UIn\in\Ps(\Vn)$
be the discrete solutions to \eqref{eq:discrete} and let $\ensuremath{\mathcal{U}}\in\W$
as defined in \eqref{eq:hU}.
Then for any $m=1,\dots,N$ we have
\begin{gather*}
\sum_{n=1}^m \int_{I_n}\norm{\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}}^2\,{\rm d}t +
\enorm{U_{n-1}^+-\ProG[n]U_{n-1}^-}^2
+\enorm{U_{n}^-}^2 -\enorm{\ProG[n]U_{n-1}^-}^2
\le \sum_{n=1}^m\int_{I_n}\norm{f_n}^2\,{\rm d}t.
\end{gather*}
\end{prop}
\begin{proof}
We choose $V\mathrel{:=}\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}_{|I_n}\in\Ps(\Vn)$ as a test function in
\eqref{eq:discreteb} obtaining
\begin{align}\label{eq:1}
\int_{I_n}\norm{\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}}^2 + \bilin{
U}{\ProG[n] \ensuremath{\partial _t}\ensuremath{\mathcal{U}}}\,{\rm d}t = \int_{I_n} \scp{f_{n}}{\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}}\,{\rm d}t.
\end{align}
In order to analyse the second term on the left hand side, we first observe
that
$\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}_{|I_n}=\ensuremath{\partial _t}\ProG[n]\ensuremath{\mathcal{U}}_{|I_n}\in\Ps(\Vn)$. Recalling
\eqref{eq:hUb} and
that $\mathcal{B}:\ensuremath{\mathbb{V}}\times\ensuremath{\mathbb{V}}\to\ensuremath{\mathbb{R}}$ is constant in time, we obtain integrating by
parts, that
\begin{align*}
\int_{I_n} \bilin{U}{\ProG[n] \ensuremath{\partial _t}\ensuremath{\mathcal{U}}}\,{\rm d}t&=
- \int_{I_n} \bilin{\ensuremath{\partial _t} U}{\ProG[n]\ensuremath{\mathcal{U}}}\,{\rm d}t +
\Enorm{\Un^-}^2-\bilin{\Un[n-1]^+}{\ProG[n]\Un[n-1]^-}.
\intertext{Since $\bilin{\ensuremath{\partial _t} U}{\ProG[n]\ensuremath{\mathcal{U}}}_{|I_n}\in\Ps[2s]$, we can apply
\eqref{eq:RadauIIA} and conclude with \eqref{eq:hUa} that}
\int_{I_n} \bilin{U}{\ProG[n] \ensuremath{\partial _t}\ensuremath{\mathcal{U}}}\,{\rm d}t&=
- \int_{I_n} \bilin{\ensuremath{\partial _t} U}{U}\,{\rm d}t +
\Enorm{\Un^-}^2-\bilin{\Un[n-1]^+}{\ProG[n]\Un[n-1]^-}
\\
&=
\frac12\Enorm{\Un[n-1]^+-\ProG[n]\Un[n-1]^-}^2
- \frac12\Enorm{ \ProG[n]\Un[n-1]^-}^2+ \frac12\Enorm{ \Un^-}^2,
\end{align*}
where we used that $\bilin{\ensuremath{\partial _t} \UIn}{\UIn}= \frac12\ensuremath{\partial _t} \Enorm{\UIn}^2$.
Inserting this in \eqref{eq:1} yields
\begin{multline*}
\int_{I_n}\norm{\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}}^2\,{\rm d}t+ \frac12\Enorm{\Un[n-1]^+-\ProG[n]\Un[n-1]^-}^2
- \frac12\Enorm{ \ProG[n]\Un[n-1]^-}^2+ \frac12\Enorm{ \Un^-}^2
\\
= \int_{I_n} \scp{f_{n}}{\ProG[n]\ensuremath{\partial _t}\ensuremath{\mathcal{U}}}\,{\rm d}t.
\end{multline*}
Estimating the right hand side with the help of the Cauchy-Schwarz
and the Young
inequality proves the assertion.
\end{proof}
\begin{cor}
\label{cor:uniform_bound}
Under the conditions of Proposition \ref{prop:uniform_bound}, assume that
\begin{align}\label{cond:energ-cont}
\enorm{\Un[n-1]^-}^2-\enorm{\ProG[n]\Un[n-1]^-}^2 +
\frac12\int_{I_n} \norm{\ProG[n]\ensuremath{\partial _t} \ensuremath{\mathcal{U}}}^2\,{\rm d}t \ge 0\quad\text{ for}\quad
n=1,\dots,N.
\end{align}
Then we have the estimate
\begin{align*}
\sum_{n=1}^m \frac12\int_{I_n} \norm{\ProG[n]\ensuremath{\partial _t} \ensuremath{\mathcal{U}}}^2\,{\rm d}t +
\enorm{\Un[n-1]^+-\ProG[n]\Un[n-1]^-}^2
\le \norm[\Omega\times(0,t_m)]{f}^2 +
\enorm{ U_0}^2-\enorm{\Un[m]^-}^2.
\end{align*}
In particular, the series $\sum_{n=1}^N \enorm{\Un[n-1]^+-\ProG[n]\Un[n-1]^-}^2
$ is uniformly bounded irrespective of the
sequence of time-step-sizes used.
\end{cor}
\begin{proof}
Summing up the nonnegative terms in \eqref{cond:energ-cont} yields
\begin{gather*}
0\le \sum_{n=1}^{m}\enorm{\Un[n-1]^-}^2-\enorm{\ProG[n]\Un[n-1]^-}^2 +
\frac12\int_{I_n} \norm{\ProG[n]\ensuremath{\partial _t} \ensuremath{\mathcal{U}}}^2\,{\rm d}t,
\end{gather*}
which is equivalent to
\begin{gather*}
\enorm{\Un[m]^-}^2 - \enorm{U_0}^2\le\sum_{n=1}^{m}\enorm{\Un^-}^2-\enorm{\ProG[n]\Un[n-1]^-}^2 +
\frac12\int_{I_n} \norm{\ProG[n]\ensuremath{\partial _t} \ensuremath{\mathcal{U}}}^2\,{\rm d}t.
\end{gather*}
Using this in the estimate of Proposition
\ref{prop:uniform_bound} yields the desired estimate.
\end{proof}
Having a closer look at the indicator $\E{c\tau}$ we note that, since
we allow for coarsening, it is not a pure temporal error
indicator. Coarsening may cause a loss of information, and too little
information may lead to wrong decisions within the adaptive method.
For this reason we use the triangle inequality to
split
\begin{subequations}
\label{eq:indicators_2}
\begin{align}
\Etc[v^-,v^+,\tau,\mathcal{G}]\le \Ecoarse[v^-,\tau,\mathcal{G}] + \Etime
\end{align}
into a measure
\begin{align}
\label{eq:Ec}
\Ecoarse[v^-,\tau,\mathcal{G}] &\mathrel{:=}\sum_{\ensuremath{E}\xspace\in\mathcal{G}}\Ecoarse:=
6\,C_{\tau}\sum_{\ensuremath{E}\xspace\in\mathcal{G}}\tau\Enorm[\ensuremath{E}\xspace]{\ProG v^- - v^-}^2
\intertext{for the coarsening error and}
\label{eq:Et}
\Etime &\mathrel{:=} 6\,C_{\tau}\tau\Enorm{v^+ - \ProG v^-}^2,
\end{align}
\end{subequations}
which serves as an indicator for the temporal error. This allows us to control the coarsening error separately.
Assuming that \eqref{cond:energ-cont} holds, Corollary~\ref{cor:uniform_bound} provides
control of the sum of the time error indicators
\begin{math}
\Etime[{\Un[n-1]^+},{\Un[n-1]^-},\tau_n,\mathcal{G}_n] = 6\,C_{\tau}\,\tau_n\enorm{{\Un[n-1]^+}-\ProG[n]{\Un[n-1]^-}}^2
\end{math}. Assumption \eqref{cond:energ-cont} would trivially be
satisfied for the Ritz-projection $R_n\Un[n-1]^-$ of $\Un[n-1]^-$ into
$\Vn$, since $\enorm{R_n\Un[n-1]^-}\le\enorm{\Un[n-1]^-}$. The
$L^2$-projection $\Pron\Un[n-1]^-$, however, does not satisfy
this monotonicity property in general and therefore coarsening may
lead to an increase of energy. The algorithm presented below ensures
that \eqref{cond:energ-cont} is fulfilled at the end of every time-step.
To this end, using the notation~\eqref{eq:Vh}, we
define for $V\in\Ps(\ensuremath{\mathbb{V}}(\mathcal{G}))$, $v^-\in\ensuremath{\mathbb{V}}$,
$t_\star\in(0,T)$, $I=(t_\star,t_\star+\tau]\subset(t_\star,T]$, $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$, and $\ensuremath{E}\xspace\in\mathcal{G}$, the indicators
\begin{equation*}
\Estar[V,v^-,t_\star,\tau,\mathcal{G},\ensuremath{E}\xspace] \mathrel{:=} \enorm[\ensuremath{E}\xspace]{\ProG v^-}^2 - \enorm[\ensuremath{E}\xspace]{v^-}^2
- \frac12\int_I \norm[E]{ \ProG\ensuremath{\partial _t} \mathcal{V}}^2\,{\rm d}t,
\end{equation*}
as well as the convenient notation
\begin{math}
\Estar[V,v^-,t_\star,\tau,\mathcal{G}] \mathrel{:=} \sum_{E\in\mathcal{G}} \Estar[V,v^-,t_\star,\tau,\mathcal{G},\ensuremath{E}\xspace]
\end{math}.
Condition \eqref{cond:energ-cont} is then equivalent to
$\Estar[U,{\Un[n-1]^-},t_{n-1},\tau_n,\mathcal{G}_n]\le 0$, $n=1,\dots,N$.
Note that the term $-\frac12\int_{I_n}\norm[E]{ \ProG \ensuremath{\partial _t} \mathcal{V}}^2\,{\rm d}t$ may
compensate for $\enorm[\ensuremath{E}\xspace]{\ProG v^-}^2 > \enorm[\ensuremath{E}\xspace]{v^-}^2$.
\section{The adaptive algorithm \text{TAFEM}\xspace}
\label{sec:tafem}
Based on the observations in the previous section and a new concept for
marking, we next describe the adaptive algorithm \text{TAFEM}\xspace.
In contrast to the algorithms presented in~\cite{KrMoScSi:12} and
\cite{ChenFeng:04}, the
\text{TAFEM}\xspace is based on a different marking philosophy.
In fact, all three algorithms mark according to the same indicators
\eqref{df:fest}--\eqref{df:sest} and \eqref{eq:indicators_2}, but
in contrast to \cite{KrMoScSi:12,ChenFeng:04}, the \text{TAFEM}\xspace uses an $L^2$ instead of an
$L^\infty$ strategy.
Philosophically, this aims at an $L^2$
rather than an
$L^\infty$ equal-distribution of the error in time; compare also with
the introductory
section~\ref{sec:introduction}.
We follow a bottom-up approach, i.e., we first state basic properties of some rudimentary modules
that are treated as black box routines, then describe three core modules
in detail, and finally combine these procedures in the adaptive algorithm
\text{TAFEM}\xspace.
\subsection{Black Box Modules}
As in~\cite{KrMoScSi:12}, we use
standard modules \ADAPTINIT{}, \COARSEN[], \MARKREFINE{}, and \SOLVE\
as black box routines. In particular, we use the subroutine \MARKREFINE{} in an
object-oriented fashion, \hbox{i.\,e.},\xspace the functionality of \MARKREFINE{} changes
according to its arguments. We next state the basic properties of these
routines.
\begin{ass}[Properties of modules]\label{ass:modules}
We suppose that all rudimentary modules terminate with an output
having the following properties.
\begin{enumerate}[(1)]
\item For a given initial datum $u_0\in L^2(\Omega)$ and tolerance $\ensuremath{\text{\texttt{TOL}}_0}>0$,
the output
\begin{displaymath}
(\Un[0],\gridn[0]) = \ADAPTINIT{(u_0,\ensuremath{\grid_\textrm{init}},\ensuremath{\text{\texttt{TOL}}_0})}
\end{displaymath}
is a refinement $\gridn[0]\ge\ensuremath{\grid_\textrm{init}}$ and an approximation
$\Un[0]\in\ensuremath{\mathbb{V}}(\gridn[0])$ to $u_0$ such that $\Einit[{u_0,\gridn[0]}]\le\ensuremath{\text{\texttt{TOL}}_0}^2$.
\item For given $g\in L^2(\Omega)$,
$\bar f\in\Ps(L^2(\Omega))$, $t_\star\in(0,T)$,
$I=(t_\star,t_\star
+\tau]\subset
(t_\star,T]$, and $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$, the output
\begin{displaymath}
U_{I} = \SOLVE{(g, \bar f, t, \tau, \mathcal{G})}
\end{displaymath}
is the solution $U_{I}\in\Ps(I,\VG)$ to the discrete elliptic problem
\begin{displaymath}
\int_{I}\scp{\ensuremath{\partial _t} U_{I}}{V} +
\bilin{U_{I}}{V}\,{\rm d}t +\scp{U_{I}(t)}{V(t)}= \scp{g}{V(t)} + \int_{I}\scp{\bar f}{V} \,{\rm d}t
\end{displaymath}
for all $V\in\Ps(\VG)$; compare with~\eqref{eq:discrete}. Hereby we assume exact integration and linear algebra.
\item For a given grid $\mathcal{G}\in\ensuremath{\mathbb{G}}\xspace$ and a discrete function $V\in\ensuremath{\mathbb{V}}(\mathcal{G})$ the output
\begin{displaymath}
\mathcal{G}_* = \COARSEN[{(V,\mathcal{G})}]
\end{displaymath}
satisfies $\mathcal{G}_*\le\mathcal{G}$.
\item For a given grid $\mathcal{G}$ and a set of indicators
$\{\E{\ensuremath{E}\xspace}\}_{\ensuremath{E}\xspace\in\mathcal{G}}$ the output
\begin{displaymath}
\mathcal{G}_* = \MARKREFINE{(\{\E{\ensuremath{E}\xspace}\}_{\ensuremath{E}\xspace\in\mathcal{G}}, \mathcal{G})}\in\ensuremath{\mathbb{G}}\xspace
\end{displaymath}
is a conforming refinement of $\mathcal{G}$, where at least one
element in the subset
$\argmax\{\E{\ensuremath{E}\xspace}:\ensuremath{E}\xspace\in\mathcal{G}\}\subset\mathcal{G}$ has been refined.
\item For given grids $\mathcal{G},\grid_{\textrm{old}}\in\ensuremath{\mathbb{G}}\xspace$ and a set of indicators
$\{\E{\ensuremath{E}\xspace}\}_{\ensuremath{E}\xspace\in\mathcal{G}}$, the output
\begin{displaymath}
\mathcal{G}_* = \MARKREFINE{(\{\E{\ensuremath{E}\xspace}\}_{\ensuremath{E}\xspace\in\mathcal{G}}, \mathcal{G},\grid_{\textrm{old}})}\in\ensuremath{\mathbb{G}}\xspace
\end{displaymath}
is a conforming refinement of $\mathcal{G}$, where at least one element
of the set $\{\ensuremath{E}\xspace\in\mathcal{G}\colon
h_{\mathcal{G}|\ensuremath{E}\xspace}>h_{\grid_{\textrm{old}}|\ensuremath{E}\xspace}\}$ of
\emph{coarsened elements (with respect to $\grid_{\textrm{old}}$)}
is refined.
\end{enumerate}
\end{ass}
For a more detailed description of these modules see Section~3.3.1 of~\cite{KrMoScSi:12}.
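For orientation, a minimal Python sketch of the maximum strategy
required in property~(4) of \MARKREFINE{} could read as follows; the
routine \texttt{refine} realising conforming refinement is an assumed
black box:
\begin{verbatim}
def mark_refine(indicators, grid, refine):
    # refine (at least) the elements attaining the maximal
    # indicator; 'indicators' maps elements to indicator values
    eta_max = max(indicators.values())
    marked = [E for E, eta in indicators.items() if eta == eta_max]
    return refine(grid, marked)
\end{verbatim}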
\subsection{The Core Modules}
The first core module \texttt{CONSISTENCY} controls the consistency error
$\E{f}$. Recalling its definition in \eqref{df:fest}, we see that the
consistency error is solely influenced by the time-step
size and can be computed without solving expensive discrete systems.
Therefore, \texttt{CONSISTENCY} is used in the initialization of each time
step to adjust the time-step-size such that the local consistency
indicator $\Econs$ is below a local tolerance
$\ensuremath{\text{\texttt{tol}}_f}$.
It is important to notice that this module follows the classic
\emph{thresholding} algorithm, which ensures quasi-optimal order of
convergence in terms of the degrees of freedom; compare e.g. with \cite{BiDaDePe:02}.
\begin{algorithm}
\caption{Module \texttt{CONSISTENCY} (Parameters $\sigma,\kappa_1\in(0,1)$
and $\kappa_2>1$)}
\label{alg:consistency}
\baselineskip=15pt
\flushleft
\CONSISTENCY[(f,t,\tau,\ensuremath{\text{\texttt{tol}}_f})]
\begin{algorithmic}[1]
\STATE{compute $\Econs$}
\WHILE[$\bigstar$\,enlarge $\tau$]{$\Econs < \sigma\,\ensuremath{\text{\texttt{tol}}_f}^2$ and $\tau < T-t$} \label{line:CONS_IF_start}
\STATE \label{line:CONS_IF_taun} $\tau = \min\{\kappa_2\tau, T-t\}$
\STATE{compute $\Econs$}
\ENDWHILE \label{line:CONS_IF_end}
\WHILE[$\bigstar$\,reduce $\tau$]{$\Econs > \ensuremath{\text{\texttt{tol}}_f}^2$}
\label{line:CONS_while2_start}
\STATE $\tau = \kappa_1\tau$
\STATE{compute $\Econs$}
\ENDWHILE\label{line:CONS_while2_end}
\STATE $\bar f = f_{[t,{t+\tau}]} $
\RETURN $\bar f,\tau$
\end{algorithmic}
\end{algorithm}
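In Python-like pseudocode, the thresholding loop of
\texttt{CONSISTENCY} may be sketched as follows; the routine
\texttt{e\_f(f, t, tau)} evaluating the squared indicator $\Econs$ is
assumed to be given, and the default parameter values are
illustrative only:
\begin{verbatim}
def consistency(f, t, tau, T, tol_f, e_f,
                sigma=0.5, kappa1=0.5, kappa2=2.0):
    # enlarge tau while the indicator is safely below the tolerance
    while e_f(f, t, tau) < sigma * tol_f**2 and tau < T - t:
        tau = min(kappa2 * tau, T - t)
    # reduce tau until the indicator meets the tolerance
    while e_f(f, t, tau) > tol_f**2:
        tau = kappa1 * tau
    return tau
\end{verbatim}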
We start with termination of the module \texttt{CONSISTENCY}.
\begin{lem}[Termination of \texttt{CONSISTENCY}]
\label{lem:termination-consistency}
Assume $f \in L^2((0,T);L^2(\Omega))$. Then for any $t\in(0,T)$ and $\tau^{\textrm{in}}\in(0,T-t]$,
\begin{displaymath}
(\bar f,\tau)=\texttt{CONSISTENCY}(f,t,\tau^{\textrm{in}},\ensuremath{\text{\texttt{tol}}_f})
\end{displaymath}
terminates
and
\begin{align}\label{Cons:ineqtol}
\Econs \le \ensuremath{\text{\texttt{tol}}_f}^2.
\end{align}
\end{lem}
\begin{proof}
The proof is straightforward since $\Econs$ is non-increasing in $\tau$ and $\Econs \rightarrow 0$ as $\tau\rightarrow 0$.
\end{proof}
Obviously, a local control of the form~\eqref{Cons:ineqtol} does
not guarantee that the global consistency error is below some prescribed tolerance $\ensuremath{\text{\texttt{TOL}}_f}$.
For this reason, we first precompute some local tolerance
$\ensuremath{\text{\texttt{tol}}_f}$ from the global tolerance $\ensuremath{\text{\texttt{TOL}}_f}$ by the following module $\texttt{TOLFIND}$.
\begin{algorithm}[H]
\caption{\texttt{TOLFIND} (Parameters: $\tilde\tau_0$)}
\label{alg:TOLFIND}
\baselineskip=15pt
\flushleft
\texttt{TOLFIND}$(f,T,\ensuremath{\text{\texttt{TOL}}_f})$
\begin{algorithmic}[1]
\STATE initialize $N_f$ and set
$\ensuremath{\text{\texttt{tol}}_f}=\ensuremath{\text{\texttt{TOL}}_f}$,
$\tilde t_0 =0$,
\label{lines:tolfind-start-init}
\LOOP\label{line:tol-loop}
\STATE $\epsilon=n=0$
\REPEAT\label{line:time-loopTF}
\STATE $n = n+1$
\STATE $(f_n,\tilde\tau_n) = \CONSISTENCY[({f,\tilde t_{n-1},\tilde\tau_{n-1}}, \ensuremath{\text{\texttt{tol}}_f})]$
\STATE $\epsilon =\epsilon+ \E{f}^2(f,\tilde t_{n-1},\tilde\tau_n)$
\UNTIL{$\tilde t_n=\tilde t_{n-1}+\tilde\tau_n \ge
T$} \label{line:time-loop-endTF}
\STATE $N_f=n$
\IF{$\epsilon >\frac12 \ensuremath{\text{\texttt{TOL}}_f}^2$}\label{line:tf-second-case}
\STATE $\ensuremath{\text{\texttt{tol}}_f}^2=\frac12\ensuremath{\text{\texttt{tol}}_f}^2$
\ELSE
\STATE
\textbf{break}\COMMENT{$\bigstar$\,std.~exit}\label{line:TOLFIND-break3}
\ENDIF
\ENDLOOP \label{line:tol-loop-end}
\STATE $\ensuremath{\text{\texttt{tol}}_f}^2=\min\{\ensuremath{\text{\texttt{tol}}_f}^2 ,\ \frac{\ensuremath{\text{\texttt{TOL}}_f}^2}{2N_f}\}$ \label{line:tol-def}
\RETURN $\ensuremath{\text{\texttt{tol}}_f}$
\end{algorithmic}
\end{algorithm}
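In the same spirit, \texttt{TOLFIND} may be sketched as follows,
reusing the \texttt{consistency} sketch above (again, \texttt{e\_f}
denotes the assumed evaluation of the squared consistency indicator):
\begin{verbatim}
import math

def tolfind(f, T, TOL_f, e_f, consistency, tau0):
    # halve tol_f^2 until the accumulated squared consistency
    # indicators of a trial run stay below TOL_f^2 / 2
    tol_f = TOL_f
    while True:
        eps, t, tau, N_f = 0.0, 0.0, tau0, 0
        while t < T:
            tau = consistency(f, t, tau, T, tol_f, e_f)
            eps += e_f(f, t, tau)
            t, N_f = t + tau, N_f + 1
        if eps > 0.5 * TOL_f**2:
            tol_f /= math.sqrt(2.0)   # tol_f^2 -> tol_f^2 / 2
        else:
            break
    return min(tol_f, TOL_f / math.sqrt(2.0 * N_f))
\end{verbatim}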
The next result states that if all local consistency indicators
are below the threshold $\ensuremath{\text{\texttt{tol}}_f}$ then
the accumulation of the consistency indicators stays indeed below the
prescribed global tolerance $\ensuremath{\text{\texttt{TOL}}_f}$.
\begin{lem}[Termination of \texttt{TOLFIND}]
\label{lem:termination-tolfind}
Assume $f \in L^2((0,T);L^2(\Omega))$. Then for any $\ensuremath{\text{\texttt{TOL}}_f}>0$,
we have that
\begin{displaymath}
\ensuremath{\text{\texttt{tol}}_f}=\texttt{TOLFIND}(f,T,\ensuremath{\text{\texttt{TOL}}_f})>0
\end{displaymath}
terminates. Moreover, let $0= t_0< t_1<\cdots< t_N=T$ be arbitrary
with $\tau_n=\tn-\tn[n-1]$, $n=1,\ldots,N$, then
\begin{align}\label{Cons:ineq}
\Econs[f,{\tn[n-1]},\tau_n]\le \ensuremath{\text{\texttt{tol}}_f}^2,~n=1,\ldots, N \quad\Rightarrow\quad
\sum_{n=1}^N \Econs[f,{\tn[n-1]},\tau_n] \le \ensuremath{\text{\texttt{TOL}}_f}^2.
\end{align}
\end{lem}
\begin{proof}
The proof is divided into three steps.
\step{1} We show that the process from lines~\ref{line:time-loopTF}
to~\ref{line:time-loop-endTF} terminates. To this end, we recall the
parameters $\sigma,\kappa_1\in(0,1)$ and $\kappa_2>1$ from
$\CONSISTENCY[(f,{\tilde t_{n-1}},\tilde\tau_{n-1},\ensuremath{\text{\texttt{tol}}_f})]$.
We argue by contradiction
and assume that an infinite monotone sequence $\{\tilde t_n\}_{n\ge0}
\subset[0,T]$ is constructed by \texttt{TOLFIND}~with
$\lim_{n\to\infty}\tilde t_n=t^\star\in(0,T]$.
Let us first assume that
$t^\star <T$, and let $\ell_0,m_0\in\ensuremath{\mathbb{N}}$ such that $\kappa^{\ell_0}_2\ge
\kappa_1^{-m_0}\ge \kappa_2$. Then there exists $n_0\in\ensuremath{\mathbb{N}}$, such that
\begin{align}\label{eq:ell_0}
t^\star+\kappa^{\ell_0}_2
\tilde\tau_{n}<T\qquad\text{and}\qquad
\Econs[f,{\tilde t_n},\kappa^{\ell_0-1}_2
\tilde\tau_{n} ]\le \sigma\ensuremath{\text{\texttt{tol}}_f}^2
\end{align}
for all $n\ge n_0$ since $\tilde\tau_{n}\to 0$ and
\begin{align*}
\Econs[f,{\tilde t_n},\kappa^{\ell_0-1}_2
\tilde\tau_{n} ]\le
\norm[\Omega\times(\tilde t_n,\tilde t_n+\kappa^{\ell_0-1}_2
\tilde\tau_{n})]{f}^2\to 0\quad\text{as}~n\to\infty.
\end{align*}
Therefore, from the loops in lines \ref{line:CONS_IF_start} to
\ref{line:CONS_IF_end} and in \ref{line:CONS_while2_start} to
\ref{line:CONS_while2_end} of $\CONSISTENCY[]$, we conclude
that
$\tilde\tau_{n_0+1}\ge
\kappa_2^{\ell_0}\kappa^{m_0}_1\tilde\tau_{n_0}\ge
\tilde\tau_{n_0}$. Indeed, if $\ell$ denotes the number of enlargements and $m$ the number of reductions of the time-step size in this call, we have by~\eqref{eq:ell_0} and
\begin{align*}
\Econs[f,{\tilde t_{n_0}},\kappa^{\ell}_2\kappa^{m_0}_1
\tilde\tau_{n_0} ]\le
\Econs[f,{\tilde t_{n_0}},\kappa^{\ell-1}_2\tilde\tau_{n_0} ]\le
\sigma\ensuremath{\text{\texttt{tol}}_f}^2
\end{align*}
that $\ell\ge \ell_0$ and $m\le m_0$.
Consequently, we have $\tilde\tau_{n}\ge \tilde\tau_{n_0}$ for all
$n\ge n_0$ by induction, which contradicts $\tilde\tau_n\to0$.
Let now $t^\star=T$, then with similar arguments as before, we conclude
that $\Econs[f,{\tilde t_n},T-{\tilde t_n}]\le \sigma\ensuremath{\text{\texttt{tol}}_f}^2$ for some
$n\in\ensuremath{\mathbb{N}}$, and we have from line~\ref{line:CONS_IF_taun} of
\CONSISTENCY[], that
$\tilde\tau_{n}=T-\tilde t_n$, which contradicts the assumption in this case.
\step{2} We next check that the condition of line~\ref{line:tf-second-case}
is violated after finitely many steps.
Since the span of characteristic functions of dyadic intervals is
dense in $L^2(0,T)$, we can choose $M>0$ such that the squared
consistency error on
the grid of $2^M$ uniform intervals is below $\frac14 \ensuremath{\text{\texttt{TOL}}_f}^2$.
We split the intervals generated in $\texttt{TOLFIND}(f,T,\ensuremath{\text{\texttt{tol}}_f})$ into
\begin{align*}
\ensuremath{\mathbb{I}}_{in}:=\big\{n: (\tilde t_{n-1},\tilde t_n]\subset (m2^{-M}T,(m+1)2^{-M}T]~\text{for some}~m\in\{0,\ldots,2^M-1\}\big\}
\end{align*}
and $\ensuremath{\mathbb{I}}_{out}:=\{1,\ldots,N_f\}\setminus\ensuremath{\mathbb{I}}_{in}$
according to whether or not they are included in one of the dyadic
intervals. Therefore, we have, with the monotonicity of the consistency
error and $\#\ensuremath{\mathbb{I}}_{out}\le 2^M$, that
\begin{align*}
\epsilon =
\sum_{n\in\ensuremath{\mathbb{I}}_{in}}\Econs[f,{\tilde t_{n-1}},\tilde\tau_{n}]+\sum_{n\in\ensuremath{\mathbb{I}}_{out}}\Econs[f,{\tilde
t_{n-1}},\tilde\tau_{n}]\le
\frac14\ensuremath{\text{\texttt{TOL}}_f}^2 + 2^M\ensuremath{\text{\texttt{tol}}_f}^2.
\end{align*}
Taking $\ensuremath{\text{\texttt{tol}}_f}^2< 2^{-(M+2)}\ensuremath{\text{\texttt{TOL}}_f}^2$, we see that the condition of
line~\ref{line:tf-second-case} is violated, which proves the assertion.
\step{3} Combining the above steps, we conclude that \texttt{TOLFIND} \
terminates and it remains to prove~\eqref{Cons:ineq}. To this end, we
proceed similarly as in \step{2} and let
\begin{align*}
\ensuremath{\mathbb{I}}_{in}:=\big\{n: (t_{n-1},t_n]\subset (\tilde t_{m-1},\tilde
t_m]~\text{for some}~m\in\{1,\ldots,N_f\}\big\}
\end{align*}
and $\ensuremath{\mathbb{I}}_{out}:=\{1,\ldots, N\}\setminus\ensuremath{\mathbb{I}}_{in}$.
By monotonicity, we have $\sum_{n\in\ensuremath{\mathbb{I}}_{in}}\Econs[f,{ t_{n-1}},\tau_{n}]\le
\sum_{n=1}^{N_f}\Econs[f,{ \tilde t_{n-1}},\tilde
\tau_{n}]\le\ensuremath{\text{\texttt{TOL}}_f}^2/2$ and thus the assertion follows from
\begin{align*}
\sum_{n=1}^N\Econs[f,{t_{n-1}},\tau_{n}]&=
\sum_{n\in\ensuremath{\mathbb{I}}_{in}}\Econs[f,{ t_{n-1}},\tau_{n}]+\sum_{n\in\ensuremath{\mathbb{I}}_{out}}\Econs[f,{
t_{n-1}},\tau_{n}]
\\
&\le \frac{\ensuremath{\text{\texttt{TOL}}_f}^2}2 + N_f\ensuremath{\text{\texttt{tol}}_f}^2 = \frac{\ensuremath{\text{\texttt{TOL}}_f}^2}2 +
N_f\frac{\ensuremath{\text{\texttt{TOL}}_f}^2}{2N_f}\le \ensuremath{\text{\texttt{TOL}}_f}^2.\qedhere
\end{align*}
\end{proof}
\begin{rem}[Estimation of $\ensuremath{\text{\texttt{tol}}_f}$ under regularity assumptions]\label{rem:underregular}
Supposing the regularity assumption $f \in
H^s((0,T);L^2(\Omega))$, $s\in(0,1]$, the following idea may be used
as an alternative for the estimation of $\ensuremath{\text{\texttt{tol}}_f}$ with \texttt{TOLFIND}.
Let $\delta>0$. Then using Lemma~\ref{lem:termination-consistency}
together with \Poincare's inequality in $H^s$ and the fact that there
are at most $ \frac{T}{\delta}$ disjoint intervals of length $\delta$
in $(0,T]$, we obtain
\begin{align*}
\sum_{n=1}^N\Econs[f,{t_{n-1}},\tau_{n}]&=
\sum_{\tau_n>\delta}\Econs[f,{ t_{n-1}},\tau_{n}]+\sum_{\tau_n\leq\delta}\Econs[f,{
t_{n-1}},\tau_{n}]
\\
&\le \frac{T}{\delta}\ensuremath{\text{\texttt{tol}}_f}^2 +\sum_{\tau_n\leq\delta} \tau_n^{2s}
\norm[H^s(t_{n-1},t_n,L^2(\Omega))]{f}^{2}
\\
&= \frac{T}{\delta}\ensuremath{\text{\texttt{tol}}_f}^2 + \delta^{2s} \norm[H^s(0,T,L^2(\Omega))]{f}^{2}.
\end{align*}
By choosing $\delta= \left(\frac{T \, \ensuremath{\text{\texttt{tol}}_f}^2}{
\norm[H^s(0,T,L^2(\Omega))]{f}^{2}}\right)^{\frac{1}{2s+1}}$, the
previous estimate turns into
\begin{align*}
\sum_{n=1}^N\Econs[f,{t_{n-1}},\tau_{n}]&\leq 2\,T^{\frac{2s}{2s+1}} \, \norm[H^s(0,T,L^2(\Omega))]{f}^{\frac{2}{2s+1}} \ensuremath{\text{\texttt{tol}}_f}^{\frac{4s}{2s+1}}.
\end{align*}
In other words, if a priori knowledge
on the regularity of the right hand side is available then
\texttt{TOLFIND} \ can be replaced by the somewhat simpler term
$$
\ensuremath{\text{\texttt{tol}}_f}^2 =
2^{-\frac{2s+1}{2s}}\,T^{-1}
\norm[H^s(0,T,L^2(\Omega))]{f}^{-\frac{1}{s}}
\,\ensuremath{\text{\texttt{TOL}}_f}^{\frac{2s+1}{s}}.$$
\end{rem}
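In code, the closed formula translates into the following small
routine (a sketch, assuming the $H^s$-norm of $f$ is known a priori):
\begin{verbatim}
def tol_f_from_regularity(TOL_f, T, f_Hs, s):
    # closed formula of the remark; f_Hs = ||f||_{H^s(0,T;L^2)}
    tol_f_sq = (2.0 ** (-(2*s + 1) / (2*s)) / T
                * f_Hs ** (-1.0 / s) * TOL_f ** ((2*s + 1) / s))
    return tol_f_sq ** 0.5
\end{verbatim}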
We turn to the module \texttt{ST\_ADAPTATION}, listed in
Algorithm~\ref{alg:TS_ADAPTATION}, which handles a single time-step.
The module adapts the grid and the time-step-size according to the
indicators involving the discrete solution of the current time-step,
namely the space indicator $\E{\mathcal{G}}$ and
the separated coarsening and time
indicators $\E{c}$ and $\E{\tau}$.
Right at the start of each iteration, the routine requires the
computation of the discrete solution on the current grid with the
current time-step size; see
line~\ref{line:TSA-solve}. Note that in
\texttt{ST\_ADAPTATION} only refinements are performed (both in space and in
time).
Recalling the discussion in the introductory
Section~\ref{sec:introduction}, we aim to use a thresholding algorithm
for the indicators $\E{\tau}$ in order to equally distribute the
time error. To this end, we first need to guarantee $\E{*}\le
0$ in order to control the global time error
with the help of the uniform energy estimate from
Corollary~\ref{cor:uniform_bound}. Since no similar control is
available for the space and coarsening errors,
we relate the corresponding indicators to the time
and consistency indicators, i.e., we
adapt the spatial triangulation until
\begin{align}\label{eq:l2strat}
\E{c}^2,\E{\mathcal{G}}^2\le
\E{\tau}^2+\E{f}^2.
\end{align}
Here we have invoked the consistency indicator $\E{f}$ on the right
hand side although it is controlled
by \texttt{CONSISTENCY} outside \texttt{ST\_ADAPTATION}
-- note that $\E{f}$ does not depend on the discrete solution. In
fact, from the uniform
energy estimate, Corollary~\ref{cor:uniform_bound}, we have that
$\E{\tau}$ vanishes faster than $\E{f}$ by one order, when no additional regularity of $f$
is assumed. Consequently, the time-step size is dictated by $\E{f}$,
which may lead to $\E{\tau}\ll\ensuremath{\text{\texttt{tol}}_{\grid\tau}}$. Thanks to
Lemma~\ref{lem:termination-tolfind}, we expect that \eqref{eq:l2strat}
leads to an equal distribution of the errors in time
in most cases. However, the case
$\max\{\E{\tau},\E{f}\}\ll\min\{\ensuremath{\text{\texttt{tol}}_{\grid\tau}},\ensuremath{\text{\texttt{tol}}_f}\}$ cannot be avoided
theoretically, hence we have complemented~\eqref{eq:l2strat}
with a safeguard $L^\infty$ marking; compare with
lines \ref{line:TSA-Espace-condition}
and~\ref{line:TSA-Ecoarse-condition} of \texttt{ST\_ADAPTATION}.
Note that in the above discussion, we have concentrated
on an equal distribution in time and have tacitly assumed that
in each time-step the local space indicators are optimally
distributed, which is motivated by the optimal convergence analysis
for elliptic problems; compare e.g. with \cite{Stevenson:07,CaKrNoSi:08,DiKrSt:16}.
\begin{algorithm}
\caption{Module \texttt{ST\_ADAPTATION} (Parameter
$\kappa\in(0,1)$
)}
\label{alg:TS_ADAPTATION}
\baselineskip=15pt
\flushleft{\STADAPTATION[(U_t^-, f, t, \tau, \mathcal{G},\grid_{\textrm{old}},\ensuremath{\text{\texttt{tol}}_{\grid\tau}})]}
\algsetup{indent=1.5em}
\begin{algorithmic}[1]
\STATE compute $\Econs$
\LOOP
\STATE $I=[t,t+\tau]$
\STATE $\bar f = f_I$
\STATE $U_{I} = \SOLVE{(U_t^-, \bar f, t,\tau,
\mathcal{G})}$ \label{line:TSA-solve}
\STATE $U_t^+=\lim_{s\searrow t}U_I(s)$
\STATE compute $\{\Espace[U_{I},U_t^-,t,\tau,\bar
f,\mathcal{G},\ensuremath{E}\xspace]\}_{\ensuremath{E}\xspace\in\mathcal{G}}$, $\{\Estar[{U_t^+,
U_t^-,\tau,\mathcal{G},E}]\}_{\ensuremath{E}\xspace\in\mathcal{G}}$,
$\Etime[U_t^+,U_t^-,\tau,\mathcal{G}]$, and
$\{\Ecoarse[U_t^-,\tau,\mathcal{G},E]\}_{E\in\mathcal{G}}$ \label{line:TSA-est-space-tc}
\IF
{$ \Etime[U_t^+,U_t^-,\tau,\mathcal{G}] >
\ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2
$}
\label{line:TSA-Etime-condition}
\STATE $\tau =
\kappa\tau$ \COMMENT{\boxed{\textsf{A}}} \label{line:TSA-adapttau}
\STATE compute $\Econs$
\ELSIF
{$ \Espace[U_{I},U_t^-,t,\tau,\bar
f,\mathcal{G}] >
\Etime[U_t^+,U_t^-,\tau,\mathcal{G}]+\Econs+ \tau \ensuremath{\text{\texttt{tol}}_{\grid\tau}}
$} \label{line:TSA-Espace-condition}
\STATE $\mathcal{G} =
\MARKREFINE{(\{\Espace[U_{I},U_t^-,t,\tau,\bar f,\mathcal{G},\ensuremath{E}\xspace]\}_{\ensuremath{E}\xspace\in\mathcal{G}},
\,\mathcal{G})}$ \COMMENT{\boxed{\textsf{B}}}
\ELSIF
{$ \Ecoarse[U_t^-,\tau,\mathcal{G}]>
\Etime[U_t^+,U_t^-,\tau,\mathcal{G}] +\Econs+ \tau \ensuremath{\text{\texttt{tol}}_{\grid\tau}}
$}
\label{line:TSA-Ecoarse-condition}
\STATE $\mathcal{G}=\MARKREFINE{(\{\Ecoarse[U_t^-,\tau,\mathcal{G},\ensuremath{E}\xspace]\}_{\ensuremath{E}\xspace\in\mathcal{G}},
\,\mathcal{G})}$ \COMMENT{\boxed{\textsf{C}}}
\ELSIF
{$\Estar[{U_t^+, U_t^-,\tau,\mathcal{G}}]>0$} \label{line:TSA-Estar-condition}
\STATE $\mathcal{G} = \MARKREFINE{(\{\Estar[{U_t^+,
U_t^-,\tau,\mathcal{G},\ensuremath{E}\xspace}]\}_{\ensuremath{E}\xspace\in\mathcal{G}},
\mathcal{G},\grid_{\textrm{old}})}$\COMMENT{\boxed{\textsf{D}}}
\ELSE
\STATE \textbf{break}\COMMENT{$\bigstar$\,exit\,}\label{line:TSA-break1}
\ENDIF
\ENDLOOP \label{line:TSA-outer-end}
\RETURN $U_{I},\tau,\bar f,\mathcal{G}$
\end{algorithmic}
\end{algorithm}
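The control flow of \texttt{ST\_ADAPTATION} may be summarised by the
following Python sketch. The routines \texttt{solve},
\texttt{local\_indicators} (returning the element-wise indicators of
line~\ref{line:TSA-est-space-tc} together with the time and
consistency indicators), \texttt{restrict}, and \texttt{mark\_refine}
are assumed black boxes:
\begin{verbatim}
def st_adaptation(U_minus, f, t, tau, grid, grid_old, tol,
                  solve, local_indicators, restrict, mark_refine,
                  kappa=0.5):
    while True:
        fbar = restrict(f, t, tau)        # f_n on (t, t + tau]
        U = solve(U_minus, fbar, t, tau, grid)
        ind = local_indicators(U, U_minus, fbar, t, tau,
                               grid, grid_old)
        E_tau, E_f = ind['time'], ind['consistency']
        if E_tau > tol**2:                                        # (A)
            tau *= kappa
        elif sum(ind['space'].values()) > E_tau + E_f + tau*tol:  # (B)
            grid = mark_refine(ind['space'], grid)
        elif sum(ind['coarse'].values()) > E_tau + E_f + tau*tol: # (C)
            grid = mark_refine(ind['coarse'], grid)
        elif sum(ind['star'].values()) > 0.0:                     # (D)
            grid = mark_refine(ind['star'], grid, grid_old)
        else:
            return U, tau, fbar, grid
\end{verbatim}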
\begin{rem}\label{rem:custimisation}
We note that the \textbf{if} conditions in
lines~\ref{line:TSA-Espace-condition}
and~\ref{line:TSA-Ecoarse-condition} of \STADAPTATION[] may involve additional
parameters. For instance, line \ref{line:TSA-Ecoarse-condition} may be
replaced by
\begin{algorithmic}[1]
\setcounter{ALC@line}{14}
\STATE{\textbf{else if}
{$ \Ecoarse[U_t^-,\tau,\mathcal{G}]\ge
\gamma_{c}\,
\Etime[U_t^+,U_t^-,\tau,\mathcal{G}] + \rho_c\Econs+\sigma_c \tau \ensuremath{\text{\texttt{tol}}_{\grid\tau}}$}
\textbf{then}}
\end{algorithmic}
with $\gamma_c,\rho_c,\sigma_c>0$ and similar for the space
indicator $\mathcal{E}_\mathcal{G}$ in line~\ref{line:TSA-Espace-condition} with constants $\gamma_\mathcal{G},\rho_\mathcal{G},
\sigma_\mathcal{G}>0$. This requires some modifications of the \text{TAFEM}\xspace,
which would make the presentation more technical.
For the sake of clarity of the presentation, we decided to skip
these customisation possibilities; compare also with
Remark~\ref{rem:custimisation2}.
\end{rem}
\subsection{The main module \text{TAFEM}\xspace}
We are now in the position to formulate the \text{TAFEM}\xspace in
Algorithm~\ref{alg:TAFEM} below.
In the initialization phase
the given tolerance $\ensuremath{\text{\texttt{TOL}}}>0$ is split into
tolerances
$\ensuremath{\text{\texttt{TOL}}_0},\ensuremath{\text{\texttt{TOL}}_f},\ensuremath{\text{\texttt{TOL}}_{\grid\tau}}>0$. Next, \ADAPTINIT{} provides a
sufficiently good approximation $\Un[0]$ of the initial datum $u_0$.
Then the time-step iteration is entered, where each single time-step
consists of the following main steps. We first initialize the time-step
size by \texttt{CONSISTENCY} and then conduct one coarsening step with
\texttt{COARSEN}.
The adaptation of the grid and time-step-size with respect to the indicators for
the spatial, temporal, and coarsening error is done by
\texttt{ST\_ADAPTATION}.
\begin{algorithm}[H]
\caption{\text{TAFEM}\xspace}
\label{alg:TAFEM}
\baselineskip=15pt
\begin{algorithmic}[1]
\STATE initialize $\mathcal{G}_\text{init}$, $\tau_0$ and set $t_0 =0$,
$n=0$ \label{lines:astfem-start-init}
\STATE split tolerance $\ensuremath{\text{\texttt{TOL}}}>0$ such that
\begin{math}
\ensuremath{\text{\texttt{TOL}}_0}^2 +
3\,\ensuremath{\text{\texttt{TOL}}_f}^2 + \ensuremath{\text{\texttt{TOL}}_{\grid\tau}}^2 = \ensuremath{\text{\texttt{TOL}}}^2 \label{TAFEM:splitTOL}
\end{math}
\STATE $\ensuremath{\text{\texttt{tol}}_f}=\texttt{TOLFIND}(f,T,\ensuremath{\text{\texttt{TOL}}_f})$
\STATE $(U_0^-,\mathcal{G}_0) = \ADAPTINIT{(u_0, \ensuremath{\grid_\textrm{init}},\ensuremath{\text{\texttt{TOL}}_0})}$
\label{lines:astfem-end-init}
\STATE compute $C_T:=
6\,\sqrt{6 \,C_{\tau} \, T} \left( \norm[\Omega\times(0,T)]{f}^2 +
\enorm{ U_0^-}^2 \right)^\frac12 + 2\,T $\label{TAFEM:C_T}
\REPEAT\label{line:time-loop}
\STATE $n = n+1$
\STATE $\tau_n=\min\{\tau_{n-1},T-t_{n-1}\}$
\STATE $\tau_n = \CONSISTENCY[({f,t_{n-1},\tau_{n}}, \ensuremath{\text{\texttt{tol}}_f})]$
\label{line:astfem-end-I}
\STATE $\gridn = \COARSEN[{(U_{n-1}^-,\gridn[n-1])}]$ \label{line:astfem-start-I}
\STATE $(U_{|I_n}, \tau_n, f_n,\gridn) = \STADAPTATION[{(\Un[n-1]^-,f,t_{n-1},\tau_n,
\gridn,\gridn[n-1],\ensuremath{\text{\texttt{TOL}}_{\grid\tau}}^2/C_T)}]$ \label{line:astfem-IIa}
\STATE $U_{n}^-=U_{|I_n}(t_{n-1}+\tau_n)$
\UNTIL{$t_n=t_{n-1}+\tau_n \ge T$}
\end{algorithmic}
\end{algorithm}
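Combining the sketches above, the driver loop of the \text{TAFEM}\xspace reads as
follows in the same Python pseudocode; here \texttt{m} bundles the
remaining black-box modules (\texttt{adapt\_init}, \texttt{coarsen},
\texttt{compute\_CT} for the constant of line~\ref{TAFEM:C_T}, and a
trace routine), all of which are assumptions of this sketch:
\begin{verbatim}
def tafem(u0, f, T, TOL, grid_init, tau0, m):
    # split such that TOL0^2 + 3*TOLf^2 + TOLgt^2 = TOL^2
    TOL0, TOLf, TOLgt = m.split_tolerance(TOL)
    tol_f = tolfind(f, T, TOLf, m.e_f, consistency, tau0)
    U, grid = m.adapt_init(u0, grid_init, TOL0)
    C_T = m.compute_CT(f, U, T)
    t, tau = 0.0, tau0
    while t < T:
        tau = min(tau, T - t)
        tau = consistency(f, t, tau, T, tol_f, m.e_f)
        grid_old, grid = grid, m.coarsen(U, grid)
        U_I, tau, fbar, grid = st_adaptation(
            U, f, t, tau, grid, grid_old, TOLgt**2 / C_T,
            m.solve, m.local_indicators, m.restrict, m.mark_refine)
        U = m.trace(U_I, t + tau)   # U_n^- = U_{|I_n}(t_n)
        t += tau
    return U
\end{verbatim}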
\section{Convergence}
\label{sec:convergence}
In this section, we first prove that the core modules and \text{TAFEM}\xspace
terminate and then verify that the estimators and thus the error is below the
given tolerance. Throughout the section we suppose that the black-box
modules satisfy Assumption~\ref{ass:modules}.
Before turning to the main module \texttt{ST\_ADAPTATION}, as an
auxiliary result, we shall
consider convergence of the adaptive finite element method for
stationary elliptic problems of the kind~\eqref{eq:elliptic}, which have to be solved
in each time-step.
\begin{algorithm
\caption{\texttt{AFEM}}
\label{alg:AFEM}
\flushleft{\texttt{AFEM}$(v^-, \bar f, t, \tau, \mathcal{G}^0)$}
\algsetup{indent=1.5em}
\begin{algorithmic}[1]
\STATE set $k=0$
\LOOP
\STATE $U_\tau^k = \SOLVE{(v^-, \bar f, 0,\tau,
\mathcal{G}^k)}$
\STATE compute $\{\Espace[U_\tau^k,v^-,0,\tau,\bar
f,\mathcal{G}^k,\ensuremath{E}\xspace]\}_{\ensuremath{E}\xspace\in\mathcal{G}^k}$
\STATE $\mathcal{G}^{k+1} =
\MARKREFINE{(\{\Espace[U_\tau^k,v^-,0,\tau,\bar
f,\mathcal{G}^k,\ensuremath{E}\xspace]\}_{\ensuremath{E}\xspace\in\mathcal{G}^k},
\,\mathcal{G}^k)}$
\STATE $k=k+1$
\ENDLOOP
\end{algorithmic}
\end{algorithm}
\begin{prop}[Convergence for the Elliptic
Problem]\label{P:adapt-ellipt}
Suppose that $v^- \in L^2(\Omega)$, $\bar f\in\Ps(L^2(\Omega))$, and
$\tau>0$. Then, starting from any grid $\mathcal{G}^0\in\ensuremath{\mathbb{G}}\xspace$ we have
for the sequence $\{\mathcal{G}^k ,U_\tau^k\}_{k\ge 0}\subset \ensuremath{\mathbb{G}}\xspace\times
\Ps(\ensuremath{\mathbb{V}})$ generated by \texttt{AFEM}$(v^-,\bar f,t,\tau,\mathcal{G}^0)$, that
\begin{displaymath}
\Espace[U_\tau^k,v^-,0,\tau,\bar f,\mathcal{G}^k] \rightarrow
0\quad\text{as}~k\to\infty.
\end{displaymath}
\end{prop}
\begin{proof}
Recalling Remark~\ref{r:elliptic}, we have that
$\Espace[U_\tau^k,v^-,0,\tau,\bar f,\mathcal{G}^k]$ are the standard
residual based a
posteriori error estimators for the coercive
problem~\eqref{eq:elliptic}. From Lemmas~\ref{l:UpSpat}
and~\ref{l:LowSpat} and Assumption~\ref{ass:modules} on
\MARKREFINE{}, we have that the conditions of \cite[Theorem
2.2]{Siebert:11} are satisfied. This yields the assertion.
\end{proof}
\begin{lem}[Termination of \texttt{ST\_ADAPTATION}]
\label{lem:termination-tsa}
For any $t\in(0,T)$, $\tau^{\textrm{in}}\in(0,T-t]$, $\grid_{\textrm{in}},\grid_{\textrm{old}}\in\ensuremath{\mathbb{G}}\xspace$,
and $U_t^-\in\ensuremath{\mathbb{V}}(\grid_{\textrm{old}})$, we have that
\begin{displaymath}
(U_I,\tau,\bar f,\mathcal{G})=\STADAPTATION[{(U_t^-,f,t,\tau^{\textrm{in}},\grid_{\textrm{in}},\grid_{\textrm{old}},
\ensuremath{\text{\texttt{tol}}_{\grid\tau}})}]
\end{displaymath}
terminates.
Moreover, we have $\mathcal{G}\ge\grid_{\textrm{in}}$, $\Estar[{U_t^+, U_t^-,\tau,\mathcal{G}}]\le0$,
\begin{align*}
&\tau^{\textrm{in}}\ge \tau \ge
\min\left\{\tau^{\textrm{in}},\frac{\kappa\,\ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2}{ 6 \big(
\norm[\Omega\times(t,t+\tau)]{f}^2 +
\enorm{ U_t^-}^2 \big)}\right\},
\end{align*}
and the indicators satisfy the tolerances
\begin{align*}
\Etime[U_t^+,U_t^-,\tau,\mathcal{G}] &\le
\ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2,
\\
\Espace[U_{I},U_t^-,t,\tau,\bar f,\mathcal{G}]&\leq
\Etime[U_t^+,U_t^-,\tau,\mathcal{G}]
+\Econs + \tau \ensuremath{\text{\texttt{tol}}_{\grid\tau}},
\\
\Ecoarse[U_t^-,\tau,\mathcal{G}]&\leq
\Etime[U_t^+,U_t^-,\tau,\mathcal{G}]+
\Econs + \tau \ensuremath{\text{\texttt{tol}}_{\grid\tau}},
\end{align*}
where $U_t^+=\lim_{s\searrow t}U_{(t,t+\tau]}(s)$.
\end{lem}
\begin{proof}
In each iteration of the loop in \texttt{ST\_ADAPTATION} at first, a discrete
solution $U_I$ is computed on
the current grid $\mathcal{G}$ with the actual time-step size
$\tau$.
Then either the time-step-size is
reduced or the actual grid is refined. More precisely, exactly one of
the statements labeled as
\boxed{\textsf{A}},\dots,\boxed{\textsf{D}} in
Algorithm~\ref{alg:TS_ADAPTATION} is executed, any of them terminating
by Assumption~\ref{ass:modules}. Whenever one of these statements is
executed the corresponding indicator is positive.
In statement \boxed{\textsf{C}}
the grid is refined due to the coarsening indicator $\E{c}$.
Thanks to Assumption~\ref{ass:modules}~(5), after a finite number of
executions of \boxed{\textsf{C}}, a grid $\mathcal{G}$ is obtained with
$\grid_{\textrm{old}}\le\mathcal{G}$ and thus $\Ecoarse[U_t^-,\tau,\mathcal{G}]=0$,
i.e. statement \boxed{\textsf{C}} is not entered anymore.
This happens irrespective of
refinements in other statements.
In statement \boxed{\textsf{D}}
the grid is refined with respect to the indicators $\mathcal{E}_*$
controlling the energy gain due to coarsening. Therefore, it follows
from the same reasoning as for statement \boxed{\textsf{C}}, that
statement \boxed{\textsf{D}} is also executed at most until the coarsening is fully
removed after finitely many refinement steps.
It is important to notice that if statement \boxed{\textsf{A}} is
executed then the conditions in
lines~\ref{line:TSA-Estar-condition} and~\ref{line:TSA-Etime-condition}
imply
\begin{align*}
\frac{1}{\tau} \leq \frac{1}{\tau}
6\,\tau\enorm{U_t^+-\ProG U_t^-}^2 \frac{1}{\ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2} \leq
\frac{1}{\ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2} 6\, \left(
\norm[\Omega\times(t,t+\tau)]{f}^2 +
\enorm{ U^-_t}^2 \right),
\end{align*}
where the last inequality follows from
Corollary~\ref{cor:uniform_bound}.
This implies that $\tau$ is bounded from below and thus statement \boxed{\textsf{A}} is only executed
finitely many times. This also proves the asserted lower bound on
the final time-step size.
Assuming that \texttt{ST\_ADAPTATION} does not terminate, we
infer from the fact that all other statements are only conducted finitely many times,
that statement \boxed{\textsf{B}} has to be executed infinitely many
times. In other words, the loop reduces to the adaptive iteration \texttt{AFEM}
with fixed data $U_t^-$, $\bar f$, $t$, and $\tau$. Therefore,
Proposition~\ref{P:adapt-ellipt} contradicts the condition
in
line~\ref{line:TSA-Espace-condition}.
In summary, we deduce that \texttt{ST\_ADAPTATION} terminates and the
iteration is abandoned in line \ref{line:TSA-break1}. This proves
the assertion.
\end{proof}
We next address the termination of the main module \text{TAFEM}\xspace.
\begin{prop}[Termination of \text{TAFEM}\xspace]\label{P:termination_tafem}
The adaptive algorithm \text{TAFEM}\xspace terminates for any initial
time-step-size $\tau_0>0$ and produces a finite number of time instances
$0=t_0<\dots<t_N=T$.
Moreover, we have
$\Einit[{u_0,\gridn[0]}]\le\ensuremath{\text{\texttt{TOL}}_0}^2$ and that the consistency error complies with
\eqref{Cons:ineq}. For all $n=1,\ldots,N$, we have that the
estimates in Lemma~\ref{lem:termination-tsa} are satisfied with
$t=t_{n-1}$, $\tau=\tau_n$, $U_I=U_{|I_n}$, $U_t^\pm=U_{n-1}^\pm$,
$\mathcal{G}=\mathcal{G}_n$, and $\grid_{\textrm{old}}=\mathcal{G}_{n-1}$.
\end{prop}
\begin{proof}
Each loop starts with setting the time-step-size such that
$\tau_n\leq T-t_n$, $n\in\ensuremath{\mathbb{N}}$.
Thanks to Assumption~\ref{ass:modules} for the black-box modules,
Lemma~\ref{lem:termination-consistency} for \CONSISTENCY[], and
Lemma~\ref{lem:termination-tsa} for \texttt{ST\_ADAPTATION},
all modules of \text{TAFEM}\xspace terminate and in each timestep the asserted
properties are satisfied.
Since we have $\Estar[{U_{n-1}^+, U_{n-1}^-,\tau_n,\mathcal{G}_n}]\le0$
for all $n$, we may conclude
$\enorm{U_{n-1}^-}^2\le \norm[\Omega\times(0,T)]{f}^2+\enorm{U_0}^2$
from Corollary~\ref{cor:uniform_bound} and
thus it follows with Lemma~\ref{lem:termination-tsa} that
\begin{align*}
\tau^{\textrm{in}}_n\ge \tau_n \ge
\min\Big\{\tau^{\textrm{in}}_n,\frac{\kappa\,\ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2}{12 \big(
\norm[\Omega\times(0,T)]{f}^2 +
\enorm{ U_0}^2 \big)}\Big\},
\end{align*}
where $\tau^{\textrm{in}}_{n}=\CONSISTENCY[({f,t_{n-1},\tau_{n-1}},
\ensuremath{\text{\texttt{tol}}_f})]$. Assuming that the final time is not reached
implies $\tau_n\to 0$ as $n\to\infty$ and therefore there exists
$n_0\in\ensuremath{\mathbb{N}}$, such that $\tau_n=\tau^{\textrm{in}}_{n}$ for all $n\ge n_0$.
Now, the contradiction follows as in step~\step{1} of the proof of
Lemma~\ref{lem:termination-tolfind}.
\end{proof}
Collecting the results derived above allows us to prove the main result.
\begin{thm}[Convergence into Tolerance]\label{Thm:main}
Algorithm \text{TAFEM}\xspace computes for any prescribed tolerance $\ensuremath{\text{\texttt{TOL}}}>0$ and
initial time-step-size $\tau_0>0$ a partition $0=t_0<\dots<t_N=T$
with associated meshes $\{\gridn\}_{n=0,\dots,N}$, such that we have
for the
corresponding approximation $\ensuremath{\mathcal{U}}\in\W$ from~\eqref{eq:discreteb},
that
\begin{align*}
\Wnorm{u-\ensuremath{\mathcal{U}}} \le \ensuremath{\text{\texttt{TOL}}}.
\end{align*}
\end{thm}
\begin{proof}
Thanks to
Proposition~\ref{P:termination_tafem}, we have that \text{TAFEM}\xspace
terminates and it remains to prove the
error bound. For the sake of brevity of the presentation, we shall
use the abbreviations
\begin{alignat*}{2}
\Etime[n]&:= \Etime[{\Un[n-1]^+,\Un[n-1]^-,
\tau_n,\mathcal{G}_n}],&\qquad \Econs[n]&:= \Econs[f,t_{n-1},\tau_n],
\\
\Espace[n]&:=\Espace[{U,\Un[n-1]^-,t_{n-1},\tau_n, f_n,\gridn}],
&\quad\text{and}\quad
\Ecoarse[n]&:= \Ecoarse[{\Un[n-1]^-,\tau_n, \gridn}].
\end{alignat*}
The initial error satisfies $\Einit[{u_0,\gridn[0]}]\le\ensuremath{\text{\texttt{TOL}}_0}^2$
by Assumption~\ref{ass:modules}. Thanks to the choice of the precomputed local tolerance
$\ensuremath{\text{\texttt{tol}}_f}$, we know from Lemma~\ref{lem:termination-tolfind}
that the consistency error is bounded by
$\ensuremath{\text{\texttt{TOL}}_f}$, i.e. we have \eqref{Cons:ineq}.
When finalizing a time-step, we also have from Proposition~\ref{P:termination_tafem} that
\begin{align*}
\Etime[n] &\le \ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2
\qquad
\text{and}
\qquad
\Espace[n], \Ecoarse[n]\leq
\Etime[n]+\Econs[n] +\tau_n \ensuremath{\text{\texttt{tol}}_{\grid\tau}},
\end{align*}
with $\ensuremath{\text{\texttt{tol}}_{\grid\tau}}=\ensuremath{\text{\texttt{TOL}}_{\grid\tau}}^2/C_T$.
Combining this with \eqref{eq:indicators_2} and \eqref{Cons:ineq},
we conclude
\begin{align*}
\sum_{n=1}^{N}\Espace[n]
+\,\Etc[{\Un[n-1]^+},{\Un[n-1]^-,\tau_n}]
&\le \sum_{n=1}^{N}\Espace[n]
+\Ecoarse[n]
+\Etime[n]
\\
&\le \sum_{n=1}^{N}
2\,\tau_n \ensuremath{\text{\texttt{tol}}_{\grid\tau}}+2\,\Econs[n]+ 3\,
\Etime[n]
\\
&\le2\,T \, \ensuremath{\text{\texttt{tol}}_{\grid\tau}} + 2\,\ensuremath{\text{\texttt{TOL}}_f}^2+3
\sum_{n=1}^{N}
\Etime[n].
\end{align*}
Using Corollary~\ref{cor:uniform_bound} for the last term, we get
for any $\delta>0$, that
\begin{align*}
\sum_{n=1}^{N}
\Etime[n]
&= \sum_{\tau_n>\delta}
\Etime[n] +
\sum_{\tau_n\leq\delta} \Etime[n]
\\
& \leq \frac{T}{\delta} \ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2 + \delta \sum_{n=1}^N
6\, C_{\tau}\,\Enorm{\Un[n-1]^+ -\Pi_{\mathcal{G}_n}\Un[n-1]^-}^2 \\
&\leq \frac{T}{\delta} \ensuremath{\text{\texttt{tol}}_{\grid\tau}}^2 + \delta 6 \,C_{\tau} \left(
\norm[\Omega\times(0,T)]{f}^2 + \enorm{ U_0}^2 \right)
\end{align*}
and by choosing
\begin{equation*}
\delta = \left( \frac{T}{ 6 \,C_{\tau} \left( \norm[\Omega\times(0,T)]{f}^2 +
\enorm{ U_0}^2 \right)}\right)^{\frac{1}{2}} \ensuremath{\text{\texttt{tol}}_{\grid\tau}},
\end{equation*}
we obtain
\begin{align*}
\sum_{n=1}^{N}
\Etime[n]
&\leq 2\left( 6\,C_{\tau} \,T \left( \norm[\Omega\times(0,T)]{f}^2 +
\enorm{ U_0}^2 \right) \right) ^\frac12 \ensuremath{\text{\texttt{tol}}_{\grid\tau}} .
\end{align*}
Inserting this into the above estimate yields
\begin{multline*}
\sum_{n=1}^{N}\Espace[n]
+\,\Etc[{\Un[n-1]^+},{\Un[n-1]^-,\tau_n}]
\\
\begin{split}
&\le \underbrace{\left(
6\,\sqrt{6\,C_{\tau}\,T}
\left( \norm[\Omega\times(0,T)]{f}^2 + \enorm{
U_0}^2 \right)^\frac12+ 2\,T \right)}_{=C_T}\, \ensuremath{\text{\texttt{tol}}_{\grid\tau}}+2\,\ensuremath{\text{\texttt{TOL}}_f}^2
\\
&\le\ensuremath{\text{\texttt{TOL}}_{\grid\tau}}^2+2\,\ensuremath{\text{\texttt{TOL}}_f}^2.
\end{split}
\end{multline*}
Collecting the bounds for the indicators $\E{0}$, $\E{\mathcal{G}}$,
$\E{c\tau}$, and $\E{f}$, recalling the splitting
\begin{displaymath}
\ensuremath{\text{\texttt{TOL}}_0}^2 +
3\,\ensuremath{\text{\texttt{TOL}}_f}^2+ \ensuremath{\text{\texttt{TOL}}_{\grid\tau}}^2 = \ensuremath{\text{\texttt{TOL}}}^2,
\end{displaymath}
and taking into account the upper bound of
Proposition~\ref{p:upper} proves the assertion.
\end{proof}
\begin{rem}\label{rem:custimisation2}
In order to guarantee the main result
(Theorem~\ref{Thm:main}) also for the modifications of
Remark~\ref{rem:custimisation}, line~\ref{TAFEM:C_T} of \text{TAFEM}\xspace must
be changed to
\begin{algorithmic}[1]
\setcounter{ALC@line}{4}
\STATE compute $C_T:=
(1+\gamma_c+\gamma_\mathcal{G})\,2\,\sqrt{6 \,C_{\tau} \, T} \left( \norm[\Omega\times(0,T)]{f}^2 +
\enorm{ U_0^-}^2 \right)^\frac12 + (\sigma_c+\sigma_\mathcal{G})\,T $.
\end{algorithmic}
Moreover, the splitting
of the tolerances in line~\ref{TAFEM:splitTOL} must be changed to
\begin{algorithmic}[1]
\setcounter{ALC@line}{1}
\STATE split tolerance $\ensuremath{\text{\texttt{TOL}}}>0$ such that
$\ensuremath{\text{\texttt{TOL}}_0}^2+(1+\rho_\mathcal{G}+\rho_c)\ensuremath{\text{\texttt{TOL}}_f}^2+\ensuremath{\text{\texttt{TOL}}_{\grid\tau}}^2=\ensuremath{\text{\texttt{TOL}}}^2$.
\end{algorithmic}
\end{rem}
\section{Numerical aspects and experiments\label{sec:numerics}}
We conclude the article by illustrating some practical aspects of the
implementation with three numerical experiments. We compare the
presented algorithm \text{TAFEM}\xspace with the algorithm \text{ASTFEM}\xspace introduced in
\cite{KrMoScSi:12}.
\subsection{The implementation}
The experiments are implemented in DUNE~\cite{DUNE:16} using the
DUNE-ACFEM (http://users.dune-project.org/projects/dune-acfem) module. The computations utilize
linear conforming finite elements in space and
$\dGs[0]$ as time-stepping scheme. All simulations were performed on an Intel\textregistered{} Core\texttrademark{} i7-6700HQ processor with 64\,GB RAM.
Both algorithms \text{TAFEM}\xspace and \text{ASTFEM}\xspace start from exactly the same initial
mesh $\ensuremath{\grid_\textrm{init}}$.
The initial values are interpolated on the mesh and local refinements
are performed in order to comply with the initial tolerance.
On the resulting meshes the needed constants are computed (the
minimal time-step size $\tau^*$ for \text{ASTFEM}\xspace and $\ensuremath{\text{\texttt{tol}}_f}$ from
{\texttt{TOLFIND}} for \text{TAFEM}\xspace).
In order to control $\mathcal{E}_c$ and $\mathcal{E}_*$, the algorithms need to
handle two meshes and corresponding finite
element spaces at every new time-step. This is realised by
exploiting the tree structure of the
refinements of macro elements as
in~\cite{KrMoScSi:12}.
At every new time-step all elements on the mesh are marked to be
coarsened up to two times and then adapted again if necessary.
The mentioned estimators are computed only up to constants and used
to steer the adaptive refinement process.
The spatial marking relies on the equi-distribution strategy, which
marks every element whose indicator is larger than the
arithmetic mean of all indicators; a minimal sketch follows.
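A sketch of this marking rule (illustration only):
\begin{verbatim}
def mark_equidistribution(indicators):
    # mark every element whose indicator exceeds the mean
    mean = sum(indicators.values()) / len(indicators)
    return [E for E, eta in indicators.items() if eta > mean]
\end{verbatim}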
The following remark lists the tolerance splitting used by \text{ASTFEM}\xspace.
\begin{rem}\label{R:tol-split}
In \cite{KrMoScSi:12}, the \text{ASTFEM}\xspace uses the tolerance splitting
\begin{align*}
\ensuremath{\text{\texttt{TOL}}}^2={\ensuremath{\text{\texttt{TOL}}}}_0^2+T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_f^2+T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_{\mathcal{G}\tau}^2+\widetilde{\ensuremath{\text{\texttt{TOL}}}}_*^2.
\end{align*}
Thereby $\widetilde{\ensuremath{\text{\texttt{TOL}}}}_*^2$ is used to compute a minimal safeguard step-size
$\tau_*$.
The method computes then an approximation $\mathcal{U}\in\W$ to~\eqref{eq:weak}, such
that
\begin{align*}
\Einit[u_0,\mathcal{G}_0]\le \ensuremath{\text{\texttt{TOL}}}_0^2,\qquad \sum_{n=1}^N\Big\{\Econs[f,t_{n-1},\tau_n]\Big\}&\le T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_f^2
\intertext{and}
\sum_{n=1}^N\Big\{
\Etc[U_{n-1}^+,U_{n-1}^-,\tau_n]
+
\Espace[U,U_{n-1}^-,\tn,\tau_n,f_n,\mathcal{G}_n]\Big\}
&\le T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_{\mathcal{G}\tau}^2+\widetilde{\ensuremath{\text{\texttt{TOL}}}}_*^2.
\end{align*}
This motivates the relation
\begin{align*}
T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_f^2=3{\ensuremath{\text{\texttt{TOL}}}}_f^2,\quad \text{and}\quad
T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_{\mathcal{G}\tau}^2+\widetilde{\ensuremath{\text{\texttt{TOL}}}}_*^2 =\ensuremath{\text{\texttt{TOL}}}_{\mathcal{G}\tau}^2
\end{align*}
in the examples below.
For the simulations presented below we have used the following
comparable splittings for the two methods \text{ASTFEM}\xspace and \text{TAFEM}\xspace
relative to the total tolerance $\ensuremath{\text{\texttt{TOL}}}$:
\begin{itemize}
\item $\ensuremath{\text{\texttt{TOL}}}_0^2=\frac1{10}\ensuremath{\text{\texttt{TOL}}}^2$,
\item $\ensuremath{\text{\texttt{TOL}}}_f^2=T\widetilde{\ensuremath{\text{\texttt{TOL}}}}^2_f =\frac4{10} \ensuremath{\text{\texttt{TOL}}}^2$,
\item $\ensuremath{\text{\texttt{TOL}}}_{\mathcal{G}\tau}^2=T\widetilde{\ensuremath{\text{\texttt{TOL}}}}_{\mathcal{G}\tau}^2+\widetilde{\ensuremath{\text{\texttt{TOL}}}}_*^2 =\frac6{10} \ensuremath{\text{\texttt{TOL}}}^2$,
\item $\widetilde{\ensuremath{\text{\texttt{TOL}}}}_*^2 =\frac1{100} \ensuremath{\text{\texttt{TOL}}}^2$.
\end{itemize}
\end{rem}
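For concreteness, the splittings listed in the remark can be transcribed directly; the following Python sketch merely evaluates the stated fractions for a given total tolerance and encodes no further logic of either method:
\begin{verbatim}
def split_tolerances(TOL, T):
    # fractions of the total TOL^2 as listed in the remark above
    TOL2 = TOL**2
    return {
        "TOL0^2":     TOL2 / 10.0,        # initial data
        "TOLf^2":     4.0 * TOL2 / 10.0,  # = T * tTOLf^2
        "TOLGtau^2":  6.0 * TOL2 / 10.0,  # = T*tTOLGtau^2 + tTOL*^2
        "tTOLstar^2": TOL2 / 100.0,       # ASTFEM safeguard part
        "tTOLf^2":    4.0 * TOL2 / (10.0 * T),
    }

print(split_tolerances(TOL=0.007, T=4.0))
\end{verbatim}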
\subsection{The experiments}
In this section, we introduce the three numerical experiments in
detail and discuss the numerical results.
\subsubsection{Singularity in time}\label{sec:singtime}
This numerical experiment is constructed on the spatial domain
$\Omega=(0,1)^2\subset\ensuremath{\mathbb{R}}^2$ over the time interval $(0,T)=(0,2)$ with
homogeneous Dirichlet boundary conditions and homogeneous initial
data. The right-hand side $f$ is chosen such that the exact solution
is given by
\begin{equation*}
u(\boldsymbol{x},t) = |t-\bar{t}|^\alpha\sin(\pi(x^2-x)t)\sin(\pi(y^2-y)t)
\end{equation*}
with parameters $\bar{t}=\frac{\pi}{3}$ and $\alpha=0.7$. The graph of
$u$ has a singularity in time at $t=\bar{t}=\frac{\pi}{3}$. Hence, the
right-hand side contains the term
$\operatorname{sgn}(t-\bar{t})\,\alpha|t-\bar{t}|^{\alpha-1}$. A direct
calculation shows that this term is $L^2$-integrable but not in
$H^1$.
This particular example shows one main advantage of \text{TAFEM}\xspace.
In fact, in contrast to \text{ASTFEM}\xspace, \text{TAFEM}\xspace does not require
the right-hand side $f$ to have a temporal derivative in $L^2$ in order
to control the consistency error $\E{f}$.
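This regularity claim can be checked symbolically; the following Python/SymPy sketch (assuming the underlying equation is the heat equation $\partial_t u-\Delta u=f$, as in this experiment) constructs $f$ from the manufactured solution:
\begin{verbatim}
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
alpha, tbar = sp.Rational(7, 10), sp.pi / 3

# exact solution of the singularity-in-time experiment
u = (sp.Abs(t - tbar)**alpha
     * sp.sin(sp.pi*(x**2 - x)*t) * sp.sin(sp.pi*(y**2 - y)*t))

# right-hand side f = u_t - Laplace(u)
f = sp.diff(u, t) - sp.diff(u, x, 2) - sp.diff(u, y, 2)
print(f)
# the time derivative contributes a term proportional to
# |t - tbar|**(alpha - 1): square-integrable in time
# (since 2*(alpha - 1) > -1) but without a weak derivative in L^2
\end{verbatim}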
\begin{figure}[h]
\begin{subfigure}[c]{0.45\textwidth}
\subcaption{DoFs}
\begin{tikzpicture}[scale=0.75]
\begin{semilogyaxis}[xlabel={Time}]
\addplot[black, only marks, mark=+] table[ x=Time, y=Dofs ] {ts-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace}
\addplot[red, only marks, mark=o] table[x=Time, y=Dofs ] {ts-logfile-tafem.dat};
\addlegendentry{$\text{TAFEM}\xspace$}
\end{semilogyaxis}
\end{tikzpicture}
\end{subfigure}\qquad
\begin{subfigure}[c]{0.45\textwidth}
\subcaption{time-step size}
\begin{tikzpicture}[scale=0.75]
\begin{semilogyaxis}[xlabel={Time},legend style={legend pos=south west}]
\addplot[black, only marks, mark=+] table[ x=Time, y=DeltaT ] {ts-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace}
\addplot[red, only marks, mark=o] table[x=Time, y=DeltaT ] {ts-logfile-tafem.dat};
\addlegendentry{$\text{TAFEM}\xspace$}
\end{semilogyaxis}
\end{tikzpicture}
\end{subfigure}
\caption{DoFs and time-step sizes for the singularity in time problem. }
\label{fig:ts-DoF+TSS}
\end{figure}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\begin{semilogyaxis}[width = 10cm, height=8cm,xlabel={Time},legend style={legend pos=south west,font=\tiny}]
\addplot[red, solid, mark={}] table[ x=Time, y expr=\thisrow{SpaceEstimate2}+\thisrow{CT-Estimate2}+\thisrow{D-Estimate2} ] {ts-logfile-tafem.dat};
\addlegendentry{\text{TAFEM}\xspace $\E{}^2$}
\addplot[black, dashed, thick, mark={}] table[x=Time, y expr=\thisrow{DeltaT}*(\thisrow{SpaceEstimate2}+\thisrow{CT-Estimate2}+\thisrow{D-Estimate2})] {ts-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace $\E{}^2$}
\addplot[blue, dotted, thick, mark={}] table[x=Time, y expr=\thisrow{SpaceEstimate2}+\thisrow{CT-Estimate2}+\thisrow{D-Estimate2}] {ts-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace $\frac1\tau \E{}^2$}
\end{semilogyaxis}
\end{tikzpicture}
\caption{The local error estimators
$\E{\tau}^2+\E{\mathcal{G}}^2+\E{c}^2+\E{f}^2$ for \text{TAFEM}\xspace and \text{ASTFEM}\xspace as
well as the sum of local $L^\infty$ indicators
$\frac1\tau(\E{\tau}^2+\E{\mathcal{G}}^2+\E{c}^2+\E{f}^2)$ used by \text{ASTFEM}\xspace
for the singularity in time problem.}
\label{fig:ts-Etaf+East}
\end{figure}
\footnotesize
\begin{figure}[h]
\begin{tabular}{ccccc}
time & time-step \text{ASTFEM}\xspace & DoFs \text{ASTFEM}\xspace & time-step \text{TAFEM}\xspace & DoFs \text{TAFEM}\xspace \\ \hline
1.0 & 0.00613614 & 97 & 0.00353598 & 193\\
1.02 & 0.00433893 & 85 & 0.00353601 & 112\\
1.03 & 0.0030681 & 97 & 0.00250034 & 109\\
1.04 & 0.00153406 & 97 & 0.00125018 & 109\\
1.042 & 0.00108475 & 97 & 0.00125018 & 125\\
1.044 & 0.00108475 & 97 & 0.000884012 & 132\\
1.045 & 0.000542376 & 157 & 0.000625091 & 128\\
1.046 & 0.000383519 & 193 & 0.000312546 & 242\\
1.047 & 3.3899e-05 & 713 & 7.81367e-05 & 448\\
1.0471 & 1.19852e-05 & 2073 & 3.90684e-05 & 635\\
1.0472 & 9.36409e-08 & 226082 & 1.7266e-06 & 3693\\
1.0473 & -- & -- & 3.90691e-05 & 622\\
\end{tabular}
\caption{Time-steps and DoFs of \text{ASTFEM}\xspace and \text{TAFEM}\xspace for the singularity in time problem.}\label{T:TS-stepsize}
\end{figure}
\normalsize
\text{ASTFEM}\xspace was killed after time-step 288, which used $14\, 163\, 460$ DoFs
and a time-step size of $1.46338\mathrm{e}{-9}$. As can be
observed from
Figure~\ref{fig:ts-DoF+TSS}, \text{ASTFEM}\xspace massively
refines in time and space. It was killed before reaching the singularity at
$\bar{t}=\frac{\pi}{3}$, thereby accumulating
the total number of $498\, 228\, 711$ DoFs. The reason for this
behaviour lies in the $L^\infty$ marking. Indeed,
Figure~\ref{fig:ts-Etaf+East} shows that \text{ASTFEM}\xspace equally distributes
the $L^\infty$ indicators thereby leading to very small local errors,
which cause the strong spatial refinement. Note
that the minimal step-size $\tau_*$ in \text{ASTFEM}\xspace only applies when
temporal refinement is triggered by the time indicator
$\E{\tau}$, i.e., time-step sizes below the threshold
$\tau_*$ can be chosen when required by the consistency estimator
$\mathcal{E}_f$, which is the case close to the singularity. Consequently,
the behaviour of \text{ASTFEM}\xspace cannot be essentially improved by a different
choice of $\ensuremath{\text{\texttt{TOL}}}_*$.
In contrast, the local estimators of \text{TAFEM}\xspace appear to be quite equally distributed.
It uses slightly larger time-steps and far fewer DoFs close to the
singularity; compare with the table in Fig.~\ref{T:TS-stepsize}. It completely outperforms \text{ASTFEM}\xspace and
reaches the final time with a total of $2\, 947\, 080$ DoFs in 618 time-steps.
\subsubsection{Jumping singularity}
Inspired by example $5.3$ of~\cite{MoNoSi:00}, we construct an
experiment where the solution has a strong spatial singularity that
changes its position in time. In the domain $\Omega\times(0,4]$,
with $\Omega=(0,3)\times(0,3)$, we define the elliptic operator $\L
u=-\divo{\Am\nabla u}$, where
\begin{equation*}
\Am(t,x) =
\begin{cases}
a_1\mathbb{I}\qquad & \text{if } (x-x_i)(y-y_i)\geq0\\
a_2\mathbb{I}\qquad & \text{if } (x-x_i)(y-y_i)<0
\end{cases}
\end{equation*}
with $a_1=161.4476387975881$, $a_2=1$, $i= \lceil t \rceil$,
$(x_1,y_1)=(1,2)$, $(x_2,y_2)=(1,1)$, $(x_3,y_3)=(2,1)$, and
$(x_4,y_4)=(2,2)$. This operator `moves' the singularity through
the points $(x_i,y_i)$.
Let $u$ be the function
\begin{equation*}
u(x,t) = \sum_{i=1}^4 s_i(t)\ r_i^\gamma\ \mu(\theta_i)
\end{equation*}
where
\begin{equation*}
s_i(t) =
\begin{cases}
(t-(i-1))^2(t-i)^2\quad &\text{if } i-1\leq t \leq i\\
0\qquad & \text{otherwise}
\end{cases}
\end{equation*}
and
\begin{equation*}
\mu(\theta) =
\begin{cases}
\cos((\frac{\pi}{2}-\sigma)\gamma)\cdot\cos((\theta-\frac{\pi}{2}+\rho)\gamma) \quad& \text{if } 0\leq\theta<\frac{1}{2}\pi\\
\cos(\rho\gamma)\cdot\cos((\theta-\pi+\sigma)\gamma) & \text{if } \frac{1}{2}\pi\leq\theta<\pi\\
\cos(\sigma\gamma)\cdot\cos((\theta-\pi-\rho)\gamma) & \text{if } \pi\leq\theta<\frac{3}{2}\pi\\
\cos((\frac{\pi}{2}-\rho)\gamma)\cdot\cos((\theta-\frac{3\pi}{2}-\sigma)\gamma) & \text{if } \frac{3}{2}\pi\leq\theta<2\pi\\
\end{cases}
\end{equation*}
with $\gamma=0.1$, $\rho=\frac{\pi}{4}$, $\sigma=-14.92256510455152$, $x-x_i=r_i\cos(\theta_i)$ and $y-y_i=r_i\sin(\theta_i)$.
It is easy to check that $u$ satisfies
\begin{equation*}
\partial_t u(x,t) +\L u(x,t) = \sum_{i=1}^4 r_i^\gamma \mu(\theta_i)\ \partial_t s_i(t)\ .
\end{equation*}
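A minimal NumPy sketch of this exact solution (vectorized over arrays of evaluation points; not part of the DUNE code) may look as follows:
\begin{verbatim}
import numpy as np

gamma, rho, sigma = 0.1, np.pi/4, -14.92256510455152
centers = [(1, 2), (1, 1), (2, 1), (2, 2)]   # (x_i, y_i), i = 1..4

def s(i, t):
    # temporal weight s_i(t), supported on [i-1, i]
    return np.where((t >= i - 1) & (t <= i),
                    (t - (i - 1))**2 * (t - i)**2, 0.0)

def mu(theta):
    # angular profile of the spatial singularity (four sectors)
    th = np.mod(np.asarray(theta, dtype=float), 2*np.pi)
    c = np.cos
    conds = [th < np.pi/2,
             (th >= np.pi/2) & (th < np.pi),
             (th >= np.pi) & (th < 3*np.pi/2),
             th >= 3*np.pi/2]
    vals = [c((np.pi/2 - sigma)*gamma) * c((th - np.pi/2 + rho)*gamma),
            c(rho*gamma) * c((th - np.pi + sigma)*gamma),
            c(sigma*gamma) * c((th - np.pi - rho)*gamma),
            c((np.pi/2 - rho)*gamma) * c((th - 3*np.pi/2 - sigma)*gamma)]
    return np.select(conds, vals)

def u_exact(x, y, t):
    # superposition of the four `moving' singular modes
    val = 0.0
    for i, (xi, yi) in enumerate(centers, start=1):
        r = np.hypot(x - xi, y - yi)
        val = val + s(i, t) * r**gamma * mu(np.arctan2(y - yi, x - xi))
    return val
\end{verbatim}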
Based on the ideas presented in Remark~\ref{R:tol-split} we compare
\text{TAFEM}\xspace and \text{ASTFEM}\xspace with the same tolerance
$\ensuremath{\text{\texttt{TOL}}}=0.007$.
\begin{figure}
\begin{subfigure}[c]{0.45\textwidth}
\subcaption{DoF}
\begin{tikzpicture}[scale=0.75]
\begin{semilogyaxis}[xlabel={Time}]
\addplot[black, dashed, thick, mark={}] table[ x=Time, y=Dofs ] {js-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace}
\addplot[red, solid, mark={}] table[x=Time, y=Dofs ] {js-logfile-tafem.dat};
\addlegendentry{$\text{TAFEM}\xspace$}
\end{semilogyaxis}
\end{tikzpicture}
\end{subfigure}\qquad
\begin{subfigure}[c]{0.45\textwidth}
\subcaption{time-step size}
\begin{tikzpicture}[scale=0.75]
\begin{semilogyaxis}[xlabel={Time}]
\addplot[black, dashed, thick, mark={}] table[ x=Time, y=DeltaT ] {js-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace}
\addplot[red, solid, mark={}] table[x=Time, y=DeltaT ] {js-logfile-tafem.dat};
\addlegendentry{$\text{TAFEM}\xspace$}
\end{semilogyaxis}
\end{tikzpicture}
\end{subfigure}
\caption{DoFs and time-step sizes for the jumping singularity
problem.}
\label{fig:js-DoF+TSS}
\end{figure}
\begin{figure}
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.5$}
\includegraphics[width=1.0\textwidth]{js-mesh-05}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 1.0$}
\includegraphics[width=1.0\textwidth]{js-mesh-10}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 1.5$}
\includegraphics[width=1.0\textwidth]{js-mesh-15}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 2.0$}
\includegraphics[width=1.0\textwidth]{js-mesh-20}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 2.5$}
\includegraphics[width=1.0\textwidth]{js-mesh-25}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 3.0$}
\includegraphics[width=1.0\textwidth]{js-mesh-30}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 3.5$}
\includegraphics[width=1.0\textwidth]{js-mesh-35}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 4.0$}
\includegraphics[width=1.0\textwidth]{js-mesh-40}
\end{subfigure}
\caption{Adaptive grids for the jumping singularity problem.}
\label{fig:adaptMesh}
\end{figure}
\text{ASTFEM}\xspace makes excessive use of the nonstandard exit, i.e.,
the time-step sizes equal the minimal
time-step size $\tau_\ast = 0.0123477$ for 276 of a total of 302
time-steps,
and it uses a
total of $893\,771$ DoFs. The $L^2(0,4;H^1(\Omega))$-error is $0.0546689$, the $L^2(0,4;L^2(\Omega))$-error is $0.0355061$, and the total computation time was 413.67 seconds.
The \text{TAFEM}\xspace uses a
total of $786\,789$ DoFs in 291 time-steps. The
$L^2(0,4;H^1(\Omega))$-error is $0.0552438$, the
$L^2(0,4;L^2(\Omega))$-error is $0.034989$, and the total computation
time was 546.113 seconds (including \texttt{TOLFIND}).
The adaptive meshes generated by \text{TAFEM}\xspace are displayed in Figure~\ref{fig:adaptMesh}.
We see that the spatial adaptivity captures the position of the
singularity by local refinement and coarsens the region when the
singularity has passed by.
A look at Fig.~\ref{fig:js-DoF+TSS} shows that \text{TAFEM}\xspace
makes more use of the spatial and temporal adaptivity and
achieves a similar result with slightly less effort.
The advantages of \text{TAFEM}\xspace come fully into their own in the presence of
singularities in time (see Section~\ref{sec:singtime}). For problems that are regular
in time, \text{TAFEM}\xspace is expected to perform
similarly to \text{ASTFEM}\xspace, up to the disadvantage that, at the beginning, the module {\texttt{TOLFIND}}
needs several adaptive iterations over
the time span, whereas the computation of the minimal time-step size
in \text{ASTFEM}\xspace iterates only once over the time span. This is reflected in the
comparable computing times for the jumping singularity problem.
\subsubsection{Rough initial data} We conclude with an example
inspired by the numerical experiment~5.3.2
in~\cite{KrMoScSi:12} with homogeneous Dirichlet boundary
conditions and homogeneous right-hand side $f\equiv0$. As initial data we choose a checkerboard
pattern over $\Omega=(0,1)^2$ where $u_0\equiv-1$ on $\Omega_1=(\frac{1}{3},\frac{2}{3})\times
\left( (0,\frac{1}{3})\cup(\frac{2}{3},1) \right)\ \cup \ \left( (0,\frac{1}{3})\cup(\frac{2}{3},1) \right)
\times (\frac{1}{3},\frac{2}{3})$, $u_0\equiv1$ on
$\Omega\setminus\Omega_1$ and $u_0\equiv0$ on $\partial\Omega$.
Starting with an initial mesh with only 5 DoFs, the approximation of
$u_0$ uses Lagrange interpolation and refines
the mesh until $\|U_0-u_0\|_\Omega^2\leq\ensuremath{\text{\texttt{TOL}}_0}^2 = 10^{-2}$
is fulfilled. Starting \text{ASTFEM}\xspace and \text{TAFEM}\xspace
with a tolerance of $\ensuremath{\text{\texttt{TOL}}}=10^{-1}$ and running to the final
time $T=1$, we get the following results: \text{ASTFEM}\xspace needs 811 time-steps, a total sum of $436\, 199\, 377$ DoFs, with an estimated total error
of 0.0230905, and a total computation time of 81466.4 seconds.
The \text{ASTFEM}\xspace makes use of the
nonstandard exit for the first 270 time-steps, with a
minimal time-step size of $\tau_\ast=7.77573\mathrm{e}{-7}$; the small
size of the time-steps in the beginning is also accompanied by extreme
spatial refinements, contributing to the large total number of
DoFs. This is due to the
$L^\infty$-strategy that aims at an equal distribution of the time indicators
$\frac1\tau \E{\tau}^2$ rather than $\E{\tau}^2$. In order to highlight this effect close to
the initial time, we use a log scale for the time axis in
Figures~\ref{fig:ts-Ect-comp} and~\ref{fig:cb-DoF+TSS}.
The \text{TAFEM}\xspace needs only 117 time-steps and a total of $3\, 762\, 503$ DoFs,
resulting in an estimated total error of 0.039855. It is about $20$
times faster with a total computation time of 3903.76 seconds (including \texttt{TOLFIND}).
\text{TAFEM}\xspace refines the mesh initially and then almost steadily coarsens in
time and space (see Figures~\ref{fig:cb-DoF+TSS} and~\ref{fig:adaptMesh2}~(E-H)).
Figure~\ref{fig:ts-Ect-comp} shows that the time indicators
$\E{\tau}^2$ are nearly equally distributed.
Both algorithms reduce the spatial resolution once the singular behaviour of the solution is
reduced; see Figures~\ref{fig:cb-DoF+TSS} and~\ref{fig:adaptMesh2}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{loglogaxis}[width = 9cm, height=7cm,xlabel={Time},legend
style={legend pos=south west,font=\tiny}]
\addplot[red, dashed, thick, mark={}] table[ x=Time, y expr=\thisrow{CT-Estimate2} ] {cb-logfile-tafem.dat};
\addlegendentry{\text{TAFEM}\xspace $\E{\tau}^2$}
\addplot[black, solid, mark={}] table[x=Time, y expr=\thisrow{DeltaT}*\thisrow{CT-Estimate2}] {cb-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace $\E{\tau}^2$}
\addplot[blue, dotted, thick, mark={}] table[x=Time, y expr=\thisrow{CT-Estimate2}] {cb-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace $\frac1\tau \E{\tau}^2$}
\end{loglogaxis}
\end{tikzpicture}
\caption{The local time indicator
$\E{\tau}^2$ for \text{TAFEM}\xspace and \text{ASTFEM}\xspace as
well as the local $L^\infty$ indicators
$\frac1\tau\E{\tau}^2$ used by \text{ASTFEM}\xspace
for the rough
initial data problem.}
\label{fig:ts-Ect-comp}
\end{figure}
\begin{figure}
\begin{subfigure}[c]{0.45\textwidth}
\subcaption{DoF}
\begin{tikzpicture}[scale=0.75]
\begin{loglogaxis}[xlabel={Time},legend style={legend pos=south west,font=\tiny}]
\addplot[black, solid, mark={}] table[ x=Time, y=Dofs ] {cb-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace}
\addplot[red, dashed, thick, mark={}] table[x=Time, y=Dofs ] {cb-logfile-tafem.dat};
\addlegendentry{$\text{TAFEM}\xspace$}
\end{loglogaxis}
\end{tikzpicture}
\end{subfigure}\qquad
\begin{subfigure}[c]{0.45\textwidth}
\subcaption{time-step size}
\begin{tikzpicture}[scale=0.75]
\begin{loglogaxis}[xlabel={Time},legend style={legend pos=north west,font=\tiny}]
\addplot[black, solid, mark={}] table[ x=Time, y=DeltaT ] {cb-logfile-astfem.dat};
\addlegendentry{\text{ASTFEM}\xspace}
\addplot[red, dashed, thick, mark={}] table[x=Time, y=DeltaT ] {cb-logfile-tafem.dat};
\addlegendentry{$\text{TAFEM}\xspace$}
\end{loglogaxis}
\end{tikzpicture}
\end{subfigure}
\caption{DoFs and time-step sizes for the rough initial data problem.}
\label{fig:cb-DoF+TSS}
\end{figure}
\begin{figure}
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0$}
\includegraphics[width=1.0\textwidth]{cb-mesh-astfem-0}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.0015$}
\includegraphics[width=1.0\textwidth]{cb-mesh-astfem-0015}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.0040$}
\includegraphics[width=1.0\textwidth]{cb-mesh-astfem-0040}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.040$}
\includegraphics[width=1.0\textwidth]{cb-mesh-astfem-040}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0$}
\includegraphics[width=1.0\textwidth]{cb-mesh-tafem-0}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.00025$}
\includegraphics[width=1.0\textwidth]{cb-mesh-tafem-00025}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.0025$}
\includegraphics[width=1.0\textwidth]{cb-mesh-tafem-0025}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.22\textwidth}
\subcaption{$t\approx 0.047$}
\includegraphics[width=1.0\textwidth]{cb-mesh-tafem-047}
\end{subfigure}
\caption{Adapted meshes generated with \text{ASTFEM}\xspace (A-D) and \text{TAFEM}\xspace (E-H) for the rough initial data problem.}
\label{fig:adaptMesh2}
\end{figure}
\section{Introduction}
\label{intro}
The interior of a neutron star (NS) is one of the most extreme environments known to exist in the observable universe. The hydrostatic equilibrium established by the NS's own gravity acting against the degeneracy pressure of neutrons in its core is described by the Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{TOV}. In order to solve the TOV equations, we need an equation of state (EoS) that relates the neutron degeneracy pressure inside the NS's core to its density at zero temperature. Due to the rapid cooling of NSs via neutrino emission and hence their fast \textbeta-equilibration, finite-temperature and dissipative corrections to the EoS are negligible, allowing NS matter to be modeled as a perfect fluid~\cite{cool}. The EoS of cold matter at supranuclear densities is thought to be universal and has a one-to-one correspondence with the mass-radius relationship of NSs~\cite{mr-pe}. Hence, knowledge of the cold nuclear matter EoS is of extreme importance in understanding not only the structure of NSs, but also the general properties of matter at such densities. However, limitations in our understanding of the strong nuclear force restrict our knowledge of the NS EoS to a number of approximate models. Quantitatively, the pressure-density relation and the implied mass-radius relationship for an EoS model $\mathcal{E}$ become:
\begin{eqnarray}
p &=& p_{\mathcal{E}}(\rho) \iff r = r_{\mathcal{E}}(m) \label{eos-pe}
\end{eqnarray}
where $p,\rho$ are the pressure and rest-mass density at some point in the NS core, and $m,r$ are the mass and radius of the NS.
The tabulated set of equation of state models constitutes a discrete collection of models based on different theoretical descriptions of nuclear theories (for a recent list of tabulated EoS models, see~\cite{mr-review}), allowing us to compare one model against the others. However, this does not give us the ability to fully survey the pressure-density parameter space of matter at extreme densities. Parameterized EoS models, such as the spectral decomposition of the adiabatic index in powers of logarithmic pressure~\cite{spectral} and the piecewise polytropic parameterization~\cite{piecewise}, are more flexible in the sense that a chosen range of the parameters corresponds to a continuum in the pressure-density space, which can be constrained empirically, given data. However, the data required for inferring the EoS are challenging to acquire from controlled experiments. The high density of NS cores, orders of magnitude above the nuclear saturation density~\cite{mr-review}, is impossible to re-create in a terrestrial laboratory. This has resulted in the NS EoS remaining largely unconstrained for a long time~\cite{mr-review}.
Observations of systems involving neutron stars often enable physicists to extract information regarding the NS EoS from measurements of the masses and/or radii of the NSs in such systems~\cite{mr-review}. The one-to-one correspondence between the NS mass-radius relationship and the EoS pressure-density relationship translates a simultaneous mass-radius measurement into a measurement of the NS EoS itself. Observations of electromagnetic (EM) signals from such astrophysical systems, such as X-ray pulsar observations~\cite{EM-Xray1,EM-Xray2,EM-Xray3,EM-Xray4}, thermonuclear bursts and quiescent low-mass X-ray binaries~\cite{thnc-qxlmb}, etc., have been used to simultaneously measure the masses and radii of NSs and hence the NS EoS. However, these analyses are model-dependent and can be subject to the systematics of the individual EM emission models~\cite{mr-review}. Mass measurements of NSs through pulsar timing experiments~\cite{PT-1} have also been informative about the NS EoS, using the fact that an EoS model implies a particular maximum mass of NSs. However, such precise mass measurements are possible only for pulsars in binary systems, which comprise only 10 percent of the galactic pulsar population~\cite{PT-limit1,PT-limit2}. Thus, observations of EM signals from astrophysical NS systems have provided us with a natural laboratory for exploring the properties of cold nuclear matter at extreme densities, albeit in a rather limited sense. For a review of EoS measurements through electromagnetic signals from astrophysical NS systems, see~\cite{mr-review}.
Observations of gravitational waves (GWs) from merging binary neutron stars (BNSs) at ground-based GW observatories such as LIGO~\cite{LVK1} and Virgo~\cite{LVK2} have resulted in significant advances in the field of NS EoS measurement. The GW waveform, in addition to being sensitive to the component NS masses, carries an imprint of finite-size effects in the evolution of BNS systems. Thus, analyzing GW data from ground-based observatories allows for simultaneous measurements of NS mass and tidal deformability (akin to the radius) that are independent of EM emission model systematics. Hence, GW-based NS mass-radius measurements yield EoS measurements that are free of the limitations of EM signal based EoS measurements. The universality of the cold nuclear matter EoS can be exploited to combine GW observations from multiple BNS events together, or along with other astrophysical and terrestrial observations, to yield joint measurements of the NS EoS. GW data from the two merging BNSs observed to date, GW170817~\cite{170817disc} and GW190425~\cite{190425}, analyzed together with EM counterpart observations of GW170817~\cite{170817Adisc1,170817Adisc2,ATdisc}, other EM-based astrophysical observations, terrestrial measurements and theoretical considerations, have yielded joint EoS constraints that have greatly improved our understanding of dense nuclear matter~\cite{multieos1917}. With future GW observations from more BNSs expected to further improve these measurements, a robust and efficient scheme for EoS inference, jointly from the GW observations of multiple BNS mergers, needs to be developed and/or perfected before the next observing run (O4) of the LIGO/Virgo/KAGRA~\cite{LVK3} (LVK) detector network begins.
Bayesian hierarchical inference from GW data~\cite{thranetalbot2019} is a statistical framework that has been used to accurately measure the EoS from multiple GW observations. However, numerical implementations of such an analysis incur a heavy computational cost, one that grows with the number of events analyzed. Thus, such implementations are likely to be problematic given the number of events expected to be observable during O4, potentially requiring several weeks of computation time. Several techniques~\cite{gmm,interpolation,interpolation2,interpolation3,gwxteme} have been developed to circumvent this problem, achieving their speed-up by re-using information from single-event EoS agnostic parameter estimation (PE) runs that are carried out regardless for GW transient catalogs. Among them, the likelihood approximation algorithm \textsc{GWXtreme}~\cite{gwxtreme-doc} has been shown to perform accurate Bayesian model comparison on the tabulated set of known EoS models with a latency of a few minutes per event~\cite{gwxteme}. However, being a model selection scheme, the current version of \textsc{GWXtreme} cannot compute empirical credible intervals on the EoS pressure-density space from the data using parameterized EoS models. It can only comment on how some discrete lines in that space, one corresponding to each known EoS model, compare against each other.
In this work, we develop an algorithm based on \textsc{GWXtreme} to hierarchically infer parameterized EoSs from multiple GW observations. We incorporate our physical knowledge of NS physics, such as the requirements of causality, thermal stability, and observational consistency of the NS maximum mass, as Bayesian priors on the EoS parameters and compute their posterior distributions using our generalized version of \textsc{GWXtreme}, with a run time of 20 hours to a day for order-10 events. The posterior samples of EoS parameters then translate naturally into empirically measured confidence intervals in the EoS pressure-density plane. Since the \textsc{GWXtreme} method is based on an approximation scheme~\cite{gwxteme} rather than the distribution of computation over a large number of computational resources, we do not need GPU parallelization to achieve our results, making this method easier to implement on most machines. Our analysis needs no prior assumption about the fiducial population of BNSs.
Following previous works~\cite{interpolation,gmm}, we test our algorithm on a set of simulated events drawn from the galactic BNS population and demonstrate its high accuracy. In addition, we test our algorithm on data from the real events GW170817 and GW190425 to show that the results produced by our algorithm are fully consistent with the existing un-approximated Bayesian PE results, despite being orders of magnitude faster. Furthermore, armed with this fast and cheap algorithm, we study the variability of EoS constraints with component NS masses, the signal-to-noise ratio (SNR) of the events analyzed and the number of events analyzed, by drawing additional simulated events in narrow SNR bins and over a wide range of BNS total masses. We note that this work is a proof of concept, wherein we show that \textsc{GWXtreme}'s likelihood approximation scheme can be generalized to perform fast, cheap and accurate hierarchical inference of parameterized NS EoS models from multiple GW observations, making \textsc{GWXtreme} a strong candidate for hierarchical EoS inference in O4. We leave further improvements to our algorithm for potentially more generalized EoS inference as part of a future work, while outlining the blueprints of such generalizations.
This paper is organized as follows. In Sec.~\ref{methods}, we describe our method of EoS inference in detail. First, we review the Bayesian hierarchical framework for EoS inference from GW data in Sec.~\ref{bh} and the problematic computational cost of its implementation. Then, in Sec.~\ref{approx} we describe our algorithm, its approximations and how it resolves the aforementioned problem. Next, Sec.~\ref{eos-prior} summarizes the EoS parameterization and the priors on the EoS parameters that we have chosen for studying and testing our algorithm. We wrap up the discussion of methods in Sec.~\ref{waveform} with a note on the compatibility of our method with various GW waveform models and the associated systematics. In Sec.~\ref{data} we describe the details of the data on which we test our algorithm, which include both real and simulated events. In Sec.~\ref{results}, we display the results of our study and discuss their implications. In Sec.~\ref{conclusion} we conclude with a summary of our method, its virtues and its potential for performing efficient hierarchical EoS inference in O4 given the results of our study. We also outline schemes for potential improvements to our algorithm which are left as upcoming explorations.
\section{Methods}
\label{methods}
\subsection{Bayesian Hierarchical Inference of Parameterized EoS}
\label{bh}
In this section we review the framework of Bayesian hierarchical inference using GW data in the context of EoS inference. For a review of Bayesian inference from GW data see~\cite{thranetalbot2019}. Time-series data from a GW detector, $d(t)$, can be thought of as comprising random noise $n(t)$, with a modeled distribution, and possibly a signal $h(t,\vec{\theta})$ characterized by the parameters $\vec{\theta}$ as dictated by the assumed signal model:
\begin{equation}
d(t) = h(t,\vec{\theta})+n(t)\label{data0}
\end{equation}
The parameters $\vec{\theta}$ consist of both EoS sensitive observables (masses and tidal deformability) and other observables such as distance, sky position, etc., to which the EoS is not sensitive. The finite size effects of NSs manifest themselves in the data through the dependence of the waveform model on the component NS masses $m_i$ and the tidal deformability of the NSs, $\lambda_i$:
\begin{eqnarray}
\vec{\theta} &=& \{m_1,m_2,\lambda_1,\lambda_2,\vec{\theta}_{\text{ne}}\}\\
\lambda_i &=&\frac{2}{3G} k_2(m_i)r_i^5\label{lambda}
\end{eqnarray}
where $r_i$, are radii of the component NSs, $k_2$ the tidal love number~\cite{love-number}, and $\vec{\theta}_{\text{ne}}$ are other non EoS sensitive parameters that characterize the GW waveform. Given a noise model which specifies the probability distribution of the random noise time series $n(t)$, one can compute the likelihood of the observed time-series data as a function of the parameters characterizing the waveform model: $\mathcal{L}(d|\vec{\theta})$. Since the EoS only constrains the EoS sensitive observables, we can marginalize the likelihood over the remaining parameters $\vec{\theta}_{\text{ne}}$ using uninformative priors $p(\vec{\theta}_{\text{ne}})$ and construct the ingredients of our EoS inference analysis from that marginalized likelihood:
\begin{equation}
\mathcal{L}(d|m_1,m_2,\lambda_1,\lambda_2) = \int \mathcal{L}(d|m_1,m_2,\lambda_1,\lambda_2,\vec{\theta}_{\text{ne}})p(\vec{\theta}_{\text{ne}})\,d\vec{\theta}_{\text{ne}}\label{likelihood0}
\end{equation}
An EoS model, $\mathcal{E}$ implies a deterministic relationship between $m_i$ and $\lambda_i$, which imposes a delta-function prior on these quantities through Eq.~\eqref{eos-pe}:
\begin{equation}
p(\lambda_1,\lambda_2|m_1,m_2,{\mathcal{E}})=\delta(\lambda_1-\lambda_{\mathcal{E}}(m_1))\delta(\lambda_2-\lambda_{\mathcal{E}}(m_2))\label{prior-ms}
\end{equation}
where the tidal parameter $\lambda$ as a function of mass is obtained by substituting Eq.~\eqref{eos-pe} into Eq.~\eqref{lambda}.
Using the likelihood of GW data given EoS sensitive BNS parameters and an EoS model that imposes a prior on those parameters, one can construct the Bayesian evidence of the EoS model by marginalizing the likelihood over the prior:
\begin{widetext}
\begin{equation}
\mathcal{Z}(\mathcal{E}|d) = \int \mathcal{L}(d|m_1,m_2,\lambda_1,\lambda_2)p(\lambda_1,\lambda_2|m_1,m_2,\mathcal{E})p(m_1,m_2)\,dm_1\,dm_2\,d\lambda_1\,d\lambda_2\label{evidence}
\end{equation}
\end{widetext}
where $p(m_1,m_2)$ is an uninformative prior on the component masses. The overall evidence of an EoS model given multiple observations $\{d\}=\{d_1,d_2,...\}$ can be obtained by multiplying the individual evidences, yielding the joint evidence: $\mathcal{Z}(\mathcal{E}|\{d\})=\prod_{i}\mathcal{Z}(\mathcal{E}|d_{i})$. The ratio of the joint evidence for two different EoS models $\mathcal{E}_{1}$ and $\mathcal{E}_2$ yields the Bayes factor which can be compared against unity to perform model selection:
\begin{equation}
BF^{\mathcal{E}_1}_{\mathcal{E}_2}(\{d\}) = \frac{\mathcal{Z}(\mathcal{E}_1|\{d\})}{\mathcal{Z}(\mathcal{E}_2|\{d\})}
\end{equation}
This framework was used to perform model comparison for the set of known tabulated EoSs for the two BNS events observed by LVK to date: GW170817~\cite{170817MS}, and GW190425. However, model selection can only compare known EoS models with each other. While such an analysis sheds light on the physics of the NS EoS since each model has its own set of assumptions about said physics, empirical constraints on the pressure density space in the form of continuous credible intervals inferred from data are beyond its scope. Phenomenologically parameterized EoSs such as the spectral parameterization and the piecewise polytropic parameterization are free of this limitation.
Such an EoS model $\mathcal{E}$, characterized by the parameters $\vec{\gamma}$, would imply a pressure-density and the corresponding mass-tidal deformability relation:
\begin{eqnarray}
p &=& p(\rho,\vec{\gamma})\label{pe-param}\implies
\lambda =\lambda(m,\vec{\gamma})\label{ml-param}
\end{eqnarray}
The deterministic relation Eq.~\eqref{ml-param} would then impose a prior on $\lambda_i$ conditional on $m_i$ in the form of
\begin{eqnarray}
p(\lambda'_1,\lambda'_2|m_1,m_2,\vec{\gamma})&=&\delta(\lambda'_1-\lambda(m_1,\vec{\gamma}))\delta(\lambda'_2-\lambda(m_2,\vec{\gamma}))\label{prior-pe}
\end{eqnarray}
Replacing the EoS sensitive prior in Eq.~\eqref{evidence} with Eq.~\eqref{prior-pe} would allow for the reinterpretation of the left hand side of Eq.~\eqref{evidence} as the marginalized hierarchical likelihood $\mathcal{L}_h$ of GW data $d$, given the EoS parameters $\vec{\gamma}$ (viewed as hyper-parameters):
\begin{widetext}
\begin{equation}
\mathcal{L}_{h}(d|\vec{\gamma}) = \int \mathcal{L}(d|m_1,m_2,\lambda_1,\lambda_2)p(\lambda_1,\lambda_2|m_1,m_2,\vec{\gamma})p(m_1,m_2)\,dm_1\,dm_2\,d\lambda_1\,d\lambda_2\label{likelihood}
\end{equation}
\end{widetext}
The universality of the cold matter nuclear EoS then implies that the EoS hyper-parameters $\vec{\gamma}$ are also universal and will have the same value for all NS systems. This can be exploited to construct the joint ``quasi'' likelihood of GW data from multiple events, $\{d\}$, given EoS hyper-parameters, by multiplying the individual event hierarchical likelihoods. One can then use Bayes' theorem to convert that joint likelihood into a posterior distribution of EoS hyper-parameters given GW data from multiple events, by imposing a prior on those hyper-parameters based on our knowledge of NS physics:
\begin{equation}
p(\vec{\gamma}|\{d\}) \propto p(\vec{\gamma},I)\prod_{i=1}^N \mathcal{L}_{h}(d_i|\vec{\gamma})\label{posterior}
\end{equation}
where $p(\vec{\gamma},I)$ encodes our a priori knowledge of NS physics by vanishing at values of $\vec{\gamma}$ for which the EoS becomes unphysical (examples of unphysicality include an acausal sound speed, thermal instability, etc.). The abstract symbol $I$ represents our understanding of NS physics, to which our priors on the EoS hyper-parameters are conditional. This posterior distribution of the EoS hyper-parameters can be used to compute their Bayesian credible intervals, which translate naturally into a credible interval on the EoS pressure-density space.
To compute credible intervals on the EoS hyper-parameters, one can in principle use standard Bayesian inference engines (such as direct quadrature or MCMC) on Eq.~\eqref{posterior} directly. However, the product of integrals in Eq.~\eqref{posterior} is difficult to deal with due to the presence of the delta functions in the integrands. The delta function priors imposed by the EoS model prevents us from approximating each integral in the product as a Monte Carlo sum over posterior samples yielded by single event EoS agnostic PE runs, that are carried out for generating GW transient catalogs. Unable to re-use information from single event PE runs, the alternative we are left with is to essentially redo the PE of BNS waveform parameters with delta function priors imposed on a subset of those parameters. This boils down to simultaneous inference of EoS hyper-parameters and the event specific waveform parameters, for all events. Numerical implementations of such PE would then require a large number of costly GW template waveform evaluations per event. This would lead to a rapidly increasing computational cost of hierarchical EoS inference with the number of events analyzed, potentially requiring several weeks of computation time per analysis in O4.
It would be much more efficient if an algorithm can be developed to numerically compute the marginalized hierarchical likelihood as a fast evaluating function of its arguments, by somehow re-using information from single event EoS agnostic PE runs, which are carried out regardless, for GW transient catalogs. In the next section, we describe how to achieve this using the \textsc{GWXtreme} likelihood approximation scheme and the algorithm we developed based on it.
\subsection{Likelihood Approximation Scheme}
\label{approx}
To evaluate the hierarchical likelihood of GW data given EoS parameters, we perform the integral in Eq.~\eqref{likelihood} numerically, after approximating the integrand in a way that drastically reduces computational cost and latency. We base our approximation on~\cite{gwxteme}, where reparameterization of the EoS sensitive observables to reduce the dimensionality of the integrand and KDEs to approximate the lower dimensional (marginalized) integrand were implemented, for rapid and computationally cheap computation of EoS evidences, i.e., Eq.~\eqref{evidence}. Since the integral in Eq.~\eqref{likelihood} is essentially the same as that in Eq.~\eqref{evidence} with the only difference being the parameterization of the EoS sensitive prior, similar approximations as in~\cite{gwxteme} can be used for the fast and cheap evaluation of our hierarchical marginalized likelihood.
To reduce the dimensionality of the integral in Eq.~\eqref{likelihood} and evaluate it numerically, we first note that, by Bayes' theorem, the single-event likelihood given EoS sensitive BNS parameters, multiplied by EoS uninformative priors on those parameters, is proportional to the posterior distribution of those parameters given GW data: $p(m_1,m_2,\lambda_1,\lambda_2|d) \propto \mathcal{L}(d|m_1,m_2,\lambda_1,\lambda_2)\, p(m_1,m_2,\lambda_1,\lambda_2)$. Note that this posterior has already been sampled during the single-event EoS agnostic PE run for some choice of the uninformative priors: $p(m_1,m_2,\lambda_1,\lambda_2)=p_{\text{PE}}(m_1,m_2,\lambda_1,\lambda_2)$. This can be used to rewrite the integral in Eq.~\eqref{likelihood} in terms of the single-event posterior distribution of BNS parameters $p(m_1,m_2,\lambda_1,\lambda_2|d)$:
\begin{widetext}
\begin{equation}
\mathcal{L}_h(d|\vec{\gamma}) = \int \frac{p(m_1,m_2,\lambda_1,\lambda_2|d)}{p_{\text{PE}}(m_1,m_2,\lambda_1,\lambda_2)}p(\lambda_1,\lambda_2|m_1,m_2,\vec{\gamma})p(m_1,m_2)\,dm_1\,dm_2\,d\lambda_1\,d\lambda_2\label{likelihood2}
\end{equation}
\end{widetext}
As we shall demonstrate shortly, the single-event posterior distribution of BNS parameters given GW data can be accurately approximated as a fast evaluable numerical function, from its stochastic samples which are already generated and written to disk during the creation of GW transient catalogs. Before implementing such an approximation, we first show that the dimensionality of the integral can now be reduced by reparameterizing the EoS sensitive observables. We re-parameterize $(m_1,m_2,\lambda_1,\lambda_2)$ into the chirp mass, mass ratio, and tidal parameters, $\mathcal{M}(m_1,m_2),q(m_1,m_2),\tilde{\Lambda}(\lambda_1,\lambda_2,m_1,m_2),\delta\tilde{\Lambda}(\lambda_1,\lambda_2,m_1,m_2)$, which are defined as follows:
\begin{widetext}
\begin{eqnarray}
\mathcal{M} &=& \frac{(m_1m_2)^{3/5}}{(m_1+m_2)^{1/5}}\\
q &=& m_2 / m_1 \\
\tilde{\Lambda} &=& \frac{8}{13}\big[(1+7\eta -31\eta^2)(\Lambda_1+\Lambda_2)+\sqrt{1-4\eta}(1+9\eta-11\eta^2)(\Lambda_1-\Lambda_2)\big]\label{Lt}\\
\delta\tilde{\Lambda} &=&\frac{1}{2}\bigg[ \sqrt{1-4\eta}\left(1-\frac{13272}{1319}\eta+\frac{8944}{1319}\eta^2\right)(\Lambda_1+\Lambda_2)+\left(1-\frac{15910}{1319} \eta +\frac{32850}{1319}\eta^2 +\frac{3380}{1319}\eta^3\right)(\Lambda_1-\Lambda_2)\bigg]\label{dLt}
\end{eqnarray}
where $\eta=m_1m_2/(m_1+m_2)^2$ is the symmetric mass ratio and $\Lambda_i=G\lambda_i[c^2/(Gm_i)]^5$ is the dimensionless tidal deformability. We note that the definitions Eq.~\eqref{Lt} and Eq.~\eqref{dLt} assume $m_1>m_2$. Under this reparameterization, Eq.~\eqref{likelihood2} becomes:
\begin{equation}
\mathcal{L}_h(d|\vec{\gamma}) \propto \int \frac{p(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda}|d)}{p_{\text{PE}}(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda})}p(\tilde{\Lambda},\delta\tilde{\Lambda}|\mathcal{M},q,\vec{\gamma})p(\mathcal{M},q)\,d\mathcal{M}\,dq\,d\tilde{\Lambda}\,d\delta\tilde{\Lambda}\label{likelihood3}
\end{equation}
where the Jacobians associated with the variable change cancel out and the reparameterized EoS sensitive priors are
\begin{equation}
p(\tilde{\Lambda}',\delta\tilde{\Lambda}'|\mathcal{M},q,\vec{\gamma})= \delta(\tilde{\Lambda}'-\tilde{\Lambda}(\mathcal{M},q,\vec{\gamma}))\delta(\delta\tilde{\Lambda}'-\delta\tilde{\Lambda}(\mathcal{M},q,\vec{\gamma}))
\end{equation}
\end{widetext}
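For reference, the reparameterization above amounts to the following short Python function, a direct transcription of the formulas (assuming $m_1\ge m_2$, as noted above):
\begin{verbatim}
import numpy as np

def tidal_parameters(m1, m2, Lambda1, Lambda2):
    # chirp mass, mass ratio and the (LambdaT, dLambdaT) combinations
    eta = m1*m2 / (m1 + m2)**2
    Mc = (m1*m2)**0.6 / (m1 + m2)**0.2
    q = m2 / m1
    root = np.sqrt(1.0 - 4.0*eta)
    Lp, Lm = Lambda1 + Lambda2, Lambda1 - Lambda2
    LT = (8.0/13.0) * ((1 + 7*eta - 31*eta**2)*Lp
                       + root*(1 + 9*eta - 11*eta**2)*Lm)
    dLT = 0.5 * (root*(1 - 13272/1319*eta + 8944/1319*eta**2)*Lp
                 + (1 - 15910/1319*eta + 32850/1319*eta**2
                    + 3380/1319*eta**3)*Lm)
    return Mc, q, LT, dLT
\end{verbatim}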
To further simplify Eq.~\eqref{likelihood3}, we can choose the uninformative priors in the denominator of the integrand to be uniform in the reparameterized tidal deformabilities: $p_{\text{PE}}(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda})\propto p(\mathcal{M},q)$. This choice is compatible with at least one standard GW waveform for BNSs: the \textsc{TaylorF2} waveform model~\cite{TaylorF2}. We elaborate more on generalizations to other waveforms and the associated systematics in Sec.~\ref{waveform}. Under this choice of priors, Eq.~\eqref{likelihood3} becomes
\begin{equation}
\mathcal{L}_h(d|\vec{\gamma}) \propto \int p(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda}|d)p(\tilde{\Lambda},\delta\tilde{\Lambda}|\mathcal{M},q,\vec{\gamma})\,d\mathcal{M}\,dq\,d\tilde{\Lambda}\,d\delta\tilde{\Lambda}
\label{likelihood31}
\end{equation}
The reason for this reparameterization is as follows. The chirp mass $\mathcal{M}$ is known to be extremely well measured, with its posterior distribution being sharply peaked about the mean of the samples of $\mathcal{M}$, which are available from single-event PE runs. Among the tidal parameters, $\tilde{\Lambda}$, which enters the GW waveform model at the 5th post-Newtonian order, has the dominant contribution as compared to $\delta\tilde{\Lambda}$, which enters the waveform at the 6th post-Newtonian order. As a result, the posterior distribution of BNS parameters is largely independent of $\delta\tilde{\Lambda}$. Under these considerations, the posterior distribution of BNS parameters given GW data can be approximated as $p(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda}|d)\approx p(q,\tilde{\Lambda}|d)\delta(\mathcal{M}-\bar{\mathcal{M}})$, where $\bar{\mathcal{M}}$ is the mean of the chirp mass samples obtained from EoS agnostic single-event PE runs. Substituting these into Eq.~\eqref{likelihood3} allows us to use the delta functions and reduce the dimensionality of the integral to one, provided the marginalized posterior $p(q,\tilde{\Lambda}|d)$ can be evaluated, at least approximately, as a numerical function of its parameters:
\begin{equation}
\mathcal{L}_h(d|\vec{\gamma}) \appropto \int p(q,\tilde{\Lambda}(\bar{\mathcal{M}},q,\vec{\gamma})|d)\,dq\label{likelihood4a}
\end{equation}
To evaluate $p(q,\tilde{\Lambda}|d)$ as a numerical function of $(q,\tilde{\Lambda})$, we approximate it from EoS agnostic single-event PE samples of its parameters via Gaussian kernel density estimation (KDE), customized to be immune to edge effects. Traditional Gaussian KDE fits a Gaussian around each posterior sample and approximates the density of those samples as the sum of those Gaussians~\cite{KDE1,KDE2}. The covariance matrix of each of the Gaussians is approximated from the sample covariance matrix of the posterior samples. While the KDE evaluation speed is very fast, it is susceptible to edge effects in its traditional form, becoming inaccurate for distributions with sharp edges. Single-event PE runs implicitly assign the heavier NS as the primary mass $m_1$, resulting in a sharp edge in the distribution of the mass ratio at $q=1$. To circumvent the incompatibility of this sharp edge with KDE, we add to the KDE probability density function (pdf) at each point the value of the pdf at the point symmetric about the sharp edge, resulting in a bounded KDE, similar to~\cite{gwxteme}. This renders our KDE free of edge effects and compatible with the sharp edge at $q=1$. The accuracy of this approximation is demonstrated in Fig.~\ref{kde-vis}. With this approximation of the marginalized posterior, the hierarchical likelihood for the $i$th GW event becomes
\begin{equation}
\mathcal{L}_{h}(d_i|\vec{\gamma}) \appropto \int_0^1 K_i(q,\tilde{\Lambda}(\bar{\mathcal{M}}_i,q,\vec{\gamma}))\,dq\label{likelihood4}
\end{equation}
where $K_i(q,\tilde{\Lambda})$ is the KDE approximation of $p(q,\tilde{\Lambda}|d_i)$ obtained from the EoS agnostic single event posterior samples $\{(q,\tilde{\Lambda})_i\}$. Since $K_i$ is a fast evaluating function of its arguments, the definite integral with a finite range in Eq.~\eqref{likelihood4} can be evaluated efficiently using numerical techniques such as the trapezoidal rule. Substituting Eq.~\eqref{likelihood4} into Eq.~\eqref{posterior} yields the approximate joint hierarchical posterior of EoS parameters given GW data from multiple BNS observations, the EoS parameterization model and the prior knowledge on NS matter physics, that is numerically evaluable almost instantaneously:
\begin{equation}
p(\vec{\gamma}|\{d\},I) \appropto p(\vec{\gamma},I)\prod_{i=1}^N \int_{0}^1 K_i(q,\tilde{\Lambda}(\bar{\mathcal{M}}_i,q,\vec{\gamma}))\,dq \label{posterior2}
\end{equation}
The posterior distribution in Eq.~\eqref{posterior2} can be sampled stochastically to produce empirical constraints on the EoS pressure-density relation. We use affine-invariant Markov Chain Monte Carlo (MCMC) ensemble sampling~\cite{emcee1}, as implemented in the package \textsc{emcee}, with CPU parallelization inbuilt~\cite{emcee2}, to sample the posterior in Eq.~\eqref{posterior2}. We parallelize \textsc{emcee} on 50 CPU cores to sample the posterior in Eq.~\eqref{posterior2} within 20 hours to a day, for 10 BNS events. Using those posterior samples, for each value of the density $\rho$ in Eq.~\eqref{pe-param}, we evaluate $N_{\text{samples}}$ pressures $\{p(\rho,\vec{\gamma}_j)\}$, one corresponding to every posterior sample $\{\vec{\gamma}_j\}$, where $j=1,2,...,N_{\text{samples}}$. We then compute the median, 5 and 95 percent quantiles from the set $\{p(\rho,\vec{\gamma}_j)\}$, which we plot against $\rho$ and interpret as empirical constraints/credible intervals on the NS EoS\@. The posterior predictive distribution of the dimensionless tidal parameter at $1.4\,M_{\odot}$ can also be produced as a different representation of how well the EoS is constrained. It is obtained by computing the histogram of the values of the dimensionless tidal parameter at $1.4\,M_{\odot}$, $\Lambda_i=\Lambda(m=1.4\,M_{\odot},\vec{\gamma}_i)$, corresponding to each posterior sample $\vec{\gamma}_i$ of the EoS parameters drawn from the joint posterior Eq.~\eqref{posterior2}. We produce these constraints by performing our analysis on GW data from both real and simulated events and demonstrate the accuracy and efficiency of our method.
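To make the two approximations concrete, a minimal Python sketch of the reflection-bounded KDE and of the one-dimensional quadrature in Eq.~\eqref{likelihood4} could read as follows (here \texttt{LT\_of\_q} stands for a user-supplied, TOV-derived curve $q\mapsto\tilde{\Lambda}(\bar{\mathcal{M}},q,\vec{\gamma})$ and is a placeholder, not part of any existing package):
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def bounded_kde(q_samples, LT_samples, q_max=1.0):
    # Gaussian KDE over (q, LambdaT) with reflection about q = 1
    kde = gaussian_kde(np.vstack([q_samples, LT_samples]))
    def K(q, LT):
        direct = kde(np.vstack([q, LT]))
        mirror = kde(np.vstack([2.0*q_max - q, LT]))
        return direct + mirror
    return K

def log_event_likelihood(K, LT_of_q, n_grid=200):
    # trapezoidal line integral of the KDE along the EoS curve
    q = np.linspace(0.5, 1.0, n_grid)
    return np.log(np.trapz(K(q, LT_of_q(q)), q))
\end{verbatim}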
Even though we do not simultaneously infer BNS population parameters, we note that our algorithm is generalizable to do so while retaining its low computational cost. We discuss the blueprints of such a calculation in the conclusion section and leave it as part of an upcoming work.
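Schematically, the sampling step and the construction of pressure-density credible intervals could be organized as below; \texttt{LT\_model}, \texttt{log\_prior} and \texttt{pressure\_of\_density} are hypothetical user-supplied callables (e.g.\ wrapping a TOV solver), not functions of \textsc{emcee} itself:
\begin{verbatim}
import numpy as np
import emcee

def log_posterior(gamma, event_kdes, LT_model, log_prior):
    # joint hierarchical log-posterior of Eq. (posterior2)
    lp = log_prior(gamma)
    if not np.isfinite(lp):
        return -np.inf
    q = np.linspace(0.5, 1.0, 200)
    logL = 0.0
    for K, Mc_bar in event_kdes:  # one (KDE, mean chirp mass) per event
        logL += np.log(np.trapz(K(q, LT_model(Mc_bar, q, gamma)), q))
    return lp + logL

# sampler = emcee.EnsembleSampler(nwalkers=100, ndim=4,
#     log_prob_fn=log_posterior, args=(event_kdes, LT_model, log_prior))
# sampler.run_mcmc(initial_state, 5000)
# gammas = sampler.get_chain(discard=1000, flat=True)
# p_curves = np.array([pressure_of_density(rho_grid, g) for g in gammas])
# p_lo, p_med, p_hi = np.quantile(p_curves, [0.05, 0.5, 0.95], axis=0)
\end{verbatim}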
\begin{figure}[t!]
\centering
\subfigure[SNR=32]{\includegraphics[width=0.5\textwidth]{plots/plots_pdf/KDE_vis_highsnr4.pdf}}
\subfigure[SNR=11]{\includegraphics[width=0.5\textwidth]{plots/plots_pdf/KDE_vis_lowsnr4.pdf}}
\caption{KDE visualization for two simulated events, with APR4\_EPP as the injected EoS, one at low (bottom) and the other at high (top) SNR. In both figures we plot $K_i(q,\tilde{\Lambda})$ as a 2D density plot. We overplot the PE samples of $q,\tilde{\Lambda}$ as discrete gray points to demonstrate that the KDE accurately approximates the posterior distribution from which the samples are drawn. The integral in Eq.~\eqref{posterior2} can be visualized as a line integral of these 2D densities along the curve $\tilde{\Lambda}(q,\mathcal{M},\vec{\gamma})$ for some particular value of $\vec{\gamma}$. Two such curves, one corresponding to the injected EoS APR4\_EPP and one to a different EoS, SLY, are plotted as examples of the EoS curves along which the integrals are performed.}
\label{kde-vis}
\end{figure}
In the next section, we describe our choice of EoS parameterization and the priors imposed on the EoS parameters, which are based on existing knowledge of NS matter physics.
\subsection{EoS parameterization and priors}\label{eos-prior}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{plots/plots_pdf/prior2.pdf}
\caption{Visualization of the priors used on the EoS parameters as bounds on the pressure-density plane. We draw a large number of samples of the spectral parameters from the uniform distributions on the right-hand side of Eq.~\eqref{prior}. From those samples, we keep only the ones that satisfy the physicality conditions imposed by the various components of the prior. We then calculate the pressure-density curves for each of those samples and plot credible intervals on the pressure-density plane that contain 90 percent of those curves.}
\label{priors}
\end{figure}
Previous studies like~\cite{piecewiseonly} have shown the effectiveness of using phenomenologically parameterized EoSs, such as the piecewise polytrope, in measuring the EoS pressure-density relation empirically from GW data. Parameterizing the pressure-density relation instead of the mass-tidal parameter relation, and deriving the latter from the former, makes the inclusion of a priori knowledge of NS physics into the analysis straightforward, in the form of Bayesian priors on the EoS parameters. However, the non-differentiability of the EoS at the joining point of two consecutive polytrope pieces leads to increased statistical errors at those points in the EoS measurements inferred from GW data using the piecewise parameterization~\cite{spectralpiecewise}. The spectral parameterization of the EoS, which expands the adiabatic index in powers of logarithmic pressure, has been shown to be free of this deficiency~\cite{spectralpiecewise}. For this reason, we choose the four-parameter spectral decomposition as our EoS parameterization for this study, while noting that our analysis works for any parameterized model, including the piecewise polytrope, which we show in the appendix as a validation study. Under the spectral parameterization, the adiabatic index in terms of the pressure is
\begin{equation}
\ln \Gamma(p,\vec{\gamma}) = \sum_{k=0}^{3}\gamma_k \left(\ln{\frac{p}{p_0}}\right)^k\label{spectral1}
\end{equation}
where $\Gamma(p)$ is the adiabatic index at pressure $p$ and $p_0$ is the minimum pressure above which the representation is valid. Eq.~\eqref{spectral1} can be used to compute the energy density $e$ as a function of pressure using the thermodynamic relation
\begin{equation}
\frac{de}{dp} = \frac{e + p}{p\Gamma(p,\vec{\gamma})}
\end{equation}
which can then be used to find the rest-mass density as a function of pressure using $\rho(p,\vec{\gamma})=[e(p,\vec{\gamma})+p]\exp[-h(p,\vec{\gamma})]$ where $dh/dp=1/[e(p,\vec{\gamma})+p]$. The pressure-density relation can then be used to solve the TOV equation to yield $\tilde{\Lambda}=\tilde{\Lambda}(\mathcal{M},q,\vec{\gamma})$. We use the \textsc{LALSimulationNeutronStarEOS} module of the LIGO Algorithm Library package (\textsc{LALSuite})~\cite{lalsuite}, which implements the TOV solving algorithms described in~\cite{mr-pe} and~\cite{TOVsol}, to solve the TOV equation for each $\vec{\gamma}$.
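A compact sketch of this pipeline through the SWIG-wrapped \textsc{LALSuite} bindings might look as follows (function names as exposed by recent \textsc{lalsimulation} releases; treat the exact signatures as an assumption to be checked against the installed version):
\begin{verbatim}
import lal
import lalsimulation as lalsim

def lambda_of_mass(m_solar, g0, g1, g2, g3):
    # dimensionless tidal deformability Lambda(m) for a spectral EoS
    eos = lalsim.SimNeutronStarEOS4ParameterSpectralDecomposition(
        g0, g1, g2, g3)
    fam = lalsim.CreateSimNeutronStarFamily(eos)  # solves the TOV equations
    m = m_solar * lal.MSUN_SI
    r = lalsim.SimNeutronStarRadius(m, fam)       # metres
    k2 = lalsim.SimNeutronStarLoveNumberK2(m, fam)
    C = lal.G_SI * m / (lal.C_SI**2 * r)          # compactness
    return (2.0/3.0) * k2 / C**5
\end{verbatim}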
As implemented in \textsc{LALSimulationNeutronStarEOS}, the minimum pressure $p_0$ is chosen to be $5.37\times 10^{34}\,\text{dyn}\,\text{cm}^{-2}$. At pressures below this value, a different EoS, the SLY EoS model of~\cite{sly}, is used, which is stitched to the high-density spectral EoS at $p=p_0$, as implemented in \textsc{LALSimulationNeutronStarEOS}, consistent with previous works like~\cite{spectralpiecewise,interpolation}. The range of $\vec{\gamma}$ for which the EoS is physical and stable can be determined using existing knowledge of NS physics. Following~\cite{spectralpiecewise} and~\cite{interpolation}, we demand that for an EoS characterized by a particular value of the parameters $\vec{\gamma}$ to be physical and observationally consistent, it must be thermally stable, causal, and result in a maximum NS mass that is larger than the mass of the most massive NS observed with a precise mass measurement to date. The thermal stability requirement, which demands that $de/dp>0$, is satisfied by imposing $\Gamma(p,\vec{\gamma}) \in [0.6,4.5]$ for all $p\in (5.37\times10^{32},1.19\times 10^{38})\,\text{dyn}\,\text{cm}^{-2}$, in addition to the uniform priors on the $\vec{\gamma}$: $\gamma_0 \in [0.2,2]$, $\gamma_1 \in [-1.6,1.7]$, $\gamma_2 \in [-0.6,0.6]$, $\gamma_3 \in [-0.02,0.02]$. The causality prior demands that the speed of sound in NS matter, $c_s=\sqrt{dp/de}$, up to and at the central pressure $p_{c,max}$ of the heaviest NS supported by the EoS, has to be less than the speed of light. We allow for a 10 percent buffer in the speed-of-sound inequality, to allow causal EoS models in the list of tabulated EoSs to be fit by marginally acausal spectral EoSs. Finally, we impose the maximum mass constraint by demanding that spectral parameters corresponding to physical EoSs must yield a maximum NS mass that is larger than that of the most massive NS observed for which precise mass measurements are possible: $m_{\text{max}}(\vec{\gamma})>1.97\,M_{\odot}$~\cite{mmaxprior}. All of these priors are consistent with previous work on hierarchical inference of parameterized EoSs from GW data, for example~\cite{spectralpiecewise,piecewiseonly,interpolation}. We note that the recent mass measurement of PSR J0952-0607~\cite{heaviest} renders this choice of $1.97\,M_{\odot}$ for the heaviest observed NS mass outdated. However, since we validate our results for the real events by comparing with previous analyses like~\cite{170817Eos,190425} that were carried out before this recent observation, we choose $1.97\,M_{\odot}$ for the heaviest observed NS mass so as to be consistent with those works.
Combining all of these individual priors, our resultant prior on the EoS parameters is:
\begin{widetext}
\begin{equation}\label{prior}
\renewcommand\arraystretch{0}
p(\vec{\gamma},I) \propto \begin{cases}
1 & \text{where}\; \begin{array}[t]{l} 0.2 \le \gamma_0 \le 2,\; -1.6 \le \gamma_1 \le 1.7,\; -0.6 \le \gamma_2 \le 0.6,\; -0.02 \le \gamma_3 \le 0.02, \\ 0.6 \le \Gamma(p,\vec{\gamma}) \le 4.5,\; c_{s}(p_{c,max},\vec{\gamma})<1.1c,\; \text{and}\; m_{\text{max}}(\vec{\gamma})>1.97 M_{\odot} \end{array} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
\end{widetext}
For a visualization of how these priors constrain the EoS pressure--density relation, we draw samples of the EoS parameters from the prior in Eq.~\eqref{prior} and compute pressure--density credible intervals from those samples. The intervals computed from 50000 combined prior samples are plotted in Fig.~\ref{priors}.
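Operationally, drawing such samples amounts to rejection sampling against Eq.~\eqref{prior}. A minimal sketch of one such check is below (assuming the \textsc{LALSuite} Python bindings; the reference pressure of the spectral expansion is set, for illustration only, to the lower edge of the adiabatic-index window, and the buffered causality test is left as a commented placeholder since it requires the speed of sound at the maximum-mass central pressure):
\begin{verbatim}
import numpy as np
import lal
import lalsimulation as lalsim

BOUNDS = [(0.2, 2.0), (-1.6, 1.7), (-0.6, 0.6), (-0.02, 0.02)]
P_MIN, P_MAX = 5.37e32, 1.19e38   # dyn cm^-2, as quoted in the text

def adiabatic_index(p, gamma):
    # Spectral expansion Gamma(x) = exp(sum_k gamma_k x^k),
    # with x = log(p / p_ref); p_ref = P_MIN is an illustrative choice
    x = np.log(p / P_MIN)
    return np.exp(sum(g * x**k for k, g in enumerate(gamma)))

def in_prior(gamma, m_max_obs=1.97):
    """Check whether one draw of gamma lies in the support of Eq. (prior)."""
    if not all(lo <= g <= hi for g, (lo, hi) in zip(gamma, BOUNDS)):
        return False
    G = adiabatic_index(np.geomspace(P_MIN, P_MAX, 200), gamma)
    if G.min() < 0.6 or G.max() > 4.5:
        return False
    eos = lalsim.SimNeutronStarEOS4ParameterSpectralDecomposition(*gamma)
    fam = lalsim.CreateSimNeutronStarFamily(eos)
    if lalsim.SimNeutronStarMaximumMass(fam) / lal.MSUN_SI < m_max_obs:
        return False
    # c_s(p_c,max) < 1.1 c would be checked here; omitted in this sketch
    return True
\end{verbatim}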
In the next section, we summarize the compatibility of our approximation with various GW waveform models that might be used to perform the single-event PE.
\subsection{Waveform Systematics}
\label{waveform}
The compatibility of our approximation scheme with a given GW waveform model hinges on the uninformative priors $p_{\text{PE}}(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda})$ that the waveform model permits in the single-event PE runs. The equivalence of Eq.~\eqref{likelihood3} and Eq.~\eqref{likelihood31} is conditional on $p_{\text{PE}}(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda})$ being uniform in $\tilde{\Lambda}$. However, such a prior typically has support in regions of the $\tilde{\Lambda}$-$\delta\tilde{\Lambda}$ space that correspond to negative values of the tidal parameters $\lambda_1$, $\lambda_2$, which can cause the waveform generators of certain GW waveform families to fail; \textsc{PhenomPNRT}~\cite{phenom1,phenom2,phenom3,phenom4} is one such example. On the other hand, a prior uniform in $\Lambda_1$-$\Lambda_2$, which is compatible with \textsc{PhenomPNRT} waveforms, implies a $p_{\text{PE}}(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda})$ that is not uniform in $\tilde{\Lambda}$ and has vanishing support at $\tilde{\Lambda}=0$, as can be seen in Fig.~\ref{l1l2prior}. This tends to blow up the integrand in Eq.~\eqref{likelihood3} near $\tilde{\Lambda}=0$, which might result in unacceptable numerical errors.
\begin{figure}[!hb]
\centering
\includegraphics[width=0.5\textwidth]{plots/plots_pdf/Lt_vs_l1l2_uniform.pdf}
\caption{Visualization of the uninformative priors on the tidal parameters used in single-event EoS-agnostic PE runs; which prior is appropriate depends on the waveform model being used. The prior uniform in $\tilde{\Lambda}$, compatible with \textsc{TaylorF2} and hence with the current version of \textsc{GWXtreme}, would be a horizontal line in this plot. The prior uniform in positive $\Lambda_1$, $\Lambda_2$, which is compatible with the \textsc{PhenomPNRT} waveform family, leads to the $\tilde{\Lambda}$ prior distribution shown in this plot, which evidently has vanishing support at $\tilde{\Lambda}=0$. This can blow up the integrand in Eq.~\eqref{likelihood3}, as its denominator $p_{\text{PE}}(\mathcal{M},q,\tilde{\Lambda},\delta\tilde{\Lambda})$ will then be proportional to a quantity that vanishes in a region inside the integration range. }
\label{l1l2prior}
\end{figure}
The \textsc{TaylorF2} waveform~\cite{TaylorF2} is immune to this issue, since it is evaluated directly as a function of $(\tilde{\Lambda},\delta\tilde{\Lambda})$ and does not need a conversion to the $\Lambda_1$-$\Lambda_2$ space. A prior uniform in $\tilde{\Lambda}$ for the single-event PE runs is thus compatible with \textsc{TaylorF2}. Hence we choose the \textsc{TaylorF2} waveform model to perform our single-event EoS-agnostic PE runs for both real and simulated events.
Following~\cite{gwxteme,spectralpiecewise,piecewiseonly}, we truncate \textsc{TaylorF2}'s frequency-domain waveform model at the stage of the binary's evolution where the separation between the two NSs becomes comparable to the innermost stable circular orbit (ISCO), at which point the GW frequency has the value $f_{\text{ISCO}}=c^3/[6^{3/2}\pi G(m_1+m_2)]$ (roughly $1.6$\,kHz for a $1.4+1.4\,M_{\odot}$ binary). While this choice of upper frequency cutoff can be unrealistic for EoSs that predict large NS radii, for which the merger happens before the separation of the NSs reaches the ISCO, GW detectors are largely insensitive to such high frequencies, rendering the unphysicality of our frequency cutoff irrelevant. It has been shown in previous studies, such as~\cite{fcutoff}, that varying the cutoff frequency has a negligible effect on EoS inference from GW data.
We note that our algorithm can be made compatible with other waveforms, such as \textsc{PhenomPNRT}, that require positive $\Lambda_1$ and $\Lambda_2$, by switching to a 3-dimensional KDE, as described in Sec.~\ref{conclusion}. We leave such a generalization to an upcoming work. In the next section, we describe in detail the simulation study we performed, the single-event PE method used in those simulations, and the real-event data on which we run our analysis to test its accuracy.
\section{Data: Simulation Study and Real Events}
\label{data}
\begin{figure}[!ht]
\subfigure[SNR $\in (23,25)$]{\includegraphics[width=0.49\textwidth]{plots/plots_pdf/pop_vis_hpd22.pdf}}
~
\subfigure[SNR $\in (33,35)$]{\includegraphics[width=0.49\textwidth]{plots/plots_pdf/pop_vis_hpd33.pdf}}
\caption{Visualization of why high-mass events are less informative about the EoS. We plot 90\% highest-posterior-density intervals of the $\tilde{\Lambda}$ posteriors for a low-mass and a high-mass simulated event drawn from the narrow SNR bins, top: (23,25) and bottom: (33,35). We also display the shape of the posterior in the form of violin plots. The solid lines are the $\mathcal{M}$-$\tilde{\Lambda}$ curves at $q=1$ for some of the known EoS models.}
\label{lo-hi}
\end{figure}
We run our algorithm on the single-event PE data for the real events GW170817 and GW190425, as released publicly by the LVC~\cite{170817PE,190425PE}, both individually and jointly. We compare our results with the EoS constraints obtained from the full, unapproximated PE run that infers the event-specific BNS parameters simultaneously with the EoS hyper-parameters. We show that the results of our approximate technique are in very good agreement with the full PE results, while being obtained orders of magnitude faster. To test the accuracy of our algorithm in recovering the true EoS from data, we run our analysis on simulated events chosen as follows.
We randomly draw BNS masses from the Galactic BNS population as inferred in~\cite{pop}. Astrometric parameters were chosen in such a way that the sky positions and orientations of the coalescences are isotropic and distributed uniformly in co-moving volume within a luminosity distance range of 30\,Mpc to 200\,Mpc. This is consistent with previous works~\cite{gmm,interpolation}, except that~\cite{gmm} chose a distribution uniform in distance instead. However, unlike the aforementioned works, we also set the dimensionless spin parameters to zero for simplicity, since neutron stars are expected to spin down and lose rotational energy to magnetically driven plasma winds~\cite{spin1,spin2,spin3}. This choice is further justified by noting that Galactic double neutron star systems have been observed to have very low spins (with dimensionless spin parameter $\chi<0.05$)~\cite{pop2,spinpop}. We assign APR4\_EPP~\cite{APR4EPP} as our fiducial (``true'') EoS and compute the tidal parameters for the drawn masses using this EoS. Then, using these parameters and the \textsc{TaylorF2} waveform model, we simulate a GW waveform and inject it into GW detector noise realizations corresponding to the power spectral densities of the 4 LVK detectors at projected O4 sensitivity~\cite{O4projection}, as sketched below.
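The following sketch illustrates one such injection using \textsc{bilby}; all parameter values are illustrative only, the detectors' default power spectral densities stand in for the projected O4 curves used in the text, and exact parameter keys may differ between \textsc{bilby} versions:
\begin{verbatim}
import bilby

duration, f_sample, f_min = 128.0, 4096.0, 20.0

# One illustrative draw from the simulated population (zero spins)
injection = dict(mass_1=1.45, mass_2=1.35, lambda_1=280.0, lambda_2=350.0,
                 a_1=0.0, a_2=0.0, tilt_1=0.0, tilt_2=0.0,
                 phi_12=0.0, phi_jl=0.0, luminosity_distance=120.0,
                 theta_jn=0.4, psi=0.7, phase=1.3, ra=1.1, dec=-0.2,
                 geocent_time=1264069376.0)

wfg = bilby.gw.WaveformGenerator(
    duration=duration, sampling_frequency=f_sample,
    frequency_domain_source_model=bilby.gw.source.lal_binary_neutron_star,
    waveform_arguments=dict(waveform_approximant='TaylorF2',
                            reference_frequency=f_min,
                            minimum_frequency=f_min))

# Coloured Gaussian noise from each detector's PSD, then add the signal
ifos = bilby.gw.detector.InterferometerList(['H1', 'L1', 'V1', 'K1'])
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=f_sample, duration=duration,
    start_time=injection['geocent_time'] - duration + 2.0)
ifos.inject_signal(waveform_generator=wfg, parameters=injection)
\end{verbatim}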
Among the events drawn, we choose the first $N_{\text{O4}}$ with a signal-to-noise ratio greater than or equal to 8 in at least one detector and perform PE on them to infer the EoS-agnostic posteriors of single-event BNS parameters given the simulated GW data. Here the expected number of observable events in O4, $N_{\text{O4}}$, is calculated using Poisson statistics, the number of events discovered to date, and the projected O4 sensitivity, as derived in appendix~\ref{poisson}. With 2 confident BNSs observed to date and the projected O4 sensitivity estimated in Ref.~\cite{O4projection}, we find the upper bound on $N_{\text{O4}}$ to be 16 with 90\% confidence. This effectively makes the EoS constraints obtained from analyzing the simulated events a forecast for how well we might be able to constrain the NS EoS from GW data alone by the time O4 ends.
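The flavor of this estimate can be captured with a short Monte Carlo sketch; the time-volume ratio below is a hypothetical stand-in for the sensitivity scaling derived in appendix~\ref{poisson}, not the value used there:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_obs = 2        # confident BNS detections to date
vt_ratio = 3.0   # HYPOTHETICAL O4-to-past surveyed time-volume ratio
# With a flat prior on the rate, the posterior on the past expected count
# given n_obs Poisson-distributed detections is Gamma(n_obs + 1, 1);
# scale it to O4 and draw Poisson realizations of the O4 count.
mu_o4 = vt_ratio * rng.gamma(n_obs + 1, 1.0, size=200_000)
n_o4 = rng.poisson(mu_o4)
print("90% upper bound on N_O4:", np.quantile(n_o4, 0.90))
\end{verbatim}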
To study the variability of EoS constraints, we also draw equal-mass simulated events from narrow SNR bins, distributed uniformly over a broad range of chirp masses. For nearly identical SNRs, we expect the EoS constraints to be dominated by low-mass events, for the following reason. Most candidate EoSs predict small $\tilde{\Lambda}$, close to 0, for higher-mass BNS systems, but large and often vastly different $\tilde{\Lambda}$ for lighter systems. Thus for higher-mass systems, many EoSs in addition to the true EoS are expected to have posterior support, since all of them predict $\tilde{\Lambda}$ within a narrow range of 0, whereas for lower-mass systems only the true EoS has support. As an example, see Fig.~\ref{lo-hi}. This makes higher-mass systems much less informative about the EoS than low-mass systems, with the latter dominating the EoS constraints. We verify this by running our algorithm on these sets of simulated events drawn from narrow SNR bins, which yield EoS constraints consistent with this expectation.
In the next section we describe the single event PE runs which were used to generate the posterior samples that serve as input to our analyses.
\subsection{Single Event EoS-Agnostic Parameter Estimation runs}
\label{PE}
The single-event EoS-agnostic PE runs stochastically sample the posterior distribution, $p(\vec{\theta}|d)$, of the parameters $\vec{\theta}$ that characterize the frequency-domain GW waveform model $h(f,\vec{\theta})$, given GW data. By Bayes' theorem, this posterior distribution is proportional to the likelihood of obtaining the GW data, given said parameters and a noise model, multiplied by uninformative priors on those parameters: $p(\vec{\theta}|d)\propto\mathcal{L}(d|\vec{\theta})\,p(\vec{\theta})$. Under the assumption of stationary Gaussian noise~\cite{Bayes0,Bayes1,noise,thranetalbot2019}, the likelihood function is
\begin{eqnarray}
\mathcal{L}(d|\vec{\theta}) &\propto& \exp[-(d-h(\vec{\theta})|d-h(\vec{\theta}))/2]\label{single-likelihood}\\
(a|b)&=&4\mathrm{Re}\int_{0}^{\infty}\frac{a^*(f)b(f)}{S_{n}(f)}\,df
\end{eqnarray}
where $S_{n}(f)$ is the noise power spectral density of the detector, and the frequency-domain quantities are obtained by taking the Fourier transform of the time-domain quantities in Eq.~\eqref{data0} and of the noise. The integral in Eq.~\eqref{single-likelihood} can be evaluated numerically by truncating the waveform model at $f=f_{\text{ISCO}}$, as mentioned in Sec.~\ref{waveform}. For the simulated events, we choose a lower frequency cutoff of 20\,Hz. One can now evaluate the posterior $p(\vec{\theta}|d)\propto\mathcal{L}(d|\vec{\theta})p(\vec{\theta})$ and sample it stochastically. For the real events GW170817 and GW190425, we re-use the PE samples released by the LIGO/Virgo Collaboration~\cite{170817PE,190425PE}, which were generated using \textsc{LALInference\_Nest} of the \textsc{LALSuite} software package~\cite{lalsuite}, which implements nested sampling to sample the posterior distribution.
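For concreteness, the noise-weighted inner product and the resulting log-likelihood can be evaluated on a uniform frequency grid as in the following sketch (array inputs are assumed to already be restricted to the analysis band between the lower cutoff and $f_{\text{ISCO}}$):
\begin{verbatim}
import numpy as np

def inner_product(a, b, psd, df):
    """Noise-weighted inner product (a|b) = 4 Re int a*(f) b(f)/S_n(f) df,
    approximated by a Riemann sum over the analysis band."""
    return 4.0 * df * np.real(np.sum(np.conj(a) * b / psd))

def log_likelihood(data, template, psd, df):
    # Whittle log-likelihood up to a template-independent constant:
    # ln L = -(d - h | d - h) / 2
    residual = data - template
    return -0.5 * inner_product(residual, residual, psd, df)
\end{verbatim}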
However, such nested sampling runs involve potentially millions of likelihood evaluations~\cite{slow4,slow3}, which are computationally costly. Since each likelihood evaluation involves computing the entire frequency-domain GW waveform at the corresponding values of $\vec{\theta}$, such a PE run can potentially take weeks per event~\cite{roq1,roq2}. For the purpose of our simulation study, wherein we draw of order 20 events from the Galactic BNS population and of order 10 events each in different SNR and NS mass ranges, we need single-event PE techniques that are much more computationally efficient.
To accelerate the analyses, we employ the reduced order quadrature (ROQ) technique~\cite{roq1,roq2}. We constructed linear and quadratic ROQ basis vectors for the \textsc{TaylorF2} waveform over the parameter space we consider, employing the procedure described in the previous works. To obtain highly compressed basis sets, we constructed tens of linear ROQ basis sets, each built over a narrow chirp-mass range, as done in~\cite{froq}. The resultant speed-up is $\sim10^3$ to $\sim 10^4$, reducing the run time to a few hours.
We use the \textsc{bilby}~\cite{bilby1,bilby2} package, which implements ROQ, for the single-event EoS-agnostic PE of simulated BNSs. We choose priors on the BNS parameters for the single-event PE runs that are consistent with previous work: uniform priors on the mass ratio $q\sim U(0,1)$, the tidal parameters $\tilde{\Lambda}\sim U(0,5000)$ and $\delta \tilde{\Lambda}\sim U(-5000,5000)$, and the dimensionless component spin parameters $\chi_i\sim U(-0.05,0.05)$, together with a prior uniform in sky position, orientation, and polarization angle. For the real events, the priors used in the EoS-agnostic PE are listed in the LVK's public release~\cite{170817PE,190425PE} of the PE samples we use.
We also use \textsc{bilby}'s implementation of analytic marginalization over extrinsic parameters, such as arrival time, coalescence phase, and luminosity distance, to further accelerate the EoS-agnostic PE for the simulated events~\cite{thranetalbot2019}. We set uniform priors on time and phase and a power-law prior with index 2 on luminosity distance for carrying out the marginalizations; a sketch of this setup follows. We then exploit \textsc{bilby}'s ability to reconstruct posterior samples of the marginalized parameters~\cite{bilby2} in post-processing to obtain posterior samples of the luminosity distance, which are necessary for EoS inference for the following reason.
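A minimal sketch of this prior and likelihood setup in \textsc{bilby} is below; the prior keys follow \textsc{bilby}'s BNS conventions and may vary between versions, and \texttt{ifos} and \texttt{wfg} are assumed to be configured as in the injection sketch above:
\begin{verbatim}
import bilby

priors = bilby.gw.prior.BNSPriorDict()
for key in ('lambda_1', 'lambda_2'):
    priors.pop(key, None)  # replace component deformabilities by (Lt, dLt)
# U(0,1) on q in the text; a small positive floor avoids degenerate waveforms
priors['mass_ratio'] = bilby.core.prior.Uniform(0.05, 1.0, name='mass_ratio')
priors['lambda_tilde'] = bilby.core.prior.Uniform(
    0.0, 5000.0, name='lambda_tilde')
priors['delta_lambda_tilde'] = bilby.core.prior.Uniform(
    -5000.0, 5000.0, name='delta_lambda_tilde')
priors['chi_1'] = bilby.core.prior.Uniform(-0.05, 0.05, name='chi_1')
priors['chi_2'] = bilby.core.prior.Uniform(-0.05, 0.05, name='chi_2')

# Analytic marginalization over distance, phase, and time
likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
    interferometers=ifos, waveform_generator=wfg, priors=priors,
    distance_marginalization=True, phase_marginalization=True,
    time_marginalization=True)
result = bilby.run_sampler(likelihood=likelihood, priors=priors,
                           sampler='dynesty', nlive=1000)
\end{verbatim}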
The single-event likelihood in Eq.~\eqref{single-likelihood} is implicitly a function of the detector-frame chirp mass $\mathcal{M}_d$, which is related to the source-frame chirp mass $\mathcal{M}$ by the redshift: $\mathcal{M}_d=\mathcal{M}(1+z)$. Since GW observations alone cannot break the degeneracy between mass and redshift, \textsc{bilby} samples the likelihood in the detector-frame chirp mass. However, since the EoS is sensitive to the source-frame masses, we need a way to break the mass-redshift degeneracy, which is achievable by imposing a cosmological model. Given a cosmological model, posterior samples of luminosity distance (reconstructed by \textsc{bilby} in post-processing) can be converted to samples of redshift. These can then be used to convert posterior samples of detector-frame chirp mass to the source frame. Following previous works such as~\cite{gmm}, we use the Planck15 cosmology~\cite{planck15} for converting our chirp masses to the source frame before feeding them into our algorithm for EoS inference. We note that for the low luminosity distances we consider in this study ($<300\,\text{Mpc}$), Planck15 yields redshifts that are small, leading to source-frame masses within at most 6-8\% of the detector-frame masses.
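A sketch of this conversion, assuming \textsc{astropy}'s implementation of the Planck15 cosmology, is:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

def source_frame_chirp_mass(mc_det, d_lum_mpc):
    """Convert detector-frame chirp-mass samples to the source frame,
    Mc = Mc_det / (1 + z), with z inferred from the Planck15
    luminosity distance for each posterior sample."""
    z = np.array([float(z_at_value(Planck15.luminosity_distance, d * u.Mpc))
                  for d in np.atleast_1d(d_lum_mpc)])
    return np.asarray(mc_det) / (1.0 + z)
\end{verbatim}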
\vspace{-0.5cm}
\section{Results}
\label{results}
\begin{figure}[!hb]
\subfigure[GW170817]{\includegraphics[width=0.49\textwidth]{plots/plots_pdf/170817_narrow.pdf}}
~
\subfigure[GW190425]{\includegraphics[width=0.49\textwidth]{plots/plots_pdf/190425_narrow.pdf}}
\caption{Comparison of \textsc{GWXtreme} with \textsc{LALInference\_Nest} for the real events GW170817 (left) and GW190425 (right). The posterior samples of the spectral parameters are used to compute pressure--density curves, and the shaded regions in the plots are equal-tail confidence intervals that contain 90 percent of those curves. We also display the 90\% prior intervals in dashed lines and the prior extrema in dotted lines, computed using 50000 samples of $\vec{\gamma}$ drawn from the prior. It can be seen that the \textsc{GWXtreme} results are consistent with the \textsc{LALInference\_Nest} results found by the LVK, despite being obtained orders of magnitude faster. The slight preference of \textsc{LALInference\_Nest} for softer EoSs can be attributed to the difference in waveform models as well as in the priors on the tidal parameters used by the two algorithms.}
\label{real-data}
\end{figure}
In this section, we present the results obtained by running \textsc{GWXtreme}-parameterized on real and synthetic data, which were obtained or generated using the techniques and details summarized in the previous sections. The algorithm developed based on the method described in Sec.~\ref{approx} is publicly available and documented in the release of \textsc{GWXtreme-0.3.1}. We also reproduce each of these results with the piecewise-polytropic parameterization in the appendix, to demonstrate the compatibility of our algorithm with any EoS parameterization.
\subsection{Real Events}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{plots/plots_pdf/1719.pdf}
\caption{EoS constraints obtained by jointly analyzing GW170817 and GW190425 using \textsc{GWXtreme}. It can be seen that the joint EoS constraints are dominated by GW170817, which is to be expected due to its larger SNR and the smaller masses of its component NSs compared to GW190425. However, the joint constraint is very slightly narrower than that of GW170817, due to the contribution from GW190425.}
\label{real-data-1719}
\end{figure}
\begin{figure}[!ht]
\subfigure[EoS Constraint]{\includegraphics[width=0.49\textwidth]{plots/plots_pdf/16_spectral.pdf}}
~
\subfigure[$\Lambda(1.4M_{\odot})$ constraint]{\includegraphics[width=0.49\textwidth]{plots/plots_pdf/16_spectral_L144.pdf}}
\caption{EoS constraints obtained by jointly analyzing 16 events drawn from the Galactic BNS population with APR4\_EPP as the true EoS and injected into O4 sensitivity. The posterior samples of the spectral parameters generated by \textsc{GWXtreme} can be used to compute $p(\rho,\vec{\gamma})$ and $\Lambda(m,\vec{\gamma})$ curves. In plot (a), the shaded region marks the equal-tail confidence interval that contains 90 percent of the $p(\rho,\vec{\gamma})$ curves. Plot (b) is obtained by histogramming $\Lambda(1.4\,M_{\odot},\vec{\gamma}_i)$ for each posterior sample $\vec{\gamma}_i$. Both plots demonstrate that our computationally cheap and fast algorithm accurately measures the injected EoS.
}
\label{simulated-data-16}
\end{figure}
\label{real}
We re-used the single-event EoS-agnostic posterior samples of the masses and tidal parameters given the GW data from GW170817 and GW190425, as released by the LVC~\cite{170817PE,190425PE} and generated using narrow spin priors, and ran our analysis on them to produce EoS constraints in the form of credible intervals in the EoS pressure--density plane. We compare our EoS constraints with those obtained by the LVK's joint, unapproximated PE of EoS hyper-parameters and BNS parameters~\cite{170817Eos,190425}, carried out using the \textsc{LALInference\_Nest} module of the \textsc{LALSuite} package. Even though different waveform models were used (\textsc{TaylorF2} for the EoS-agnostic single-event PE feeding \textsc{GWXtreme}, and \textsc{IMRPhenomNRT-v2} for \textsc{LALInference\_Nest}), the EoS constraints are largely consistent, as can be seen in Fig.~\ref{real-data}. The slightly higher preference of the \textsc{LALInference\_Nest} constraints for softer EoSs, relative to \textsc{GWXtreme}'s, can be attributed to the difference in the waveform models used in the two analyses. We note that the EoS-agnostic posterior distributions of $\tilde{\Lambda}$
for the events computed using the two waveform models are themselves different. The $\tilde{\Lambda}$ posterior obtained using \textsc{TaylorF2} favors larger $\tilde{\Lambda}$ than \textsc{PhenomPNRT} for both GW170817 and GW190425 (see Fig.~11 of~\cite{170817prop} for GW170817 and Fig.~14 of~\cite{190425PE} for GW190425). This explains the preference for stiffer equations of state of \textsc{TaylorF2}, and hence of \textsc{GWXtreme}, which takes the \textsc{TaylorF2}-based EoS-agnostic posterior samples of $\tilde{\Lambda}$ as input. From this we conclude that the slight mismatch in EoS constraints between the two analyses is not an artifact of our algorithm, but rather due to the difference in the waveform models being used.
We also produced a joint constraint by hierarchically combining the GW170817 and GW190425 data, shown in Fig.~\ref{real-data-1719}. Due to the large masses of its components and its lower SNR compared to GW170817, GW190425 does not contribute much to the joint EoS constraint; the constraints change only slightly from those of GW170817 alone, in a way that is barely discernible in Fig.~\ref{real-data-1719}.
\subsection{Simulated Events}
\begin{figure*}[!ht]
\centering
\subfigure[SNR $\in(23,25)$: EoS Constraint]{\includegraphics[width=0.45\textwidth]{plots/plots_pdf/23_to_25_eos.pdf}}
~
\subfigure[SNR $\in(23,25)$: $\Lambda(1.4M_{\odot})$ constraint]{\includegraphics[width=0.45\textwidth]{plots/plots_pdf/23_to_25_spectral_L144.pdf}}
\subfigure[SNR $\in(33,35)$: EoS Constraint]{\includegraphics[width=0.45\textwidth]{plots/plots_pdf/33_to_35_eos.pdf}}
~
\subfigure[SNR $\in(33,35)$: $\Lambda(1.4M_{\odot})$ constraint]{\includegraphics[width=0.45\textwidth]{plots/plots_pdf/33_to_35_spectral_L144.pdf}}
\caption{EoS constraints obtained using \textsc{GWXtreme} for events drawn in narrow SNR ranges. For each chosen SNR range, we draw events uniformly in chirp mass and then draw the extrinsic parameters so that, when injected into O4 sensitivity, the simulated signals have an SNR in said range. We then group the highest-mass and lowest-mass events in each SNR range and analyze them using our algorithm, both separately and jointly. Then, using the posterior samples of the spectral parameters generated by our algorithm, we produce EoS constraints as both pressure--density credible intervals and quantile ranges of the tidal deformability at $1.4\,M_{\odot}$. It can be seen that the EoS constraints are dominated by the lower-mass events in each SNR bin and that they become narrower with increasing SNR.
}
\label{lo-hi33}
\end{figure*}
\label{sim-art-pop}
In this section we present the results found upon analyzing, with our algorithm, the data from 16 simulated events whose masses are distributed according to the Galactic double neutron star population. The details of the simulation and the associated modelling of measurement uncertainties are described in Sec.~\ref{data}. Upon analyzing data generated from such events using our computationally cheap algorithm, we obtain EoS constraints that are completely consistent with the chosen ``true'' EoS, as displayed in Fig.~\ref{simulated-data-16}. We note that the increase in the width of the credible intervals near the high-density end and the decrease near the low-density end of the EoS pressure--density relation are expected, given our choice of the parameterized EoS, the population of simulated events, and the injected EoS\@. The widening of the constraints at high densities occurs because those densities are higher than the central density, for the injected EoS, of the most massive component NS in our chosen set of events. On the other hand, the narrowing at the low-density end is an artifact of our EoS model. As mentioned in Sec.~\ref{eos-prior}, our high-density spectral EoS is stitched to SLY at low densities. Due to this stitching, variations of the spectral parameters do not affect the pressure--density relation at low densities, causing the credible intervals obtained from the spectral parameters to converge to a single line (corresponding to SLY) at low densities. We also display the posterior predictive distribution of $\Lambda(1.4\,M_{\odot})$ in Fig.~\ref{simulated-data-16}. The uncertainties in the distribution of $\Lambda(1.4\,M_{\odot})$ can be thought of as estimates of how well \textsc{GWXtreme} can measure the EoS from GW observations and can be used to compare our method with competing ones.
\subsection{Variability of EoS Constraints with SNR and Mass}
\label{sim-bins}
To explore the variability of the EoS constraints with the SNR and chirp mass of events, we simulated events randomly in narrow SNR bins of $(23,25)$ and $(33,35)$, with the chirp masses for each bin chosen uniformly in the range $(0.8\,M_{\odot},1.8\,M_{\odot})$.
The joint EoS constraints, along with the predictive distributions of the measured dimensionless tidal parameter $\Lambda(1.4\,M_{\odot})$, obtained from analyzing these events using our algorithm, are displayed in Fig.~\ref{lo-hi33}. For the $(23,25)$ SNR range we see that the high-mass events are much less informative than the low-mass ones, and that the joint constraint is completely dominated by the lower-mass events, even in the high-density regime. This is consistent with our expectation, as described in Sec.~\ref{data}. We see similar trends in the $(33,35)$ SNR range, along with a couple of additional features. First, both high-mass and low-mass events produce narrower constraints than their $(23,25)$ SNR counterparts. Second, in the high-density regime, the joint constraint appears more informative than the one obtained from low-mass events alone, due to the contribution from the high-mass events. This implies that in joint EoS inference from multiple events, a high-mass event can result in a non-negligible information gain only if it is loud enough.
\section{Conclusion and Future Prospects}
\label{conclusion}
We have developed an algorithm for fast and computationally cheap hierarchical inference of the NS EoS using observations of GWs from multiple BNSs, which re-uses single-event EoS-agnostic PE results to achieve its latency and efficiency. We demonstrated the accuracy of our method by showing its results to be fully consistent with the existing EoS constraints for the events GW170817 and GW190425, which were computed using unapproximated, and hence much costlier, analyses. We also demonstrated the accuracy with which our method can constrain the true EoS by performing a simulation study with realistic modeling of the measurement uncertainties in the EoS-sensitive BNS parameters and a population of BNSs consistent with the latest studies of the Galactic double neutron star population. We further studied the variability of EoS constraints with BNS chirp mass and SNR by drawing simulated events in narrow SNR bins and over a broad range of BNS chirp masses. We found that EoS constraints are dominated by lower-mass and higher-SNR events, in agreement with our understanding of NS structure, which can be used to truncate the list of events that need to be analyzed for joint EoS constraints without losing precision. While the variation with SNR is consistent with previous works, we have shown that the variation with chirp mass is equally significant in selecting the events that will contribute most to the EoS constraints.
Even though we do not simultaneously infer the BNS mass distributions, which might have non-negligible correlations with the EoS constraints~\cite{interpolation}, we note that our method is generalizable to do so. Simultaneous inference of BNS mass distributions in our framework would require the uninformative prior $p(\mathcal{M},q)$ in Eq.~\eqref{likelihood3} to be replaced by a population prior conditional on the mass population model and the parameters we are trying to infer: $p(\mathcal{M},q|\vec{\gamma}_{\text{pop}})$, where $\vec{\gamma}_{\text{pop}}$ are the universal parameters characterizing the BNS mass distribution. This will induce an additional factor of $p(q,\bar{\mathcal{M}}|\vec{\gamma}_{\text{pop}})/p(q,\bar{\mathcal{M}})$ in the integrand of Eq.~\eqref{likelihood31} and hence in those of Eq.~\eqref{posterior2}. Then, including an additional prior on the mass population parameters, $p(\vec{\gamma}_{\text{pop}})$, to be multiplied with the prior on the EoS parameters $p(\vec{\gamma}|I)$ in Eq.~\eqref{posterior2}, effectively makes its LHS the joint posterior of the universal EoS and population parameters given GW data from multiple events, which can be sampled stochastically to produce simultaneous mass-population and EoS constraints. Since the KDE and the rest of the approximations, along with the dimensionality of the integrals, remain the same, this generalization will not affect the latency and computational cost of our algorithm. We leave such a generalization to an upcoming work.
We have shown that our algorithm is compatible with the \textsc{TaylorF2} waveform model. We note that switching to a 3-dimensional KDE in $(\Lambda_1,\Lambda_2,q)$, instead of our 2-dimensional one, has the potential of making our analysis compatible with other waveforms as well, such as \textsc{PhenomPNRT}. As described in Secs.~\ref{waveform} and~\ref{approx}, our 2-dimensional KDE-based approximation necessitates the use of a prior uniform in $\tilde{\Lambda}$ in the single-event EoS-agnostic PE runs, which prevents the use of existing PE results with the \textsc{PhenomPNRT} waveforms. However, the 3-dimensional KDE-based generalization would work with the uniform in $\Lambda_1$-$\Lambda_2$ prior being used in the single-event PE with \textsc{PhenomPNRT} waveforms, enabling the use of these PE results. For such a prior, i.e., $p_{\text{PE}}(\Lambda_1,\Lambda_2,\mathcal{M},q)\propto p(\mathcal{M},q)$, the 3-dimensional KDE approximation to the EoS-agnostic posterior, $p(q,\Lambda_1,\Lambda_2|d_i)\approx K_i(q,\Lambda_1,\Lambda_2)$, would lead to the modification of Eq.~\eqref{posterior2} to $p(\vec{\gamma}|\{d\},\mathcal{E},I) \appropto p(\vec{\gamma},I)\prod_{i=1}^N \int_{0}^1 K_i(q,\Lambda_{1}(\bar{\mathcal{M}}_i,q,\vec{\gamma}),\Lambda_{2}(\bar{\mathcal{M}}_i,q,\vec{\gamma}))\,dq$. This posterior can then be sampled stochastically using the same techniques outlined in Sec.~\ref{approx} to produce EoS constraints. Similar calculations with higher-dimensional KDEs have been shown to produce sensible results in non-parametric EoS inference studies such as Refs.~\cite{3dkde1,3dkde2,3dkde3}. In the context of our algorithm, such a calculation is a generalization we leave as part of future work.
To summarize, in this proof-of-concept work we have developed an algorithm for fast and efficient hierarchical inference of parameterized NS EoSs from multiple GW observations and demonstrated its accuracy. With this development, \textsc{GWXtreme} is now a strong candidate for performing fast and computationally cheap hierarchical EoS inference, using both tabulated and parameterized EoS models, from multiple GW observations in O4. We have noted that generalizations of our algorithm to increase accuracy and applicability while maintaining efficiency are straightforward and will be available in future releases of \textsc{GWXtreme}. We further note that our demonstrations serve as a proof of concept for the applicability of bounded KDEs in increasing the efficiency of other GW-based hierarchical inference problems. Similar problems, wherein the parameterized physical model being inferred implies deterministic relationships between event-specific observables, leading to delta-function priors on them, can be efficiently handled with customized KDEs. Our algorithm can serve to guide such analyses, which can re-use the basic concept of our framework while needing to modify only the model-imposed priors and the set of observables sensitive to the model.
\acknowledgments
\input{ack}
\section{Introduction}
Approximate chiral symmetry is an important
feature of the QCD lagrangian. Much of the low-energy
behavior of QCD at zero temperature and density
can be understood in terms of chiral symmetry, its
spontaneous breaking and the anomalous breaking of
the $U(1)_A$ subgroup. There has been considerable
interest in the possibility of a chirally
restored phase of QCD, as might be expected at
a sufficiently high temperature, or a large number of flavors,
or a high density \cite{shur1}. Recently, there have also
been some speculations about the possibilities that
$U(1)_A$ symmetry might be restored \cite{u1a1,u1a2,u1a3},
despite the fact that the anomaly
in the singlet axial current is formally
temperature-independent \cite{u1a}.
These issues are of obvious theoretical interest. They
also may have nontrivial experimental implications since
regions of the chirally restored phase might be produced
in ultrarelativistic heavy-ion collisions.
An important issue in attacking this problem theoretically
is finding calculable quantities which are sensitive
to whether the phase breaks the symmetry.
There are many well-known signatures of chiral symmetry
restoration; for instance, the vanishing of the
chiral quark condensates and the disappearance of Goldstone
modes. A special class of tests
consists of comparing different thermal
two-point correlation functions of currents with hadronic
quantum numbers; in the symmetric phase they are connected
due to the underlying chiral (or possibly axial U(1)) symmetry.
We wish to observe that it is important in practice to check
many of these signatures simultaneously, since practical
QCD-based calculations are limited to numerical calculations on the
lattice, which necessarily have both statistical and systematic
errors. Accordingly it is hard to tell whether the
approximate vanishing of a single observable is an indication
of symmetry restoration or simply an
accidentally small value masked by numerical noise.
Correlation functions with hadron quantum numbers
are a useful window into the
structure of excitations of the QCD vacuum. They
have been used widely in the QCD sum rule
and lattice calculations
to study hadron spectrum at
zero temperature. One can construct an infinite
number of hadron currents of different dimensions
and Lorentz symmetry by employing covariant derivatives,
gluon and quark fields. However, in
realistic calculations, one uses
simple ones with given quantum numbers. In the chirally-symmetric
phase, chiral symmetry imposes
many relations among the current correlators; these
are chiral Ward identities.
Some of these are well known and can be derived
by simple inspections.
The goal of this paper is to show
that a very large class of hadron interpolating
currents fall into a few chiral representations
and that relations among current correlators can be
systematically derived using multiplications of these
representations. In particular, we will explicitly enumerate
all of the chiral representations of
mesonic currents that carry flavor quantum numbers of one quark
and one antiquark fields and the representations
of baryon currents that carry flavor
quantum numbers of three quark fields. We
should note that this is a very large class of interpolating
currents---there are no restrictions on the number of covariant
derivatives, gluon fields, and quark pairs that are coupled to
chiral singlets, and indeed no restriction that the
current even be local (although they must be gauge invariant).
While we have focused our attention on a restricted class
of currents, the techniques used in this
paper can be extended straightforwardly to determine
the chiral representation for arbitrary currents,
such as the pion interpolating current of type $\bar q \tau^a
q \, \bar q i \gamma_5 q$.
It is also worth noting that
virtually all practical lattice gauge or QCD
sum rule calculations have used currents in the class considered here.
The chiral multiplet structure of the
currents affects two-point correlation functions in the chirally
restored phase in two significant ways. The first
concerns correlation functions between two distinct
currents with the same flavor,
spin, and parity quantum numbers but which
belong to distinct chiral representations. In a chirally-broken
phase such as the $T=0$ vacuum, such correlation
functions are generically nonzero. In terms of a
$T=0$ spectral representation of the correlator, this
indicates nothing more than the fact that each of these
currents has a nonzero overlap between the vacuum and the
same physical state. However in a chirally restored phase
all such mixed correlators are identically zero. This
vanishing of all these mixed correlators can be used as a
signature of symmetry restoration. Some examples
have already been considered in Ref. \cite{shaf}.
When one enumerates the chiral representations one sees
that for virtually every flavor, spin and parity
channel there are interpolating currents with at least two
distinct sets of chiral quantum numbers, even when restricting
to the class of currents considered here.
This is significant when trying to interpret the nature of
the chirally
restored phase. For example, a question such as ``does the $\rho$
meson survive the chiral transition?'' becomes intrinsically
ambiguous. The question cannot even be formulated without
specifying the chiral quantum numbers of the current coupling to
the $\rho$ channel.
The second class of issues concerns the equality of
certain two-point correlation functions. If a current is in
a nontrivial chiral multiplet, then under chiral rotations
one generates new currents with distinct parity and/or flavor
quantum numbers. Clearly, in a chiral restored phase these
newly generated currents must yield correlation functions
identical to the original ones and hence one predicts that
certain correlators must be identical. Of course, many such
examples have long been known. For example, for two massless
flavors the correlators in
the $\sigma$ (scalar-isoscalar) and $\pi$
(pseudoscalar-isovector) channels,
corresponding to the currents $\bar q q$ and $\bar q \tau^a\, i
\gamma_5 q$, respectively, are well known to be the same in the restored phase;
similarly the $\rho$ (vector-isovector) and $A_1$
(pseudovector-isovector) channels, corresponding to the currents $\bar q
\gamma^i \tau_a q$ and $\bar q \gamma_5 \gamma^i
\tau_a q$, are also well known to be identical.
However, there are many examples which are less familiar.
For example, the $\rho$ and the $b_1$ (pseudovector-isovector and
charge conjugation {\it odd}), corresponding to the currents
$\bar q \tau^a \sigma^{0i} q$ and
$\bar q \tau^a \sigma^{ij} q$, respectively, are also
identical in this phase.
number of these relations including a certain nucleon
interpolating current whose correlator in the chirally restored
phase is identical to correlators in the $\Delta \,
({\frac{1}{2}}^{-})$ channel.
It is also interesting to
consider correlator relations which would result
if the $U(1)_A$ symmetry were to be restored.
Whether the effects of the $U(1)_A$ anomaly play
a role in the chirally restored phase and, if they
do, how these effects die off with increasing
temperature, remain interesting questions. Accordingly,
it is useful to classify good signatures of the effects
of a manifest $U(1)_A$ symmetric phase on correlation
functions. In the course
of our discussion, we will recover
many relations which are widely known and have
already been used frequently in the literature. However, besides
organizing those into categories, we also
find a number of new relations which we suggest
will provide further insights
for lattice or model calculations.
We divide our discussions into meson and baryon
currents. For each case, we consider the possibilities
of two and three massless flavors separately.
Of course, in nature there are neither two nor three
flavors of massless quarks. For the analysis here
to be of any use in connecting to real QCD, it is important
that for quantities of interest the quark masses must
be small enough to either neglect outright or to include
perturbatively in some type of chiral perturbation theory.
The up and down quark masses are presumably light enough
for such a procedure to make sense. Moreover, the nature
of the chiral symmetry of the underlying theory allows one
to deduce certain relations between correlators which hold
to order $m_q^2$ rather than $m_q$, thereby enhancing the
range of validity of the chiral expansion\cite{BCM2}.
Treating the strange quark mass as small is clearly far more
problematic. It is by no means clear that a chiral expansion
in $m_s$ will be valid for any given quantity of interest
particularly in the vicinity of the phase transition.
Even if it turns out that an expansion in $m_s$ is not valid,
the three massless flavor results are not
without interest. One obvious use is in providing limiting
cases against which to test more realistic calculations. For
example, lattice calculations which hope to connect to the
real world of two light and one intermediate mass flavors
could be re-run comparatively easily with three light flavors.
The ability of such calculations to reproduce the correlator
relations for three massless flavors will be quite useful in
demonstrating that the systematic and statistical errors
inherent in the calculations are under control.
\section{meson currents}
The simplest meson currents can be constructed
from one-quark and one-hermitian-conjugated-quark
fields. Although one can construct more
complicated meson currents by including
covariant derivatives, gluon and quark fields,
a large class of meson
currents belong to the chiral multiplets
of one-quark and one-antiquark product
representations. For $N_f$ flavors,
the relevant chiral multiplets
are representations
of $SU(N_f)_L \times SU(N_f)_R$.
In the following subsections, we consider
the simplest chiral multiplets for
two and three massless flavors.
\subsection{Two Massless Flavors}
The quark fields $(u_L, d_L)$ and
$(u_R, d_R)$ belong to the basic
representation $({1\over 2}, 0)+ (0,{1\over 2})$
of $SU(2)_L\times SU(2)_R$.
The convention for denoting chiral multiplets is that
the first and second numbers in a bracket refer
to $SU(2)_L$ and $SU(2)_R$ representations,
respectively. Under parity
transformation, the left-handed fields become
right-handed and vice versa. In the case of $SU(2)$,
the hermitian-conjugation fields transform according to
the same representation as the original fields.
To classify meson currents, we consider the
product representation, $[({1\over 2}, 0)+ (0,{1\over 2})]
\times [({1\over 2}, 0) + (0,{1\over 2})]$.
The simple angular momentum addition rules yield,
\begin{eqnarray}
&& \hspace*{-.5in}\left[({1\over 2}, 0)+ (0,{1\over 2})\right] \times \left[({1\over 2}, 0)
+ (0,{1\over 2})\right] \nonumber \\
= && \Big[(\tilde 0,0)+(0,\tilde 0)\Big] + \left[({1\over 2}, {1\over 2})+
({1\over 2}, {1\over 2})\right] + \Big[(1,0)+(0,1)\Big] \ ,
\end{eqnarray}
where the tilde on $\tilde 0$ has no group-theoretical role.
It simply serves as
a reminder that this singlet corresponds to a
left- or right-handed quark-antiquark pair coupled into
a flavor singlet, rather than to the absence of any fields. We use square brackets to group together
representations that are parity-conjugates.
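As a quick consistency check on this decomposition, the dimensions on the two sides match:
\begin{equation*}
(2+2)\times(2+2)=16=\underbrace{1+1}_{(\tilde 0,0)+(0,\tilde 0)}
+\underbrace{4+4}_{2\times({1\over 2},{1\over 2})}
+\underbrace{3+3}_{(1,0)+(0,1)} \ .
\end{equation*}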
Let us first consider left and right singlets $(\tilde 0,0)+(0,\tilde 0)$.
The corresponding quark bilinears are $\bar u_Lu_L+\bar d_Ld_L$
and $\bar u_Ru_R+\bar d_Rd_R$. To form currents with good parity,
we have to consider symmetric and antisymmetric combinations:
$(\bar u_Lu_L+\bar d_Ld_L) \pm (\bar u_Ru_R+\bar d_Rd_R)$.
The simplest currents with these flavor structures
have the quantum numbers of $\omega(1^{--})$ and $f_1(1^{++})$,
\begin{eqnarray}
j^\mu_{vI=0} &=& \bar q\gamma^\mu q\ , \nonumber \\
j^{\mu}_{aI=0} &=& \bar q\gamma^\mu \gamma_5 q \ ,
\label{2}
\end{eqnarray}
where $q$ is a column vector consisting of up and down quark fields.
Both currents are invariant under $SU(2)_L\times SU(2)_R$
and $U(1)_A$ transformations. Therefore, flavor symmetries impose
no constraints on their correlation functions.
Next, we consider $({1\over 2}, {1\over 2})+
({1\over 2}, {1\over 2})$. Under the
isospin subgroup, they contain both isoscalar and isovector
multiplets. The isoscalar ones with good parity
correspond to the quark bilinears
$(\bar u_Lu_R + \bar d_Ld_R) \pm(\bar u_Ru_L+ \bar d_Rd_L)$.
Two examples of the positive parity currents are,
\begin{equation}
j_{sI=0} = {1\over 2} \bar qq \ , ~~~~
j_{tI=0}^k= {1\over 4}\epsilon^{ijk} \bar q \sigma^{ij} q \ ,
\end{equation}
which have the quantum numbers of $\sigma(0^{++})$ and
$h_1(1^{+-})$, respectively.
And the corresponding examples of the negative parity currents are,
\begin{equation}
j_{pI=0} = {1\over 2} \bar qi\gamma_5q, ~~~~
j_{\tilde tI=0}^k= {1\over 2} \bar q\sigma^{0k} q \ ,
\end{equation}
which have the quantum numbers of $\eta(0^{-+})$
and $\omega(1^{--})$,
respectively.
Examples of isovector multiplets
can be obtained by inserting the isospin Pauli matrices
$\tau^a (a=1,2,3)$ into the above currents,
\begin{eqnarray}
&&j_{sI=1}^a = \bar q{\tau^a\over 2} q \ ; ~~~~ j_{\tilde tI=1}^{ka}
= {1\over 2}\epsilon^{ijk}\bar q {\tau^a\over 2}\sigma^{ij} q \ ;
\nonumber \\
&&j_{pI=1}^a = \bar q{\tau^a\over 2}i\gamma_5q \ ; ~~~~
j_{tI=1}^{ka} = \bar q {\tau^a\over 2}\sigma^{0k} q \ .
\end{eqnarray}
which have the quantum numbers of $\delta(0^{++})$,
$b_1(1^{+-})$, $\pi(0^{-+})$, and
$\rho(1^{--})$, respectively.
Under $SU(2)_L\times SU(2)_R$, the isoscalar and isovector currents
transform into each other in pairs,
\begin{eqnarray}
j_{sI=0}~~ && \leftrightarrow ~~j_{pI=1}^a \ ,
\nonumber \\
j_{pI=0}~~ && \leftrightarrow ~~j_{sI=1}^a \ ,
\nonumber \\
j_{tI=0}^k~~ && \leftrightarrow ~~j_{\tilde tI=1}^{ka} \ ,
\nonumber \\
j_{\tilde t I=0}^k~~ && \leftrightarrow ~~j_{tI=1}^{ka} \ .
\end{eqnarray}
Thus, in the chirally-symmetric phase,
the bilocal current correlators of each pair
are the same,
\begin{eqnarray}
\langle Tj_{sI=0}(x)j_{sI=0}(0)\rangle
&=& \langle Tj_{pI=1}^a(x)j_{pI=1}^a(0)\rangle \ , \nonumber \\
\langle Tj_{pI=0}(x)j_{pI=0}(0)\rangle
&=& \langle Tj_{sI=1}^a(x)j_{sI=1}^a(0)\rangle \ , \nonumber \\
\langle Tj_{tI=0}^k(x)j_{tI=0}^k(0)\rangle
&=& \langle Tj_{\tilde tI=1}^{ka}(x)j_{\tilde tI=1}^{ka}(0)\rangle \ ,
\nonumber \\
\langle Tj_{\tilde tI=0}^k(x)j_{\tilde tI=0}^k(0)\rangle
&=& \langle Tj_{tI=1}^{ka}(x)j_{tI=1}^{ka}(0)\rangle \ .
\end{eqnarray}
Notice that there is no summation over repeated indices in the above
equations. Unless stated otherwise, the same convention is
used for the other equations below.
The first two relations say that the $\pi$ ($\delta$)
type of correlators are the same as the
$\sigma$ ($\eta$) type of correlators, a familiar result.
The second two relations say that
the $\rho$ ($b_1$) type of correlators are the same as the $h_1$
($\omega$) type, which is less known.
On the other hand, under $U(1)_A$ transformations,
the currents with opposite parities transform into
each other. If $U(1)_A$ symmetry is restored in some phase,
we then have the following relations among the
correlators,
\begin{eqnarray}
\langle Tj_{sI=0}(x)j_{sI=0}(0)\rangle
&=& \langle Tj_{pI=0}(x)j_{pI=0}(0)\rangle\ , \nonumber \\
\langle Tj_{sI=1}^a(x)j_{sI=1}^a(0)\rangle
&=& \langle Tj_{pI=1}^a(x)j_{pI=1}^a(0)\rangle\ , \nonumber \\
\langle Tj_{tI=0}^k(x)j_{tI=0}^k(0)\rangle
&=& \langle Tj_{\tilde tI=0}^k(x)j_{\tilde tI=0}^k(0)\rangle\ ,
\nonumber \\
\langle Tj_{ tI=1}^{ka}(x)j_{tI=1}^{ka}(0)\rangle
&=& \langle Tj_{\tilde tI=1}^{ka}(x)j_{\tilde tI=1}^{ka}(0)\rangle \ .
\end{eqnarray}
The isovector relations are particularly interesting because
they contain no disconnected contributions in the path-integral
formulation. In the literature,
the $\pi$ and $\delta$ types of correlators have been compared at
the chiral transition region to learn about the fate of $U(1)_A$
symmetry\cite{shur1,shaf,columbia,milc}. The same comparison can be made
of the $\rho$ and $b_1$ types of correlators.
Finally, we consider $(1,0)+(0,1)$, which contains
isovector multiplets only.
The simplest example of the multiplets is,
\begin{eqnarray}
j^{\mu a}_{vI=1} & = & \bar q \gamma^\mu {\tau^a\over 2} q \ , \nonumber \\
j^{\mu a}_{aI=1} & = & \bar q \gamma^\mu \gamma_5{\tau^a\over 2} q \ ,
\end{eqnarray}
which have the quantum numbers of $ \rho$
and $A_1$, respectively. It is worth noting that the currents
written above are both conserved in the massless limit and
hence couple only to vectors (axial vectors) and not
to scalars (pseudoscalars). More general realizations of
currents in this representation, such as
\begin{eqnarray}
j^{\mu a}_{vI=1} & = & \bar q \gamma^\mu {\tau^a\over 2}
F^2 q \ , \nonumber \\
j^{\mu a}_{aI=1} & = & \bar q \gamma^\mu \gamma_5{\tau^a\over 2}
F^2 q \ ,
\end{eqnarray}
where $F^2=F^{\alpha\beta}F_{\alpha\beta}$,
have the quantum numbers of $(\delta,\rho)$ and $(\pi,A_1)$,
respectively.
Under $U(1)_A$ transformations, the currents are invariant.
Under $SU(2)_L\times SU(2)_R$
chiral transformations, they mix with each other.
Thus if the vacuum is chirally-symmetric,
their two-point correlation functions are equal;
\begin{eqnarray}
\langle Tj_{vI=1}^{\mu a}(x)j_{vI=1}^{\mu a}(0)\rangle
= \langle Tj_{aI=1}^{\mu b}(x)j_{aI=1}^{\mu b}(0)\rangle \ ,
\end{eqnarray}
which is a well-known result.
A slightly more complicated example of the $(1,0)+(0,1)$ multiplet
involves currents with gluon fields,
\begin{eqnarray}
\tilde j^{\mu a}_{vI=1} & = & \bar q \gamma^\nu F_{\mu\nu} {\tau^a \over 2}
q \ , \nonumber \\
\tilde j^{\mu a}_{aI=1} & = & \bar q \gamma^\nu \gamma_5
F_{\mu\nu} {\tau^a\over 2} q \ .
\end{eqnarray}
The first current has the quantum number of
the exotic vector meson $1^{-+}$
and the second has that of $b_1(1^{+-})$. The chiral symmetry
again predicts the equality of the two-point correlators in
the symmetric phase.
\subsection{Three Massless Flavors}
For three massless flavors, the meson currents belong
to the product representations of $(3,1)+(1,3)$ and $(\bar 3, 1)
+(1, \bar 3)$. The former correspond to the quark fields $u_L,d_L,s_L$
and $u_R,d_R,s_R$; the latter correspond to
the conjugate quark fields $\bar u_L, \bar d_L, \bar s_L$
and $\bar u_R, \bar d_R, \bar s_R$. SU(3) multiplication rules
give,
\begin{eqnarray}
&& \hspace*{-.5in}\Big[(\bar 3, 1)+ (1,\bar 3)\Big] \times \Big[(3, 1)
+ (1,3)\Big] \nonumber \\
= && \Big[(\tilde 1,1)+(1,\tilde 1)\Big] + \Big[(\bar 3, 3)+
(3, \bar 3)\Big] + \Big[(8,1)+(1,8)\Big] \ .
\end{eqnarray}
Our notation for $SU(3)$ representations is such that
the labels denote the actual dimensions.
Let us consider first the chiral-singlet
$(\tilde 1,1)+(1,\tilde 1)$. One can easily construct
currents that are generalizations
of those in $(\tilde 0,0)+(0,\tilde 0)$ of the two-flavor case
(Eq.(\ref{2})),
\begin{eqnarray}
j^{\mu}_{v1} & = & \bar q \gamma_\mu q \ , \nonumber \\
j^{\mu}_{a1} & = & \bar q \gamma_\mu\gamma_5 q \ ,
\end{eqnarray}
where $q$ is now a column vector consisting of up, down, and
strange quark fields. Both $j^{\mu}_{v1}$ and $j^{\mu}_{a1}$
are invariant under $SU(3)_L\times SU(3)_R$ and
$U(1)_A$ transformations, and therefore chiral symmetries
do not impose any constraints on their correlators.
The multiplet $(\bar 3, 3)+(3, \bar 3)$
contains $SU(3)_V$ octets and singlets.
The eighteen $J=0$ currents constructed from 1 and
$\gamma_5$ matrices belong to this chiral multiplet,
\begin{eqnarray}
j_{s1} & = & \bar q q/\sqrt{6} \ , \nonumber \\
j_{p8}^a & = & \bar q i\gamma_5 t^a q \ , \nonumber \\
j_{p1} & = & \bar q i\gamma_5 q/\sqrt{6} \ , \nonumber \\
j_{s8}^a & = & \bar q t^a q \ ,
\end{eqnarray}
where $t^a=\lambda^a/2$ and $\lambda^a ~(a=1,...,8)$ are Gell-Mann
matrices. Under $SU(3)_L\times SU(3)_R$, these currents
transform into each other, and their two-point correlators
equal in a chirally symmetric phase,
\begin{equation}
\langle Tj_{s1}(x)j_{s1}(0)\rangle =
\langle Tj_{s8}^a(x)j_{s8}^a(0)\rangle =
\langle Tj_{p1}(x)j_{p1}(0)\rangle =
\langle Tj_{p8}^a(x)j_{p8}^a(0)\rangle \ .
\label{j0}
\end{equation}
Using the currents in this class one sees that the correlator
in the pion channel is necessarily the same as that in the
$\eta^\prime$ channel despite the existence of the anomaly.
This result has previously been obtained from arguments
based on instanton contributions~\cite{eta1,eta2} and explicitly
on the basis of group theory \cite{birse}.
One can also construct eighteen $J=1$
currents in the same chiral multiplet
from the $\sigma^{\mu\nu}$ matrix,
\begin{eqnarray}
j_{t1}^k & = & \bar q \sigma^{0k} q/\sqrt{6} \ , \nonumber \\
j_{\tilde t8}^{ka} & = & {1\over 2}\epsilon^{ijk}\bar q
\sigma^{ij} t^a q \ , \nonumber \\
j_{\tilde t1}^k & = & {1\over 2}\epsilon^{ijk}\bar q
\sigma^{ij} q/\sqrt{6} \ , \nonumber \\
j_{t8}^{ka} & = & \bar q \sigma^{0k} t^a q \ ,
\end{eqnarray}
which have the quantum numbers of $J^{PC}=1^{\pm-}$ mesons:
$\rho, \omega, \phi, K^*, b_1, h_1, K_{1B}$.
Under $SU(3)_L\times SU(3)_R$, these currents
transfom into each other in the same way as
the $J=0$ multiplet does. In the chirally-symmetric
phase, their two-point correlators have similar
relations as those in Eq. (\ref{j0}),
\begin{equation}
\langle Tj_{t1}^k(x)j_{t1}^k(0)\rangle =
\langle Tj_{t8}^{ka}(x)j_{t8}^{ka}(0)\rangle =
\langle Tj_{\tilde t1}^k(x)j_{\tilde t1}^k(0)\rangle =
\langle Tj_{\tilde t8}^{ka}(x)j_{\tilde t8}^{ka}(0)\rangle \ .
\end{equation}
Finally, we consider the multiplet $(1,8)+(8,1)$ which contains
$SU(3)$ flavor octets. The simplest currents in the multiplet are,
\begin{eqnarray}
j_{v8}^{\mu a} &=& \bar q \gamma_\mu t^a q \ , \nonumber \\
j_{a8}^{\mu a} &=& \bar q \gamma_\mu \gamma_5 t^a q \ .
\end{eqnarray}
Under $SU(3)_L\times SU(3)_R$, they transform into each other
and thus their two-point correlators equal in the chirally-symmetric
phase,
\begin{eqnarray}
\langle Tj_{v8}^{\mu a}(x)j_{v8}^{\mu a}(0)\rangle
= \langle Tj_{a8}^{\mu b}(x)j_{a8}^{\mu b}(0)\rangle \ ,
\end{eqnarray}
Under $U(1)_A$ transformations, the currents
are separately invariant.
Since there are no two simple currents
which belong to the same chiral multiplet
but transform differently under the $U(1)_A$ group,
one cannot form simple two-point correlators to
test the $U(1)_A$ restoration, as in the two-flavor case.
One can, however, construct
three-point correlators that are chiral-singlet,
but transform nontrivially under $U(1)_A$. An example
is presented in Ref. \cite{birse}.
\section{Baryon currents}
Assuming color SU(3) symmetry, one can construct the simplest
baryon currents out of three quark fields. However, a large
class of baryon currents can be classified in the
chiral multiplets derived from
the product of three basic (quark) representations.
In the following subsections, we again consider two and
three massless flavors separately.
\subsection{Two Massless Flavors}
For two massless flavors, we consider baryon
currents belonging to $[(0,{1\over 2})+({1\over 2},0)]^3$.
Reducing it to irreducible multiplets,
we find,
\begin{eqnarray}
\left[(0,{1\over 2})+({1\over 2},0)\right]^3 &
= &\left[({3\over 2},0)+(0,{3\over 2})\right]
+3\times \left[(1,{1\over 2})+({1\over 2},1)\right] \nonumber \\
&& +3\times \left[(\tilde 0,{1\over 2})
+ ({1\over 2}, \tilde 0)\right] + 2\times \left[(\tilde {1\over 2},0)+(0,
\tilde {1\over 2})\right]\ .
\end{eqnarray}
Multiple appearances of the same representations
are due to the permutation symmetry of
three quark labellings. The tildes on
$\tilde 0$ and $\tilde {1\over 2}$ serve as a reminder that
a pair of left- or right-handed quarks has been coupled to
a flavor singlet.
One of the simplest examples of currents
in the multiplet $(\tilde 0, {1\over 2})+(
{1\over 2}, \tilde 0)$ is the
spin-1/2 proton interpolating field,
\begin{equation}
\eta_{N} = \left(u^TC\gamma_\alpha u\right) \gamma_5 \gamma^\alpha d \ ,
\label{19}
\end{equation}
where, here and hereafter, the
color indices on the quark fields are implicit
and totally antisymmetrized. This current
has been used in the QCD sum rule
calculations \cite{ioffe}. The current itself
couples also to a negative-parity spin-1/2 state,
\begin{equation}
\langle 0|\eta_N|N({1\over 2})^-\rangle = \lambda_N'\gamma_5 U(p) \ ,
\end{equation}
where $U(p)$ is a Dirac spinor. [For a recent
application of this feature, see Ref. \cite{oka}.]
Consider the following two-point correlator:
\begin{equation}
\int d^4x e^{ixp}\langle T \eta_{N}(0)\bar \eta_{N}(x)\rangle
= \rho_1(p^2) p_\mu \gamma^\mu + \rho_2(p^2)\ .
\end{equation}
In the chirally-symmetric phase, by making the chiral
rotation $U=\exp(i\pi \tau^3 \gamma_5/2)$, one can show,
\begin{equation}
\langle T \eta_{N}(0)\bar \eta_{N}(x) \rangle=
-\gamma_5 \langle T \eta_{N}(0)\bar \eta_{N}(x)\rangle \gamma_5 \ .
\end{equation}
Thus, we have $\rho_2(p^2)=0$, i.e.
the correlator contains only the chiral-even term.
Since negative parity states contribute
to $\rho_2(p^2)$ with an opposite sign compared with
positive-parity states, the above result
implies that every intermediate state has a degenerate
partner of opposite parity and their chiral-odd spectral
strengths cancel. As we shall see below, this is quite a
general property of two-point baryon correlators in
the chirally-symmetric phase if the chiral limit
can be taken uniformly.
The simplest current in the
$(\tilde{1\over 2}, 0)+(0, \tilde{1\over 2})$ multiplet is,
\begin{equation}
\eta_{N'} = \left( u^TC\sigma_{\alpha\beta} u\right)
\gamma_5 \sigma^{\alpha\beta}d \ ,
\label{22}
\end{equation}
which has also been recognized in the QCD
sum rule calculations \cite{ioffe}. Again, the two-point
correlator has only the chiral-even term
if the vacuum has chiral symmetry.
From the group-theoretical standpoint, the
two multiplets discussed so far are identical.
Thus, their product can produce an
$SU(2)_L\times SU(2)_R$ singlet, and
the correlation function,
\begin{equation}
\int d^4x e^{ip\cdot x}\langle T \eta_{N}(0)\bar \eta_{N'}(x)\rangle\ ,
\end{equation}
is nonzero even if the vacuum is chirally symmetric.
[Of course, in that case the $\rho_2(p^2)$ type of term does
vanish.] However, since $\eta_{N}$ and $\eta_{N'}$
transform differently under $U(1)_A$, the chiral-even
term would vanish if the $U(1)_A$ symmetry is restored.
This interesting diagnosis for $U(1)_A$ restoration
signature was first studied by Schafer and Shuryak
\cite{shaf}.
The chiral multiplet $({1\over 2},1)+(1, {1\over 2})$
contains both $I={1\over 2}$ and $I={3\over 2}$ isospin multiplets.
The simplest $I={1\over 2}, I_z={1\over 2}$ current is \cite{ioffe},
\begin{equation}
\eta_{N}^\mu = \left(u^T C \sigma_{\alpha\beta} u \right)\gamma_5
\sigma^{\alpha\beta}\gamma^\mu d -
\left(u^TC \sigma_{\alpha\beta} d \right) \gamma_5
\sigma^{\alpha\beta}\gamma^\mu u \ ,
\label{24}
\end{equation}
which has the quantum numbers of the nucleon ($P_{11}$), as well as the $S_{11}$,
$P_{13}$ and $D_{13}$ resonances.
Under an $SU(2)$ chiral rotation, for instance $U=\exp(i\pi\tau^3\gamma_5/4)$,
the current is transformed into its $I={3\over 2}$ partner,
\begin{equation}
\eta_{\Delta}^\mu = \left(u^T C \sigma_{\alpha\beta} u\right) \gamma_5
\sigma^{\alpha\beta} \gamma^\mu d +
\left(u^T C \sigma_{\alpha\beta} d\right) \gamma_5
\sigma^{\alpha\beta}\gamma^\mu u \ ,
\label{25}
\end{equation}
which has the quantum numbers of $\Delta (P_{33})$, $D_{33}$,
$S_{31}$, and $P_{31}$ resonances. A slightly different form
of $\eta^\mu_\Delta$ with three up-quark fields was first
used in a QCD sum rule calculation \cite{ioffe}.
In a chirally-symmetric phase,
\begin{equation}
\langle T \eta_{N}^\mu(x)\bar \eta_{N}^\nu(0) \rangle
= \langle T \eta_{\Delta}^\mu(x)\bar \eta_{\Delta}^\nu(0)
\rangle \ ,
\end{equation}
and only chiral-even terms contribute.
If relevant baryons survive the chiral phase transition
and they couple to these currents strongly,
$J={1\over 2}^\pm ({3\over 2}^\pm) $, $I={1\over 2}$
resonances would be degenerate with
$J={1\over 2}^\pm ({3\over 2}^\pm) $, $I={3\over 2}$ resonances.
Finally, the chiral multiplet $({3\over 2},0)+(0,{3\over 2})$
has isospin ${3\over 2}$. The simplest current in this multiplet is,
\begin{equation}
\eta_{\Delta}^{\mu\nu} = \left[\left( q^TC
\sigma_{\alpha\beta} q\right)\gamma_5\sigma^{\alpha\beta}
\sigma^{\mu\nu} q \right]_{I=3/2} \ ,
\end{equation}
where the flavor indices are coupled in a totally-symmetric way.
$\eta_{\Delta}^{\mu\nu}$ can couple to $J={1\over 2}^\pm,
{3\over 2}^\pm$ resonances. We have not found any previous use of
this current in the literature.
In a chirally-symmetric phase,
the two-point correlator of the current contains chiral-even terms
only,
which implies that parity partners are degenerate and the chiral-odd
spectral strengths cancel.
\subsection{Three Massless Flavors}
To classify baryon currents in three massless flavors, we
consider decomposition of the representation
$[(1,3)+(3,1)]^3$ of $SU(3)_L\times SU(3)_R$.
The $SU(3)$ multiplication rules yield,
\begin{eqnarray}
\Big[(1,3)+(3,1)\Big]^3 &= &\Big[(10,1)+(1,10)\Big]
+3\times \Big[(6,3)+(3,6)\Big] \nonumber \\
& +& 3\times \Big[(\bar 3,3)
+ (3, \bar 3)\Big] + 2\times \Big[(8,1)+(1,8)\Big] +
\Big[(\tilde 1,1)+(1,\tilde 1)\Big] \ ,
\end{eqnarray}
where $\bar 3$ is from an antisymmetric combination of
two quark fields, and $\tilde 1$ from an antisymmetric
combination of three quark fields.
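The same dimension counting as in the two-flavor case provides a check
on this decomposition:
\begin{equation}
6^{3}=216=20+3\times36+3\times18+2\times16+2\ .
\end{equation}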
The chiral multiplet $(8,1)+(1,8)$ contains $SU(3)_V$
flavor octets.
Currents in this representation
can be constructed as generalizations of the
currents in $(\tilde {1\over 2},0)+(0,\tilde {1\over 2})$ of the
two-flavor case.
For instance, an extension of the $J={1\over 2}^{\pm}$ currents
in Eq. (\ref{22}) is,
\begin{equation}
\eta^a_{[8]} = \left[\left(q^TC\sigma_{\alpha\beta} q \right)\gamma_5
\sigma^{\alpha\beta}q\right]_{[8]}^a \ ,
\end{equation}
where the first two quark fields are symmetrized in flavor indices
to form a 6 of $SU(3)_V$. The complete flavor-octet wave functions
can be found, for instance, in Ref. \cite{close}. In a
chirally-symmetric phase, the correlators
$\langle T \eta^a_{[8]}(x)\bar \eta^a_{[8]}(0)\rangle$
contain chiral-even Dirac structure only.
The multiplet $(3,\bar 3)+(\bar 3,3)$ contains both
$SU(3)_V$ octets and singlets. The baryon currents
of the multiplet can be obtained from generalizations of
the currents in $({1\over 2},\tilde 0)+(\tilde 0,
{1\over 2})$ of the two-flavor case. For instance, from
Eq. (\ref{19}) we can write down nine $J={1\over 2}^{\pm}$ currents,
\begin{eqnarray}
\eta^a_{[8]'} &= &\left[\left(q^T C\gamma_\alpha \gamma_5 q\right)
\gamma^\alpha q\right]_{[8]'}^a
\ , \nonumber \\
\eta_{[1]} &=& \left[\left(q^T C\gamma_\alpha \gamma_5 q\right)
\gamma^\alpha q\right]_{[1]} \ ,
\end{eqnarray}
where the first two quark fields are antisymmetrized in flavor
indices to form a $\bar 3$ of $SU(3)$. The complete flavor
wave functions again can be found in Ref. \cite{close}.
$\eta^a_{[8]'}$ have been used to calculate the masses of
the baryon octet in the QCD sum rule approach in Ref. \cite{yaz}.
The $\eta_{[1]}$ current carries the quantum numbers of
a singlet $\Lambda$.
In the chirally-symmetric phase, we have,
\begin{equation}
\langle T \eta^a_{[8]'}(x)\bar \eta^a_{[8]'}(0)\rangle
= \langle T \eta_{[1]}(x)\bar \eta_{[1]}(0)\rangle \ ,
\end{equation}
which contain only chiral-even Dirac structures.
If all the currents couple to chiral resonances
strongly, $J={1\over 2}^\pm$ octets are
degenerate with the $J={1\over 2}^\pm$ singlets,
which apparently is a new result \cite{shur1}.
The chiral multiplet $(6,3)+(3,6)$ contains
both $SU(3)_V$ octets and decuplets. Again, the currents
in the multiplet can be obtained as
generalizations of those
in $(1,{1\over 2})+({1\over 2},1)$ of the two-flavor case
(Eqs. (\ref{24}),(\ref{25})).
For instance,
\begin{eqnarray}
\eta^{\mu a}_{[8]} &=& \left[\left(q^T C\sigma_{\alpha\beta}
q\right) \gamma_5\sigma^{\alpha\beta}
\gamma^\mu q\right]_{[8]}^a \ , \nonumber \\
\eta^{\mu b}_{[10]} &=& \left[\left(q^T C\sigma_{\alpha\beta} q\right)
\gamma_5\sigma^{\alpha\beta}
\gamma^\mu q\right]_{[10]}^b \ .
\end{eqnarray}
The implicit flavor indices in 10 are totally symmetric.
$\eta^{\mu b}_{[10]}$ have been used to calculate
the masses of the lowest-lying baryon decuplet in the
QCD sum rule approach \cite{yaz}.
In a chirally-symmetric phase, we have,
\begin{equation}
\langle T \eta^{\mu a}_{[8]}(x)\bar
\eta^{\nu a}_{[8]}(0)\rangle
= \langle T \eta^{\mu b}_{[10]}(x)\bar
\eta^{\nu b}_{[10]}(0)\rangle \ ,
\end{equation}
which contain chiral-even structures only.
If these currents are dominated
by the lowest resonances, then $J={1\over 2}^\pm
({3\over 2}^\pm)$
octets are degenerate with $J={1\over 2}^\pm ({3\over 2}^\pm)$
decuplets.
Finally, the simplest currents in multiplet $(10,1)+(1,10)$
are the three-flavor generalization of
those in $({3\over 2},0)+(0,{3\over 2})$ in the
two-flavor case,
\begin{equation}
\eta_{[10]}^{\mu\nu b} = \left[\left(q^TC\sigma_{\alpha\beta}
q\right)\gamma_5\sigma^{\alpha\beta}
\sigma^{\mu\nu} q\right]_{[10]}^b \ ,
\end{equation}
where three implicit flavor indices are symmetric.
An example of the chiral-singlet current
in $(\tilde 1,1)+(1, \tilde1)$ is
\begin{equation}
\eta_{[1]}^\mu = \left[\left(q^TC\gamma_5
q\right) D^\mu q - \left(q^TC
q\right) \gamma_5 D^\mu q\right]_{[1]} \ ,
\end{equation}
which would vanish in the absence of the covariant derivative.
In the chirally-symmetric vacuum, the two-point
correlators of the above currents contain only chiral-even
terms.
Since there are no two simple currents in the same chiral $SU(3)$
representation having different $U(1)_A$ properties,
a test of $U(1)_A$ would involve at least four baryon
currents. Correlators with three baryon currents vanish
due to the baryon number $U(1)$ symmetry.
This generalizes the result
in Ref. \cite{birse}.
\section{Comments}
In this paper, we have systematically enumerated the simplest
chiral multiplets which have meson and baryon quantum numbers.
The reason for doing this is the prospect
that QCD may have a chiral-restored phase.
If so, the chiral symmetry will
be reflected explicitly in the correlators of hadronic currents,
as we mentioned earlier in the Introduction.
Of course, many of the results we have stated are well known.
We repeat them here so that the reader can clearly see a
group-theoretical organization. However, we have
also found a number of new results which we
summarize here:
\begin{itemize}
\item{A test of unbroken $U(1)_A$ symmetry in the two massless
flavor case is that the correlators of $\rho$ and $b_1$ currents in
parity-conjugating $({1\over 2}, {1\over 2})+({1\over 2}, {1\over 2})$
multiplets are equal.}
\item{A test of unbroken $SU(2)$ chiral symmetry is the equality of
$\rho$ and $h_1$ types of correlators and $b_1$ and $f_1$
types of correlators in $({1\over 2}, {1\over 2})+({1\over 2}, {1\over 2})$. This result has
a three-flavor generalization.}
\item{We found several new interpolating currents for baryons:
the currents for $I={3\over 2}$ baryons in
$({3\over 2}, 0)+(0, {3\over 2})$ multiplet and
its three-flavor generalization, and the current
for singlet $\Lambda$ in $(\tilde 1,1)+(1, \tilde 1)$.}
\item{A test of unbroken $SU(2)$ chiral symmetry in the baryon sector
is the equality of the $I={1\over 2}$ (nucleon) and $I={3\over 2}$ ($\Delta$)
two-point current correlators
in $({1\over 2},1)+(1,{1\over 2})$. Besides a generalization of this result
to the three-flavor case, we found that the singlet $\Lambda$
and the baryon-octet correlators in $(\bar 3, 3)+(3,\bar 3)$
are equal.}
\end{itemize}
Finally, the chiral multiplets are a useful way to organize
baryon interpolating currents, which are closely related
to independent Bethe-Salpeter amplitudes. Thus, we expect
the present work to be useful also
for the study of hadron structure.
\acknowledgements
This work is supported in part by funds provided by the
U.S. Department of Energy (D.O.E.) under cooperative agreement
DOE-FG02-93ER-40762.
\section{Introduction}
With the overwhelming proliferation of the Internet of Things (IoT),
such as sensor networks, camera networks, and vehicular networks,
to name but a few \cite{palattellaInternetThings5G2016}, there is a growing
need for real-time status monitoring and control. Timely updates
of status at the destination are crucial for effective monitoring
and control in these applications \cite{linAverageAgeChanged2020a,xuAoIEnergyConsumption2020}.
As such, we use the metric of age of information (AoI), which is defined
from the receiver\textquoteright s perspective as the time elapsed
since the most recently received status update was generated at the
IoT device \cite{kaulRealtimeStatusHow2012}, to quantify the freshness
of information. In general, minimization of the AoI requires the sampling
frequency, queueing delay, and transmission latency be jointly optimized
at the IoT device, which have been extensively studied in previous
works \cite{sunUpdateWaitHow2017,jiangUnifiedSamplingScheduling2019,zhouJointStatusSampling2019}.
Beyond performing simple monitoring tasks, newly designed
IoT devices with computing capability are able to conduct more intricate
tasks, such as data compression, feature extraction, and initial classification
\cite{kuangAnalysisComputationIntensiveStatus2020,xuOptimizingInformationFreshness2020}.
Preprocessing the status update at the IoT device can reduce the transmission
time but gives rise to an additional preprocessing delay. Therefore,
a natural question arises: Does preprocessing the status updates
before the transmission help reduce the AoI? And
if so, how should the preprocessing and transmission be jointly scheduled?
These questions motivate the study of the computing-enabled IoT in
this paper.
A recent line of research has exerted substantial efforts in studying
the AoI minimization with computing-enabled IoT devices \cite{kuangAnalysisComputationIntensiveStatus2020,xuOptimizingInformationFreshness2020,zouOptimizingInformationFreshness2019,bastopcuPartialUpdatesLosing2020,bastopcuAgeInformationUpdates2019}.
In \cite{kuangAnalysisComputationIntensiveStatus2020}, the local
computing scheme was analyzed under the zero-wait policy by using a
tandem queueing model and was compared with the remote computing scheme
in terms of the average AoI. The tandem queueing model was further
extended in \cite{xuOptimizingInformationFreshness2020}, where the
status updates from multiple sources are preprocessed with different
priorities. The closed-form expression for the average peak AoI was
derived, and the effects of the processing rate on the peak AoI were
analyzed. In \cite{zouOptimizingInformationFreshness2019}, both the average
AoI and the average peak AoI were analyzed for the computing-enabled IoT
device with various tandem queueing models, including preemptive and
non-preemptive queueing disciplines. However, these studies are primarily
concerned with the AoI analysis of a computing-enabled IoT system
with a predetermined preprocessing and transmission policy.
The optimal control of the preprocessing at the IoT device has been
studied in \cite{bastopcuPartialUpdatesLosing2020,bastopcuAgeInformationUpdates2019}.
In \cite{bastopcuPartialUpdatesLosing2020}, each status update is
generated with zero-wait policy and preprocessed to regenerate a partial
update. The partial update generation process was optimized to minimize
the average AoI and maintain a desired level of information fidelity.
However, the time consumption of the preprocessing has not been considered.
In \cite{bastopcuAgeInformationUpdates2019}, the processing is used
to improve the quality of the status update at the cost of increasing
the age. Both the waiting time and the processing time were optimized
to find the minimum of the average AoI subject to a desired level
of distortion for each update. Nonetheless, the transmission time
was assumed to be negligible.
The status updating problem in a time-critical IoT system is studied
in this paper, where the IoT device is capable of preprocessing the
status updates. In particular, our goal is to control the preprocessing
and transmission procedure jointly at the IoT device in order to
minimize the weighted sum of the average AoI associated with the destination
and the energy consumed by the IoT device. Under this setup, the IoT
device can stay idle, transmit the status update directly, or preprocess
and transmit the status update. Due to the limited transmission and
computation capacities, each status update takes multiple minislots
to be preprocessed and transmitted. Moreover, because the processing
rate and transmission rate are different in general, the time for
transmitting directly and that for preprocessing-and-transmiting are
unequal. While the model of non-uniform transmission time has also
been investigated in \cite{wangWhenPreemptAge2019,zhouMinimumAgeInformation2020a},
where either the status updates are of different sizes and hence the
durations of the same action may be non-uniform \cite{wangWhenPreemptAge2019},
or the sizes of the status updates are different for different devices
\cite{zhouMinimumAgeInformation2020a}, in this work, it is the duration
of distinct actions that are non-uniform. The key contributions of
this paper are summarized as follows:
\begin{itemize}
\item By accounting for the non-uniform duration of distinct actions, we
formulate the status updating problem as an infinite horizon average
cost semi-Markov decision process (SMDP). In consequence, the Bellman
equation for the uniform time step average cost MDP does not directly
apply. To address this issue, we transform the SMDP to an equivalent
uniform time step MDP. Then, we analyze the structure of the optimal
update policy and put forth a relative policy iteration algorithm
to obtain the optimal update policy based on the structural properties.
We prove that to minimize the long-term average cost, the updating
action with a shorter expected duration should be chosen when the
AoI is large enough to dominate the cost. Therefore, the IoT device
should preprocess the status update before the transmission for large
AoIs, when the preprocessing results in a shorter expected update
duration than direct transmission.
\item The optimal status updating problem is further studied in a special
scenario where the status updates are transmitted over a reliable
channel. Then, we demonstrate that the optimal update policy has a
switch-type structure as to AoI in two cases. In the first case, the
action of being idle is excluded in the optimal policy, while in the
second case the action with lower energy efficiency is excluded in
the optimal policy. The optimal thresholds are further derived in
both cases.
\item We evaluate the performance of the optimal update policy and compare
it with two zero-wait policies by conducting extensive simulations.
The results demonstrate that the optimal update policy can effectively
schedule the preprocessing and transmission and strike a balance between
the AoI and the energy consumption.
\end{itemize}
The rest of the paper is organized as follows: Section II presents
the system model. In Section III, we provide the SMDP formulation
of the problem and propose the structure-aware relative policy iteration
algorithm. In Section IV, we study the structural properties of the optimal
policy in a special scenario. In Section V, the simulation results
are discussed, followed by the conclusion in Section VI.
\section{System Model \label{sec:System-Overview}}
As illustrated in Fig. \ref{fig:SystemModel}, we consider a time-critical
IoT status updating system with a single IoT device and a destination.\footnote{Although we consider only one device, the result in this paper can
be extended to the IoT system with multiple devices by formulating
the status updating problem as a restless multi-armed bandit (RMAB)
problem. To solve the RMAB problem, Whittle\textquoteright s index
policy can be employed, where we decouple the problem with multiple
devices into multiple sub-problems. There is only a single source-destination
pair in each sub-problem, which is exactly the model we considered
in this work.} The IoT device is composed of a sensor which is capable of tracking
the status of the underlying physical process, a processor which is
capable of preprocessing the status update, and a transmitter which
can deliver the status update over a wireless channel to the destination.
The model with a single source-destination pair is simple yet
sufficient to capture a wide range of applications. We assume that
the IoT device adheres to the generate-at-will policy, which implies
that a fresh status update is generated anytime an update decision
is made.
\begin{figure}[tp]
\centering
\includegraphics[width=0.5\textwidth]{fig/SystemModel}\caption{\label{fig:SystemModel}An illustration of the IoT status monitoring
system.}
\end{figure}
A time-slotted system is considered, where time is divided into minislots
with equal duration of $\tau$ (in seconds). In this system, a status
update with $T_{u}$ packets is generated at the beginning of a minislot
and at most one packet can be transmitted in one minislot. As such,
the total duration for transmitting a single status update is $T_{u}$
minislots. The preprocessing at the IoT device could be data compression,
feature extraction, or initial classification. In this work, we model
the preprocessing in a general manner. Specifically, we characterize
the preprocessing operation with three parameters, namely, the size
of the status update before preprocessing $T_{u}$, the size of the
status update after preprocessing $T_{u}'$, and the number of CPU
cycles per bit required to complete this operation $v$.\footnote{Here, we would like to take the data compression as an example to
explain the relationship between these parameters. For data compression,
$T_{u}$ and $T_{u}'$ are related to each other with a data compression
ratio $\beta$, i.e., $T_{u}'=\beta T_{u}$. Moreover, to perform
the compression operation with the ratio $\beta$, the number of CPU
cycles required to compress one bit of the input data is $v$. } Let $l$ denote the number of bits per packet. Since the number
of bits of the status update before preprocessing is $T_{u}l$, the number
of minislots required for preprocessing one status update is then
given by
\begin{equation}
T_{p}=\left\lceil \frac{T_{u}l\upsilon}{f\tau}\right\rceil ,
\end{equation}
where $f$ (in Hz) is the CPU frequency of the processor. We assume
that the destination (e.g., a base station or an access point) has
a more powerful computing capability. Therefore, the processing time
at the destination is negligible compared to the processing time at
the IoT device or the transmission time.
We refer to a decision epoch of the IoT device as a time step, as
illustrated in Fig. \ref{fig:AoIEvolution}. In each time step, the
IoT device must decide whether to stay idle, sample and transmit an update
directly, or preprocess the update before the transmission. Let $a_{p}(t)\in\{0,1\}$
denote the computing action at time step $t$, where $a_{p}(t)=1$
indicates that the device preprocesses the status update, and $a_{p}(t)=0$,
otherwise. Let $a_{u}(t)\in\{0,1\}$ denote the updating action at
time step $t$, where $a_{u}(t)=1$ indicates that the device samples
and transmits the status update to the destination and $a_{u}(t)=0$,
otherwise. Let $\boldsymbol{a}(t)\triangleq(a_{p}(t),a_{u}(t))\in\mathcal{A\triangleq}\left\{ (0,0),(0,1),(1,1)\right\} $
denote the device's control action vector at time step $t$, where
$\mathcal{A}$ is the feasible action space. In particular, if $\boldsymbol{a}(t)=(0,0)$,
the device will stay idle for one minislot. If $\boldsymbol{a}(t)=(0,1)$,
the device will sample and transmit the update directly without preprocessing.
If $\boldsymbol{a}(t)=(1,1)$, the device will first preprocess the
status update after sampling and then transmit it to the destination.
Notably, the action vector $(1,0)$ is not feasible because this action
incurs energy consumption but does not provide the destination with
a fresh status update.
It is important to emphasize that the duration of a time step is not
uniform. Specifically, let $L(\boldsymbol{a}(t))$ denote the number
of minislots in time step $t$ with action $\bm{a}(t)$ being taken,
we can then express $L(\boldsymbol{a}(t))$ as follows
\begin{equation}
L(\boldsymbol{a}(t))=\begin{cases}
1, & \text{if }\boldsymbol{a}(t)=(0,0),\\
T_{u}, & \text{if }\boldsymbol{a}(t)=(0,1),\\
T_{p}+T_{u}', & \text{if }\boldsymbol{a}(t)=(1,1).
\end{cases}\label{eq:Duration}
\end{equation}
We further denote by $L_{u}(\boldsymbol{a}(t))$ the transmission
time corresponding to action $\bm{a}(t)$, which is given as follows
\begin{equation}
L_{u}(\boldsymbol{a}(t))=\begin{cases}
0, & \text{if }\boldsymbol{a}(t)=(0,0),\\
T_{u}, & \text{if }\boldsymbol{a}(t)=(0,1),\\
T_{u}', & \text{if }\boldsymbol{a}(t)=(1,1).
\end{cases}\label{eq:TransDuration}
\end{equation}
Let $C_{p}$ denote the computation energy consumption per minislot
when $a_{p}(t)=1$ and $C_{u}$ denote the communication energy consumption
per minislot when $a_{u}(t)=1$. In particular, the computation energy
consumption per minislot is given by
\begin{equation}
C_{p}=\kappa\tau f^{3},
\end{equation}
where $\kappa$ is the effective switched capacitance depending on
the chip architecture. By assuming a constant transmission power $P$
of the IoT device, the communication energy consumption per minislot
is $C_{u}=P\tau$. Then, the total energy consumption associated with
action $\bm{a}(t)$ at time step $t$ is given by
\begin{align}
C(\bm{a}(t)) & =\begin{cases}
0, & \text{if }\boldsymbol{a}(t)=(0,0),\\
T_{u}C_{u}, & \text{if }\boldsymbol{a}(t)=(0,1),\\
T_{p}C_{p}+T_{u}'C_{u}, & \text{if }\boldsymbol{a}(t)=(1,1).
\end{cases}
\end{align}
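To make the bookkeeping in the preceding equations concrete, the following
Python sketch (illustrative code of ours; the function and variable names
are hypothetical, not from the paper) collects the per-action durations
and energy consumptions:
\begin{verbatim}
import math

def action_profile(T_u, T_u_prime, l, v, f, tau, kappa, P):
    """Durations L(a) and energies C(a) of the three actions."""
    T_p = math.ceil(T_u * l * v / (f * tau))  # preprocessing minislots
    C_p = kappa * tau * f ** 3                # computing energy per minislot
    C_u = P * tau                             # transmit energy per minislot
    L = {(0, 0): 1, (0, 1): T_u, (1, 1): T_p + T_u_prime}
    C = {(0, 0): 0.0, (0, 1): T_u * C_u,
         (1, 1): T_p * C_p + T_u_prime * C_u}
    return L, C
\end{verbatim}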
It is assumed that channel fading is constant in each minislot but
varies independently across them. The channel state information is
also assumed to be available only at the destination and the IoT device
transmits an update at a fixed rate. We use a memoryless Bernoulli
process $h(t,i)\in\{0,1\}$ to characterize transmission failures caused
by outage, where $h(t,i)=1$ indicates that the packet is
transmitted successfully at the $i$-th minislot of time step $t$,
and $h(t,i)=0$, otherwise. The transmission success probability of
a packet is defined as
\begin{equation}
p_{s}=\Pr\{h(t,i)=1\}=\Pr\left\{ B\log\left(1+\frac{\gamma P}{\sigma^{2}}\right)\geq\frac{l}{\tau}\right\} ,
\end{equation}
where $B$ is the channel bandwidth, $\gamma$ is the channel gain
between the IoT device and the destination, and $\sigma^{2}$ is the
noise power. We assume that the status update can be successfully
recovered at the destination if all the packets are transmitted successfully
during one time step. We denote by $h(t)\in\{0,1\}$ the transmission
status of an update at time step $t$, i.e., $h(t)=\prod_{i=L(\boldsymbol{a}(t))-L_{u}(\boldsymbol{a}(t))+1}^{L(\boldsymbol{a}(t))}h(t,i)$,
where $h(t)=1$ indicates that the update is transmitted successfully,
and $h(t)=0$, otherwise. Thus, the transmission success probability
of an update is given by $\Pr\{h(t)=1\}=p_{s}^{L_{u}(\boldsymbol{a}(t))}$
and the transmission failure probability of an update is given by
$\Pr\{h(t)=0\}=1-p_{s}^{L_{u}(\boldsymbol{a}(t))}$. We assume that
there exists an instantaneous error-free single-bit ACK/NACK feedback
from the destination to the IoT device. After a status update arrives
at the destination, an ACK signal (a NACK signal) is sent in case
of a successful reception (a failure).
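As a numerical illustration only, suppose additionally that the channel
is Rayleigh fading, so that $\gamma$ is exponentially distributed with
mean $\bar{\gamma}$, and that the logarithm above is base $2$; both are
assumptions made for this example, not part of the system model. Then
$p_{s}=\exp\big(-(2^{l/(\tau B)}-1)\sigma^{2}/(P\bar{\gamma})\big)$:
\begin{verbatim}
import math

def p_s_rayleigh(B, l, tau, P, sigma2, gamma_bar):
    # Pr{ B*log2(1 + gamma*P/sigma2) >= l/tau }, gamma ~ Exp(mean gamma_bar)
    gamma_min = (2 ** (l / (tau * B)) - 1) * sigma2 / P
    return math.exp(-gamma_min / gamma_bar)
\end{verbatim}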
The freshness of the status update is measured via AoI, which is defined
as the time elapsed since the generation of the most recently received
status update. Formally, let $U(t)$ denote the time step at which
the most up-to-date status update successfully received by the destination
was generated. Then, the AoI at the $i$-th minislot of time step
$t$ can be defined as
\begin{align}
\delta(t,i) & =\sum_{n=U(t)}^{t-1}L(\boldsymbol{a}(n))+i-1,
\end{align}
where the first term represents the number of minislots in the previous
time steps since $U(t)$ and $i-1$ is the number of minislots in
the current time step. For simplicity, we represent the AoI at the
beginning of time step $t$ as $\delta(t)$, i.e., $\delta(t)=\delta(t,1)=\sum_{n=U(t)}^{t-1}L(\boldsymbol{a}(n))$.
Since it is pointless to receive a status update with a very large
age in time-critical IoT applications, we let $\hat{\delta}$ be
the upper limit of the AoI, which is assumed to be finite but arbitrarily
large \cite{zhouJointStatusSampling2019}. Then, we present the dynamics
of the AoI as follows
\begin{align}
& \delta(t+1)=\nonumber \\
& \begin{cases}
\min(\delta(t)+1,\hat{\delta}), & \text{if }\boldsymbol{a}(t)=(0,0),\\
\min(T_{u},\hat{\delta}), & \text{if }\boldsymbol{a}(t)=(0,1)\text{ and }h(t)=1,\\
\min(\delta(t)+T_{u},\hat{\delta}), & \text{if }\boldsymbol{a}(t)=(0,1)\text{ and }h(t)=0,\\
\min(T_{p}+T_{u}',\hat{\delta}), & \text{if }\boldsymbol{a}(t)=(1,1)\text{ and }h(t)=1,\\
\min(\delta(t)+T_{p}+T_{u}',\hat{\delta}), & \text{if }\boldsymbol{a}(t)=(1,1)\text{ and }h(t)=0.
\end{cases}\label{eq:Dynamic}
\end{align}
We also illustrate the AoI evolution process in Fig. \ref{fig:AoIEvolution}.
\begin{figure}[tp]
\centering
\includegraphics[width=0.5\textwidth]{fig/AoIEvolution}\caption{\label{fig:AoIEvolution}An illustration of the evolution of the AoI,
where $T_{u}=4$, $T_{u}'=2$, $T_{p}=1$.}
\end{figure}
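For intuition, the AoI recursion in (\ref{eq:Dynamic}) can be simulated
directly. The following Python sketch (illustrative code of ours, not part
of the analysis) samples one AoI trajectory under an arbitrary stationary
policy:
\begin{verbatim}
import random

def simulate_aoi(policy, T_u, T_u_prime, T_p, p_s, d_max, steps, seed=0):
    """One sample path of the AoI dynamics; policy maps AoI -> action."""
    rng = random.Random(seed)
    delta, path = 1, []
    for _ in range(steps):
        a = policy(delta)
        if a == (0, 0):                      # stay idle for one minislot
            delta = min(delta + 1, d_max)
        else:
            L_u = T_u if a == (0, 1) else T_u_prime      # packets sent
            L = T_u if a == (0, 1) else T_p + T_u_prime  # step length
            ok = all(rng.random() < p_s for _ in range(L_u))
            delta = min(L if ok else delta + L, d_max)   # reset on success
        path.append(delta)
    return path

# Example: the zero-wait no-computation policy (always transmit directly).
trace = simulate_aoi(lambda d: (0, 1), T_u=4, T_u_prime=2, T_p=1,
                     p_s=0.8, d_max=200, steps=10000)
\end{verbatim}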
\section{Optimal Update Algorithm }
\subsection{SMDP Formulation}
Since the duration of each time step depends on the action taken in
that time step, the time interval between two sequential actions is
not constant. Therefore, the optimal updating problem belongs to the
class of SMDPs. An SMDP can be defined as a tuple $(\mathcal{S},\mathcal{A},t^{+},\Pr(\cdot|\cdot),R)$,
where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action
space, $t^{+}$ is the decision epoch, $\Pr(\cdot|\cdot)$ is the
transition probability, and $R$ is the cost function. In particular,
at the beginning of the time step $t$, the agent observes the system
state $s(t)$ and chooses an action $\bm{a}(t)$. As a consequence,
the system remains at $s(t)$ until the next decision epoch. Then,
the system state transitions to $s(t+1)$ and the agent receives a
cost $R(t)$. We note that this is different from an MDP, where the transition
time is fixed and independent of the actions. In the following, we
formally define the state, action, transition probability, and cost
function of the SMDP.
\subsubsection{State }
The state of the SMDP at time step $t$, denoted by $s(t)$, is defined to be the
AoI at the beginning of that time step, i.e., $s(t)=\delta(t)$. Since
we limit the maximum value of the AoI, the state space is expressed
as $\mathcal{S}\triangleq\{1,2,\cdots,\hat{\delta}\}$.
\subsubsection{Action}
The action at time step $t$ is $\boldsymbol{a}(t)$ and the action
space is $\mathcal{A\triangleq}\left\{ (0,0),(0,1),(1,1)\right\} $.
\subsubsection{Decision Epoch}
A decision is made at the beginning of a time step. The time interval
between two adjacent decision epochs is $L(\bm{a}(t))$, which depends
on the action taken in time step $t$.
\subsubsection{Transition Probability}
We denote by $\Pr(s(t+1)\mid s(t),\boldsymbol{a}(t))$ the transition
probability that the state transitions from $s(t)$ to $s(t+1)$ under action
$\boldsymbol{a}(t)$. According to the AoI evolution dynamic in (\ref{eq:Dynamic}),
the transition probability can be given as follows
\begin{gather}
\begin{cases}
\Pr\left(\min(\delta(t)+1,\hat{\delta})\mid\delta(t),(0,0)\right)=1,\\
\Pr\left(\min(T_{u},\hat{\delta})\mid\delta(t),(0,1)\right)=p_{s}^{T_{u}},\\
\Pr\left(\min(\delta(t)+T_{u},\hat{\delta})\mid\delta(t),(0,1)\right)=1-p_{s}^{T_{u}},\\
\Pr\left(\min(T_{p}+T_{u}',\hat{\delta})\mid\delta(t),(1,1)\right)=p_{s}^{T_{u}'},\\
\Pr\left(\min(\delta(t)+T_{p}+T_{u}',\hat{\delta})\mid\delta(t),(1,1)\right)=1-p_{s}^{T_{u}'}.
\end{cases}
\end{gather}
\subsubsection{Cost}
We aim to minimize the weighted sum of the average AoI associated
with the destination and the energy consumed by the IoT device. As
such, we define the cost at a time step as the weighted sum of the
AoI and the energy consumption. Specifically, the cost at time step
$t$ is represented as
\begin{align}
& R(\delta(t),\boldsymbol{a}(t))\nonumber \\
= & \sum_{i=1}^{L(\boldsymbol{a}(t))}\delta(t,i)+\omega C(\bm{a}(t))\nonumber \\
= & \sum_{i=1}^{L(\boldsymbol{a}(t))}(\delta(t)+i-1)+\omega C(\bm{a}(t))\nonumber \\
= & \frac{1}{2}(2\delta(t)+L(\boldsymbol{a}(t))-1)L(\boldsymbol{a}(t))+\omega C(\bm{a}(t)),\label{eq:total-cost}
\end{align}
where $\omega$ is the weighting factor.
Our goal is to find an update policy $\pi=(\boldsymbol{a}(1),\boldsymbol{a}(2),\ldots)$
that minimizes the long-term average cost. Over
the set of stationary deterministic policies $\Pi$ and for a given initial
system state $s(1)$, the objective can be formulated as follows:
\begin{align}
\min_{\pi\in\Pi}\limsup_{T\rightarrow\infty}\frac{\mathbb{E}\left[\sum_{t=1}^{T}R(\delta(t),\boldsymbol{a}(t))\mid s(1)\right]}{\mathbb{E}\left[\sum_{t=1}^{T}L(\boldsymbol{a}(t))\right]}.\label{eq:Problem}
\end{align}
Since the duration of the time step is not uniform, the average cost
in (\ref{eq:Problem}) is defined as the limit of the expected total
cost over a finite number of time steps divided by the expected cumulative
time of these time steps \cite{sunUpdateWaitHow2017}. In this work,
we restrict our attention to stationary unichain policies, under which
the Markov chain is composed of a single recurrent class and a set
of transient states. Thus, the average cost is independent of the
initial state and the Markov chain has a unique stationary distribution
\cite{putermanMarkovDecisionProcesses2005}.
To solve this problem, we transform the SMDP into an equivalent discrete-time
MDP using uniformization \cite{tijmsSemiMarkovDecisionProcesses2004,putermanMarkovDecisionProcesses2005}.
Let $\mathcal{\bar{S}}$ and $\mathcal{\bar{A}}$ denote the state
space and action space of the transformed MDP. They are the same as
those in the original SMDP, i.e., $\mathcal{\bar{S}}=\mathcal{S}$
and $\mathcal{\bar{A}}=\mathcal{A}$. For any $s\in\bar{\mathcal{S}}$
and $\boldsymbol{a}\in\bar{\mathcal{A}}$, the cost in the MDP is
given by
\begin{equation}
\bar{R}(s,\boldsymbol{a})=\frac{R(s,\boldsymbol{a})}{L(\boldsymbol{a})}=s+\frac{1}{2}(L(\boldsymbol{a})-1)+\omega\frac{C(\bm{a})}{L(\boldsymbol{a})},
\end{equation}
and the transition probability is given by
\begin{equation}
\bar{p}(s'\mid s,\boldsymbol{a})=\begin{cases}
\frac{\epsilon}{L(\boldsymbol{a})}p(s'\mid s,\boldsymbol{a}), & s'\neq s,\\
1-\frac{\epsilon}{L(\boldsymbol{a})}, & s'=s,
\end{cases}
\end{equation}
where $\epsilon$ is chosen in $\Big(0,\min\limits _{\boldsymbol{a}}L(\boldsymbol{a})\Big]$.
Then, by solving the Bellman equation in (\ref{eq:bellman-equ}),
one can obtain the optimal policy $\pi^{*}$ of the original SMDP
that minimizes the average cost. According to \cite{tijmsSemiMarkovDecisionProcesses2004},
we have
\begin{equation}
\theta+V(s)=\min\limits _{\boldsymbol{a}\in\mathcal{A}}\bigg\{\bar{R}(s,\boldsymbol{a})+\sum\limits _{s'\in\mathcal{S}}\bar{p}(s'\mid s,\boldsymbol{a})V(s')\bigg\},\forall s\in\mathcal{S},\label{eq:bellman-equ}
\end{equation}
where $\theta$ is the optimal value of (\ref{eq:Problem}) for all
initial states and $V(s)$ is the value function for the discrete-time
MDP. Then, the optimal policy can be given by
\begin{equation}
\pi^{*}(s)=\arg\min\limits _{\boldsymbol{a}\in\mathcal{A}}\bigg\{\bar{R}(s,\boldsymbol{a})+\sum\limits _{s'\in\mathcal{S}}\bar{p}(s'\mid s,\boldsymbol{a})V(s')\bigg\}\label{eq:opt-policy}
\end{equation}
for any $s\in\mathcal{S}$. Theoretically, we can obtain the optimal
policy $\pi^{*}$ via (\ref{eq:opt-policy}). However, the value function
$V(\cdot)$ does not have a closed-form solution in general, which makes
this problem challenging. Although numerical algorithms such as value
iteration and policy iteration can solve this problem, they incur
high computational complexity and do not provide many design insights.
For a better understanding of the system, we will investigate the
structural properties of the optimal update policy in the next subsection.
\subsection{Structural Analysis and Algorithm Design}
In this subsection, we first show that the optimal
policy has a threshold-type structure with respect to the AoI. Then, we propose
a relative policy iteration algorithm based on the threshold structure
to obtain the optimal policy $\pi^{*}$ for (\ref{eq:Problem}).
To begin with, we show some key properties of the value function $V(s)$
in the following lemmas.
\begin{lem}
\label{lem:lem1}The value function $V(s)$ is non-decreasing with
$s$.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-Lem1}.
\end{IEEEproof}
\begin{lem}
\label{lem:lem2}The value function $V(s)$ is concave in $s$.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-Lem2}.
\end{IEEEproof}
Since $V(s)$ is a concave function, its slope is non-increasing.
We derive a lower bound on the slope of $V(s)$ in the following
lemma. Before that, we define an auxiliary variable $\bm{a}_{f}$,
which is given by
\begin{equation}
\boldsymbol{a}_{f}=\begin{cases}
(0,1), & \frac{T_{u}}{p_{s}^{T_{u}}}\leq\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}},\\
(1,1), & \frac{T_{u}}{p_{s}^{T_{u}}}\geq\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}}.
\end{cases}
\end{equation}
\begin{lem}
\label{lem:lem3}For any $s_{1},s_{2}\in\mathcal{S}$, such that $s_{1}\le s_{2}$,
$V(s_{2})-V(s_{1})\geq\frac{L(\boldsymbol{a}_{f})}{\epsilon p_{s}^{L_{u}(\boldsymbol{a}_{f})}}(s_{2}-s_{1})$.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-Lem3}.
\end{IEEEproof}
We are now in a position to show the structure of the optimal update
policy.
\begin{thm}
\label{thm:threshold-structure}For any $s_{1},s_{2}\in\mathcal{S}$,
such that $s_{1}\leq s_{2}$, there is an optimal policy that satisfies
the following structural properties:
A) When $\frac{T_{u}}{p_{s}^{T_{u}}}\leq\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}}$,
if $\pi^{*}(s_{1})=(0,1)$, then $\pi^{*}(s_{2})=(0,1)$.
B) When $\frac{T_{u}}{p_{s}^{T_{u}}}\geq\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}}$,
if $\pi^{*}(s_{1})=(1,1)$, then $\pi^{*}(s_{2})=(1,1)$.
\end{thm}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-thm4}.
\end{IEEEproof}
Theorem \ref{thm:threshold-structure} depicts the structural properties
of the optimal policy $\pi^{*}$ of the SMDP in two cases. We note
that $\frac{T_{u}}{p_{s}^{T_{u}}}$ and $\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}}$
can be interpreted as the expected durations of repeatedly taking action
$(0,1)$ and action $(1,1)$, respectively, until one successful transmission.
Therefore, Theorem 1 also suggests when to choose which
updating action in the high AoI regime. Particularly, in order to
minimize the long-term average cost, the updating action with a shorter
expected duration should be chosen when the AoI is large enough to
dominate the cost. For example, in the first case where the preprocessing
incurs a larger expected update duration, it is better to transmit
the update directly for a large enough AoI, while in the second case
where preprocessing can help shorten the expected update duration,
it is clearly better to choose preprocessing-and-transmission when the AoI
is large enough. The reason why we do not consider the energy consumption
of both actions in the conditions is that the difference between the
energy consumption of different actions is constant and the age increasingly
dominates the cost as the AoI grows larger. We further illustrate
the threshold structure of the optimal policy in Fig. \ref{fig:structure-general},
where the optimal policy falls into case A) when $v\leq10$
and into case B) when $v\geq12$.
\begin{figure}[tp]
\centering
\includegraphics[scale=0.55]{fig/Policy_general_v}
\caption{\label{fig:structure-general}Structure of the optimal policy for
different values of $v$ ($T_{u}=4$, $T_{u}'=2$, $l=3$, $f=35$,
$\tau=1$, $\kappa=0.00005$, $P=6$, $\omega=2$, $p_{s}=0.8$). }
\end{figure}
\begin{rem}
We note that the result in Theorem \ref{thm:threshold-structure}
can be extended to the case of discrete transmit power control. The
action with the largest transmit power would be the optimal one when
the AoI is large enough, because the largest transmit power yields
the shortest expected update duration.
\end{rem}
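The comparison underlying Theorem \ref{thm:threshold-structure} is easy
to evaluate numerically; the following sketch (ours) returns the action
$\bm{a}_{f}$ preferred in the large-AoI regime:
\begin{verbatim}
def preferred_action_high_aoi(T_u, T_u_prime, T_p, p_s):
    """a_f: the updating action with the smaller expected duration
    per successful update."""
    d_direct = T_u / p_s ** T_u
    d_prep = (T_p + T_u_prime) / p_s ** T_u_prime
    return (0, 1) if d_direct <= d_prep else (1, 1)
\end{verbatim}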
According to Theorem \ref{thm:threshold-structure}, there exists
a threshold $\Omega$ in the optimal update policy. Although the
exact value of $\Omega$ depends on the particular values of $V(s)$,
the structure only depends on the properties of $V(s)$. Therefore,
a low-complexity relative policy iteration algorithm can be developed
by incorporating the threshold structure into a standard relative
policy iteration algorithm. In particular, we will no longer need
to minimize the right-hand side of (\ref{eq:bellman-equ}) for all
states to find $\pi^{*}$, thereby reducing the computational complexity.
The details are given in Algorithm \ref{alg:RPI}.
\begin{algorithm}[t]
\caption{\label{alg:RPI}Relative Policy Iteration based on the Threshold Structure}
\begin{algorithmic}[1]
\STATE \textbf{Initialization:} Set $\pi_{0}^{*}(s)=(0,0)$ for
all $s\in\mathcal{S}$, select a reference state $s^{\dagger}$, and
set $k=0$.
\STATE \textbf{Policy Evaluation:} Given $\pi_{k}^{*}$, compute
the value of $\theta_{k}$ and $V_{k}(s)$ from the linear system
of equations
\begin{equation}
\begin{cases}
\begin{alignedat}{1}\theta_{k}+V_{k}(s)= & \bar{R}(s,\pi_{k}^{*}(s))\\
& +\sum\limits _{s'\in\mathcal{S}}\bar{p}(s'\mid s,\pi_{k}^{*}(s))V_{k}(s'),
\end{alignedat}
\\
V_{k}(s^{\dagger})=0,
\end{cases}
\end{equation}
by Gaussian elimination.
\STATE \textbf{Structured Policy Improvement:} Compute a new policy
$\pi_{k+1}^{*}$ for each $s\in\mathcal{S}$ as follows:
\textbf{if} $\frac{T_{u}}{p_{s}^{T_{u}}}\leq\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}}$
\textbf{and} $\pi_{k+1}^{*}(s-1)=(0,1),$ \textbf{then} $\pi_{k+1}^{*}(s)=(0,1).$
\textbf{else if} $\frac{T_{u}}{p_{s}^{T_{u}}}\geq\frac{T_{p}+T_{u}'}{p_{s}^{T_{u}'}}$
\textbf{and} $\pi_{k+1}^{*}(s-1)=(1,1),$ \textbf{then} $\pi_{k+1}^{*}(s)=(1,1).$
\textbf{else}
\begin{align}
\pi_{k+1}^{*}(s)=\arg\min_{\boldsymbol{a}\in\mathcal{A}} & \{\bar{R}(s,\boldsymbol{a})\nonumber \\
& +\sum\limits _{s'\in\mathcal{S}}\bar{p}(s'\mid s,\boldsymbol{a})V_{k}(s')\}.
\end{align}
\STATE Set $k=k+1$ and go to Step 2 until $\pi_{k}^{*}(s)=\pi_{k+1}^{*}(s)$
for all $s\in\mathcal{S}$.
\end{algorithmic}
\end{algorithm}
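To complement Algorithm \ref{alg:RPI}, the following Python sketch
implements plain relative policy iteration on the uniformized MDP of this
section, i.e., Steps 1--2 and the unstructured branch of Step 3. It is
illustrative, unoptimized code of ours (all names are hypothetical), not
the implementation used for the simulations:
\begin{verbatim}
import numpy as np

def build_uniformized_mdp(T_u, T_u_prime, T_p, C_u, C_p, p_s, omega, d_max):
    """Pbar[a] and Rbar[a]; states are AoI values 1..d_max,
    actions 0:(0,0), 1:(0,1), 2:(1,1)."""
    L = np.array([1.0, T_u, T_p + T_u_prime])              # durations
    C = np.array([0.0, T_u * C_u, T_p * C_p + T_u_prime * C_u])
    q = np.array([1.0, p_s ** T_u, p_s ** T_u_prime])      # success prob.
    eps = L.min()                                          # in (0, min L(a)]
    P = np.zeros((3, d_max, d_max))
    for s in range(1, d_max + 1):
        P[0, s - 1, min(s + 1, d_max) - 1] = 1.0           # idle
        for a in (1, 2):
            P[a, s - 1, min(int(L[a]), d_max) - 1] += q[a]          # reset
            P[a, s - 1, min(s + int(L[a]), d_max) - 1] += 1 - q[a]  # failure
    I = np.eye(d_max)
    Pbar = np.stack([(eps / L[a]) * P[a] + (1 - eps / L[a]) * I
                     for a in range(3)])
    s_vec = np.arange(1, d_max + 1, dtype=float)
    Rbar = np.stack([s_vec + 0.5 * (L[a] - 1) + omega * C[a] / L[a]
                     for a in range(3)])
    return Pbar, Rbar

def relative_policy_iteration(Pbar, Rbar, ref=0):
    """Evaluation by a linear solve with V(ref) = 0, then greedy
    improvement, until the policy is stable."""
    S = Pbar.shape[1]
    policy = np.zeros(S, dtype=int)                        # start all-idle
    while True:
        A = np.eye(S) - Pbar[policy, np.arange(S)]
        A[:, ref] = 1.0                # unknowns: theta and V(s), V(ref)=0
        x = np.linalg.solve(A, Rbar[policy, np.arange(S)])
        theta, V = x[ref], x.copy()
        V[ref] = 0.0
        new_policy = (Rbar + Pbar @ V).argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return theta, V, policy
        policy = new_policy
\end{verbatim}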
\section{Special Case Study: Transmission over a Reliable Channel}
In this section, we consider a special scenario where the packets
are transmitted over a reliable channel. Accordingly, the status updating
problem can be simplified. From (\ref{eq:Dynamic}) we can see that
the states smaller than $\min\{T_{u},T_{p}+T_{u}'\}$ are non-recurrent
states. Since the policy in non-recurrent states has no effect
on the average cost, we need only consider the state space $\mathcal{S}^{\dagger}\triangleq\left\{ \min\{T_{u},T_{p}+T_{u}'\},\cdots,\hat{\delta}\right\} $
when discussing the optimal policy.
\subsection{Case 1}
Based on the model of the reliable channel, we give the first simplification
of the optimal policy.
\begin{lem}
\label{lem:simplify1}For any $s\in\mathcal{S}^{\dagger}$, we have
$\pi^{*}(s)\neq(0,0)$ when $\frac{1}{2}L(\boldsymbol{a}_{f})(L(\boldsymbol{a}_{f})+1)\geq\omega C(\boldsymbol{a}_{f})$.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-lem:simplify1}.
\end{IEEEproof}
Lemma \ref{lem:simplify1} indicates that the IoT device will never
stay idle under the optimal policy when the AoI dominates the cost.
Accordingly, the threshold structure in Theorem \ref{thm:threshold-structure}
can be simplified, which is presented in the theorem below.
\begin{thm}
\label{thm:threshold1}For $s\in\mathcal{S}^{\dagger}$, the optimal
policy is of a switch-type structure when $\frac{1}{2}L(\boldsymbol{a}_{f})(L(\boldsymbol{a}_{f})+1)\geq\omega C(\boldsymbol{a}_{f})$,
namely, there exists a threshold $\Omega\geq\min\{T_{u},T_{p}+T_{u}'\}$,
such that when $T_{u}\leq T_{p}+T_{u}'$,
\begin{equation}
\pi^{*}(s)=\begin{cases}
(1,1), & T_{u}\leq s<\Omega,\\
(0,1), & s\geq\Omega,
\end{cases}\label{eq:switching-structure-1-2}
\end{equation}
and when $T_{u}\geq T_{p}+T_{u}'$,
\begin{equation}
\pi^{*}(s)=\begin{cases}
(0,1), & T_{p}+T_{u}'\leq s<\Omega,\\
(1,1), & s\geq\Omega.
\end{cases}\label{eq:switching-structure-1-1}
\end{equation}
\end{thm}
\begin{IEEEproof}
According to Lemma \ref{lem:simplify1}, we can exclude action $(0,0)$
from the optimal policy when $\frac{1}{2}L(\boldsymbol{a}_{f})(L(\boldsymbol{a}_{f})+1)\geq\omega C(\boldsymbol{a}_{f})$.
Moreover, since we have proved the threshold structure of the optimal
policy in a general case in Theorem \ref{thm:threshold-structure},
the optimal policy can be further proved to satisfy the switching
structure in (\ref{eq:switching-structure-1-2}) and (\ref{eq:switching-structure-1-1}).
\end{IEEEproof}
Theorem \ref{thm:threshold1} depicts the structure of the optimal
policy $\pi^{*}$ for the SMDP in (\ref{eq:Problem}) when $p_{s}=1$
and $\frac{1}{2}L(\boldsymbol{a}_{f})(L(\boldsymbol{a}_{f})+1)\geq\omega C(\boldsymbol{a}_{f})$.
We further illustrate the analytical results of Theorem \ref{thm:threshold1}
in Fig. \ref{fig:structure-sp1}, where $T_{u}\leq T_{p}+T_{u}'$
and $C(\bm{a}=(0,1))>C(\bm{a}=(1,1))$. It can be seen from Fig. \ref{fig:structure-sp1}
that the optimal policy is of the threshold type. Moreover, the threshold
increases with $\omega$. This indicates that, when the weighting
factor is large, it is not desirable to directly transmit a new status
update to the destination due to a high weighted energy consumption.
\begin{figure}[tp]
\centering
\includegraphics[scale=0.55]{fig/Policy_sp1_weight}
\caption{\label{fig:structure-sp1}Structure of the optimal policy in Theorem
\ref{thm:threshold1} for different values of $\omega$ ($T_{u}=5$,
$T_{u}'=1$, $l=3$, $v=5$, $f=15$, $\tau=1$, $\kappa=0.00005$,
$P=3$).}
\end{figure}
We then denote $\boldsymbol{a}_{1}=\arg\min\limits _{\boldsymbol{a}\in\mathcal{A}\setminus(0,0)}\{L(\boldsymbol{a})\}$
and $\boldsymbol{a}_{2}=\arg\max\limits _{\boldsymbol{a}\in\mathcal{A}\setminus(0,0)}\{L(\boldsymbol{a})\}$.
According to the threshold structure in Theorem \ref{thm:threshold1},
we can proceed to reduce the recurrent state space of the computing-enabled
IoT system.
\begin{lem}
\label{lem:recurrent-states}For a given threshold policy of the type
in Theorem \ref{thm:threshold1} with threshold $\Omega$,
the recurrent state space $\mathcal{S}'$ is given as follows:
A) $\mathcal{S}'=\left\{ L(\boldsymbol{a}_{1})\right\} $ when $\Omega=L(\boldsymbol{a}_{1})$.
B) $\mathcal{S}'=\left\{ L(\boldsymbol{a}_{1}),L(\boldsymbol{a}_{2})\right\} $
when $L(\boldsymbol{a}_{1})<\Omega\leq L(\boldsymbol{a}_{2})$.
C) $\mathcal{S}'=\left\{ L(\boldsymbol{a}_{2})\right\} $ when $\Omega>L(\boldsymbol{a}_{2})$.
\end{lem}
\begin{IEEEproof}
As illustrated in Fig. \ref{fig:MarkovChain-1}, we can use a Discrete
Time Markov Chain (DTMC) to model the MDP induced by any threshold
policy of the type in Theorem \ref{thm:threshold1} with threshold
$\Omega$. It can be seen from Fig. \ref{fig:MarkovChain-1}(a)
and Fig. \ref{fig:MarkovChain-1}(c), respectively, that the only
recurrent state is $L(\boldsymbol{a}_{1})$ when $\Omega=L(\boldsymbol{a}_{1})$
and $L(\boldsymbol{a}_{2})$ when $\Omega>L(\boldsymbol{a}_{2})$.
We can also see from Fig. \ref{fig:MarkovChain-1}(b) that the recurrent
states are $L(\boldsymbol{a}_{1})$ and $L(\boldsymbol{a}_{2})$ when
$L(\boldsymbol{a}_{1})<\Omega\leq L(\boldsymbol{a}_{2})$.
\end{IEEEproof}
\begin{figure}[tp]
\centering
\subfloat[]{\includegraphics[width=0.5\textwidth]{fig/MarkovChain1}}
\subfloat[]{\includegraphics[width=0.5\textwidth]{fig/MarkovChain2}}
\subfloat[]{\includegraphics[width=0.5\textwidth]{fig/MarkovChain3}}\caption{\label{fig:MarkovChain-1}The states transitions under a threshold
policy with different values of $\Omega$. (a) $\Omega=L(\boldsymbol{a}_{1})$.
(b) $L(\boldsymbol{a}_{1})<\Omega\protect\leq L(\boldsymbol{a}_{2})$.
(c) $\Omega>L(\boldsymbol{a}_{2})$.}
\end{figure}
Under the threshold policy, we proceed to analyze the average
cost for any given threshold $\Omega$.
\begin{lem}
\label{lem:threshold-range}Let $J_{1}=\frac{3}{2}L(\boldsymbol{a}_{1})+\omega\frac{C(\boldsymbol{a}_{1})}{L(\boldsymbol{a}_{1})}-\frac{1}{2}$,
$J_{2}=L(\boldsymbol{a}_{1})L(\boldsymbol{a}_{2})+\omega\frac{C(\boldsymbol{a}_{1})+C(\boldsymbol{a}_{2})}{L(\boldsymbol{a}_{1})+L(\boldsymbol{a}_{2})}$,
$J_{3}=\frac{3}{2}L(\boldsymbol{a}_{2})+\omega\frac{C(\boldsymbol{a}_{2})}{L(\boldsymbol{a}_{2})}-\frac{1}{2}$.
For a given threshold $\Omega$, the average cost of the threshold
policy in Theorem \ref{thm:threshold1} is given by
\begin{equation}
J(\Omega)=\begin{cases}
J_{1}, & \Omega=L(\boldsymbol{a}_{1}),\\
J_{2}, & L(\boldsymbol{a}_{1})<\Omega\leq L(\boldsymbol{a}_{2}),\\
J_{3}, & \Omega>L(\boldsymbol{a}_{2}).
\end{cases}
\end{equation}
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-threshold-range}.
\end{IEEEproof}
By leveraging the above results, we can find the set of the optimal
threshold $\Omega^{*}$.
\begin{thm}
\label{thm:opt-threshold-range}If $J_{1}$ is smaller than $J_{2}$
and $J_{3}$, we have $\Omega^{*}=L(\boldsymbol{a}_{1})$. If $J_{2}$
is smaller than $J_{1}$ and $J_{3}$, we have $L(\boldsymbol{a}_{1})<\Omega^{*}\leq L(\boldsymbol{a}_{2})$.
If $J_{3}$ is smaller than $J_{1}$ and $J_{2}$, we have $\Omega^{*}>L(\boldsymbol{a}_{2})$.
\end{thm}
\begin{IEEEproof}
According to Lemma \ref{lem:threshold-range}, we can determine the
set of the optimal threshold $\Omega^{*}$ by comparing the values
of $J_{1}$, $J_{2}$ and $J_{3}$.
\end{IEEEproof}
We have proved in Lemma \ref{lem:recurrent-states} that under the
threshold policy in Theorem \ref{thm:threshold1}, only a few states
of the system are recurrent states. Once the set of the optimal threshold
is determined, we can determine the optimal policy in the recurrent
states. Therefore, the specific value of the threshold is not needed:
as long as we take any value in the set of optimal thresholds,
the average cost is minimized.
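In code, the regime test of Theorem \ref{thm:opt-threshold-range} amounts
to comparing three numbers (an illustrative sketch of ours, in the notation
of Lemma \ref{lem:threshold-range}):
\begin{verbatim}
def case1_threshold_regime(L1, L2, C1, C2, omega):
    """Compare J1, J2, J3; here L1 = L(a_1) <= L2 = L(a_2)."""
    J1 = 1.5 * L1 + omega * C1 / L1 - 0.5
    J2 = L1 * L2 + omega * (C1 + C2) / (L1 + L2)
    J3 = 1.5 * L2 + omega * C2 / L2 - 0.5
    return min((J1, "Omega* = L1"),
               (J2, "L1 < Omega* <= L2"),
               (J3, "Omega* > L2"))   # (min average cost, regime)
\end{verbatim}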
\subsection{Case 2}
Based on the model of the reliable channel, we give the second simplification
of the optimal policy. Recalling that $\boldsymbol{a}_{1}=\arg\min\limits _{\boldsymbol{a}\in\mathcal{A}\setminus(0,0)}\{L(\boldsymbol{a})\}$
and $\boldsymbol{a}_{2}=\arg\max\limits _{\boldsymbol{a}\in\mathcal{A}\setminus(0,0)}\{L(\boldsymbol{a})\}$,
we have the following lemma.
\begin{lem}
\label{lem:simplify2}For any $s\in\mathcal{S}^{\dagger}$, we have
$\pi^{*}(s)\neq\boldsymbol{a}_{2}$ when $\frac{C(\boldsymbol{a}_{1})}{L(\boldsymbol{a}_{1})}\leq\frac{C(\boldsymbol{a}_{2})}{L(\boldsymbol{a}_{2})}$.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-simplify2}.
\end{IEEEproof}
Lemma \ref{lem:simplify2} reveals that the action with a lower energy
efficiency (i.e., a larger energy consumption per minislot) is
excluded from the optimal policy. Accordingly, the threshold structure
in Theorem \ref{thm:threshold-structure} is simplified in the following
theorem.
\begin{thm}
\label{thm:threshold2}For $s\in\mathcal{S}^{\dagger}$, the optimal
policy is of a switch-type structure when $\frac{C(\boldsymbol{a}_{1})}{L(\boldsymbol{a}_{1})}\leq\frac{C(\boldsymbol{a}_{2})}{L(\boldsymbol{a}_{2})}$,
namely, there is a threshold $\Omega\geq\min\{T_{u},T_{p}+T_{u}'\}$,
such that
\begin{equation}
\pi^{*}(s)=\begin{cases}
(0,0), & L(\boldsymbol{a}_{1})\leq s<\Omega,\\
\boldsymbol{a}_{1}, & s\geq\Omega.
\end{cases}\label{eq:switching-structure-2}
\end{equation}
\end{thm}
\begin{IEEEproof}
According to Lemma \ref{lem:simplify2}, we can exclude $\bm{a}_{2}$
from the optimal policy when $\frac{C(\boldsymbol{a}_{1})}{L(\boldsymbol{a}_{1})}\leq\frac{C(\boldsymbol{a}_{2})}{L(\boldsymbol{a}_{2})}$.
Moreover, since we have proved the threshold structure of the optimal
policy in a general case in Theorem \ref{thm:threshold-structure},
the optimal policy can be further proved to satisfy the switching
structure in (\ref{eq:switching-structure-2}).
\end{IEEEproof}
Theorem \ref{thm:threshold2} depicts the structure of the optimal
policy $\pi^{*}$ for the SMDP in (\ref{eq:Problem}) when $p_{s}=1$
and $\frac{C(\boldsymbol{a}_{1})}{L(\boldsymbol{a}_{1})}\leq\frac{C(\boldsymbol{a}_{2})}{L(\boldsymbol{a}_{2})}$.
Fig. \ref{fig:structure-sp2} illustrates the analytical results of
Theorem \ref{thm:threshold2}, where $p_{s}=1$ and $C_{u}\geq\frac{T_{p}C_{p}+T_{u}'C_{u}}{T_{p}+T_{u}'}$.
We can see from Fig. \ref{fig:structure-sp2} that, in order to strike
a balance between the AoI and the energy consumption, the IoT device
does not transmit until the AoI is large. We can also see that the
threshold increases with the weighting factor $\omega$.
This is due to the higher weighted energy consumption in the average cost
when $\omega$ grows larger.
\begin{figure}[tp]
\centering
\includegraphics[scale=0.55]{fig/Policy_sp2_weight}
\caption{\label{fig:structure-sp2}Structure of the optimal policy in Theorem
\ref{thm:threshold2} for different values of $\omega$ ($T_{u}=6$,
$T_{u}'=2$, $l=3$, $v=5$, $f=45$, $\tau=1$, $\kappa=0.00005$,
$P=6$).}
\end{figure}
Under the threshold policy in Theorem \ref{thm:threshold2}, we proceed
to analyze the average cost.
\begin{lem}
\label{lem:threshold-value}The average cost of the threshold policy
in Theorem \ref{thm:threshold2} for any given threshold $\Omega$
is given by
\begin{equation}
J(\Omega)=L(\boldsymbol{a}_{1})+\frac{1}{2}(\Omega-1)+\frac{\omega C(\bm{a}_{1})}{\Omega}.
\end{equation}
\end{lem}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-threshold-value}.
\end{IEEEproof}
By leveraging the above results, we can find the optimal threshold
value $\Omega^{*}$.
\begin{thm}
\label{thm:opt-threshold2}The optimal threshold $\Omega^{*}$ of
the optimal update policy in Theorem \ref{thm:threshold2} is given
by
\begin{equation}
\Omega^{*}=\arg\min\left(J\left(\left\lfloor \sqrt{2\omega C(\bm{a}_{1})}\right\rfloor \right),J\left(\left\lceil \sqrt{2\omega C(\bm{a}_{1})}\right\rceil \right)\right).\label{eq:opt-threshold}
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{subsec:Proof-of-opt-threshold}.
\end{IEEEproof}
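Numerically, Theorem \ref{thm:opt-threshold2} reduces to checking the two
integers around $\sqrt{2\omega C(\bm{a}_{1})}$ (a sketch of ours; the lower
candidate is clamped at $1$ to avoid a zero threshold):
\begin{verbatim}
import math

def case2_optimal_threshold(omega, C1, L1):
    """Omega* of the theorem above; J is the cost in the preceding lemma."""
    J = lambda O: L1 + 0.5 * (O - 1) + omega * C1 / O
    root = math.sqrt(2 * omega * C1)
    candidates = (max(1, math.floor(root)), max(1, math.ceil(root)))
    return min(candidates, key=J)
\end{verbatim}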
From the expression of the optimal threshold $\Omega^{*}$, we can
see that the threshold is monotonically increasing with respect to
$\omega$ and $C(\boldsymbol{a}_{1})$, which is consistent with the
result in Fig. \ref{fig:structure-sp2}. This indicates that, when
the weighting factor or the energy consumption is large, transmitting
a new status update outperforms staying idle only
in the large AoI regime.
\begin{rem}
In this section, by studying the special cases, we show that the optimal
policy has a switching structure with respect to only two actions.
Moreover, the optimal threshold can be obtained in closed-form, which
reveals how the system parameters affect the threshold policy. It
is worth noting that, in practice, the conclusions obtained from the
special-case study apply in the high SNR regime, and the
obtained policy can be used as an approximation of the optimal
policy when the transmission success probability is high.
\end{rem}
\section{Simulation Results}
In this section, we present the simulation results to explore the
effects of system parameters on the optimal update policy and demonstrate
the efficacy of the proposed scheme by comparing it with two
zero-wait baseline policies, i.e., the zero-wait no-computation policy and the zero-wait
computation policy. Particularly, in both baseline policies, the IoT
device starts a new transmission immediately after the previous transmission
is finished. In the zero-wait no-computation policy, the IoT device
transmits each status update without preprocessing, while, in the
zero-wait computation policy, the IoT device preprocesses each status
update and then transmits the processed status update. In the simulations,
we truncate the state space by setting the upper limit of the number
of states to be 200.
\subsection{Performance Evaluation in the General Case}
\begin{figure}[tp]
\centering
\subfloat[\label{fig:ps1}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_ps}}
\subfloat[\label{fig:ps2}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_cost_ps}}\caption{\label{fig:opt-vers-zerowait-ps}Performance comparison among the
optimal policy, the zero-wait no-computation policy, and the zero-wait
computation policy ($T_{u}=4$, $T_{u}'=2$, $l=3$, $v=2$, $f=35$,
$\tau=1$, $\kappa=0.00005$, $P=6$, $\omega=2$). (a) The average
cost versus $p_{s}$. (b) The average AoI and the average energy consumption
versus $p_{s}$. }
\end{figure}
In Fig. \ref{fig:opt-vers-zerowait-ps}, the average cost of the optimal
policy and the two baseline policies are compared with respect to
the transmission success probability $p_{s}$. As we can see from
Fig. \ref{fig:opt-vers-zerowait-ps}(a), the optimal policy outperforms
the zero-wait policies. Moreover, the average cost decreases as $p_{s}$
increases. The reason can be explained with the aid of Fig. \ref{fig:opt-vers-zerowait-ps}(b).
First, it is evident that the average AoI of all the three policies
decreases as $p_{s}$ increases, since the AoI is more likely
to be reset with a larger transmission success probability. Second,
because the IoT device under the zero-wait policies keeps updating continuously,
their average energy consumption remains constant irrespective of
$p_{s}$. For the optimal policy, in contrast, the average
energy consumption steadily decreases with the increase of $p_{s}$ over $0.1\leq p_{s}\leq1$.
This is due to the fact that fewer transmissions are needed to reduce
the AoI when $p_{s}$ is larger. Therefore, the optimal policy can
adapt to the channel quality. Through this comparison, we can see
that although the optimal update policy does not yield the minimum
AoI, it has a smaller energy consumption than the zero-wait policies.
By trading off the AoI for energy consumption, the optimal update
policy achieves the smallest average cost.
\begin{figure}[tp]
\centering
\subfloat[\label{fig:v1}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_v}}
\subfloat[\label{fig:v2}]{\includegraphics[width=0.5\textwidth]{fig/opt_zerowait_cost_v}}\caption{\label{fig:opt-vers-zerowait-v}Performance comparison among the optimal
policy, the zero-wait no-computation policy, and the zero-wait computation
policy ($T_{u}=4$, $T_{u}'=2$, $l=3$, $f=35$, $\tau=1$, $\kappa=0.00005$,
$P=6$, $\omega=2$, $p_{s}=0.8$). (a) The average cost versus $v$.
(b) The average AoI and the average energy consumption versus $v$.}
\end{figure}
In Fig. \ref{fig:opt-vers-zerowait-v}, the average cost of the optimal
policy and the two baseline policies are compared with respect to
the number of CPU cycles required to preprocess one bit $v$. We can
see from Fig. \ref{fig:opt-vers-zerowait-v}(a) that the optimal policy
outperforms both baseline policies. The performance
of the zero-wait no-computation policy is independent of $v$ since each
status update is transmitted directly without preprocessing. In contrast,
the average cost of the optimal policy and the zero-wait computation
policy is non-decreasing as $v$ grows. Particularly, we can see from
Fig. \ref{fig:opt-vers-zerowait-v}(b) that the average AoI of the
zero-wait computation policy increases with $v$. As the status
update becomes more computation-intensive with growing $v$, the preprocessing
at the IoT device takes longer, which leads to an increase
in the average AoI. However, since the computation energy consumption
per minislot in this setup is less than the transmission energy consumption
per minislot, the average energy consumption of the zero-wait computation
policy is shown to decline with $v$. In other words, although the
duration and total energy consumption are increasing as $v$ increases,
the average energy consumption is decreasing. Moreover, we can see
from Fig. \ref{fig:opt-vers-zerowait-v}(b) that the average AoI of
the optimal update policy is non-decreasing with $v$ except for
$14\leq v\leq16$. The drop in this range can be explained with
Fig.~\ref{fig:structure-general}: at $v=16$, the optimal policy completely
abandons the action $(1,1)$. This change of the optimal policy effectively reduces the
average AoI of the system, but induces a sudden increase in the average
energy consumption at $v=16$. We can also see that the average energy
consumption increases as $v$ increases, because the optimal
policy sacrifices energy consumption to ensure that
the AoI grows at a slower pace. We conclude that
the optimal update policy can adjust adaptively based on the degree
of computational intensity of the status update.
\subsection{Performance Evaluation in the Special Cases}
In this subsection, we show the performance in the two special
cases of Section IV, in which the packets are transmitted over a reliable
channel. In Fig. \ref{fig:opt-vers-zerowait-sp1-weight}, the average
cost of the optimal policy and the two baseline policies are compared
with respect to the weighting factor $\omega$ when $T_{u}\leq T_{p}+T_{u}'$
and $\frac{1}{2}(T_{p}+T_{u}')(T_{p}+T_{u}'+1)\geq\omega(T_{p}C_{p}+T_{u}'C_{u})$.
As we can see from Fig. \ref{fig:opt-vers-zerowait-sp1-weight}(a),
the optimal policy outperforms both baseline policies. Moreover,
the optimal policy coincides with the zero-wait no-computation policy
when $\omega$ is small and coincides with the zero-wait computation
policy when $\omega$ is large. This is because, in this simulation
setup, when $\omega\leq0.55$, we have $\min(J_{1},J_{2},J_{3})=J_{1}$
and the optimal policy is to always transmit the status update directly,
while when $\omega\ge0.75$, we have $\min(J_{1},J_{2},J_{3})=J_{3}$ and
the optimal policy is to always preprocess and transmit the status
update. Moreover, when $0.60\leq \omega\leq0.70$, we have $\min(J_{1},J_{2},J_{3})=J_{2}$
and the optimal policy is to execute the above two actions alternately.
That is the reason why the optimal policy outperforms the two zero-wait
policies in this regime.
\begin{figure}[tp]
\centering
\subfloat[\label{fig:sp1-weight1}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_weight_sp1}}
\subfloat[\label{fig:sp1-weight2}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_weight_cost_sp1}}\caption{\label{fig:opt-vers-zerowait-sp1-weight}Performance comparison among
the optimal policy, the zero-wait no-computation policy, and the zero-wait
computation policy ($T_{u}=5$, $T_{u}'=1$, $l=3$, $v=5$, $f=15$,
$\tau=1$, $\kappa=0.00005$, $P=3$). (a) The average cost versus
$\text{\ensuremath{\omega}}$. (b) The average AoI and the average
energy consumption versus $\omega$.}
\end{figure}
In Fig. \ref{fig:opt-vers-zerowait-sp2-weight}, the average cost
of the optimal policy and the two baseline policies are compared
with respect to $\omega$ when $T_{u}\geq T_{p}+T_{u}'$ and $C_{u}\geq\frac{T_{p}C_{p}+T_{u}'C_{u}}{T_{p}+T_{u}'}$.
As we can see from Fig. \ref{fig:opt-vers-zerowait-sp2-weight}(a),
the optimal policy outperforms both baseline policies. As the
weighting factor increases, the average costs of the optimal policy and
the baseline policies increase, but at different rates. From Fig.
\ref{fig:opt-vers-zerowait-sp2-weight}(b) we can see that the average
AoI and the energy consumption of the two zero-wait baseline policies
are constant for any $\omega$. The gap between the optimal policy
and the zero-wait baseline policies grows with $\omega$ in Fig.~\ref{fig:opt-vers-zerowait-sp2-weight}(a),
because the zero-wait policies suffer from a higher weighted energy
consumption when the weighting factor is large. For the optimal policy,
we can see that, with the increase of $\omega$, its average energy
consumption tends to be smaller while the average AoI tends to be
larger. This is the result of the balance between the AoI reduction
and the energy consumption. Theorem \ref{lem:threshold-value} shows
that the threshold of the optimal policy in this case will increase
with $\omega$. Therefore, when $\omega$ is large, the IoT device
will update only when the AoI is large enough.
\begin{figure}[tp]
\centering
\subfloat[\label{fig:sp2-weight1}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_weight_sp2}}
\subfloat[\label{fig:sp2-weight2}]{\includegraphics[width=0.5\textwidth]{fig/opt_vs_zerowait_weight_cost_sp2}}\caption{\label{fig:opt-vers-zerowait-sp2-weight}Performance comparison among
the optimal policy, the zero-wait no-computation policy, and the zero-wait
computation policy ($T_{u}=6$, $T_{u}'=2$, $l=3$, $v=5$, $f=45$,
$\tau=1$, $\kappa=0.00005$, $P=6$). (a) The average cost versus
$\text{\ensuremath{\omega}}$. (b) The average AoI and the average
energy consumption versus $\omega$.}
\end{figure}
\section{Conclusion}
In this paper, we have studied the problem of optimizing information
freshness in computing-enabled IoT systems by jointly controlling
the preprocessing and transmission at the IoT device. To minimize
the weighted sum of the average AoI associated with the destination
and the energy consumed by the IoT device, we have formulated an infinite
horizon average cost SMDP. By transforming the SMDP to an equivalent
uniform time step MDP, we have investigated the structure of the
optimal update policy and provided a structure-aware relative policy
iteration algorithm. We have further proved the switch-type structure
of the optimal policy in a special scenario where the status updates
are transmitted over a reliable channel. Simulation results have shown
that the optimal update policy can adjust adaptively based on the
channel quality and the degree of computational intensity of the status
update. By comparing the optimal update policy with two other zero-wait
policies, it is shown that the optimal update policy achieves a good
balance between the AoI reduction and the energy consumption.
\section{\label{section1:level1} Introduction}
Radon is a dangerous background for
experiments searching for WIMP dark matter~\cite{PhysRevD.101.052002,XENON:2020fbs,lux2015RnBackgrounds,deap2015Rn}.
Radon emanation accounted for
$>50$\% of the projected electron recoil
background in LZ~\cite{PhysRevD.101.052002,Eur.Phys.J.C80(2020)11.1044},
dominated by the
``naked'' $\beta$-emission from its $^{214}$Pb progeny.
For this reason, nearly all components in contact with the xenon were screened for radon emanation. Over eighty samples were assayed, including
components from the inner cryostat (such as phototubes, cabling, and PTFE), the xenon tower (such as the sub-cooler, weir reservoir, and heat exchanger), and the xenon circulation system (such as compressors, circulation panel, and xenon transfer lines)~\cite{Eur.Phys.J.C80(2020)11.1044}.
Here we briefly summarize the process, detail the particularly interesting and complicated emanation measurement of the full Inner Cryostat Vessel (ICV), and describe radon emanation of a calibration source not included in Ref.~\cite{Eur.Phys.J.C80(2020)11.1044}.
\section{\label{section:Methods} Radon Emanation Measurements}
\begin{table}
\caption{Comparison of the
four radon emanation facilities used by LZ. The chambers listed
contain the sample material, where radon is collected.
The emanation rates from the chambers alone (the ``blank'' rates) are statistically subtracted for sample measurements. Only a fraction of the emanated radon is transferred to the detection chamber (``Transfer Eff.'') and only a fraction of that transferred is then detected (``Detector Eff.'').
The cross-calibration figures represent the reconstructed emanation rate of a standard rubber sample previously used by
the EXO collaboration. When not stated, overall uncertainties are estimated to be 10--20\% (consistent with most of the cross-calibrations).}
\label{tab:facilities}
\tabcolsep=6pt
\centering
\begin{ruledtabular}
\begin{tabular}{ ccccccc }
\multirow{2}{*}{Detector} &
\multirow{2}{*}{Type} &
Chamber &
Chamber Blank &
Transfer &
Detector &
Cross-Calibration \\
&
&
Volumes [L] &
Rates [mBq] &
Eff. [\%] &
Eff. [\%] &
[Measured/EXO activity]\\
\hline
\vspace{-4mm}
\\
SD Mines &
PIN-diode &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}13\\ 300\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}0.2\\ 0.2\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}94\\ 80\end{tabular}} &
25 &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}0.89 $\pm$ 0.15\\ 1.11 $\pm$ 0.28\end{tabular}}
\vspace{2mm}
\\
Maryland &
PIN-diode &
4.7 &
0.2 &
96 &
24 &
1.13 $\pm$ 0.19
\vspace{2mm} \\
UCL &
PIN-diode &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}2.6\\ 2.6\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}0.2\\ 0.4\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}97\\ 97\end{tabular}} &
30 &
1.49 $\pm$ 0.15
\vspace{2mm} \\
Alabama &
Liquid Scint. &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}2.6\\ 2.6\end{tabular}} &
\textless{}0.4 &
34 &
36 &
0.83 $\pm$ 0.17
\\
\end{tabular}
\end{ruledtabular}
\end{table}
Four facilities, detailed in Table~\ref{tab:facilities}, performed the LZ radon emanation measurements. For each, emanation of the materials took place in dedicated chambers formed from electropolished stainless steel in order to minimize emanation from the chambers themselves, with the ``blank'' rate from the chamber alone typically 0.2--0.4\,mBq. Leak checks ensured that radon from lab air would not enter the chamber during the emanation period and provide additional background.
All facilities operated at room temperature such that the expected suppression of diffusion-dominated radon emanation at low temperature was not probed.
While transfer from small emanation chambers to the detection chamber was relatively straightforward, high transfer efficiency was obtained even from the large SD Mines emanation chambers.
A low-radon carrier gas, such as nitrogen or helium, was passed through a cold trap consisting of activated carbon cooled with a solution of dry ice and isopropyl alcohol to render its radon concentration negligible. The carrier gas passed into the emanation chamber, then through a second cold trap.
The second cold trap was made of brass or copper wool cooled to 77K by submerging it in liquid nitrogen and was used to trap the radon atoms exiting the emanation chamber while allowing the carrier gas to be pumped away.
In order to trap a high percentage of the radon atoms from a large emanation chamber, the chamber was pumped down with its gas passing through the metal trap, then refilled with filtered carrier gas, with the process repeated five times.
The Alabama facility collects the harvested radon by dissolving it in organic liquid scintillator by means of a carrier gas.
The delayed $^{214}$Bi-$^{214}$Po coincidences are then counted to infer the $^{222}$Rn decay rate. The other three facilities all use electrostatic PIN-diode detectors, where the detector is at a negative voltage relative to the detection chamber. Since most of the radon daughter nuclei are positively charged ($87.3 \pm 1.6$\% in air)~\cite{PAGELKOPF20031057}, about half of the daughters end up on the PIN-diode itself. Half of the resulting alpha decays deposit the alpha energy in the detector, resulting in detector efficiencies of about 25\% as listed in Table~\ref{tab:facilities}. Because $^{218}$Po$^+$, $^{214}$Pb$^+$, and $^{214}$Bi$^+$ ions may all be collected on the detector, $^{214}$Po has a slightly higher collection efficiency than $^{218}$Po, with calibrations at SD Mines indicating a detection efficiency of 23\% for $^{218}$Po and 26\% for $^{214}$Po for its detector~\cite{MillerLZbgdsLRT2018}.
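As a rough consistency check, multiplying the charged-ion fraction by these two factors of one half gives $0.873\times0.5\times0.5\approx0.22$, in line with the calibrated 23--26\% efficiencies.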
\begin{figure}[tb]
\centering
\includegraphics[width=0.97\textwidth]{PortableTrap.png}
\caption{\label{fig:PortableTrap}
\textbf{Left:} SD Mines portable radon system at
the Sanford Underground Research Facility (SURF) Surface Assembly Lab for an early measurement of the Inner Cryostat Vessel (ICV), before the full inner detector was completed. Liquid nitrogen (LN) boil-off gas is cleaned of residual radon in the cooled Carbon Trap before flowing into the ICV and then out to the Radon Cold trap for harvesting.
\textbf{Right:} Portable Radon Cold Trap itself disconnected for transport to the SD Mines campus.
}
\end{figure}
LZ used two portable radon collection systems (Maryland and SD Mines) for equipment too large or delicate to move to the radon emanation facilities, or for equipment used as its own emanation volume.
Figure~\ref{fig:PortableTrap} shows the SD Mines portable system at the Sanford Underground Research Facility (SURF).
After the transfer of radon to the portable cold trap, it was double-sealed with hand valves, warmed, and removed from the portable system. The trap was then transported to a radon screening facility where the collected radon was transferred from the trap into a detection chamber for counting. The Maryland portable system was used to determine that the xenon circulation system's integrated compressor skid assembly originally presented $\sim$17\,mBq. Replacing most of the welded stainless steel plumbing and etching the accumulation bottles in citric acid reduced the rate to $1.48\pm0.31$\,mBq~\cite{Eur.Phys.J.C80(2020)11.1044}. The SD Mines portable system was used to assay the fully loaded getter (model PS5-MGT50-R-535 from SAES) at its operational temperature of $400^{\circ}$C using helium carrier gas; its emanation rate was determined to be $2.26^{+0.28}_{-0.27}$\,mBq~\cite{Eur.Phys.J.C80(2020)11.1044,CarterHallLRT2022}. The SD Mines portable system was also used to assay other large equipment at SURF such as the Xenon Tower and the Inner Cryostat Vessel.
\section{\label{sec:ICV}
Inner Cryostat Vessel with Full Detector}
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\textwidth]{fullICVunderground.pdf}
\caption{\label{fig:ICVphoto} Photo of ICV in October 2019 after it was enclosed with all components of the inner detector, filled with low-radon nitrogen gas, and moved underground to the 4850-foot level of the Sanford Underground Research Facility, allowing it two weeks to emanate. Once underground, the SD Mines portable emanation system was used to collect a fraction of the emanated radon.
}
\end{figure}
Measuring the radon emanation from
the Inner Cryostat Vessel (ICV) after the assembly of the full inner detector
was complicated.
Radon emanation from the ICV had been measured several times during the integration of various detector components. The final assay was made in October 2019 after the ICV was fully complete and sealed as shown in Fig.~\ref{fig:ICVphoto}. The cryostat at this stage housed the entire inner detector including photomultiplier tube arrays, their corresponding bases and cables, the entire field cage, PTFE coating, sensors, and conduit volumes.
The South Dakota Mines portable radon trapping system was deployed underground at SURF with minimal plumbing due to space constraints. After leak-checking and purging, the trapping system was opened to the ICV and the emanated gas was harvested over a 6.3-hour period that removed 18.3\% of the gas within the ICV. After the harvest, the trap was carefully disconnected and transported to the SD Mines radon facility for screening.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{ICVresults.png}
\caption{\label{fig:ICVresults} Detected rates for $^{218}$Po (red diamonds) and $^{214}$Po (blue circles) as functions of time relative to sample transfer, with best-fit $^{222}$Rn decay curves for sample (unshaded) and calibration runs (darkly shaded), and best-fit constant values for background runs (lightly shaded).
\textbf{Left:} Test measurement with $87\pm4$\,mBq added to the detection chamber after an early transfer of radon and contaminants from the ICV performed without the filtering process. The low rates detected relative to those expected, and the lower rate of $^{218}$Po relative to $^{214}$Po, both indicate neutralization of positive ions before they reached the detector surface.
After the filtration procedure and a step believed to lose 2/3 of the radon sample, the detection rate of $^{214}$Po increased by 50\% and the detection rate of $^{218}$Po increased by $22\times$ (darkly shaded).
\textbf{Right:} Measurement of ICV with completed inner detector after filtering process. Agreement of $^{218}$Po and $^{214}$Po rates suggests a normal collection efficiency for each. The measurement of additional radon added to the detection chamber (darkly shaded) is also consistent with a normal collection efficiency.
}
\end{figure}
Earlier emanation measurements of the ICV had indicated that the radon trap also captured an outgassed molecular species that neutralized the positively charged radon daughters in the radon detection chamber, leading to a drop in detection efficiency.
A residual gas analyzer (RGA) indicated the culprit had a mass of 59\,AMU, but no clear candidate species was identified.
The left panel of Fig.~\ref{fig:ICVresults} shows a test measurement made in August 2019 by adding a known amount of radon from a Pylon radon source to the gas transferred from an earlier assay of the ICV. For this test, a sample of $87\pm4$\,mBq was transferred to a cold trap and the trap was warmed. The gas from the trap was transferred into the detector volume and allowed to equilibrate within the full volume containing the trap, tubing, and detector. Since the detector chamber volume is 97\% of this total (and based on previous calibrations), $>95$\% of the radon is expected to have transferred into the chamber. However, the detected rate for $^{214}$Po is about $4\times$ lower than that expected for this amount of radon, while the detected rate of $^{218}$Po is $>50\times$ lower than expected. These results indicate that $^{218}$Po$^+$ is neutralized before reaching the detector surface $>98$\% of the time. The much higher detected rate of $^{214}$Po decays presumably results from the ions $^{214}$Pb$^+$ and $^{214}$Bi$^+$ having a lower neutralization probability than $^{218}$Po$^+$.
To increase the collection efficiency, a procedure was developed to remove the contaminants from the gas while keeping the radon by taking advantage of the fact that radon breaks through our brass-wool cold trap very quickly when it is held at $-78^{\circ{}}$C (with a mixture of dry ice and IPA), but contaminants may break through the trap more slowly. The sample
was transferred from a brass-wool cold trap held at $-78^{\circ{}}$C to one at $-196^{\circ{}}$C (LN) with sufficient flow to transfer all of the radon atoms while leaving most of the contaminants behind, as confirmed with the RGA. This cold trap was then purged.
The radon sample was transferred to the detection chamber via a secondary small cold trap.
As shown in the left panel of Fig.~\ref{fig:ICVresults},
detection of a consistent, increased rate of both $^{214}$Po and $^{218}$Po strongly suggests that the process was successful at removing the contaminants. Measurements with the RGA also indicated a significant reduction in contaminant concentration at 59 AMU.
Unfortunately, the transfer of contaminants to the RGA involved flowing LN boiloff gas through the trapped radon,
losing $\sim$67\% of the radon sample.
This process was used, without the step that likely lost sample, for the transfer of the sample from the completed ICV.
The right panel of Fig.~\ref{fig:ICVresults} shows the results. The agreement of the $^{214}$Po and $^{218}$Po rates suggest that the process was successful and detection efficiencies were therefore the usual 23\% for $^{218}$Po and 26\% for $^{214}$Po.
Immediately following the measurement of the ICV radon sample, a check on the detection efficiency was attempted by adding radon from the Pylon source to the gas already in the detection chamber. Unfortunately, only a small amount of radon, with a large systematic uncertainty, was added. Results taken at face value indicate a higher detection efficiency than usual. However, since there is no plausible way to have increased the detection efficiency above the standard values, it is more likely that the systematic uncertainty on the radon added was underestimated.
We therefore use these usual detection efficiencies for calculation of the ICV results.
Based on the observed $^{214}$Po and $^{218}$Po rates, the radon activity of the sample was $8.07^{+0.62}_{-0.59}\,$mBq. Transfer efficiencies were dominated by the small fraction of gas removed from the ICV, with an additional 0--8\% of the collected sample lost to post-filter RGA measurements. Under the assumption of an even sampling of the radon within the ICV, the total transfer efficiency was $17.5 \pm 0.7$\%, resulting in a total radon emanation rate of $46.1^{+4.0}_{-3.8}\,$mBq for the ICV including all materials within it.
The expected radon contribution from all materials inside the ICV, excluding dust and the ICV titanium itself, was $27.2 \pm 1.9\,$mBq, while dust could plausibly contribute anywhere from 1.7--9.1\,mBq. The ICV titanium therefore must contribute 10--20\,mBq of radon burden. Since the total titanium surface area in the ICV is 15.1\,m$^2$, the inferred emanation is $\sim$1\,mBq/m$^2$, consistent with measurements of LZ titanium sheets at UCL~\cite{UmitThesis}.
Assays with high-purity germanium detectors had set an upper limit of $<0.09$\,mBq/kg on $^{226}$Ra in the 800-kg ICV~\cite{Eur.Phys.J.C80(2020)11.1044}, so the maximum radon burden for it (assuming half emanates to the outside and half to the inside) is 36\,mBq. These measurements suggest a high fraction of the radium is very near the titanium surface (or that radon diffuses surprisingly well through titanium). Previous measurements of titanium by XENON1T~\cite{XENON:2020fbs} included one set of samples (total area 1.1\,m$^{2}$ and mass 4.6\,kg) with 5.0\,mBq $^{226}$Ra and 3.0\,mBq $^{222}$Rn emanation, so such a high fraction of emanation in titanium has been seen before.
For these samples, electropolishing 30\,$\mu$m off the surface reduced the emanation by $>300\times$ to a result consistent with zero, indicating that the $^{226}$Ra had indeed been predominantly on the surface.
The assay results for the LZ ICV similarly suggest that most of the $^{226}$Ra is near the titanium surfaces.
Since the radon emanation from titanium is of great interest to future experiments, further studies are warranted.
\section{\label{sec:Th228source}%
LZ $^{228}$Th Calibration Source}
A $^{228}$Th calibration source ($\tau_{1/2}=1.9116$\,y), used to inject short-lived $^{220}$Rn into the LZ TPC for calibration, was measured to make sure its $^{222}$Rn emanation was not too large. The source was nominally 20.11\,kBq on May 1, 2020, and is enclosed in stainless steel tubing between two 3-nm pore-sized filters between two locking valves. Emanation was performed twice successfully from December 2020 to January 2021 using the source as its own chamber, with the valves closed for two weeks. Radon was transferred first into the SD Mines 13-liter emanation chamber at low pressure by flowing 4\,liters of LN boiloff gas (many times the source volume) through the source. Then the standard procedure was used to transfer the radon to the cold trap. The trap was isolated for 2--3 days to allow both the $^{220}$Rn and most of its longer-lived daughters to decay before transfer to the detection chamber, in case the transfer efficiency for these longer-lived daughters was high enough to otherwise overwhelm the detector.
Results indicate the $^{222}$Rn emanation rate is $0.88 \pm 0.18$\,mBq, acceptably small compared to the LZ $^{222}$Rn background to allow its use as a calibration source.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\textwidth]{ThoronSetup.png}
\caption{\label{fig:ThoronResults} \textbf{Left:} Setup for $^{220}$Rn assay of $^{228}$Th calibration source, which lives in the stainless-steel tubing just under the radioactivity warning sign, surrounded by locking valves. LN boil-off gas flowed through the nitrogen gas line, flowmeter, source, and exhaust line to purge the source before use. During the measurement, the RAD7's internal pump continuously circulated about 0.7 liters/minute via the path shown with arrows. A jar provided additional volume to reduce any risk of the RAD7 pump damaging the source. The push-to-connect unions and valves were sealed with silicone.
\textbf{Right:} Schematic of setup labeling radon concentration variables $C_1, C_2, C_3, C_7, C_{\rm S}, C_{\rm J}$ and volumes $V_1 = 18.0$\,ml, $V_2 = 22.0$\,ml, $V_3 = 26.3$\,ml, $V_{\rm J} = 1479$\,ml, and $V_7 = 750$\,ml, and showing flow $q=0.7$\,liters/min. The volume of the source is negligible.
}
\end{figure}
The amount of $^{220}$Rn emanating from the source was measured using a Durridge RAD7 detector~\cite{durridgeRAD7} on February~11, 2021, when the $^{228}$Th activity would have decreased to 15.14\,kBq. After purging the $^{228}$Th source with LN boiloff, valves were simultaneously changed to start circulating LN boiloff through the source, a jar providing a large (1.5\,liter) volume, and the RAD7, as shown in
Fig.~\ref{fig:ThoronResults}.
In the limit of perfect mixing, the concentration $C$ in one of the large volumes (the jar or the RAD7) depends on its volume $V$, the flow $q$, the $^{220}$Rn decay rate $\lambda$, and the concentration $C_i$
feeding into it:
\begin{equation*}
\frac{{\rm d}C}{{\rm d}t} = \frac{q}{V} C_i - \frac{q}{V} C - \lambda C,
\end{equation*}
with steady-state solution
$C = q C_i / \left(q+ \lambda{}V\right)$.
The time for radon to move from the start to the end of one of the small tubes of volume $V_i$ is $V_i/q$, so in equilibrium the radon concentration in such a tube decreases along its length from $C$ at the start of the tube to $C_i = C e^{-V_i \lambda/q}$ at the end of the tube.
The thorium source increases the concentration from its input $C_3$ to $C_3+ C_{\rm Th}$ at its output, where $C_{\rm Th} = E / q$ is the concentration in the source
due to its emanation $E$.
Combining these equations for the setup sketched in Fig.~\ref{fig:ThoronResults} yields the expected ratio of thoron concentration in the RAD7 to that in the source:
\begin{equation*}
\frac{C_7}{C_{\rm Th}} = \frac{e^{-\left( V_1 +V_2 \right) \lambda /q}} {\left( 1 + \lambda V_{\rm J}/q \right) \left( 1 + \lambda V_7 / q \right) - e^{-\left( V_1 + V_2 + V_3 \right) \lambda /q } } = 0.24.
\end{equation*}
The measured thoron concentration in the RAD7 was $C_7 = 579.6 \pm 2.6$\,kBq/m$^3$. Including systematic uncertainties on the flow and volumes results in an inferred $^{220}$Rn emanation rate $E = 2.11 \pm 0.19$\,kBq, so about 14\% of the $^{220}$Rn escapes the source at room temperature.
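For reference, the steady-state ratio above can be re-evaluated numerically; the sketch below assumes the standard $^{220}$Rn half-life of 55.6\,s and takes the flow to be exactly 0.7\,liters/minute, so small differences from the quoted 0.24 reflect the precision of these inputs:
\begin{verbatim}
import math

lam = math.log(2) / 55.6                 # 220Rn decay constant (1/s)
q = 0.7e3 / 60.0                         # nominal flow (ml/s)
V1, V2, V3 = 18.0, 22.0, 26.3            # tube volumes (ml)
VJ, V7 = 1479.0, 750.0                   # jar and RAD7 volumes (ml)

num = math.exp(-(V1 + V2) * lam / q)
den = ((1 + lam * VJ / q) * (1 + lam * V7 / q)
       - math.exp(-(V1 + V2 + V3) * lam / q))
print(num / den)                         # ~0.26 at exactly 0.7 l/min
\end{verbatim}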
The lack of $^{212}$Po alpha decays after the transfer allows limits to be placed on the fraction of $^{212}$Pb atoms that were transferred into the detection chamber.
The strongest limit may be set from the second emanation transfer, for which the delay in the cold trap was shorter, 52\,hours.
The two weeks of emanation put the $^{220}$Rn decay chain in excellent secular equilibrium. At the start of the transfer, there were $E/\lambda \approx 167,000$ $^{220}$Rn atoms in the source, along with 450 $^{216}$Po atoms, $115 \times 10^6$\,$^{212}$Pb atoms, and $11 \times 10^6$\,$^{212}$Bi atoms. At the end of the transfer, there were $N_{\rm end}=3.8 \times 10^6$\,$^{212}$Pb atoms. Of these, $\epsilon_{\rm decay} \approx 75$\% decayed through $^{212}$Po to $^{208}$Pb during the first day of measurement; for about $\epsilon_{\rm det}\approx25$\% of such atoms in the detection chamber, the daughter $^{212}$Bi atoms were collected on the detector and their decays detected. Since no events consistent with $^{212}$Po (or $^{212}$Bi--$^{212}$Po coincidences) were observed during the first day of measurement, the 90\% C.L. upper limit is $N_{\rm obs} \leq 2.3$, and the fraction of the $^{212}$Pb atoms transferred to the chamber satisfies
\begin{equation*}
\frac{N_{\rm obs}}{\epsilon_{\rm decay} \epsilon_{\rm det} N_{\rm end}} \leq \frac{2.3}{(0.75)(0.25)(3.8\times 10^6)} = 2.4 \times 10^{-6}.
\end{equation*}
Clearly, the two-day delay before transferring the sample from the trap to the detection chamber was unnecessary.
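The atom inventories quoted above follow directly from secular equilibrium, $N=E/\lambda$, and can be reproduced with a few lines; the half-lives below are standard nuclear data rather than values from this work, so the counts agree only up to rounding:
\begin{verbatim}
import math

E = 2110.0                                   # 220Rn emanation rate (Bq)
T12 = {"Rn220": 55.6, "Po216": 0.145,        # half-lives (s)
       "Pb212": 10.64 * 3600, "Bi212": 60.55 * 60}

# In secular equilibrium each member decays at rate E, so N = E/lambda.
for iso, t in T12.items():
    print(iso, round(E * t / math.log(2)))

# 212Pb remaining after the 52-hour hold in the cold trap
N0 = E * T12["Pb212"] / math.log(2)
print(N0 * math.exp(-52 * 3600 * math.log(2) / T12["Pb212"]))  # ~3.8e6
\end{verbatim}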
\section{Conclusions}
Radon's importance as a background for LZ required a comprehensive and sensitive radon emanation assay program. Measurement of LZ's inner cryostat vessel indicates that a large fraction of the $^{226}$Ra in the titanium results in $^{222}$Rn emanation, likely due to surface contamination. Comprehensive assay results of materials used in LZ construction are available in Ref.~\cite{Eur.Phys.J.C80(2020)11.1044}. One item not included in that publication, the LZ $^{228}$Th source, was found to emanate sufficiently little $^{222}$Rn to allow its use for LZ calibrations.
\begin{acknowledgments}
The authors gratefully acknowledge the support and participation of the LZ collaboration. Ann Harrison performed or helped perform many radon emanation measurements of the ICV and developed the filtering procedure used. Carter Hall's group at the University of Maryland designed the portable radon emanation systems.
Scott Hertel's group at the University of Massachusetts provided the $^{228}$Th source.
This work was supported in part by the Department of Energy (Grant No.~DE-AC02-05CH1123).
\end{acknowledgments}
\nocite{*}
\section{Introduction}
Two-dimensional electron gases~\cite{ando-82-rmp} (2DEGs) buried inside semiconductor heterostructures~\cite{harris-89-rpp} show ballistic transport over micrometers at low temperature. Their very long electron mean free path results from the combination of a high growth quality by molecular beam epitaxy and a remote doping technique that drastically reduces scattering by impurities.~\cite{stormer-83-ss} In such heterostructures, conduction electrons are confined at the interface between two different band gap materials and spatially separated from the dopants, which are placed a few tens of nanometers above the heterojunction. However, the random distribution of the ionized dopants produces long-range potential fluctuations in the 2DEG that strongly affect electron transport at low temperature.~\cite{rorison-88-sst}
Below a critical electron density, this disorder potential breaks the 2DEG into several electron puddles,~\cite{ilani-01-sci} and conduction is described by a percolation process in a two-dimensional network with thermally activated hopping.~\cite{shi-02-prl} This 2D metal-to-insulator transition (MIT) has been extensively studied by transport experiments in macroscopic samples using large planar gates to control the overall electron density.~\cite{dassarma-05-prl} Investigations of the MIT in small samples revealed that long-range and short-range disorder potentials produce different behaviors. In particular, insulating samples with short gate length show a metallic behavior at very low temperature~\cite{baenninger-08-prl} that may result from resonant tunneling between conducting domains.~\cite{neilson-10-prb} In samples with even shorter gate length, strong conductance fluctuations are observed versus gate voltage~\cite{washburn-88-prb} due to sample specific disorder configurations.~\cite{davies-89-prb}
These potential fluctuations also explain the tremendous difficulty of fabricating ballistic one-dimensional wires.~\cite{nixon-90-prb,laughton-91-prb} The presence of potential barriers along the wire results in the formation of localized states with Coulomb blockade, especially in long wires of several microns in length,~\cite{scott-thomas-89-prl,meirav-89-prb,field-90-prb,staring-92-prb} but also in submicrometer-length wires.~\cite{nicholls-93-prb,poirier-99-prb} In quantum point contacts (QPCs), the presence of potential fluctuations in the constriction~\cite{nixon-91-prb} is often invoked to explain resonances in the quantized conductance plateaus.~\cite{faist-90-prb} Alternatively, QPCs could be used to probe locally the disorder potential, since only a small region between the split-gates dominates the transport.~\cite{larkin-94-prb} Finally, in mesoscopic devices of intermediate dimension at very low temperature, quantum interferences of electron waves spreading coherently in the disordered potential landscape give rise to universal conductance fluctuations~\cite{lee-85-prl,vanhouten-86-apl} (UCFs).
Imaging the disorder potential of a surface 2DEG can be achieved by scanning tunneling microscopy,~\cite{morgenstern-02-prl,wiebe-03-prb} but the case of 2DEGs buried tens of nanometers below the surface requires specific local probe techniques. Most of the studies have been done in the quantum Hall regime at high magnetic field, using techniques based on subsurface charge accumulation,~\cite{tessmer-98-nat,finkelstein-00-sci,steele-05-prl} single electron transistors,~\cite{yoo-97-sci,zhitenev-00-nat,ilani-04-nat} and scanning gate microscopy (SGM).~\cite{woodside-01-prb,baumgartner-07-prb,paradiso-11-prb,martins-13-njp} Surprisingly, very few studies have been done at zero magnetic field. Scanning capacitance microscopy is the only technique that succeeded in imaging directly the disorder potential at zero field and revealed fluctuations on a length scale much larger than expected from the distance between dopants and the 2DEG.~\cite{chakraborty-04-prb} However, SGM can also provide indirect information on the disorder potential inside or close to a nanoscale device by imaging, for example, the complex branched electron flow spreading in a 2DEG out of a QPC,~\cite{topinka-01-nat,jura-07-natp,kozikov-13-njp,brun-14-ncom} the UCF pattern in a small constriction etched in a 2DEG,~\cite{aoki-05-apl,dacunha-06-apl} the irregular fringe pattern in a quantum ring,~\cite{pala-08-prb,pala-09-nt,chwiej-13-prb} or the presence of charge traps in the 2DEG heterostructure.~\cite{pioda-07-prb,gildemeister-07-jap}
\begin{figure}[b]
\begin{center}
\includegraphics[width=7cm,clip,trim=0 0 0 0]{figure1.png}
\caption{(a) Cross-section of the semiconductor heterostructure hosting a high-mobility 2DEG at the heterojunction. (b) Schematics of the energy potential landscape in the 2DEG resulting from the random distribution of ionized dopants and surface charges. (c) Potential landscape in the presence of a negatively polarized SGM tip that raises the potential fluctuations around the Fermi level and creates a quantum dot.} \label{fig1}
\end{center}
\end{figure}
In this paper, we use SGM~\cite{sellier-11-sst,ferry-11-sst} to probe the disorder potential in a low-density InGaAs/InAlAs 2DEG, patterned by etching into a network of wires. We show that transport through the wires is dominated by a few spots where the electrostatic potential forms a valley surrounded by two hills. When the tip is placed above these spots with a negative voltage, the conductance decreases strongly and can even drop to zero. In addition, the conductance does not decrease smoothly when the tip approaches these spots, but shows several oscillations, which are clearly revealed by sensitive transconductance measurements. These oscillations are interpreted as a signature of localized states with Coulomb blockade in quantum dots that form in the 2DEG local potential valley. By lowering locally the electron density close to zero under the tip, we indeed expect the disordered potential landscape to form a series of potential barriers delimiting quantum dots with localized states, as depicted schematically in Fig.~\ref{fig1}. In the case of macroscopic 2D gates, it would give rise to the formation of isolated 2DEG islands and to a percolation-driven MIT. Our experiment therefore represents a local investigation of the disorder-induced MIT.
Similar conductance oscillations have been observed in SGM images of various systems, but were explained by the presence of quantum dots only for InAs nanowires,~\cite{bleszynski-07-nl} carbon nanotubes,~\cite{woodside-02-sci} and graphene.~\cite{connolly-11-prb,pascher-12-apl,garcia-13-prb} So far, none of the SGM studies has found conductance oscillations due to the charging of quantum dots within a 2DEG, but rather due to the charging of traps or impurities in the heterostructure surrounding the conducting channel.~\cite{crook-02-prb,aoki-06-jpconf,pioda-07-prb,gildemeister-07-jap} Here, we show that the features observed in our experiment are not consistent with charging events in traps, but should instead be explained by the formation of quantum dots in the disordered potential landscape. We substantiate our finding by approximate electrostatic calculations of the disorder potential within the wire and the induced tip potential revealing nearly quantitative agreement with the experimental data.
The paper is organized as follows. Section~\ref{sec:exp1} gives technical information about the experiment. Sections~\ref{sec:exp2} and \ref{sec:exp3} present the SGM images and their analysis. Section~\ref{sec:sim1} presents simulations of the disordered potential landscape induced by ionized dopants. Section~\ref{sec:sim2} demonstrates that the SGM tip can reveal the presence of quantum dots and supports our analysis of the experimental data. Supplementary information, measurements, and analysis are given in Appendix sections.
\section{Experiment}\label{sec:exp}
\subsection{Sample and setup}\label{sec:exp1}
The sample is based on a pseudomorphic In$_{0.75}$Ga$_{0.25}$As/InAlAs heterostructure grown by molecular beam epitaxy on a semi-insulating InP substrate~\cite{wallart-05-jap} with the following layer sequence : 100\,nm lattice-matched InAlAs buffer layer, 50\,nm AlAsSb barrier, 400\,nm InAlAs layer, 15\,nm In$_{0.75}$Ga$_{0.25}$As channel, 20\,nm InAlAs spacer, $\delta$-doping Si plane ($2.25\times10^{12}$\,cm$^{-2}$), 15\,nm InAlAs barrier, 7\,nm doped InGaAs cap layer. The 2DEG is formed 42\,nm below the sample surface with a carrier density $n=3.5\times10^{11}$\,cm$^{-2}$ and a mobility $\mu=10^5$\,cm$^{2}$V$^{-1}$s$^{-1}$ as measured by magneto-transport at 4.2\,K in a Hall bar patterned on the same sample. The investigated nanostructure is a $1.0\times1.9\,\mu$m$^2$ network made of three 180\,nm wide parallel wires, linked together by two 210\,nm wide wires, connected to the source and drain reservoirs by 370\,nm wide openings (see Fig.~\ref{fig2}(a)). This complex sample geometry will be simply considered here as a set of three independent wires measured in parallel. The pattern is written by electron beam lithography and transferred into a mesa by wet etching of 65\,nm deep trenches.
SGM measurements are performed in a homemade atomic force microscope~\cite{martins-07-prl} (AFM) cooled at 4.2\,K by exchange gas in a liquid helium cryostat. A commercial silicon tip coated with a PtIr conducting layer is glued at the extremity of a tuning fork, which is used as a force sensor in the AFM imaging mode. Experiments start by recording a topographic image at 4.2\,K to locate the device. For SGM measurements, the tip is lifted by 100\,nm (for all the data presented here) and scanned in a plane at a constant height above the sample. Usually, a negative voltage relative to the 2DEG is applied to the tip, and the device conductance and/or transconductance are recorded as a function of the tip position during scanning. As a result of the capacitive coupling between the tip and the 2DEG, the electron density under the tip is reduced and the electrostatic potential is raised towards the Fermi level: the tip acts as a local movable gate.
The device conductance is measured with a lock-in using a small AC source-drain excitation at 68\,Hz, while a DC voltage is applied to the tip. The transconductance is measured with a small DC source-drain bias, while a 40\,mV AC excitation at 939\,Hz is applied to the tip in addition to the main DC voltage. The two signals can be recorded simultaneously with a dual reference lock-in amplifier. The unperturbed device resistance being around 10\,k$\Omega$, voltage or current bias can be used for the source-drain polarization, corresponding to the measurement of a conductance $G$ or a resistance $R$, respectively. Both configurations have been used depending on the highest resistance recorded in the SGM map, but all data are plotted here in terms of conductance and transconductance, using the conversion $G=1/R$ and ${\rm d}G/{\rm d}V_{\rm tip}=-(1/R^2)\,{\rm d}R/{\rm d}V_{\rm tip}$. Note that ${\rm d}G$ instead of ${\rm d}G/{\rm d}V_{\rm tip}$ will be plotted for the transconductance signal in order to keep all quantities ($G$ and ${\rm d}G$) in units of $2e^2/h$.
\subsection{Conductance images}\label{sec:exp2}
\begin{figure}[b]
\begin{center}
\includegraphics[width=8cm,clip,trim=0 0 0 0]{figure2.png}
\caption{(a) Topography at 4.2\,K recorded before the SGM measurements. (b) SGM images of the conductance $G$ measured with an AC current bias $I=10$\,nA. The DC tip voltage is indicated on each image. (c) SGM conductance profiles along the red lines drawn in (a) for tip voltages from $-$3.6\,V (bottom curve) to +4.2\,V (top curve) with 0.6\,V steps. Left and right graphs correspond to bottom and top red lines, respectively. (d) Average conductance $\bar{G}$ and standard deviation $\delta G$ calculated from the SGM images in (b) versus tip voltage. (e) Difference $\Delta G$ between two consecutive SGM images as explained in the text.} \label{fig2}
\end{center}
\end{figure}
The conductance images shown in Fig.~\ref{fig2}(b) are obtained by scanning the tip above the entire device for decreasing negative tip voltages. They show a complex pattern of conductance drops covering the device area between the two openings. The device geometry can hardly be recognized because the tip-induced potential has a broad lateral extension and influences electron transport in the device even if the tip is not directly above the wires. The largest changes are observed along the central path, which probably carries the largest current, and in particular at its ends which are critical nodes for transmission. At some locations, the conductance drops by a factor of 4 at $V_{\rm tip}=-3.6$\,V and can even drop to zero at larger negative tip voltage as shown later. The narrow width of the arms and the low electron density make the device very sensitive to potential changes induced by the tip.
SGM profiles recorded along two selected lines are plotted in Fig.~\ref{fig2}(c) for different tip voltages. It is found that the profiles recorded at $V_{\rm tip}^{\rm flat}=+0.6$\,V (black curves) show no conductance change, i.e., the tip does not produce any potential perturbation. This particular value, the so-called flat band voltage, corresponds to the work function difference between the PtIr coating of the tip and the InGaAs cap layer of the heterostructure taking into account a surface Fermi level pinning at mid-gap (see Appendix A). Similar values were found for PtIr tips and GaAs surfaces in other SGM experiments~\cite{vancura-03-apl,pioda-04-prl} since InGaAs and GaAs have similar work functions.
The average conductance calculated over the full scanning area is shown in Fig.~\ref{fig2}(d). It varies roughly linearly with the tip voltage as in the case of a macroscopic field effect transistor, except for the lower slope observed at positive voltages that we attribute to a larger screening in case of charge accumulation. The standard deviation of the conductance maps, also shown in Fig.~\ref{fig2}(d), is found to drop very close to zero at the flat band voltage $V_{\rm tip}^{\rm flat}=+0.6$\,V, showing that the tip is free from charged dust particles that would have disturbed its local gate action.~\cite{gildemeister-07-prb} The linear increase of the standard deviation on both sides of $V_{\rm tip}^{\rm flat}$ is consistent with an in-average linear gate effect of the tip (linear response), since the conductance does not drop to zero in this tip voltage range.~\cite{martins-07-prl}
Careful examination of the conductance images in Fig.~\ref{fig2}(b) reveals that they contain several spots, growing in size and amplitude for decreasing tip voltages. The edge of these spots can be made more visible in Fig.~\ref{fig2}(e) by plotting the difference $\Delta G(V_{\rm tip})=G(V_{\rm tip}+0.6)-G(V_{\rm tip})$ between conductance maps recorded at two consecutive voltages separated by 0.6\,V. Many overlapping circles appear in these images, four in the left branch, two in the right one, and even more in the central one, which are difficult to distinguish. Their diameter increases for more negative tip voltages, but their center remains at a fixed position, always located inside the wires, never in the etched regions. These images show that the device is very sensitive to a local potential change at these particular locations.
Similar isolated features were observed previously in SGM images by other groups and were interpreted as the presence of charged traps in the semiconductor heterostructure, possibly in the doping plane, but not in the 2DEG itself.~\cite{crook-01-jp,crook-02-prb,aoki-06-jpconf,dacunha-06-apl,pioda-07-prb,gildemeister-07-jap} In this interpretation, the trapped charges create potential perturbations in the potential landscape of the 2DEG, and changing the number of these charges modifies the device conductance. In this case, approaching the tip with a negative voltage removes electrons from the traps and restores a larger device conductance.
In our case, the exact opposite behavior is observed, since approaching the tip with a negative voltage strongly decreases the conductance. The phenomenon observed in our experiment is therefore not related to traps in the heterostructure, and we propose instead that the tip affects directly the 2DEG potential within the following mechanism. When the tip scans above a high hill of the potential landscape, the gate effect of the tip is stronger, and it produces a spot of low conductance in the SGM image. Low density regions have indeed a weak screening capability and can be easily depleted by the repulsive potential of the tip. In addition, our particular device geometry composed of narrow wires makes the conductance very sensitive to a local depletion of the 2DEG. According to this proposal, which will be sustained later in the paper, SGM images reveal the spatial inhomogeneity of the 2DEG and show that it is characterized by a discrete distribution of small regions where the electron density is much lower than the average value.
Some of the features in Fig.~\ref{fig2}(e) consist of two concentric circles that may arise either from two very close spots, or from a single spot with two successive changes. To distinguish these two possibilities, we need higher resolution images. For this purpose, we now present transconductance measurements, which are more sensitive than a simple difference between two conductance maps.
\subsection{Transconductance images}\label{sec:exp3}
The transconductance signal ${\rm d}G/{\rm d}V_{\rm tip}$ is measured with a small additional AC voltage on the tip. A series of transconductance images is shown in Fig.~\ref{fig3}(b) for different tip voltages from $V_{\rm tip}^{\rm flat}$ down to $-$3.6\,V. Note that these images have been recorded during the same cool down as in Fig.~\ref{fig2}, but a small electrostatic discharge may have occurred during the change of the measurement configuration resulting in a slightly different potential landscape.
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure3.png}
\caption{(a) Topography at 4.2\,K recorded before the SGM measurements. (b) SGM images of the transconductance ${\rm d}G/{\rm d}V_{\rm tip}$ measured with a DC current bias $I=10$\,nA and an AC tip voltage modulation ${\rm d}V_{\rm tip}=40$\,mV. The DC tip voltage is indicated below each image. A logarithmic color scale is used and only positive values of the transconductance are plotted. (c) SGM profiles extracted along the red line drawn in (a). The successive profiles recorded from $-$3.6\,V to +0.6\,V are shifted upwards by $2\times 10^{-3}\times 2e^2/h$ for the sake of clarity (the dotted lines indicate the zeros).} \label{fig3}
\end{center}
\end{figure}
At $-$0.6\,V tip voltage, the SGM image shows several spots and circles which correspond to those visible in Fig.~\ref{fig2}(e). As clearly seen in the image at $-$1.2\,V, all these features are located along the device wires whose topography is shown in Fig.~\ref{fig3}(a). For more negative tip voltages, the spots evolve into narrow circles with increasing diameters. In the central wire, the presence of many overlapping circles makes the pattern rather complex to analyze. In the lateral wires however, only a limited number of spots dominate the conductance (one spot in the left wire, two spots in the right wire). Several concentric circles are visible around each spot, with at least two circles for each, and up to four circles for the left spot.
These circles look very much like the Coulomb blockade oscillations observed previously by SGM in different kinds of quantum dots made by lithography,~\cite{pioda-04-prl,kicin-05-njp,fallahi-05-nl} or present accidentally in nanowires,~\cite{bleszynski-07-nl} carbon nanotubes,~\cite{woodside-02-sci} and graphene.~\cite{connolly-11-prb,pascher-12-apl,garcia-13-prb} In our experiment, each spot showing concentric circles can therefore be interpreted by the presence of a quantum dot with Coulomb blockade. When the tip is approached towards the dot, the electrostatic potential is raised, which results in the discharging of electrons outside the dot, one-by-one, with a conductance maximum each time a charge state crosses the Fermi level. Because of the large electron density in the unperturbed 2DEG, these dots do not pre-exist in absence of the tip, but appear when the electron density is lowered under the tip, such that the potential fluctuations of the 2DEG are brought around the Fermi level. Fig.~\ref{fig1} illustrates this effect by showing a localized state under the tip with discrete energy levels close to the Fermi level. According to this interpretation, each set of concentric circles observed in the SGM images reveals the presence of a quantum dot formed in the 2DEG potential fluctuations. Note that the dots are not created by the tip, as done in the past with a scanning tunneling microscope on a clean InAs surface using a positive tip voltage.~\cite{dombrowski-99-prb} Here, the dots result from a local lowering of the density, such as to induce locally an equivalent to the disorder-induced metal-to-insulator transition, well-known in macroscopic 2DEGs at low enough electron density.~\cite{dassarma-05-prl}
These SGM images are reproducible within the same cool-down in absence of external perturbation, but change if light is shined on the sample or if an electrostatic discharge occurs in the setup. An example of images obtained after a small electrostatic perturbation is given in Appendix B. This behavior gives information about the origin of the electrostatic disorder, which is not structural, but results from a particular distribution of charges, located either in the doping plane, or at the surface, and which are frozen in a given configuration at low temperature.
A striking feature in the transconductance images is the appearance of a disk with constant signal inside the innermost circle, which grows in size for more negative tip voltage without any new circle appearing inside. This phenomenon is also visible in the SGM profiles recorded along a single line and plotted in Fig.~\ref{fig3}(c) : the transconductance oscillations are progressively shifted away from the center and a region with flat signal develops in the middle. Simultaneous conductance and transconductance measurements on a single constriction (see Appendix C) have shown that the absence of feature inside the innermost circle corresponds to zero current in the wire. This effect can be understood by the quantum dot being completely emptied and/or the barriers becoming too high to give significant tunneling. Since no current can flow around the dot (the wire is too narrow), the conductance of the wire vanishes. The total conductance of the device is however not zero because some current still flows in the two other wires of the network, and circles from quantum dots in these other wires are therefore visible inside the depleted areas (see Fig.~\ref{fig3}(b) below $-$2.4\,V in the left wire).
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure4.png}
\caption{(a) Topography of the device recorded before the SGM measurements. The red rectangle indicates the scanning area of the SGM images. (b) SGM images of the transconductance ${\rm d}G/{\rm d}V_{\rm tip}$ measured with a DC current bias $I=20$\,nA and an AC tip voltage modulation ${\rm d}V_{\rm tip}=40$\,mV. The DC tip voltage is indicated on each image. (c) SGM profiles extracted along the white line in (b). The successive profiles recorded from $-$3.4\,V to $-$1.3\,V are shifted upwards by $0.01\times 2e^2/h$.} \label{fig4}
\end{center}
\end{figure}
The concentric circles and their evolution with tip voltage are now analyzed in more details thanks to the high-resolution SGM images of the left wire plotted in Fig.~\ref{fig4}(b) for a slightly different disorder potential. Four dots in series with a similar response can be identified in this wire. Each dot shows a set of concentric circles, whose diameter increases for more negative tip voltages, and new circles emerge progressively from the center. Below a given tip voltage, a region of constant signal appears in the middle, corresponding to zero current in the wire (in parallel with the two other wires). This uniform region grows in size and merges progressively with the uniform regions of the nearby dots. Careful examination of these areas without signal shows that they are not exactly centered on the dots, and sometimes, cover partly the innermost circles. This indicates that a uniform region may not correspond to an empty dot with the last electron being removed, but rather to the appearance of a thick barrier around the dot that suppresses the current.
The SGM profiles in Fig.~\ref{fig4}(c) show the conductance oscillations for a single dot. About six oscillations are visible on both sides of the flat region where the wire is blocked. For a correct interpretation of the data, it is important to note that the transconductance signal corresponds to the derivative of the conductance curve with respect to gate voltage. Each Coulomb blockade conductance peak therefore appears as a transconductance oscillation with a negative peak immediately followed by a positive peak when the tip approaches the dot. Each oscillation is progressively shifted outwards the center when the tip voltage is decreased. The shift versus tip voltage is linear below $-$2.2\,V, which indicates an unscreened tip-induced potential. A detailed discussion of the potential induced by the tip in the wire in presence of screening effects is given in Appendix D.
In our experiment, the successive Coulomb resonances are not separated by Coulomb-blocked regions, indicating that the charging energy is smaller than the temperature or the intrinsic resonance width, probably broadened by the poor confinement of the disorder potential. A tip-dot capacitance $C_{\rm tip,dot}=e/\Delta V_{\rm tip}=8\times 10^{-19}$\,F can be deduced from the tip voltage change $\Delta V_{\rm tip}=0.2$\,V required to shift the Coulomb oscillations by one period. Note that the transconductance is measured with a 40\,mV tip voltage modulation smaller than $\Delta V_{\rm tip}$ in order to fully resolve the Coulomb oscillations. The determination of the charging energy, however, requires the knowledge of the total dot capacitance, usually measured by source-drain bias spectroscopy of the dot. The presence of several dots in series between source and drain in this device prevents the investigation of an individual dot.
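As a side remark, the conversion from the oscillation period in tip voltage to the quoted capacitance is a one-line computation; the following sketch (a minimal numerical check of the quoted numbers, not part of the original analysis) reproduces it.

\begin{verbatim}
# Minimal check: tip-dot capacitance from the period of the
# Coulomb oscillations in tip voltage, C_tip,dot = e / DV_tip.
e = 1.602e-19            # elementary charge (C)
DV_tip = 0.2             # tip-voltage period of the oscillations (V)
print(e / DV_tip)        # ~8e-19 F, as quoted above
\end{verbatim}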
\section{Simulations}\label{sec:sim}
In this section, we develop a simple model to show that isolated dots hosting a few electrons can appear under the SGM tip due to the disordered potential landscape inside the narrow wire.
\subsection{Potential fluctuations}\label{sec:sim1}
Potential fluctuations in the 2DEG have several origins, including alloy disorder, fluctuations of the barrier thickness, random distribution of ionized dopants, and inhomogeneous density of surface charges. For simplicity, we consider only the distribution of ionized dopants as the source of potential fluctuations, since it is an intrinsic source that cannot be suppressed. Following Ref.~\cite{nixon-90-prb}, we calculate the potential induced by positively charged ions distributed randomly in a plane located at a distance $h=20$\,nm from the 2DEG, with a mean density $N_d=2\times10^{16}$\,m$^{-2}$. We use a boundary condition with a uniform potential on the surface located at a distance $p=40$\,nm from the 2DEG.~\cite{footnote} We assume the Fermi level to be pinned at mid-gap by the surface states of the InGaAs cap layer, such that the conduction band edge of InGaAs is at an energy $V_s\approx400$\,meV above the Fermi level. For simplicity, the dielectric constant is taken uniform over the heterostructure, using the value $\epsilon_r=12.7$ of the InAlAs barrier.
In the case of a uniform dopant distribution with a continuous density $N_d$, the positively charged dopants induce an attractive potential energy $V_d=-e^2N_d(p-h)/\epsilon_0\epsilon_r$ for electrons located below the doping plane with respect to the fixed surface potential. Therefore, electrons accumulate at the InGaAs/InAlAs interface and form a 2DEG with a uniform density $N_e$, which in turn induces a repulsive potential energy $V_e=e^2N_ep/\epsilon_0\epsilon_r$ in the 2DEG plane. The electron density $N_e=(m^*/\pi\hbar^2)(-V)\theta(-V)$ depends self-consistently on the total potential energy $V=V_s+V_d+V_e$ calculated with respect to the Fermi level ($m^*$ is the electron effective mass and $\theta$ is the Heaviside step function). The condition for a non-zero electron density in the 2DEG is that $V$ be negative, such that the conduction band edge is below the Fermi level (which is set to zero as the energy reference). When the density is non-zero, the self-consistent potential energy reads:
\begin{eqnarray*}
V = \frac{1}{1+\frac{m^*}{\pi\hbar^2}\frac{e^2\,p}{\epsilon_0\epsilon_r}}\;\left(V_s+V_d\right)
\end{eqnarray*}
where the coefficient before the parenthesis represents the screening by the 2DEG. For the chosen heterostructure parameters, this coefficient equals 0.09 and the attractive potential energy of the dopants is $V_d=-570$\,meV. In order to reproduce the measured electron density $N_e=3.5\times10^{15}$\,m$^{-2}$, we have to set the surface potential to $V_s=350$\,meV, which is close to the value expected from a Fermi level pinning at mid-gap on the surface.
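The numbers quoted above can be cross-checked directly from the stated formulas. The short Python sketch below (our own numerical check, not part of the original analysis; it assumes $m^*=0.04\,m_e$, the InGaAs effective mass quoted in Appendix D) reproduces the screening coefficient, the dopant potential $V_d$, and the electron density obtained for $V_s=350$\,meV.

\begin{verbatim}
import numpy as np

# Cross-check of the uniform-doping estimates quoted in the text.
e, eps0 = 1.602e-19, 8.854e-12
me, hbar = 9.109e-31, 1.055e-34
eps_r = 12.7              # InAlAs dielectric constant
m_eff = 0.04 * me         # assumed InGaAs effective mass (cf. App. D)
h, p = 20e-9, 40e-9       # doping-plane and surface distances (m)
Nd = 2e16                 # mean dopant density (m^-2)

dos = m_eff / (np.pi * hbar**2)              # 2D density of states
screen = 1.0 / (1.0 + dos * e**2 * p / (eps0 * eps_r))
Vd = -e**2 * Nd * (p - h) / (eps0 * eps_r)   # dopant potential (J)
print(screen)                  # ~0.09, the screening coefficient
print(Vd / e * 1e3)            # ~ -570 meV

Vs = 0.350 * e                 # surface potential (J)
V = screen * (Vs + Vd)         # self-consistent band edge (J)
print(dos * (-V))              # ~3.5e15 m^-2, the measured density
\end{verbatim}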
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure5.png}
\caption{(a) Random distribution of ionized dopants in the doping plane at finite distance above the 2DEG. The wire is 200\,nm wide and infinitely long. (b) Spatial fluctuations of the electron potential energy induced by the distribution in (a). The Fermi level is at $V=0$. The regions in gray are depleted. (c) Energy potential profile along the central line at $Y=0$. The Fermi energy is about 8\,meV with peak-to-peak fluctuations as large as 6\,meV.} \label{fig5}
\end{center}
\end{figure}
In the case of a random distribution of ionized dopants, the attractive potential energy $V_d$ is non-uniform and reads:
\begin{multline*}
V_d(\vec{r}) = \frac{-e^2}{4\pi\epsilon_0\epsilon_r}\;\sum_i \left[\frac{1}{\left(|\vec{r}-\vec{r_i}|^2+h^2\right)^{1/2}}\right.\\ \left.-\;\frac{1}{\left(|\vec{r}-\vec{r_i}|^2+(2p-h)^2\right)^{1/2}}\right]
\end{multline*}
where the fixed surface potential is equivalent to the presence of an image charge with opposite sign. In this case, the electron density $N_e(\vec{r})$, the repulsive self-energy $V_e(\vec{r})$, and the total potential energy $V(\vec{r})$ are also non-uniform, and their exact determination would require self-consistent quantum calculations in the 2DEG plane. Here, we keep the calculation classical, and make the approximation of a local response using the same relations as for the uniform case. This is a first-order approximation to give an estimate of the potential fluctuations.
Fig.~\ref{fig5}(a) shows a random distribution of ionized dopants in a 200\,nm wide and infinitely long wire, while Fig.~\ref{fig5}(b) shows the resulting screened potential energy in the 2DEG. The finite width of the wire results in a larger attractive potential in the central region and 20\,nm wide depleted regions on each side (in gray). The Fermi energy in this wire geometry (about 8\,meV in the center) is significantly lower than the value for the infinite plane (20\,meV). The most striking property is the presence of strong potential fluctuations along the wire (Fig.~\ref{fig5}(c)) with peak-to-peak variations (about 6\,meV) of the same order as the Fermi energy (about 8\,meV). These fluctuations are proportional to the square root of the mean dopant density~\cite{davies-89-prb} and their typical length scale (about 50\,nm) is governed by the distance between the doping plane and the 2DEG.~\cite{nixon-90-prb} This length scale indeed corresponds to the extension of the potential induced by each dopant, which is much larger than the mean dopant spacing (7\,nm).
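For concreteness, the procedure of this subsection can be condensed into a short numerical sketch. The code below is our own illustration, not the original calculation: the patch size, grid and random seed are arbitrary, and the local linear-response screening of the text is applied along a line through the middle of the dopant patch, so the wire-edge depletion is ignored. With these caveats, one obtains peak-to-peak fluctuations of a few meV on a ${\sim}50$\,nm scale, of the same order as in Fig.~\ref{fig5}(c).

\begin{verbatim}
import numpy as np

# Sketch: disorder potential from randomly placed ionized dopants,
# using the image-charge formula above and local screening.
rng = np.random.default_rng(0)
e, eps0, hbar = 1.602e-19, 8.854e-12, 1.055e-34
eps_r, m_eff = 12.7, 0.04 * 9.109e-31
h, p, Nd = 20e-9, 40e-9, 2e16
Vs = 0.350 * e

L, W = 1000e-9, 200e-9                  # dopant patch (length, width)
n_dop = rng.poisson(Nd * L * W)
xd = rng.uniform(0, L, n_dop)           # random dopant positions
yd = rng.uniform(-W / 2, W / 2, n_dop)

x = np.linspace(0, L, 400)              # evaluation line at Y = 0
r2 = (x[:, None] - xd[None, :])**2 + yd[None, :]**2
pref = -e**2 / (4 * np.pi * eps0 * eps_r)
Vd = pref * (1 / np.sqrt(r2 + h**2)
             - 1 / np.sqrt(r2 + (2 * p - h)**2)).sum(axis=1)

dos = m_eff / (np.pi * hbar**2)
screen = 1.0 / (1.0 + dos * e**2 * p / (eps0 * eps_r))
V = screen * (Vs + Vd)                  # local linear-response screening
print((V.max() - V.min()) / e * 1e3)    # peak-to-peak fluctuations (meV)
\end{verbatim}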
\subsection{Formation of quantum dots}\label{sec:sim2}
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure6.png}
\caption{(a-d) Potential energy landscape in the wire with the SGM tip at position $X_{\rm tip}=540$\,nm (indicated by the vertical bar) and $Y_{\rm tip}=0$\,nm. The dopant distribution is the same as in Fig.~\ref{fig5}. The spatial extension of the tip-induced potential is given in the text and its amplitude is increased from 6 to 12\,meV as indicated in each panel. The tip creates a low density region and a small quantum dot is formed for potentials between 8 and 10\,meV with a small number of electrons (indicated by black dots).} \label{fig6}
\end{center}
\end{figure}
In SGM experiments, for large enough negative tip voltage, the tip-induced potential brings locally the total electron potential above the Fermi level and builds a barrier that blocks electron transport along the narrow wire. In presence of potential fluctuations with a local minimum under the tip, a pocket of electrons can survive in this barrier, forming a small dot between two barriers as drawn schematically in Fig.~\ref{fig1}(c). When these confining barriers are rather symmetric, a resonant tunneling process through the dot can restore a high electron transmission for discrete energy levels. If the resistance of the barriers is larger than $h/e^2$, Coulomb blockade will also occur with charge quantization in the dot and a finite energy spacing between successive charge states. In this case, if the temperature is lower than the charging energy $e^2/C_{\rm dot}$, discrete conductance peaks should appear as a function of the gate voltage on the tip.
Fig.~\ref{fig6} shows an example of the formation of such a quantum dot when the tip is placed right above a local potential minimum. Panels (a) to (d) show the evolution of the potential landscape in the wire when the potential energy under the tip is raised from 6 to 12\,meV. The shape of the tip-induced potential is chosen to be of the form $Z/(X^2+Y^2+Z^2)^{1/2}$ with $Z=140$\,nm as in the experiment, corresponding to an unscreened potential (see Appendix D). By locally increasing the potential energy, an island of electrons forms, then shrinks, and finally disappears. To quantify the number of electrons in this island, the total charge is calculated by integration of the electron density, and each additional charge $e$ along the $X$ axis is marked by a black dot. For example, three electrons are present in the dot for the 8\,meV tip-induced potential. These simulations show that isolated islands with a few electrons can indeed form under the tip in the presence of potential fluctuations along the wire. This result supports our interpretation of the experimental data in terms of Coulomb blockade in quantum dots formed in the 2DEG disorder potential.
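The electron-counting step is simple enough to spell out. In the sketch below (schematic only: the Gaussian pocket is a made-up stand-in for the simulated density between the two tip-induced barriers, and the effective width is an assumed number), each crossing of an integer multiple of $e$ in the cumulative charge gives the position of one of the black dots of Fig.~\ref{fig6}.

\begin{verbatim}
import numpy as np

# Count the electrons in the tip-induced island by integrating the
# electron sheet density along the wire (schematic profile only).
x = np.linspace(-100e-9, 100e-9, 2001)   # position along the wire (m)
W_eff = 100e-9                           # assumed effective width (m)
Ne = 3e14 * np.exp(-(x / 40e-9)**2)      # stand-in island density (m^-2)

n_cum = np.cumsum(Ne * W_eff) * (x[1] - x[0])   # cumulative electron number
n_tot = int(n_cum[-1])
marks = [x[np.searchsorted(n_cum, k)] for k in range(1, n_tot + 1)]
print(n_tot, marks)                      # ~2 electrons and their positions
\end{verbatim}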
\section{Conclusion}
SGM has been used to investigate locally the disorder-induced potential fluctuations in a 2DEG patterned into narrow wires. The SGM images reveal that a few discrete spots dominate the total resistance, corresponding to hills in the potential landscape. In addition, several concentric circles appear in the transconductance images, which are very similar to those observed previously for real quantum dots in the Coulomb blockade regime. These features indicate the presence of localized states in the 2DEG, confined between two hills of the disorder potential, when the tip lowers locally the electron density. Additional characterizations of these dots should be done in the future, in particular source-drain bias spectroscopy of a single dot in a short constriction, to measure their charging energy and level spacing.
\section*{Acknowledgement}
This work has been supported by the French Agence Nationale de la Recherche (MICATEC project), by the Grenoble Nanosciences Foundation (Scanning-Gate Nanoelectronics project), by FRS-FNRS projects J.0067.13 and U.N025.14, by FRFC grant no. 2.4503.12, and by the ARC project "stresstronics". B.H. and F.M. are research associate and postdoctoral fellow of the FRS-FNRS, respectively.
\section*{Appendix A : Work function}
The work function of a semiconductor with Fermi level pinning at mid-gap is given by $W=\chi_e+E_g/2$ where $\chi_e$ is the electron affinity and $E_g$ the band gap. According to this formula, the In$_{0.53}$Ga$_{0.47}$As cap layer has a work function $W_{\rm InGaAs}=4.9$\,eV (similar to the value $W_{\rm GaAs}=4.8$\,eV for GaAs). The AFM tip (PointProbePlus from NanoSensors) coated with a layer of Pt$_{0.95}$Ir$_{0.05}$ alloy has a work function $W_{\rm PtIr}=5.4$\,eV, as measured by Kelvin probe force microscopy~\cite{heim-04-nl,kim-05-jkps,spadafora-10-nl} (note that $W_{\rm Pt}=5.6$\,eV and $W_{\rm Ir}=5.3$\,eV). The tip voltage $V_{\rm tip}^{\rm flat}$ that compensates for the work function difference between the tip and the surface (also called flat band potential) is therefore equal to $+0.5$\,V for an InGaAs surface ($+0.6$\,V for a GaAs surface). This value is consistent with the value $+0.6$\,V extracted from Fig.~\ref{fig2}(d).
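The arithmetic behind these numbers is compact; the one-screen check below uses assumed textbook values $\chi_e\approx4.5$\,eV and $E_g\approx0.75$\,eV for lattice-matched In$_{0.53}$Ga$_{0.47}$As, which are our own inputs and not taken from the text.

\begin{verbatim}
# Flat-band voltage from the work-function difference (Appendix A).
chi, Eg = 4.5, 0.75          # assumed values for In(0.53)Ga(0.47)As (eV)
W_InGaAs = chi + Eg / 2      # mid-gap pinning -> ~4.9 eV
W_PtIr = 5.4                 # tip work function from KPFM (eV)
print(W_InGaAs, W_PtIr - W_InGaAs)   # ~4.9 eV and ~ +0.5 V flat band
\end{verbatim}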
\section*{Appendix B : Different configuration}
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure7.png}
\caption{Similar plots as in Fig.~\ref{fig3} for a slightly different disorder potential. (a) Topography recorded just before the SGM measurements. (b) SGM images of the transconductance ${\rm d}G/{\rm d}V_{\rm tip}$ measured with a DC current bias $I=20$\,nA and an AC tip voltage modulation ${\rm d}V_{\rm tip}=40$\,mV. The DC tip voltage is indicated on each image. (c) SGM profiles extracted along the red line drawn in (a). The successive profiles recorded from $-$5\,V to $-$3\,V are shifted upwards by $0.1\times 2e^2/h$ (the dotted lines indicate the zeros).} \label{fig7}
\end{center}
\end{figure}
Fig.~\ref{fig7} shows a set of SGM images recorded during the same cool-down as for Fig.~\ref{fig3} but after a refilling of the cryostat with liquid helium. Several dots can be recognized in the two sets of images, but some are new and others have disappeared. A small electrostatic discharge may have occurred in the cryostat during this operation, explaining a change of the charge distribution in the heterostructure, resulting in a slightly different potential landscape in the 2DEG. SGM profiles across a single dot in the upper wire are plotted in Fig.~\ref{fig7}(c). The conductance oscillations are rather large (amplitude up to $0.1\times 2e^2/h$) because this dot blocks the transport through two of the three wires of the device. When the tip voltage is lowered, the Coulomb peaks move away from the center and become sharper. This reflects the faster potential change experienced by the dot when the tip is scanned with a larger negative voltage. At $-$5\,V tip voltage, the transconductance becomes flat and almost zero in the center because the local potential under the tip is so high that the current in the wire is completely blocked and the transconductance signal is suppressed.
\section*{Appendix C : Single constriction}
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 380 0 0]{figure8.png}
\caption{(a) Topography of the upper constriction connecting the network to the top 2DEG reservoir. (b) SGM images of the transconductance ${\rm d}G/{\rm d}V_{\rm tip}$ measured with a DC voltage bias $V=1$\,mV and an AC tip voltage modulation ${\rm d}V_{\rm tip}=40$\,mV. The DC tip voltage is indicated on each image. (c) SGM images of the conductance $G$ measured with an AC voltage bias $V=100\,\mu$V simultaneously to the transconductance. (d,e) SGM transconductance and conductance profiles extracted along the white line in (b,c) for tip voltages from $-$2\,V to $-$9\,V (from top to bottom). In (d), the successive profiles are shifted by $0.01\times 2e^2/h$.} \label{fig8}
\end{center}
\end{figure}
A different configuration of the disorder potential was obtained by shining light on the sample at low temperature and waiting for charge noise relaxation. In this configuration, the upper device constriction shown in Fig.~\ref{fig8}(a) exhibits the strongest SGM response and dominates the device resistance. This situation corresponds to the presence of several negative charges in this region, which are frozen at the surface or in the doping plane, and locally raise the 2DEG potential. Simultaneous conductance and transconductance SGM measurements have been carried out in this region using a dual-reference lock-in; the results are plotted in Figs.~\ref{fig8}(b,c). Up to six dots can be identified in these images, arranged in parallel with respect to the current flow and controlling the amount of current flowing between the reservoir and the device. For tip voltages below $-$6\,V, a region with zero current and zero transconductance appears in the middle of the image, with a contour delimited by portions of different circles. This region corresponds to the overlap of the blocked regions created by the different dots. Individually, the dots cannot block the current because they are arranged in parallel rather than in series and several parallel paths are available for the current. Figs.~\ref{fig8}(d,e) show that weak conductance modulations correspond to sharp transconductance oscillations: when Coulomb blockade effects are weak, transconductance measurements strongly improve their detection in SGM images. On curves recorded with tip voltages lower than $-$7\,V, the region with a flat transconductance signal corresponds exactly to the region where the conductance is zero. This result shows that a flat transconductance signal usually indicates a vanishing current in the probed region.
\section*{Appendix D : Tip-induced potential}
A direct measure of the energy change induced by the tip in the dots would require source-drain bias spectroscopy of an individual dot,~\cite{gildemeister-07-jap} but this study cannot be done here because of the multichannel character of the branched device. Alternatively, we investigate the potential induced by the tip in the 2DEG by continuously measuring the size of the concentric circles versus tip voltage while scanning a single line.~\cite{pioda-04-prl} Fig.~\ref{fig9}(a) shows the evolution of the transconductance signal along a vertical line in the middle of Fig.~\ref{fig8}(a) while sweeping the tip voltage. Following a given transconductance peak in this voltage-position diagram gives a trace $V_{\rm tip}^{\rm peak}(Y)$ that corresponds to an iso-potential line for the dot, i.e. a line where the potential induced in the dot is constant.~\cite{gildemeister-07-jap} From a theoretical point of view, the tip-induced potential along the $Y$ axis can be written:
\begin{eqnarray*}
V_{\rm induced}(Y) = \frac{C_{\rm tip,dot}(Y)}{C_{\rm dot}(Y)}\left(V_{\rm tip}-V_{\rm tip}^{\rm flat}\right)
\end{eqnarray*}
where $C_{\rm tip,dot}$ is the tip-dot capacitance, $C_{\rm dot}$ is the total dot capacitance, and $V_{\rm tip}^{\rm flat}$ is the flat band voltage. The quantity $C_{\rm tip,dot}/C_{\rm dot}$ represents the position-dependent lever-arm parameter between the tip voltage and the potential induced in the 2DEG. This parameter can be determined from an iso-potential line $V_{\rm tip}^{\rm peak}(Y)$, and then used to get the spatial dependence of the tip-induced potential at fixed tip voltage:
\begin{eqnarray*}
V_{\rm induced}(Y) \propto \frac{V_{\rm tip}-V_{\rm tip}^{\rm flat}}{V_{\rm tip}^{\rm peak}(Y)-V_{\rm tip}^{\rm flat}}
\end{eqnarray*}
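For completeness, we note how the second relation follows from the first (a short rephrasing of the argument). Along an iso-potential line, $V_{\rm induced}$ takes a fixed value $V_0$ when $V_{\rm tip}=V_{\rm tip}^{\rm peak}(Y)$, so that
\begin{eqnarray*}
\frac{C_{\rm tip,dot}(Y)}{C_{\rm dot}(Y)} = \frac{V_0}{V_{\rm tip}^{\rm peak}(Y)-V_{\rm tip}^{\rm flat}},
\end{eqnarray*}
and substituting this lever arm back at a fixed tip voltage gives
\begin{eqnarray*}
V_{\rm induced}(Y) = V_0\;\frac{V_{\rm tip}-V_{\rm tip}^{\rm flat}}{V_{\rm tip}^{\rm peak}(Y)-V_{\rm tip}^{\rm flat}},
\end{eqnarray*}
which is the announced proportionality.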
In Fig.~\ref{fig9}(a), it is not clear if the different traces correspond to different dots or to the successive charge states of the same dot, and the precise extraction of an iso-potential line is difficult. In the following, we adopt an alternative approach and compare the experimental figure with one resulting from the modeling of the potential induced by the tip in the dot with and without screening.
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure9.png}
\caption{(a) Single-line SGM scan along the $Y$ axis at $X=0.5\,\mu$m in Fig.~\ref{fig8}(a) for several tip voltages from 0 to $-$10\,V with 10\,mV steps. The transconductance ${\rm d}G/{\rm d}V_{\rm tip}$ is measured with a DC voltage bias $V=1$\,mV and an AC tip voltage modulation ${\rm d}V_{\rm tip}=40$\,mV (note that the weak tilted parallel lines covering the full plot are artifacts from a parasitic interference signal). (b) Simulation of the transconductance signal versus tip position and voltage (same axes as in (a)), using the modeled conductance curve shown in (c) and the tip-dot couplings shown in (d) without screening (left) and with screening (right). (c) Model for the quantum dot conductance versus tip voltage, when the tip is exactly above the dot. This curve simulates the gate effect around threshold and includes Coulomb blockade oscillations. (d) Model of the tip-dot coupling (or potential induced in the 2DEG plane) versus tip position, according to Eq.~\ref{eqn1} without screening (red line) and Eq.~\ref{eqn2} with screening (green line). The minimum tip-dot separation $\sqrt{X^2+Z^2}$ is 200\,nm and 500\,nm for the red and green lines, respectively.} \label{fig9}
\end{center}
\end{figure}
For this purpose, we approximate the tip as a point charge $Q \propto V_{\rm tip}$ for which an analytic solution is possible. This charge is placed in vacuum above the surface of a semiconductor with dielectric constant $\epsilon_r$. The potential created inside the semiconductor, at coordinates $X,Y,Z$ relative to the charge, reads:
\begin{eqnarray}\label{eqn1}
V(X,Y,Z) = \frac{Q}{4\pi\epsilon_0}\;\frac{2}{1+\epsilon_r}\;\frac{1}{\left(X^2+Y^2+Z^2\right)^{1/2}}
\end{eqnarray}
This expression is plotted as a red curve in Fig.~\ref{fig9}(d) for fixed values of $X$ and $Z$. If screening from the surrounding 2DEG can be neglected, this relation gives the potential variations of a dot inside the semiconductor, when a charge $Q$ is scanned in a horizontal plane above the surface with coordinates $X,Y,Z$ relative to the dot ($Z$ is the sum of the charge height above the surface and the dot depth below the surface).
To check if this unscreened $1/r$ dependence is consistent with the data in Fig.~\ref{fig9}(a), we simulate the transconductance signal in the presence of the tip. For this purpose, we model the dot conductance as shown in Fig.~\ref{fig9}(c), with a global drop due to the local depletion and weak Coulomb oscillations due to the disorder potential. This phenomenological model reproduces the typical behavior of the conductance curve versus gate voltage for a quantum dot in a disordered wire.~\cite{washburn-88-prb,staring-92-prb} The tip plays here the role of the gate. The left panel of Fig.~\ref{fig9}(b) shows the expected transconductance signal versus tip position and voltage. This plot reproduces qualitatively the experimental traces in Fig.~\ref{fig9}(a) if the minimum tip-dot distance is adjusted to $(X^2+Z^2)^{1/2}\sim 200$\,nm. Since the dot is in the 2DEG plane located at 42\,nm below the surface and the tip is at 100\,nm above the surface, i.e. $Z=142$\,nm, the horizontal distance between the scanning line and the dot is found to be $X\sim 140$\,nm. The linear asymptotic behavior of the experimental traces at large distance is well reproduced by this model without screening. Note that the successive charge states of the dot have different asymptotic slopes in the model, whereas the experimental traces are parallel to each other and may therefore correspond to different dots.
The above expression without screening predicts a very large tip-induced potential, which is not realistic. In reality, this potential is partially screened by the 2DEG, which is grounded to zero volt at the Ohmic contacts. Unfortunately, no analytical expression exists for the real case of a 2DEG embedded inside a semiconductor host and perturbed by a charge above the surface. In the following, we treat the closest situation that admits an analytical solution, i.e. a 2DEG at the surface of the semiconductor. We therefore neglect the dielectric constant of the semiconductor barrier above the 2DEG and keep it only below the 2DEG. In the regime of linear response (no depleted region in the 2DEG) and in the Thomas-Fermi approximation (short Fermi wavelength), the potential in the 2DEG at a radial distance $r$ from a point charge $Q$ placed in vacuum at distance $Z$ above the 2DEG, can be calculated with the formula:~\cite{stern-67-prl,karsono-77-jpc,krcmar-02-prb}
\begin{eqnarray*}
V(r) = \frac{Q}{4\pi\epsilon_0}\;\int_0^\infty{J_0(q\,r)\;e^{-\,q\,Z}\;\frac{2\,q}{(1+\epsilon_r)\,q+k_s}\;dq}
\end{eqnarray*}
where $k_s=m^*\,e^2/\pi\,\hbar^2\,\epsilon_0$ is the screening wave vector, $J_0$ is the zeroth-order Bessel function, and $\epsilon_r$ is the dielectric constant of the semiconductor located below the 2DEG. This formula is almost equivalent to the expression:
\begin{eqnarray*}
V(r) = \frac{Q}{4\pi\epsilon_0}\;\frac{2}{1+\epsilon_r}\;\frac{1}{Z}\;\frac{I(a)}{1+a\,I(a)\left(\left(1+r^2/Z^2\right)^{3/2}-1\right)}
\end{eqnarray*}
where the integral $I(a)=\int_0^\infty{\frac{x\,e^{-x}}{x+a}\,dx}$ is a function of the dimensionless parameter $a=k_s\,Z/(1+\epsilon_r)$. Using $m^*=0.04\,m_e$ for InGaAs gives $k_s=3$\,nm$^{-1}$, then $\epsilon_r=14$ and $Z=142$\,nm give $a=28$. In this situation, $I(a)$ can be approximated by $1/a$ and the potential becomes:
\begin{eqnarray}\label{eqn2}
V(r) = \frac{Q}{4\pi\epsilon_0}\;\frac{2}{k_s\,Z^2}\;\frac{1}{\left(1+r^2/Z^2\right)^{3/2}}
\end{eqnarray}
This expression is plotted as a green curve in Fig.~\ref{fig9}(d) and is independent of the semiconductor dielectric constant $\epsilon_r$ because of the large screening by the 2DEG located at the surface. This expression predicts a faster $1/r^3$ potential decay at large distance than Eq.~\ref{eqn1} without screening. The expected transconductance traces for this screened potential are shown in the right panel of Fig.~\ref{fig9}(b). Their nonlinear asymptotic behavior at large distance differs from the linear behavior of the experimental traces in Fig.~\ref{fig9}(a), which are better reproduced by the model without screening. This result might be explained by the very low electron density close to the depletion threshold where these traces have been measured.
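As a sanity check on the $I(a)\approx1/a$ step, the Thomas-Fermi integral can be evaluated numerically and compared with Eq.~\ref{eqn2}. The snippet below is our own quick check with the parameter values quoted above; for $a\approx28$ the ratio should stay within roughly ten percent of unity.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Compare the exact screened-potential integral with Eq. (2),
# in units of Q/(4 pi eps0), for the quoted parameters.
eps_r, ks, Z = 14.0, 3e9, 142e-9        # k_s in 1/m, Z in m

def V_exact(r):
    f = lambda q: (j0(q * r) * np.exp(-q * Z)
                   * 2 * q / ((1 + eps_r) * q + ks))
    return quad(f, 0, 60 / Z, limit=400)[0]

def V_eq2(r):
    return 2.0 / (ks * Z**2) * (1 + (r / Z)**2)**(-1.5)

for r in [0.0, 100e-9, 300e-9, 1000e-9]:
    print(r, V_exact(r) / V_eq2(r))     # ratios close to 1
\end{verbatim}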
\begin{figure}[b]
\begin{center}
\includegraphics[width=\columnwidth,clip,trim=0 0 0 0]{figure10.png}
\caption{(a) Single-line SGM scan along the $X$ axis in the middle of the device ($Y=0.65\,\mu$m in Fig.~\ref{fig2}(a)) for several tip voltages from $-$3.6 to 0\,V with 30\,mV steps. These data are recorded for a large device resistance (40\,k$\Omega$) after an electrostatic discharge. The transconductance ${\rm d}G/{\rm d}V_{\rm tip}$ is measured with an AC tip voltage modulation ${\rm d}V_{\rm tip}=40$\,mV. (b) Simulation of the transconductance signal versus tip position and voltage (same axes as in (a)), without screening (left) and with screening (right), like in Fig.~\ref{fig9}(b). Two conductance curves slightly different from that in Fig.~\ref{fig9}(c) are used to model two different dots contributing in parallel to the total conductance. The tip-induced potentials with and without screening are the same as in Fig.~\ref{fig9}(d).} \label{fig10}
\end{center}
\end{figure}
According to Eq.~\ref{eqn2}, screening by the 2DEG gives a reduction of the tip-induced potential by a factor $k_s\,Z \sim 400$, which gives a more realistic estimate of the potential in the 2DEG, as explained in the following. The charge $Q$ that dresses the SGM tip can be estimated using the sphere-plane capacitance model. The conical part of the tip above the apex also contributes to the tip-induced potential, but is not considered here. The tip is modeled by a metallic sphere of radius $R_{\rm tip}$ biased at a voltage $V_{\rm tip}$ relative to the grounded 2DEG at a gap distance $Z$. Its capacitance can be written~\cite{oyama-99-jap} $C=4\pi\epsilon_0\,R_{\rm tip}\,F(R_{\rm tip}/Z)$, where the function $F(x)\simeq(1+x)/(1+x/2)$ for $x<1$, $F(0)=1$, $F(1)=1.3$, $F(10)=2.1$. Since $R_{\rm tip}/Z<1$ for a sharp tip with small curvature radius, we can reasonably assume $F\simeq 1$. In this case, the charge is given by $Q/4\pi\epsilon_0\approx R_{\rm tip}\,V_{\rm tip}$. In this model, the screened potential in the 2DEG under the tip reads:
\begin{eqnarray*}
V(0) = \frac{2\,R_{\rm tip}}{k_s\,Z^2}\;V_{\rm tip}
\end{eqnarray*}
This potential is of the order of 3\,mV for a 3\,V tip voltage and a 30\,nm tip radius (tip with metallic coating). Since the depletion of the 2DEG is obtained experimentally for a tip voltage of a few volts in the regions where dots are observed, we can estimate the Fermi energy to be about 3\,meV in these regions. This small energy is consistent with our simulation of the disorder potential in the wire (see Fig.~\ref{fig5}(c)) where a Fermi energy as small as 5\,meV is obtained on the highest potential hill. The existence of such high potential hills explains the strong response in the SGM images and the formation of quantum dots. Note that this model assumes an infinite 2DEG, whereas the device is etched into wires, which reduces the amount of screening as compared to the model.
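These estimates are easily reproduced numerically; the lines below are a direct transcription of the quoted numbers, with $F\simeq1$ as assumed in the text.

\begin{verbatim}
import numpy as np

# Screened potential under the tip (Appendix D estimates).
e, eps0 = 1.602e-19, 8.854e-12
me, hbar = 9.109e-31, 1.055e-34
m_eff = 0.04 * me
ks = m_eff * e**2 / (np.pi * hbar**2 * eps0)   # screening wave vector
Z, R_tip, V_tip = 142e-9, 30e-9, 3.0

print(ks * 1e-9)                     # ~3 nm^-1
print(ks * Z)                        # ~400, the reduction factor
print(2 * R_tip * V_tip / (ks * Z**2) * 1e3)   # ~3 mV under the tip
\end{verbatim}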
For some disorder configurations, the different dots can be sufficiently far from each other to make the analysis of the dot characteristics easier. Fig.~\ref{fig10}(a) corresponds to such a case, with only three dots, one in each of the three parallel wires (the scanning line is in the $X$ direction along the symmetry axis of the device). This plot was recorded under very different conditions from the previous data, with a very large device resistance. About five traces are visible for the central dot and for the right dot. Two traces close to zero tip voltage come from a third dot in the left wire (this dot is rapidly depleted for negative tip voltages). In Fig.~\ref{fig10}(b), we compare these experimental traces with theoretical ones calculated using the above model. The figure shows the iso-potential traces for successive charge states of two different dots contributing in parallel to the conductance. The left panel corresponds to the unscreened tip-induced potential with $1/r$ decay (Eq.~\ref{eqn1}) and the right panel to the screened potential with $1/r^3$ decay (Eq.~\ref{eqn2}). The traces in the left panel better reproduce the shape of the experimental ones at large distance, as if the potential were not screened. However, with Eq.~\ref{eqn1}, the tip voltage has to be artificially reduced by a factor 100 to give a realistic potential change in the 2DEG on the order of the Fermi energy, whereas Eq.~\ref{eqn2} gives reasonable values.
This analysis shows that screening effects are rather difficult to understand quantitatively in highly nonuniform systems like nanoscale devices, and would require 3D self-consistent calculations~\cite{szafran-11-prb} to be correctly taken into account. In addition, part of the discrepancy may result from the point charge model used for the tip, and numerical calculations would be necessary to treat correctly the actual shape of the tip.
|
1,108,101,566,411 | arxiv | \section*{Introduction}
Vertex operator algebras (VOA) are mathematical counterparts of conformal field theory. An important family of examples comes from representations of affine Lie algebras. More precisely, if we let $\hat{\frak g}$ be an affine Lie algebra, the irreducible $\hat{\frak g}$-module $L(k,0)$ with highest weight $k \Lambda_0$, $k \in \mathbb{C}$, is a VOA, whenever $k \ne - h^\vee$, the negative of the dual Coxeter number.
The representation theory of $L(k,0)$ varies depending on values of $k \in \mathbb C$. If $k$ is a positive integer, the VOA $L(k,0)$ has only finitely many irreducible modules which coincide with the irreducible integrable $\hat{ \frak g}$-modules of level $k$, and the category of $\mathbb Z_+$-graded weak $L(k,0)$-modules is semisimple. If $k \notin \mathbb Q$ or $k < - h^\vee$, categories of $L(k,0)$-modules are quite different from those corresponding to positive integer values. (For example, see \cite{KL1, KL2}.)
For some rational values of $k$, the category of weak $L(k,0)$-modules which are in the category $\mathcal O$ as $\hat{\frak g}$-modules has a similar structure as the category of $\mathbb Z_+$-graded weak modules for positive integer values. Such rational values are called {\em admissible levels}. This notion was defined in the important works of Kac and Wakimoto (\cite{KW1, KW2}). Various cases have been studied with different generality by many authors. Adamovi{\' c} studied the case of admissible half-integer levels for type $C_l^{(1)}$ \cite{A1}. The case of all admissible levels of type $A_1^{(1)}$ was studied by Adamovi{\' c} and Milas \cite{AM}, and by Dong, Li and Mason \cite{DLM}. In his recent papers \cite{P, P1}, Per{\v s}e studied admissible half-integer levels for type $A_l^{(1)}$ and $B_l^{(1)}$.
In these developments, the $A(V)$-theory has played an important role. The associative algebra $A(V)$ associated to a vertex operator algebra $V$ was introduced by I. Frenkel and Y. Zhu (see \cite{FZ, Z}). It was shown that the irreducible modules of $A(V)$ are in one-to-one correspondence with irreducible $\mathbb{Z}_+$-graded weak modules of $V$. This fact gives an elegant method for the classification of representations of $V$, and was exploited in the works mentioned above.
In this paper, we study one-third admissible levels $- \frac 5 3 \Lambda_0, -\frac 4 3 \Lambda_0, -\frac 2 3 \Lambda_0$ for type $G_2^{(1)}$ adopting the method of \cite{A1,AM,MP,P,P1}. We first determine singular vectors (Proposition \ref{prop-sing}) and then obtain a description of the associative algebra $A(L(k,0))$ in Theorem \ref{thm-zhu-image} using the singular vectors for $k= -\frac 5 3, - \frac 4 3, - \frac 2 3$. By constructing some polynomials in the symmetric algebra of the Cartan subalgebra, we find all the possible highest-weights for irreducible $A(L(k,0))$-modules from the category $\mathcal O$ (Proposition \ref{prop-finite}). As a result, in each case of $k= -\frac 5 3, - \frac 4 3, - \frac 2 3$, we prove that there are only finitely many irreducible $A(L(k,0))$-modules from the category $\mathcal O$. Then it follows from the one-to-one correspondence in $A(V)$-theory that there are only finitely many irreducible weak $L(k,0)$-modules from the category $\mathcal O$ (Theorem \ref{thm-main}). In the case of irreducible $L(k,0)$-modules, our result provides a complete classification (Theorem \ref{thm-class}).
We also prove that such an $L(k,0)$-module is completely reducible (Theorem \ref{thm-main-second}). Thus the VOA $L(k,0)$ is {\em rational in the category $\mathcal O$} for $k= -\frac 5 3, - \frac 4 3, - \frac 2 3$. This result supports the conjecture made by Adamovi{\' c} and Milas in \cite{AM}, which suggests that $L(k,0)$'s are rational in the category $\mathcal O$ for all admissible levels $k$.
Although some of our results may be generalized to higher levels $k$, the first difficulty is in the drastic growth of complexity in computing singular vectors, as one can see in Appendix A. It seems to be necessary to find a different approach to the problem for higher levels. The first-named author will consider singular vectors for other admissible weights in his subsequent paper.
\subsection*{Acknowledgments} We thank A. Feingold and M. Primc for helpful comments.
\vskip 1cm
\section{Preliminaries}
\subsection{Vertex operator algebras}
Let $(V, Y, \mathbf{1}, \omega)$ be a vertex operator algebra (VOA). This means that $V$ is a $\mathbb{Z}$-graded vector space, $V= \bigoplus_{n\in \mathbb{Z}} V_{n}$, $Y$ is the {\it vertex operator map}, $Y(\cdot, x): V \rightarrow (\mbox{End } V ) [[x,x^{-1}]]$, $\mathbf{1} \in V_{0}$ is the {\em vacuum vector}, and $\omega \in V_{2}$ is the {\em conformal vector}, all of which satisfy the usual axioms. See \cite{ DLM, FLM, LL} for more details.
By an ideal in the vertex operator algebra $V$ we mean a subspace $I$ of $V$ satisfying $Y(a,x)I \subseteq I[[x,x^{-1}]]$ for any $a \in V$. Given an ideal $I$ in $V$ such that $\mathbf{1} \notin I$, $\omega \notin I$, the quotient $V/I$ naturally becomes a vertex operator algebra. Let $(M, Y_M)$ be a weak module for the vertex operator algebra $V$. We thus have a vector space $M$ and a map $Y_M(\cdot, x): V \rightarrow (\mbox{End }M)[[x,x^{-1}]]$, which satisfy the usual set of axioms (cf. \cite{DLM}). For a fixed element $a \in V$, we write $Y_M(a,x) = \sum_{m \in\mathbb{Z}} a(m) x^{-m-1} $, and for the conformal element $\omega$ we write $Y_M(\omega,x) = \sum_{m \in\mathbb{Z}} \omega(m) x^{-m-1} = \sum_{m \in\mathbb{Z}} L_m x^{-m-2}$. In particular, $V$ is a weak module over itself with $Y=Y_V$.
A {\it $\mathbb Z_+$-graded weak $V$-module} is a weak $V$-module $M$ together with a $\mathbb Z_+$-gradation $M= \bigoplus_{n=0}^\infty M_n$ such that
\[ a(m) M_n \subseteq M_{n+r-m-1} \qquad \text{ for } a \in V_{r} \mbox{ and }m,n,r \in \mathbb Z, \] where $M_n =0$ for $n<0$ by definition. A weak $V$-module $M$ is called a {\it $V$-module} if $L_0$ acts semisimply on $M$ with a decomposition into $L_0$-eigenspaces $M= \bigoplus_{\alpha \in \mathbb C} M_{\alpha}$ such that for any $\alpha \in \mathbb C$, $\dim M_{\alpha} < \infty$ and $M_{\alpha +n}=0$ for $n \in \mathbb Z$ sufficiently small.
We define bilinear maps $*: V \times V \rightarrow V$ and $\circ: V \times V \rightarrow V$ as follows. For any homogeneous $a \in V_n$, we write $\mathrm {deg}( a)=n$, and for any $b \in V$, we define \[ a*b = \mathrm{Res}_x \frac {(1+x)^{\mathrm{deg} \, a}} x \ Y(a,x) b, \] and \[ a \circ b = \mathrm{Res}_x \frac {(1+x)^{\mathrm{deg} \, a}} {x^2} \ Y(a,x) b, \] and extend both definitions by linearity to $V \times V$. Denote by $O(V)$ the linear span of elements of the form $a \circ b$, and by $A(V)$ the quotient space $V/O(V)$. For $a \in V$, denote by $[a]$ the image of $a$ under the projection of $V$ onto $A(V)$. The map $a \mapsto [a]$ will be called {\em Zhu's map}. The multiplication $*$ induces the multiplication on $A(V)$, and $A(V)$ has a structure of an associative algebra. This fact can be found in \cite{FZ,Z}.
\begin{Prop} \cite{FZ} \label{prop-FZ}
Let $I$ be an ideal of the vertex operator algebra $V$ such that $\mathbf{1} \notin I$, $\omega \notin I$. Then the associative algebra $A(V/I)$ is isomorphic to $A(V)/A(I)$, where $A(I)$ is the image of $I$ in $A(V)$.
\end{Prop}
Given a weak module $M$ and homogeneous $a \in V$, we recall that we write $Y_M(a,x) = \sum_{m\in \mathbb{Z}} a(m) x^{-m-1}$. We define $o(a)= a({\mathrm{deg} \, a -1}) \in \mathrm{End} (M)$ and extend this map linearly to $V$.
\begin{Thm}\label{thm-Z} \cite{Z}
\begin{enumerate}
\item Let $M= \bigoplus_{n=0}^\infty M_n$ be a $\mathbb Z_+$-graded weak $V$-module. Then $M_0$ is an $A(V)$-module defined as follows: \[ [a] \cdot v= o(a) v \] for any $a \in V$ and $v \in M_0$.
\item Let $U$ be an $A(V)$-module. Then there exists a $\mathbb Z_+$-graded weak $V$-module $M$ such that the $A(V)$-modules $M_0$ and $U$ are isomorphic.
\item The equivalence classes of the irreducible $A(V)$-modules and the equivalence classes of the irreducible $\mathbb Z_+$-graded weak $V$-modules are in bijective correspondence.
\end{enumerate}
\end{Thm}
\medskip
\subsection{Affine Lie algebras}
Let $\frak g$ be a finite-dimensional simple Lie algebra over $\mathbb C$, with a triangular decomposition $\frak g = \frak n_- \oplus \frak h \oplus \frak n_+$. Let $\Delta$ be the root system of $(\frak g, \frak h)$, $\Delta_+ \subset \Delta$ the set of positive roots, $\theta$ the highest root and $(\cdot , \cdot ): \frak g \times \frak g \rightarrow \mathbb C$ the Killing form, normalized by the condition $(\theta, \theta) =2$. Denote by $\Pi = \{ \alpha_1, ... , \alpha_l \}$ the set of simple roots of $\frak g$, and by $\Pi^\vee = \{ h_1, ... , h_l \}$ the set of simple coroots of $\frak g$. The affine Lie algebra $\hat{\frak g}$ associated to $\frak g$ is the vector space
\[ \hat{\frak g} = \frak g \otimes \mathbb C[t, t^{-1}] \oplus \mathbb C K \] equipped with the bracket operation\[ [a\otimes t^m, b\otimes t^n] = [a,b]\otimes t^{m+n} + m(a,b)\delta_{m+n,0}K, \hspace{1cm} a,b \in \mathfrak{g}, m,n \in \mathbb{Z}, \] together with the condition that $K$ is a nonzero central element.
Let $h^\vee$ be the dual Coxeter number of $\hat{\frak g}$. Let $\hat{\frak g} = \hat{\frak n}_- \oplus \hat{\frak h} \oplus \hat{\frak n}_+$ be the corresponding triangular decomposition of $\hat{\frak g}$. Denote by $\widehat{\Delta}$ the set of roots of $\hat{\frak g}$, by $\widehat{\Delta}_+$ the set of positive roots of $\hat{\frak g}$, and by $\widehat{\Pi}$ the set of simple roots of $\hat{\frak g}$. We also denote by $\widehat{\Delta}^{\mathrm {re}}$ the set of real roots of $\hat{\frak g}$ and let $\widehat{\Delta}^{\mathrm{re}}_+ = \widehat{\Delta}^{\mathrm{re}} \cap \widehat{\Delta}_+$. The coroot corresponding to a real root $\alpha \in \widehat{\Delta}^{\mathrm{re}}$ will be denoted by $\alpha^\vee$. Let $\widehat{Q}= \bigoplus_{\alpha \in \widehat{\Pi}} \mathbb Z \, \alpha$ be the root lattice, and let $\widehat{Q}_+= \bigoplus_{\alpha \in \widehat{\Pi}} \mathbb Z_+ \, \alpha \subset \widehat{Q}$. For any $\lambda \in \hat{\frak h}^*$, we set
\[ D(\lambda) = \left \{ \lambda - \alpha \ | \ \alpha \in \widehat{Q}_+ \right \}. \]
We say that a $\hat{\frak g}$-module $M$ belongs to the {\it category $\mathcal O$} if the Cartan subalgebra $\hat{\frak h}$ acts semisimply on $M$ with finite-dimensional weight spaces and there exists a finite number of elements $\nu_1, ... , \nu_k \in \hat{\frak h}^*$ such that $\nu \in \bigcup_{i=1}^k D(\nu_i)$ for every weight $\nu$ of $M$. We denote by $M(\lambda)$ the Verma module for $\hat{\frak g}$ with highest weight $\lambda \in \hat{\frak h}^*$, and by $L(\lambda)$ the irreducible $\hat{\frak g}$-module with highest weight $\lambda$.
Let $U$ be a $\frak g$-module, and let $k \in \mathbb C$. We set $\hat{\frak g}_+ = \frak g \otimes t \mathbb C[t]$
and $\hat{\frak g}_- = \frak g \otimes t^{-1} \mathbb C[t^{-1}]$. Let $\hat{\frak g}_+$
act trivially on $U$ and $K$ as scalar multiplication by $k$. Considering $U$ as a $\frak g \oplus \mathbb C K \oplus \hat{\frak g}_+$-module, we have the induced $\hat{\frak g}$-module \[ N(k, U) = \mathcal{U}(\hat{\frak g}) \otimes_{\mathcal{U}(\frak g \oplus \mathbb C K \oplus \hat{\frak g}_+)} U. \]
For a fixed $\mu \in \frak h^*$, denote by $V(\mu)$ the irreducible highest weight $\frak g$-module with highest weight $\mu$. Denote by $P_+$ the set of dominant integral weights of $\frak g$, and by $\omega_1, ... , \omega_l \in P_+$ the fundamental weights of $\frak g$. We will write $N(k, \mu) = N(k, V(\mu))$. Denote by $J(k, \mu)$ the maximal proper submodule of $N(k, \mu)$ and $L(k, \mu)= N(k, \mu)/ J(k, \mu)$. We define $\Lambda_0 \in \hat{\frak h}^*$ by $\Lambda_0(K)=1$ and $\Lambda_0 (h)=0$ for any $h \in \frak h$. Then $N(k, \mu)$ is a highest-weight module with highest weight $k \Lambda_0+ \mu$, and a quotient of the Verma module $M(k \Lambda_0+\mu)$. We also obtain $L(k, \mu) \cong L(k \Lambda_0+ \mu)$.
\medskip
\subsection{Admissible weights}
Let $\widehat{\Delta}^{\vee, \mathrm{re}}$ (respectively, $\widehat{\Delta}^{\vee, \mathrm{re}}_+$) be the set of real (respectively, positive real) coroots of $\hat{\frak g}$, and $\widehat{\Pi}^\vee$ the set of simple coroots. For $\lambda \in \hat{\frak h}^*$, we define
\[ \widehat{\Delta}^{\vee, \mathrm{re}}_\lambda = \{ \alpha^\vee \in \widehat{\Delta}^{\vee, \mathrm{re}} \, | \, \langle \lambda, \alpha^\vee \rangle \in \mathbb Z \}, \qquad \text{ and } \qquad
\widehat{\Delta}^{\vee, \mathrm{re}}_{\lambda , +} = \widehat{\Delta}^{\vee, \mathrm{re}}_\lambda \cap \widehat{\Delta}^{\vee, \mathrm{re}}_+,\] and we set \[
\widehat{\Pi}^\vee_\lambda = \{ \alpha^\vee \in \widehat{\Delta}^{\vee, \mathrm{re}}_{\lambda , +} \, | \, \alpha^\vee \text{ is not decomposable into a sum of elements from } \widehat{\Delta}^{\vee, \mathrm{re}}_{\lambda , +} \}. \]
Let $\hat W$ denote the Weyl group of $\hat{\frak g}$. For each $\alpha \in \widehat{\Delta}^{\mathrm{re}}$, we have a reflection $r_\alpha \in \hat W$.
Define $\rho \in \hat{\frak h}^*$ in the usual way, and we recall the shifted action of an element $w \in \hat W$ on $\hat{\frak h}^*$, given by $w \cdot \lambda = w (\lambda + \rho) - \rho$.
A weight $\lambda \in \hat{\frak h}^*$ is called {\em admissible} if \[ \langle \lambda + \rho , \alpha^\vee \rangle \notin - \mathbb Z_{+} \quad \text{ for all } \alpha^\vee \in \widehat{\Delta}^{\vee, \mathrm{re}}_+ \qquad \text{ and } \qquad \mathbb Q \widehat{\Delta}^{\vee, \mathrm{re}}_\lambda = \mathbb Q \widehat{\Pi}^\vee. \]
The irreducible $\hat{\frak g}$-module $L(\lambda)$ is called {\em admissible} if the weight $\lambda \in \hat{\frak h}^*$ is admissible.
Given a $\hat{\frak g}$-module $M$ from the category $\mathcal O$, we call a weight vector $v \in M$ a {\it singular vector} if $\hat{\frak n}_+ . v = 0$.
\begin{Prop} \label{prop-KW} \cite{KW1}
Let $\lambda$ be an admissible weight. Then
\[ L(\lambda) = M(\lambda) \big / \left ( \sum_{\alpha^\vee \in \widehat{\Pi}^\vee_\lambda} \mathcal{U}(\hat{\frak g}) v_{\alpha} \right ) , \] where $v_{\alpha} \in M(\lambda)$ is a singular vector of weight $r_{\alpha} \cdot \lambda$.
\end{Prop}
\begin{Prop} \label{prop-KW2}\cite{KW2} Let $M$ be a $\hat{\frak g}$-module from the category $\mathcal O$. If every irreducible subquotient $L(\nu)$ of $M$ is admissible, then $M$ is completely reducible.
\end{Prop}
\medskip
\subsection{$N(k,0)$ and $L(k,0)$ as VOA's}
We identify the one-dimensional trivial $\frak g$-module $V(0)$ with $\mathbb C$. Write $\mathbf{1}=1 \otimes 1 \in N(k,0)$. The $\hat{\frak g}$-module $N(k,0)$ is spanned by the elements of the form \[ a_1(-n_1-1) \cdots a_m(-n_m-1) \mathbf{1},\] where $a_1, ... , a_m \in \frak g$ and $n_1, ... , n_m \in \mathbb Z_{+}$, with $a(n)$ denoting the element $a \otimes t^n$ for $a \in \frak g$ and $n \in \mathbb Z$.
The vector space $N(k,0)$ admits a VOA structure, which we now describe.
The vertex operator map $Y(\cdot , x) : N(k,0) \rightarrow {\rm{End}} (N(k,0)) [[x, x^{-1}]]$ is uniquely determined by defining $Y(\mathbf{1}, x)$ to be the identity operator on $N(k,0)$ and \[ Y( a(-1) \mathbf{1}, x) = \sum_{n \in \mathbb Z} a(n) x^{-n-1} \quad \text{ for } a \in \frak g . \] In the case that $k \neq - h^\vee$, the module $N(k,0)$ has a conformal vector \[ \omega = \frac 1 {2(k+h^\vee)} \sum_{i=1}^{\dim \frak g} (a^i (-1))^2 \mathbf{1},\] where $\{a^i\}_{i=1, ... , \dim \frak g}$ is an arbitrary orthonormal basis of $\frak g$ with respect to the normalized Killing form $(\cdot , \cdot )$. Then it is well known that the quadruple $( N(k,0), Y, \mathbf{1}, \omega)$ defined above is a vertex operator algebra.
\begin{Prop} \cite{FZ} The associative algebra $A(N(k,0))$ is canonically isomorphic to $\mathcal{U}(\frak g)$. The isomorphism is given by $F: A(N(k,0)) \rightarrow \mathcal{U}(\frak g)$, \[ F( [a_1(-n_1-1) \cdots a_m(-n_m-1) \mathbf{1}]) = (-1)^{n_1+ \cdots +n_m} a_1 \cdots a_m,\] for $a_1, ... , a_m \in \frak g$ and $n_1, ... , n_m \in \mathbb Z_{+}$.
\end{Prop}
Since every $\hat{\frak g}$-submodule of $N(k,0)$ is also an ideal in the VOA $N(k,0)$, the module $L(k,0)$ is a VOA for any $k \neq -h^\vee.$
\begin{Prop} \label{prop-important} \cite{P}
Assume that the maximal $\hat{\frak g}$-submodule of $N(k,0)$ is generated by a singular vector $v_0$. Then we have
\[ A(L(k,0)) \cong \mathcal{U}(\frak g) \big / \langle F([v_0]) \rangle , \] where $\langle F([v_0]) \rangle$ is the two-sided ideal of $\mathcal{U}(\frak g)$ generated by $F([v_0])$. In particular, a $\frak g$-module $U$ is an $A(L(k,0))$-module if and only if $F([v_0]) U=0$.
\end{Prop}
\vskip 1 cm
\section{Affine Lie algebra of type $G_2^{(1)}$}
\subsection{Admissible weights}
Let \[ \Delta = \left \{ \begin{array}{lll} \pm \frac{1}{\sqrt{3}}(\epsilon_1-\epsilon_2),& \pm \frac{1}{\sqrt{3}}(\epsilon_1-\epsilon_3),& \pm \frac{1}{\sqrt{3}}(\epsilon_2-\epsilon_3), \\ \pm \frac{1}{\sqrt{3}}(2\epsilon_1-\epsilon_2-\epsilon_3),& \pm \frac{1}{\sqrt{3}}(2\epsilon_2-\epsilon_1-\epsilon_3),& \pm \frac{1}{\sqrt{3}}(2\epsilon_3-\epsilon_1-\epsilon_2) \end{array} \right \} \] be the root system of type $G_2$. We fix the set of positive roots \[ \Delta_+ = \left \{ \begin{array}{lll} \frac{1}{\sqrt{3} }(\epsilon_1-\epsilon_2) ,& \frac{1}{\sqrt{3} }(\epsilon_3-\epsilon_1), & \frac{1}{\sqrt{3} }(\epsilon_3-\epsilon_2), \\ \frac{1}{\sqrt{3} }(-2\epsilon_1+\epsilon_2+\epsilon_3) , & \frac{1}{\sqrt{3} }(-2\epsilon_2+\epsilon_1+\epsilon_3), & \frac{1}{\sqrt{3} }(2\epsilon_3 - \epsilon_1-\epsilon_2) \end{array} \right \}. \] Then the simple roots are $\alpha= \frac 1 {\sqrt{3}} ( \epsilon_1-\epsilon_2)$ and $\beta= \frac 1 {\sqrt{3}} (-2\epsilon_1+\epsilon_2+\epsilon_3)$, and the highest root is $\theta = \frac 1 {\sqrt{3}} (2\epsilon_3-\epsilon_1-\epsilon_2) = 3\alpha + 2\beta$.
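These coordinates can be verified mechanically; the short check below (our own verification, with $\epsilon_1,\epsilon_2,\epsilon_3$ the standard basis of $\mathbb R^3$) confirms the normalization $(\theta,\theta)=2$, the short-root length $(\alpha,\alpha)=\frac 2 3$, and the relation $\theta=3\alpha+2\beta$.

\begin{verbatim}
import numpy as np

# Check the G_2 root data in the epsilon coordinates.
e1, e2, e3 = np.eye(3)
alpha = (e1 - e2) / np.sqrt(3)           # short simple root
beta = (-2 * e1 + e2 + e3) / np.sqrt(3)  # long simple root
theta = (2 * e3 - e1 - e2) / np.sqrt(3)  # highest root

print(np.allclose(theta, 3 * alpha + 2 * beta))      # True
print(theta @ theta, alpha @ alpha, beta @ beta)     # 2, 2/3, 2
print(alpha @ beta)                                  # -1
\end{verbatim}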
Let $\mathfrak{g}$ be the simple Lie algebra over $\mathbb{C}$, associated with the root system of type $G_2$. Let $E_{10},E_{01},F_{10},F_{01},H_{10},H_{01}$ be Chevalley generators of $\mathfrak{g}$, where $E_{10}$ is a root vector for $\alpha$, $E_{01}$ is a root vector for $\beta$, and so on. We fix the root vectors:
\begin{equation} \label{xx}
\begin{split}
E_{11}& = [E_{10}, E_{01}],\\
E_{21}& = \frac{1}{2} [E_{11},E_{10}] = \frac{1}{2} [[E_{10}, E_{01}], E_{10}],\\
E_{31}& = \frac{1}{3} [ E_{21}, E_{10}] = \frac{1}{6} [[[E_{10}, E_{01}], E_{10}], E_{10}]\\
E_{32}& = [ E_{31}, E_{01}]=\frac{1}{6} [[[[E_{10}, E_{01}], E_{10}], E_{10}],E_{01}],\\
F_{11}& = [F_{01}, F_{10}],\\
F_{21}& = \frac{1}{2} [F_{10} , F_{11}]=\frac{1}{2} [F_{10} , [F_{01}, F_{10}]],\\
F_{31}& =\frac{1}{3} [F_{10} ,F_{21}]=\frac{1}{6} [F_{10} ,[F_{10} , [F_{01}, F_{10}]]],\\
F_{32}& = [F_{01}, F_{31}]=\frac{1}{6} [F_{01}, [F_{10} ,[F_{10} , [F_{01}, F_{10}]]]].
\end{split}
\end{equation}\\
We set $H_{ij} = [E_{ij}, F_{ij}]$ for any positive root $i \alpha+ j \beta \in \Delta_+$. Then one can check that $H_{ij} $ is the coroot corresponding to $i \alpha+ j \beta$, i.e. $H_{ij}= (i \alpha+ j \beta)^\vee$. For a complete multiplication table, we refer the reader to Table 22.1 in \cite[p.346]{FH}, where we have
\[ \begin{array}{llllll}
X_1= E_{10}, & X_2= E_{01}, & X_3= E_{11}, & X_4= - E_{21}, & X_5= - E_{31}, & X_6= - E_{32}, \\
Y_1= F_{10}, & Y_2= F_{01}, & Y_3= F_{11}, & Y_4= - F_{21}, & Y_5=- F_{31}, & Y_6= - F_{32}.
\end{array} \]
All admissible weights for arbitrary affine Lie algebras have been completely classified in \cite{KW2}. The next lemma provides a description of the ``vacuum'' admissible weights for $G_2^{(1)}$ at one-third levels. It is a special case of Proposition 1.2 in \cite{KW3}; we provide a proof for completeness.
\begin{Lem} \label{lem-adm}
The weight $\lambda_{3n+i} = (n-2+\frac{i}{3})\Lambda_0$ is admissible for $n \in \mathbb{Z}_{+}, i = 1,2,$ and we have
\[ \widehat{ \Pi }_{\lambda_{3n+i}}^{\vee} = \{ (\delta- (2 \alpha+\beta) )^\vee, \alpha^\vee, \beta^\vee \}, \] where $\delta$ is the canonical imaginary root.
Furthermore,
\begin{eqnarray}
& & \langle \lambda_{3n+i}+\rho, \gamma^\vee \rangle = 1 \quad \textit{ for } \gamma=\alpha,\beta ; \nonumber \\
& & \langle \lambda_{3n+i}+\rho, (\delta-(2 \alpha+\beta))^\vee \rangle = 3n+i+1 \quad \textit{ for } i=1,2. \nonumber
\end{eqnarray}
\end{Lem}
\begin{proof}
We have to show
\begin{align}
&\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle \notin -\mathbb{Z}_+ \qquad \text{ for any }\gamma \in \widehat{\Delta}_+^{\mathrm{re}} \nonumber \\
\text{ and } \qquad \qquad & \mathbb{Q} \widehat{\Delta}_{\lambda_{3n+i}}^{\vee, \mathrm{re}} = \mathbb{Q} \widehat{ \Pi }^{\vee}. \nonumber
\end{align}
Any positive real root ${\gamma} \in \widehat{ \Delta}_+^{\mathrm{re}} $ of $\hat{\mathfrak{g} }$ is of the form ${\gamma} = \bar{\gamma} + m \delta$, for $m > 0$ and $\bar{\gamma} \in \Delta$, or $m=0$ and $\bar{\gamma} \in \Delta_+$. Denote by $\bar{\rho}$ the sum of fundamental weights of $\mathfrak{g}$. Then we can choose $\rho = h^\vee \Lambda_0 + \bar{\rho} = 4 \Lambda_0 + \bar{\rho}$.
We have
\begin{eqnarray*}
\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle & = & \bigl{\langle} \bigl{(}n+2+\tfrac{i}{3}\bigr{)} \Lambda_0 + \bar{\rho}, (\bar{\gamma}+m\delta)^\vee \bigr{\rangle} \\
& = & \tfrac{2}{(\bar{\gamma},\bar{\gamma})} \bigl{(}m\bigl{(}n+2+\tfrac{i}{3} \bigr{)} + (\bar{\rho}, \bar{\gamma}) \bigr{)}.
\end{eqnarray*}
If $m=0$, then it is trivial that $\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle \notin - \mathbb{Z}_+$.
Suppose that $m \ge 1$. If $(\bar{\gamma}, \bar{\gamma}) = 2$ and $m$ $\not \equiv 0\ (\mathrm{mod}\ 3)$, then $\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle \notin - \mathbb{Z}_+$.
If $(\bar{\gamma}, \bar{\gamma}) = 2$, and $m$ $ \equiv 0\ (\mathrm{mod}\ 3)$, then $m\ge3$, and since $(\bar{\rho}, \bar{\gamma}) \ge -3$ for any $\bar{\gamma} \in \Delta$, we have
\[ \langle \lambda_{3n+i} + \rho, {\gamma} ^\vee \rangle = m \bigl{(} n+2+\tfrac{i}{3} \bigr{)} + (\bar{\rho}, \bar{\gamma}) \ge 3 \bigl{(}n +2+ \tfrac{1}{3}) -3= 3n+4 \ge 4,\]
which implies $\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle \notin - \mathbb{Z}_+$.
If $(\bar{\gamma}, \bar{\gamma}) = \frac{2}{3}$, then $(\bar{\rho}, \bar{\gamma}) \ge -\frac{5}{3}$. We have
\[ \langle \lambda_{3n+i} + \rho, {\gamma} ^\vee \rangle = 3 \bigl{(} m \bigl{(} n+2+\tfrac{i}{3} \bigr{)} + (\bar{\rho}, \bar{\gamma})\bigr{)} \ge 3 \bigl{(} n+\tfrac{7}{3} + (\bar{\rho}, \bar{\gamma} )\bigr{)} \ge 3 \bigl{(}n + \tfrac{7}{3} -\tfrac{5}{3} \bigr{)}= 3n+2 \ge 2,\]
which implies $\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle \notin - \mathbb{Z}_+$.
Thus, $\langle \lambda_{3n+i} + \rho, {\gamma}^\vee \rangle \notin -\mathbb{Z}_+$ for any ${\gamma} \in \widehat{\Delta}_+^{\mathrm{re}}.$
One can easily see that \begin{eqnarray*} \widehat{\Delta}^{\vee, \mathrm{re}}_{\lambda_{3n+i}, +} &=&
\{ (m\delta + \bar{\gamma})^\vee \ | \ m>0, \ m \equiv 0 \ (\mathrm{mod}\ 3), \ (\bar{\gamma}, \bar{\gamma})= 2 \} \\ & & \cup \
\{ (m\delta + \bar{\gamma})^\vee \ | \ m>0, \ (\bar{\gamma}, \bar{\gamma})= 2/ 3 \} \cup \{ \gamma^\vee \ | \ \gamma \in \Delta_+ \} . \end{eqnarray*}
Then we obtain \[ \widehat{ \Pi }_{\lambda_{3n+i}}^{\vee} = \{ (\delta- (2 \alpha+ \beta) )^\vee, \alpha^\vee, \beta^\vee \}, \]
and we see that $\mathbb{Q} \widehat{\Delta}_{\lambda_{3n+i}}^{\vee, \mathrm{re}} = \mathbb{Q} \widehat{ \Pi }_{\lambda_{3n+i}}^{\vee} = \mathbb{Q} \widehat{ \Pi }^{\vee}. $
Through direct calculations, we get
\[ \begin{aligned}
& \langle \lambda_{3n+i}+\rho,\gamma^\vee \rangle = 1 \text{ for } \gamma=\alpha,\beta, \text { and } \\
& \langle \lambda_{3n+i}+\rho, (\delta-(2 \alpha+\beta))^\vee \rangle = 3n+i+1.
\end{aligned} \]
\end{proof}
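The pairing with $(\delta-(2\alpha+\beta))^\vee$ at the end of the proof can be double-checked with exact rational arithmetic. The small symbolic verification below is our own; it uses $(\bar\rho,\alpha)=\frac 1 3$ and $(\bar\rho,\beta)=1$, which follow from $\langle\bar\rho,\alpha^\vee\rangle=\langle\bar\rho,\beta^\vee\rangle=1$ and the root lengths above, together with the short-root normalization $2/(\bar\gamma,\bar\gamma)=3$.

\begin{verbatim}
import sympy as sp

# Verify <lambda_{3n+i} + rho, (delta - (2a+b))^vee> = 3n + i + 1.
n, i = sp.symbols('n i')
rho_pair = 2 * sp.Rational(1, 3) + 1    # (rho_bar, 2*alpha + beta) = 5/3
coeff_K = n + 2 + i / sp.Integer(3)     # coefficient of Lambda_0
pairing = 3 * (coeff_K - rho_pair)      # factor 3 from the short coroot
print(sp.expand(pairing))               # -> 3*n + i + 1
\end{verbatim}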
\vskip 1 cm
\subsection{Singular Vectors}
In what follows, let $\hat{\mathfrak{g}}$ be the affine Lie algebra of type $G_2^{(1)}$ and $\mathcal{U}(\hat{ \frak g})$ its universal enveloping algebra.
We write $X^i(-m)=X(-m)^i$ for elements in $\mathcal{U}(\hat{ \frak g})$. We set \[ \begin{array}{rl}
a = & E_{21}(-1),\\
b = & E_{31}(-1) E_{11}(-1)
-\ E_{32}(-1) E_{10}(-1), \\
c = & E^2_{31}(-1) E_{01}(-1)
-\ E_{32}(-1) E_{31}(-1) H_{01}(-1)
-\ E^2_{32}(-1) F_{01}(-1), \\
w = & E_{31}(-1) E_{32}(-2)
-\ E_{32}(-1) E_{31}(-2), \end{array}
\] and define \[ u = \tfrac 1 3 a^2 - b, \quad \text{ and } \quad v = \tfrac 2 9 a^3 -ab -3c .\]
The following proposition determines singular vectors for the first three admissible weights, i.e. $- \frac 5 3 \Lambda_0, -\frac 4 3 \Lambda_0, -\frac 2 3 \Lambda_0$, respectively.
\begin{Prop} \label{prop-sing}
The vector $v_k \in N(k,0)$ is a singular vector for the given value of $k$:
\[ v_k = \left \{ \begin{array}{ll}
u . \mathbf{1} & \mbox{ for } \ k = -\frac{5}{3}, \\
(v+w) . \mathbf{1} & \mbox{ for } \ k = -\frac{4}{3}, \\
u(v - w) . \mathbf{1} & \mbox{ for } \ k = -\frac{2}{3} .
\end{array} \right . \]
\end{Prop}
The proof will be given in Appendix A. As one can see in the proof, the computational difficulty increases as the level $k$ goes up. A different approach will be used in a subsequent work of the first-named author on higher levels.
\subsection{Description of Zhu's algebra}
\begin{Prop}\label{prop-sing-max}
The maximal $\hat{\mathfrak{g}}$-submodule $J(k,0)$ of $N(k,0)$ is generated by the vector $v_k$ for $k=-\frac{5}{3}$, $-\frac{4}{3}$, $-\frac{2}{3}$, respectively, where $v_k$'s are given in Proposition \ref{prop-sing}.
\end{Prop}
\begin{proof} Let $\lambda_{3n+i}=(-2+n+\tfrac{i}{3})\Lambda_0= k \Lambda_0$ as before. It follows from Proposition \ref{prop-KW} and Lemma \ref{lem-adm} that the maximal submodule of the Verma module $M(\lambda_{3n+i})$ is generated by three singular vectors with weights \[r_{\delta-(2\alpha+\beta)} \cdot \lambda_{3n+i}, \quad r_\alpha \cdot \lambda_{3n+i}, \quad r_\beta \cdot \lambda_{3n+i} , \qquad \text{respectively}. \] We consider the three cases \[ n=0,i=1, k=-5/3; \qquad n=0,i=2, k=-4/3; \qquad n=1,i=1, k=-2/3 . \]
In each case, there is a singular vector $u_{k} \in M(\lambda_{3n+i})$ of weight $r_{\delta-(2\alpha+\beta)}.\lambda_{3n+i},$
whose image under the projection of $M(\lambda_{3n+i})$ onto $N(k, 0)$ is the singular vector $v_k$ given in Proposition \ref{prop-sing}.
The other singular vectors have weights \[ \begin{array}{l} r_{\alpha}\cdot \lambda_{3n+i} = \lambda_{3n+i} - \langle \lambda_{3n+i}+\rho,\alpha^{\vee}\rangle\alpha = \lambda_{3n+i} - \alpha , \quad \mbox{ and }\\ r_{\beta}\cdot \lambda_{3n+i} = \lambda_{3n+i} - \langle \lambda_{3n+i}+\rho,\beta^{\vee}\rangle\beta = \lambda_{3n+i} - \beta,\end{array} \] so the images of these vectors under the projection of $M(\lambda_{3n+i})$ onto $N(k, 0)$ are $0$ by definition. Therefore the maximal submodule of $N(k, 0)$ is generated by the singular vector $v_k$, i.e. $J(k, 0) = \mathcal{U}(\hat{\mathfrak{g}})v_k$.
\end{proof}
Now we consider the image of a singular vector $v_k$ under Zhu's map \[ [\cdot ]: N(k,0)\rightarrow A(N(k,0)) \cong \mathcal{U}(\mathfrak{g}),\] which is defined in Section 1. We recall that the vertex algebra $N(k,0)$ is (linearly) isomorphic to the associative algebra $\mathcal{U}(\hat{\mathfrak{g}}_-)$. We thus have an induced map from $\mathcal{U}(\hat{\mathfrak{g}}_-)$ to $\mathcal{U}(\mathfrak{g})$ and a commutative diagram of linear maps:
\medskip
\begin{center}
$\begin{array}{ccc}
\mathcal{U}(\hat{\mathfrak{g}}_-) & \simeq & N(k,0) \\
\downarrow & &\downarrow \\
\mathcal{U}(\mathfrak{g}) & \simeq & A(N(k,0))
\end{array}$
\end{center}
\medskip
We will identify $N(k,0)$ with $\mathcal{U}(\mathfrak{\hat{g}}_-)$ and $A(N(k,0))$ with $\mathcal{U}(\mathfrak{g})$. We have:
$$
\begin{array}{l}
[a] = E_{21},\\
[b] = E_{31} E_{11} - E_{32} E_{10} ,\\
[c] =E^2_{31}E_{01} - E_{32}E_{31}H_{01} - E^2_{32}F_{01}.
\end{array}$$
We also have:
\begin{equation} \label{eqn-bracket} \begin{array}{l}
[u]= \tfrac{1}{3} [a]^2 - [b] , \\
[v]= \tfrac{2}{9} [a]^3 - [a][b] - 3 [c] , \\
[w]=0 , \\
[u(v-w)] = [u][v]= \tfrac{2}{27}[a]^5-\tfrac{5}{9}[a]^3[b]-[a]^2[c]+[a][b]^2+3[b][c] .
\end{array} \end{equation}
The following theorem is now a consequence of Propositions \ref{prop-important} and \ref{prop-sing-max}.
\begin{Thm} \label{thm-zhu-image}
The associative algebra $A(L(k,0))$ is isomorphic to $\mathcal{U}(\mathfrak{g})/I_{k}$, where $ I_{k}$ is the two-sided ideal of $\mathcal{U}(\mathfrak{g})$ generated by the vector $[v_k]$, where
\[ [v_k] = \left \{ \begin{array}{ll}
[u] & \quad \mbox{ for } k = -\frac{5}{3}, \\
[v] & \quad \mbox{ for } k = -\frac{4}{3}, \\
[uv] & \quad \mbox{ for } k = -\frac{2}{3}.
\end{array} \right . \]
\end{Thm}
\vskip 1 cm
\section{Irreducible modules}
In this section we adopt the method from \cite{A1,AM,MP,P,P1} in order to classify irreducible $A(L(k, 0))$-modules from the category $\mathcal{O}$ by solving certain systems of polynomial equations.
\subsection{Modules for associative algebra $A(L(k, 0))$. }
Denote by $X_L$ the adjoint action of $X \in \mathfrak{g}$ on $\mathcal{U}(\mathfrak{g})$, defined by $X_Lf = { [ X, f]}$ for $f \in \mathcal{U}(\mathfrak{g})$. We also write $(ad \, X)f = X_Lf= [X,f]$. Then $ad \, X$ is a derivation on $\mathcal{U}(\mathfrak{g})$. Let $R(k)$ be the $\mathcal{U}(\mathfrak{g})$-submodule of $\mathcal{U}(\mathfrak{g})$ generated by the vector $[v_{k}]$, where $[v_k]$ is given in Theorem \ref{thm-zhu-image}. It is straightforward to see that $R(k)$ is an irreducible finite-dimensional $\mathcal{U}(\mathfrak{g})$-module isomorphic to $V((3k+7)(2\alpha+\beta))$. Let $R(k)_0$ be the zero-weight subspace of $R(k)$.
\begin{Prop} \cite{A1,AM}
Let $V(\mu)$ be an irreducible highest weight $\mathcal{U}(\mathfrak{g})$-module with highest weight vector $v_\mu$ for $\mu \in \mathfrak{h}^*$. Then the following statements are equivalent:
\begin{enumerate}
\item $V(\mu)$ is an $A(L(k, 0))$-module,
\item $R(k) \cdot V(\mu) = 0$,
\item $R(k)_0 \cdot v_\mu = 0$.
\end{enumerate}
\end{Prop}
Let $r\in R(k)_0$. Then there exists a unique polynomial $p_r \in S(\mathfrak{h})$, where $S(\mathfrak{h})$ is the symmetric algebra of $\mathfrak h$, such that \[r \cdot v_\mu = p_r(\mu) v_\mu. \] Set $\mathcal{P}(k)_0 = \{p_r \, | \, r \in R(k)_0 \}$. Then we have:
\begin{Cor} \label{cor-biject} There is a bijective correspondence between \begin{enumerate} \item the set of irreducible $A(L(k, 0) )$-modules $V(\mu)$ from the category $\mathcal{O}$, and \item the set of weights $\mu \in \mathfrak{h}^*$ such that $p(\mu)=0$ for all $p \in \mathcal{P}(k)_0$. \end{enumerate}
\end{Cor}
\medskip
\subsection{Polynomials in $\mathcal{P}(k)_0$}
We now determine some polynomials in the set $\mathcal{P}(k)_0$ for the cases $k= - \frac 5 3$, $k= - \frac 4 3$, $k= - \frac 2 3$, respectively. We will use some computational lemmas which we collect and prove in Appendix B.
\begin{Lem}[Case: $k=- \frac 5 3$] \label{lem-first-case} We let
\[ (1) \ q(H) = H_{21}( H_{21} +2 ), \quad (2) \ p_1(H) = H_{10}(H_{10}-1), \quad \text{and } \quad (3)\ p_2(H)= \tfrac{1}{3}H_{11}(H_{11}-1) +3H_{01} .\]
Then $q(H), p_1(H), p_2(H) \in \mathcal{P}(-\frac 5 3)_0$.
\end{Lem}
\begin{proof}
(1) We show that $(E_{21}^2 F_{21}^4)_L [u] \equiv C\, q(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $ for some $C \ne 0$. Using Lemma \ref{lem-secondstep} and Lemma \ref{lem-thirdstep}, we have
\begin{eqnarray*}
(E_{21}^2F_{21}^4)_L [u] & = &(E_{21}^2F_{21}^4)_L (\tfrac{1}{3}[a]^2-[b])\\
&\equiv & 4!2!(\tfrac{1}{3} H_{21}(H_{21}-1) + H_{21}) \equiv 4!2!\tfrac{1}{3} H_{21}(H_{21}+2) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) ,
\end{eqnarray*}
which is what we wanted to show.
(2) We will show that $(E_{10}^2 F_{31}^2)_L [u] \equiv C\, p_1(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $ for some $C \ne 0$. We again use Lemma \ref{lem-secondstep} and Lemma \ref{lem-thirdstep} to obtain:
\[
(E_{10}^2F_{31}^2)_L (\tfrac{1}{3}[a]^2-[b]) \equiv (2!)^2\ \tfrac{1}{3} H_{10}(H_{10}-1) \equiv \tfrac{4}{3}p_1(H) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\]
(3) In this case we show that $(E_{11}^2 F_{32}^2)_L [u] \equiv C\, p_2(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $ for some $C \ne 0$. Similarly to the first two cases we compute:
\begin{eqnarray*}
(E_{11}^2F_{32}^2)_L(\tfrac{1}{3}[a]^2-[b]) &\equiv & (2!)^2\ ( \tfrac{1}{3} H_{11}(H_{11}-1) + 3 H_{01}) \\ & \equiv & C\, p_2(H) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\end{eqnarray*}
\end{proof}
We now give polynomials for the next case.
\begin{Lem}[Case: $k=- \frac 4 3$] \label{lem-second-case} Let
\begin{enumerate}
\item $q(H) = \frac{2}{9}H_{21}(H_{21}-1)(H_{21}-2) +\ H_{21} (H_{21}-2) +3 H_{01} (H_{01}+2)$,
\item $p_1(H) = H_{10}(H_{10}-1)(H_{10}-2) $,
\item $p_2(H) = \frac{2}{9}H_{11}(H_{11}-1)(H_{11}-2) +6 H_{01} H_{32}$.
\end{enumerate}
Then $p_1(H),p_2(H), q(H) \in \mathcal{P}(-\frac 4 3)_0$.
\end{Lem}
\begin{proof}
(1) We show that $(E^3_{21}F^6_{21})_L [v] \equiv C q(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $ for some constant $C \neq 0$.
By Lemma \ref{lem-thirdstep}, we have:
\begin{eqnarray*}
(E_{21}^3F_{21}^6)_L [v]& =& (E_{21}^3F_{21}^6)_L(\tfrac{2}{9}[a]^3 -[a][b]-3[c])\\
& \equiv & -3!6! \tfrac{2}{9} H_{21}(H_{21}-1)(H_{21}-2) -\tfrac{3!}{2!}\tfrac{6!}{4!} (H_{21}-2)(E_{21}^2F_{21}^4)_L [b] \\ & & -3 (E_{21}^3F_{21}^6)_L[c] \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\end{eqnarray*}
By Lemma \ref{lem-secondstep}, we thus have:
\begin{eqnarray*}
(E_{21}^3F_{21}^6)_L [v]& \equiv & -3!6! \tfrac{2}{9}H_{21}(H_{21}-1)(H_{21}-2) + 3!6! (H_{21}-2) H_{21}+ 3!6! H_{01}(H_{01}+2) \\
& \equiv & C q(H) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\end{eqnarray*}
(2) We will show that $(E_{10}^3F_{31}^3)_L [v] \equiv C p_1(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $ for some constant $C \neq 0$.
Using Lemma \ref{lem-thirdstep}, we obtain:
\begin{eqnarray*}
& & (E_{10}^3F_{31}^3)_L (\tfrac{2}{9}[a]^3 - [a][b] - 3 [c]) \\ & \equiv & \tfrac{2}{9} (3!)^2 H_{10}(H_{10}-1)(H_{10}-2) + \tfrac{3!}{2!}\tfrac{3!}{2!}(H_{10}-2) (E_{10}^2F_{31}^2)_L [b] -3 (E_{10}^3F_{31}^3)_L [c] \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\end{eqnarray*}
By Lemma \ref{lem-secondstep}, we thus have
\begin{eqnarray*}
(E_{10}^3F_{31}^3)_L (\tfrac{2}{9}[a]^3 - [a][b] - 3 [c])& \equiv & \tfrac{2}{9} (3!)^2 H_{10}(H_{10}-1)(H_{10}-2) \\
&\equiv & C p_1(H) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\end{eqnarray*}
(3) Finally, we show that $(E_{11}^3F_{32}^3)_L [v] \equiv C p_2(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $ for some constant $C \neq 0$. Since $H_{11}+H_{31} = 2 H_{32}$, we have
\begin{eqnarray*}
(E_{11}^3F_{32}^3)_L [v] & \equiv& \tfrac{2}{9} (3!)^2\ H_{11}(H_{11}-1)(H_{11}-2) - \tfrac{3!}{2!}\tfrac{3!}{2!} (H_{11}-2) (E_{11}^2F_{32}^2)_L [b] - 3 (E_{11}^3F_{32}^3)_L [c]\\
&\equiv & (3!)^2\ (\tfrac{2}{9}H_{11}(H_{11}-1)(H_{11}-2) + 3 (H_{11}-2) H_{01} + 3 H_{01} (H_{31}+2))\\
&\equiv & (3!)^2\ (\tfrac{2}{9}H_{11}(H_{11}-1)(H_{11}-2) + 6 H_{01}H_{32} ) \\
& \equiv & C p_2(H) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) .
\end{eqnarray*}
\end{proof}
The last case is presented below.
\begin{Lem}[Case: $k=-\frac 2 3$]\label{lem-third-case} We let
\begin{eqnarray*}
q(H) &= & \tfrac{2}{27}H_{21}(H_{21}-1)(H_{21}-2)(H_{21}-3)(H_{21}-4) +\tfrac{5}{9} H_{21} (H_{21}-2)(H_{21}-3)(H_{21}-4)\\
& &+\ (H_{21}-3)(H_{21}-4) H_{01}( H_{01}+2) + 2 H_{21}(H_{21}-4)(H_{11}-1) \\
& & +\ 2 (H_{21}-4)H_{10}(H_{10}-1)-\ 6(H_{21}-4)H_{01}(H_{01}+1) \ +\ 6 (H_{21}-3)H_{01} (H_{01}+2), \\
p_1(H) &=& H_{10}(H_{10}-1)(H_{10}-2)(H_{10}-3)(H_{10}-4),\\
p_2(H) &=& \tfrac{2}{27}H_{11}(H_{11}-1)(H_{11}-2)(H_{11}-3)(H_{11}-4) +\tfrac{5}{3}(H_{11}-2)(H_{11}-3)(H_{11}-4)H_{01}\\ & & +\ (H_{11}-3)(H_{11}-4)H_{01}(H_{31}+2) +18 (H_{11}-4)H_{01}(H_{01}-1) \\ & & -2(H_{11}-3)(H_{11}-4)H_{01} +18 H_{01}(H_{01}-1)(H_{31}+2) .
\end{eqnarray*}
Then $p_1(H),p_2(H), q(H) \in \mathcal{P}(-\frac 2 3)_0$.
\end{Lem}
\begin{proof}
First recall from (\ref{eqn-bracket}) that \[ [u(v-w)] = [u][v]= \tfrac{2}{27}[a]^5-\tfrac{5}{9}[a]^3[b]-[a]^2[c]+[a][b]^2+3[b][c] .\] We will show that $(E_{21}^5F_{21}^{10})_L([u][v]) \equiv - 5!10! q(H) \ (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) $.
Using Lemmas \ref{lem-product}, \ref{lem-firststep}, we have:
\begin{align*}
(F_{21}^{10})_L([u][v]) =& (F_{21}^{10})_L (\tfrac{2}{27}[a]^5 -\tfrac{5}{9}[a]^3[b]-[a]^2[c]+[a][b]^2+3[b][c])\\
=&\tfrac{2}{27} \tfrac{10!}{(2!)^5} (-2)^5 F_{21}^5 - \tfrac{5}{9} \tfrac{10!}{(2!)^3 4!} (-2)^3 F_{21}^3\ (F_{21}^{4})_L[b] - \tfrac{10!}{(2!)^2 6!} (-2)^2 F_{21}^2\ (F_{21}^{6})_L[c]\\
& + \tfrac{10!}{2!8!} (-2) F_{21}\ (F_{21}^{8})_L[b]^2 + 3\ (F_{21}^{10})_L[b][c]\\
= & -\tfrac{2}{27} 10! F_{21}^5 + \tfrac{5}{9} \tfrac{10!}{ 4!} F_{21}^3\ (F_{21}^{4})_L[b]\\
& - \tfrac{10!}{ 6!} F_{21}^2\ (F_{21}^{6})_L[c] - \tfrac{10!}{8!} F_{21}\ (F_{21}^{8})_L[b]^2 + 3\ (F_{21}^{10})_L[b][c] .
\end{align*}
Now using Lemma \ref{lem-poly}, we obtain:
\begin{align*}
\tfrac{1}{10!}(E_{21}^5F_{21}^{10})_L([u][v]) = -&\tfrac{2}{27}\ 5!H_{21}(H_{21}-1)(H_{21}-2)(H_{21}-3)(H_{21}-4)\\
&+\tfrac{5}{9}\ \tfrac{5!}{2!}\ (H_{21}-2)(H_{21}-3)(H_{21}-4)\ \tfrac{1}{4!}(E_{21}^2F_{21}^{4})_L[b]\\
&-\tfrac{5!}{3!} (H_{21}-3)(H_{21}-4)\ \tfrac{1}{6!}(E_{21}^3F_{21}^{6})_L[c]\\
& - \tfrac{5!}{4!}(H_{21}-4)\ \tfrac{1}{8!}(E_{21}^4F_{21}^{8})_L[b]^2 + 3\ \tfrac{1}{10!}(E_{21}^5F_{21}^{10})_L([b][c]).
\end{align*}
Combining this with Lemmas \ref{lem-secondstep}, \ref{lem-thirdstep}, \ref{lem-fourthstep}, we obtain:
\begin{align*}
\tfrac{1}{10!}(E_{21}^5F_{21}^{10})_L([u][v])\equiv &-\tfrac{2}{27}\ 5!H_{21}(H_{21}-1)(H_{21}-2)(H_{21}-3)(H_{21}-4)\\
&+\tfrac{5}{9}\ \tfrac{5!}{2!}\ (H_{21}-2)(H_{21}-3)(H_{21}-4)\ (-2)H_{21}\\
&-\tfrac{5!}{3!} (H_{21}-3)(H_{21}-4)\ 3! H_{01}(H_{01}+2)\\
& - \tfrac{5!}{4!}(H_{21}-4)\ 4! (2 H_{21}H_{11} + 2 H_{10}(H_{10}-1) - 6 H_{01}(H_{01}+1))\\
& + 3\ 5!(-2) H_{01}(H_{01}+2)(H_{21}-3) \\
\equiv &-5! q(H) \quad (\mathrm{mod} \ \mathcal{U}(\mathfrak{g})\mathfrak{n}_+) . \end{align*}
The proofs for $p_1(H)$ and $p_2(H)$ are similar, and we omit the details.
\end{proof}
\medskip
\subsection{Finiteness of the number of irreducible modules}
We are now able to obtain the following result for the associative algebra $A(L(k,0))$. For convenience, if $\mu \in \frak h^*$, we write $\mu_{ij} = \mu(H_{ij})$. We will identify $\mu \in \frak h^*$ with the pair $(\mu_{10}, \mu_{01})$.
\begin{Prop}\label{prop-finite}
There are finitely many irreducible $A(L(k,0))$-modules from the category $\mathcal{O}$ for each of $k=-\frac 5 3, -\frac 4 3, - \frac 2 3$. Moreover, the possible highest weights $\mu=(\mu_{10}, \mu_{01})$ for irreducible $A(L(k,0))$-modules are as follows:
\begin{enumerate}
\item if $k=-\frac{5}{3}$, then $\mu=(0,0), (0, -\tfrac{2}{3})$ or $(1,-\tfrac{4}{3})$;
\item if $k=-\frac{4}{3}$, then $\mu=(0,0), (0, -\tfrac{2}{3}), (0,-\tfrac{1}{3}), (1,0), (1, -\tfrac{4}{3})$ or $(2,-\tfrac{5}{3})$;
\item if $k=-\frac{2}{3}$, then $\mu=(0,0), (0,-\tfrac{2}{3}), (0,-\tfrac{1}{3}), (0,\tfrac{1}{3})$, $(0, 1), (1,0), (1,-\tfrac{4}{3}), (1,-\tfrac{2}{3})$, \\ \phantom{LLLLLlLLLLLLLLL}$(2,0), (2,-\tfrac{5}{3}), (2,-\tfrac{4}{3})$ or $(4,-\tfrac{7}{3})$.
\end{enumerate}
\end{Prop}
\begin{proof}
(1) It follows from Corollary \ref{cor-biject} that highest weights $\mu \in \mathfrak{h}^*$ of irreducible $A(L(-\frac{5}{3},0))$-modules satisfy $p(\mu)=0$ for all $p \in \mathcal{P}_0(-\frac 5 3)$. Lemma \ref{lem-first-case} implies that $p_1(\mu)= p_2(\mu) = q(\mu) = 0$ for such weights $\mu$. Let $\mu \in \mathfrak{h}^*$. The equation $p_1(\mu)=0$ is \[ \mu_{10} ( \mu_{10}-1) = 0,\] which implies $\mu_{10}= 0$ or 1.
First suppose $\mu_{10}=0$. Then from $q(\mu)=0$ we must have $\mu_{01}= 0$ or $-\frac{2}{3}$. Similarly, from $p_2(\mu)=0$, we also get $\mu_{01}= 0$ or $-\frac{2}{3}$. So the weight $\mu$ must be of the form $\mu = (\mu_{10}, \mu_{01}) = (0, 0)$ or $(0,-\frac{2}{3})$ in this case. Now suppose $\mu_{10}=1$. The equation $q(\mu)=0$ gives $\mu_{01}=-\frac{2}{3}$ or $-\frac{4}{3}$, and the equation $p_2(\mu)=0$ gives $\mu_{01}= 0$ or $-\frac{4}{3}$. So the only possibility is $\mu = (\mu_{10},\mu_{01}) =(1, -\frac{4}{3})$.
Altogether, this gives only three possible weights $\mu$ such that $p_1( \mu) = p_2( \mu) = q( \mu) = 0$: \[\mu = (\mu_{10}, \mu_{01}) = (0, 0), (0,-\tfrac{2}{3}), \mbox{ or } (1, -\tfrac{4}{3}). \]
(2) Similarly to the part (1), we use the polynomials of Lemma \ref{lem-second-case}. Using a computer algebra system, we calculate the common zeros of the polynomials $q(H), p_1(H), p_2(H)$ to obtain the following list of possible highest weights: \[ \mu=(\mu_{10}, \mu_{01})= (0,0), (0, -\tfrac{2}{3}), (0,-\tfrac{1}{3}), (1,0), (1, -\tfrac{4}{3}), \mbox{ or } (2,-\tfrac{5}{3}) .\]
(3) For this part, we use Lemma \ref{lem-third-case}. Using a computer algebra system, we again compute the common zeros of the polynomials $q(H), p_1(H), p_2(H)$ to obtain the following list of possible highest weights: \[ \begin{array}{ll} \mu=(\mu_{10}, \mu_{01}) = & (0,0), (0,-\tfrac{2}{3}), (0,-\tfrac{1}{3}), (0,\tfrac{1}{3}), (0, 1), \\ & (1,0), (1,-\tfrac{4}{3}), (1,-\tfrac{2}{3}), (2,0), (2,-\tfrac{5}{3}), (2,-\tfrac{4}{3}), \mbox{ or } (4,-\tfrac{7}{3}). \end{array} \]
\end{proof}
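For the reader's convenience, part (1) can also be reproduced with a computer algebra system. The following Python/SymPy sketch is ours and is not part of the original argument; it assumes the coroot identities $H_{21}=2H_{10}+3H_{01}$ and $H_{11}=H_{10}+3H_{01}$ (consistent with the relation $H_{11}+H_{31}=2H_{32}$ used in the proof of Lemma \ref{lem-second-case}) in order to express the polynomials of Lemma \ref{lem-first-case} through $(\mu_{10},\mu_{01})$.
\begin{verbatim}
# Sketch: common zeros of q(H), p_1(H), p_2(H) for k = -5/3.
# Assumed coroot identities: H21 = 2*H10 + 3*H01, H11 = H10 + 3*H01.
from sympy import symbols, Rational, solve

mu10, mu01 = symbols('mu10 mu01')
mu21 = 2*mu10 + 3*mu01            # value of H_{21} at mu (assumption)
mu11 = mu10 + 3*mu01              # value of H_{11} at mu (assumption)

qH = mu21*(mu21 + 2)                            # q(H)
p1 = mu10*(mu10 - 1)                            # p_1(H)
p2 = Rational(1, 3)*mu11*(mu11 - 1) + 3*mu01    # p_2(H)

print(solve([qH, p1, p2], [mu10, mu01]))
# -> [(0, 0), (0, -2/3), (1, -4/3)], matching part (1) above.
\end{verbatim}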
Now we apply the $A(V)$-theory (Theorem \ref{thm-Z}), and obtain our main result in the following theorem.
\begin{Thm} \label{thm-main} There are finitely many irreducible weak modules from the category $\mathcal{O}$ for each of the following simple vertex operator algebras: $L(-\frac{5}{3},0)$, $L(-\frac{4}{3},0)$, $L(-\frac{2}{3},0)$.
\end{Thm}
\begin{Rmk} \label{rmk-VOA}
This theorem provides further evidence for the conjecture of Adamovi{\' c} and Milas in \cite{AM}, mentioned in the introduction.
Furthermore, if $L(\lambda)$ is an irreducible module of the VOA $L(k,0)$, for $k= -\frac{5}{3}, -\frac{4}{3}$, or $-\frac{2}{3}$, then we recall from Section 1.2 that we must have $L(\lambda) \cong L(k\Lambda_0, \mu)$ for the values of
$\mu \in \frak h^*$ given in Proposition \ref{prop-finite}.
\end{Rmk}
In the case of irreducible $L(k,0)$-modules, we obtain a complete classification. We state this result in the following proposition and theorem.
\begin{Prop}
The complete list of irreducible finite-dimensional $A(L(k,0))$-modules $V(\mu)$ for each $k$ is as follows:
\begin{enumerate}
\item if $k=- \frac 5 3$, then $V(\mu)= V(0)$,
\item if $k=- \frac 4 3$, then $V(\mu) = V(0)$ or $V(\omega_1)$,
\item if $k=-\frac 2 3$, then $V(\mu)=V(0), V(\omega_1), V(\omega_2)$, or $V(2 \omega_1)$,
\end{enumerate}
where $\omega_1, \omega_2$ are the fundamental weights of $\frak g$.
\end{Prop}
\begin{proof}
Among the list of weights in Proposition \ref{prop-finite}, we need only to consider dominant integral weights, i.e. those weights $\mu=(m_1, m_2)$ with $m_1, m_2 \in \mathbb Z_+$. Notice that the weights of the singular vectors $[v_k]$ are $2 \omega_1$, $3 \omega_1$ and $5 \omega_1$, respectively. Considering the set of weights of $V(\mu)$ listed above, we see that each singular vector $[v_k]$ annihilates the corresponding modules $V(\mu)$. Now the proposition follows from Proposition \ref{prop-important}.
\end{proof}
We again apply the $A(V)$-theory (Theorem \ref{thm-Z}), and obtain the following theorem.
\begin{Thm} \label{thm-class}
The complete list of irreducible $L(k,0)$-modules $L(k,\mu)$ for each $k$ is as follows:
\begin{enumerate}
\item if $k=- \frac 5 3$, then $L(k,\mu)= L(k,0)$,
\item if $k=- \frac 4 3$, then $L(k,\mu) = L(k,0)$ or $L(k,\omega_1)$,
\item if $k=-\frac 2 3$, then $L(k,\mu)=L(k,0), L(k,\omega_1), L(k,\omega_2)$, or $L(k,2 \omega_1)$.
\end{enumerate}
\end{Thm}
\medskip
\subsection{Semisimplicity of weak modules from the category $\mathcal{O}$}
In this subsection we show that the category of weak $L(k,0)$-modules from the category $\mathcal{O}$ is semisimple.
\begin{Lem} \label{lem-adm2} Assume that $\lambda = k \Lambda_0 + \mu$ for $k= - \frac 5 3, - \frac 4 3, - \frac 2 3$, where $\mu \in \frak h^*$ is one of the values given in Proposition \ref{prop-finite} for each $k$. Then the weights $\lambda$ are admissible.
\end{Lem}
\begin{proof} The proof is essentially the same as Lemma \ref{lem-adm}. Let us write $\widehat{\Pi}_0^\vee = \{ (\delta-(2\alpha + \beta))^{\vee}, \alpha^{\vee}, \beta^{\vee}\}$, $\widehat{\Pi}_1^\vee = \{ (\delta-(3\alpha + \beta))^{\vee}, \alpha^{\vee}, (\alpha+\beta)^{\vee}\}$, and $\widehat{\Pi}_2^\vee = \{ (\delta-\theta)^{\vee}, \alpha^{\vee}, (\alpha + \beta)^{\vee}\}$.
Since the proofs for the other cases are similar, we consider only the case $k=-\tfrac{5}{3}$. From Lemma \ref{lem-adm}, we already know that $\lambda = -\tfrac{5}{3}\Lambda_0 + \mu$ is admissible for $\mu = (0,0)$, with $\widehat \Pi_{\lambda}^{\vee} = \widehat \Pi_0^\vee$.
If $\mu = (0, -\tfrac{2}{3})$, we have to show that
\[ \langle -\tfrac{5}{3}\Lambda_0 + \mu + \rho, \gamma^{\vee}\rangle \notin -\mathbb{Z}_+ \mbox{ for any } \gamma \in \widehat{\Delta}_+^{\mathrm{re}} \quad \mbox{ and } \quad \mathbb{Q} \widehat{\Delta}_{\lambda}^{\vee, \mathrm{re}} = \mathbb{Q} \widehat{\Pi}^{\vee}.
\]
Recall that $\rho = 4 \Lambda_0 + \bar{\rho}$; also $\gamma \in \widehat{\Delta}_+^{\mathrm{re}}$ must have the form $\gamma = \bar{\gamma} + m \delta$, for $m>0$ and $\bar{\gamma} \in \Delta$, or $m= 0$ and $\bar{\gamma} \in \Delta_+$. We then have:
\begin{align*}
\langle -\tfrac{5}{3}\Lambda_0 + \mu + \rho, \gamma^{\vee}\rangle &= \langle \tfrac{7}{3} \Lambda_0 + \mu + \bar{\rho}, (\bar{\gamma}+m\delta)^{\vee} \rangle\\
&= \tfrac{2}{(\bar{\gamma}, \bar{\gamma})}\, \tfrac{7}{3}m + \langle \mu, \bar{\gamma}^{\vee} \rangle + \langle \bar{\rho}, \bar{\gamma}^{\vee} \rangle.
\end{align*}
We may then check that $\langle -\frac{5}{3}\Lambda_0 + \mu + \rho, \gamma^{\vee}\rangle \ge \frac{1}{3}$, so that $\langle -\frac{5}{3}\Lambda_0 + \mu + \rho, \gamma^{\vee}\rangle \notin -\mathbb{Z}_+$.
One may also verify that $\widehat \Pi_{\lambda}^{\vee} = \widehat \Pi_1^\vee$ so that $\mathbb{Q} \widehat{\Delta}_{\lambda}^{\vee, \mathrm{re}} = \mathbb{Q} \widehat{\Pi}^{\vee}$.
Similarly, one can show that $\lambda = -\tfrac{5}{3}\Lambda_0 + \mu$ is admissible for $\mu= (1,-\frac{4}{3})$ and that $\widehat \Pi_{\lambda}^{\vee} = \widehat \Pi_2^\vee$.
\end{proof}
\begin{Thm} \label{thm-main-second} Let $M$ be a weak $L(k,0)$-module from the category $\mathcal{O}$, for $k=-\frac{5}{3}, -\frac{4}{3}$, or $-\frac{2}{3}$. Then $M$ is completely reducible.
\end{Thm}
\begin{proof} Let $L(\lambda)$ be an irreducible subquotient of $M$. Then $L(\lambda)$ is an $L(k,0)$-module, and we see from Remark \ref{rmk-VOA} that $\lambda$ must be a weight of the form $k \Lambda_0 + \mu$, where $\mu$ is given in Proposition \ref{prop-finite} for $k = -\frac 5 3, - \frac 4 3, -\frac 2 3$, respectively. From Lemma \ref{lem-adm2} it follows that such a $\lambda$ is admissible. Now Proposition \ref{prop-KW2} implies that $M$ is completely reducible.
\end{proof}
\vskip 1cm
\section{Introduction}
Interest in the magnetocaloric effect has been increasing in the last decades in view of applications to room-temperature refrigeration \cite{Pecha}.
Neutron scattering studies can bring important microscopic information on the spin and lattice dynamics of these systems and reveal key ingredients at play in their magneto-thermodynamics properties \cite{Nils,Biniskos1,Biniskos2}.
In such context, experiments were performed on the ferromagnetic compound MnFe$_{4}$Si$_{3}$ and the importance of short range magnetic correlations as well as their suppression by a modest magnetic field was highlighted \cite{Biniskos1}.
Besides these results on the spin dynamics, unusual features of the phonon intensities were found in longitudinal polarized neutron scattering experiments under magnetic field.
The aim of the present paper is to describe these effects and to show that they can be explained by the neutron scattering cross-sections.
\section{Experimental results}
The inelastic neutron scattering (INS) experiments were performed on the cold neutron three-axis spectrometer IN12 \cite{Schmalzl}.
The spectrometer was set in W configuration with a fully focusing setup.
The incident neutron beam spin state was prepared with a transmission polarizing cavity located after the velocity selector and the Heusler analyzer was set at fixed $k_f$=2 {\AA}$^{-1}$.
The sample is the same single crystal as the one used in Ref.\cite{Biniskos1}. It was placed in a vertical field magnet, the field being along the $b$-axis of the hexagonal structure and the horizontal scattering plane being thus defined by $a^*$ and $c^*$.
A magnetic field of 1 T was applied in the paramagnetic state and the sample was field-cooled in order to get a single domain ferromagnetic sample. Hence, the polarization at the sample position is kept along the magnetic field which defines the $z$-axis.
Two Mezei spin flippers were used before and after the sample in order to measure the four possible polarized neutron cross-sections $\sigma^{\alpha \beta}_{z}$=\{$\sigma^{++}_{z}$, $\sigma^{+-}_{z}$, $\sigma^{--}_{z}$, $\sigma^{-+}_{z}$\}. On IN12, "+" corresponds to "flipper on" since both the transmitting polarizer and the Heusler analyzer transport the "-" state. With such a setup, the flipping ratio measured on a Bragg reflection of a graphite sample was on average 18 $\pm$ 3, the most important source of uncertainty arising from the setting of subsequent experiments. In the present paper, we do not apply polarization corrections to the data. This is justified by the fact that the leakage between the polarization channels due to the finite polarization is hardly observable with respect to the limited statistics of the inelastic signal (see below) and that the paper focuses on large effects.
\begin{figure}[h]
\centering
\vspace{-1cm}
\includegraphics[width=22cm]{Fig1.pdf}
\vspace{-9cm}
\caption{Inelastic neutron scattering spectra obtained in the four polarization channels $\sigma_z^{\alpha \beta}$ for $\textbf{Q}$=($Q_h$, 0, 2) for an energy transfer of 4 meV at $T$=1.5 K. Solid lines are Gaussian fits.}
\end{figure}
Figure 1 shows constant energy scans performed along the direction $\textbf{Q}$=($Q_h$, 0, 2) for an energy transfer $E$= 4 meV at $T$= 1.5 K for the four polarization channels $\sigma_z^{\alpha \beta}$.
For the spin-flip scattering, the peak at $Q_h$ $\approx$ -0.27 is observed with the same intensity and lineshape in the $\sigma^{+-}_{z}$ and $\sigma^{-+}_{z}$ channels. It is ascribed to a spin-wave mode from the facts that (i) the direction of fluctuations is found to be perpendicular to the ordered magnetic moments and (ii) its peak position is in agreement with the established magnon dispersion of MnFe$_{4}$Si$_{3}$ \cite{Biniskos1}.
The leakage of the magnon scattering from $\sigma^{\alpha \beta}_{z}$ ($\alpha \neq \beta$) to $\sigma^{\alpha \alpha}_{z}$ due to finite polarization is barely seen above the background level for both channels.
In the non-spin-flip channels, a phonon mode is measured at $Q_h$ $\approx$ $\pm$ 0.15 in agreement with inelastic X-ray scattering data \cite{Biniskos3}.
Surprisingly this phonon mode is only seen in the $\sigma^{++}_{z}$ channel and has zero intensity in the $\sigma^{--}_{z}$ channel independent of the focusing ($Q_h$ $\approx$ - 0.15) or defocusing ($Q_h$ $\approx$ + 0.15) side of the measurement.
\begin{figure}[h]
\vspace{-1cm}
\includegraphics[width=18cm]{Fig2.pdf}
\vspace{-8cm}
\caption{Inelastic neutron scattering spectra obtained in the $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ polarization channels for $\textbf{Q}$=($Q_h$, 0, 2) for an energy transfer of 4 meV at a) $T$=315 K and b) $T$=290 K. Solid lines are Gaussian fits. c) Temperature dependence of the peak intensity at $\textbf{Q}$=(-0.15, 0, 2) for an energy transfer of 4 meV. The dashed line indicates the background extrapolated from measurements performed at $\textbf{Q}$=(-0.3, 0, 2).}
\end{figure}
To get insight into this behavior, the temperature dependence of the phonon cross-sections $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ was studied.
Fig.2 shows INS spectra similar to those of Fig.1 at $T$=315 K (Fig.2a) and $T$= 290 K (Fig.2b), two temperatures above and below $T_{Curie}$. In contrast to the low temperature data, the phonon appears in the two polarization channels at 315 K; nevertheless, the two phonon intensities differ significantly. At 290 K, only a tiny signal is observed in the $\sigma^{--}_{z}$ channel.
Within our statistics, the phonon peaks at the same position and has the same width at low and high temperatures.
Fig.2 c) shows the temperature dependence of the phonon peak intensity in both polarization channels, where the temperature dependence of the lattice parameter was measured and taken into account.
The dashed line indicates the temperature dependence of the background determined by extrapolating measured values at low and high temperatures at $\textbf{Q}$=(-0.3, 0, 2) and $E$=4 meV.
From this curve, one can conclude that the phonon intensity in the $\sigma^{--}_{z}$ channel reaches the background level at around 200 K upon cooling from the paramagnetic state.
\section{Data interpretation}
The general polarized neutron scattering cross-sections indicate that the asymmetry between $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ comes from the real part of the Nuclear Magnetic Interference (NMI) term, $R_z$:
\begin{eqnarray}
\sigma^{++}_{z}=NN+M_zM_z+R_z\\
\sigma^{--}_{z}=NN+M_zM_z-R_z
\end{eqnarray}
where $NN$ and $M_zM_z$ are convenient notations adapted from Ref.\cite{Regnault} for the purely nuclear and purely magnetic correlation functions.
A well-known use of the NMI lies in the polarized neutron diffraction experiments aiming to measure magnetic form factors and spin densities \cite{Roessli}. NMI counterparts for inelastic neutron scattering are scarce.
In the range of wave-vectors where the phonon is measured in our experiment, there are no spin-waves and no diffuse spin fluctuations. Therefore, the only possible mixed nuclear-magnetic correlation function involves the uniform static magnetization of the system.
In this case, the NMI is elastic in the magnetic system and inelastic with respect to atomic vibrations.
This suggests that it originates from the magnetovibrational scattering\footnote{It is recalled that, under the assumption that the motion of an ion is uncorrelated with either its spin direction or magnitude, the magnetic scattering is composed of four terms: elastic magnetic scattering, magnetovibrational scattering, inelastic magnetic scattering (without change in the phonon system) and scattering which is inelastic in both the spin and the phonon systems \cite{Lovesey,Squires}.}, the creation (or annihilation) of a phonon via the magnetic interaction.
In a real experiment, both cross-sections, the nuclear and the magnetovibrational one, contribute to the scattering, each with its own magnitude.
Moreover, when longitudinal neutron polarization is used with a component of the polarization perpendicular to the scattering vector, both scattering amplitudes interfere with each other.
The polarized neutron cross-sections including both nuclear and magnetic scattering are given in Ref.\cite{Lovesey} Eq.(10.157) and Ref.\cite{Squires} Eq.(9.49).
Considering only the coherent vibrational cross-sections (nuclear and magnetovibrational), i.e. neglecting the inelastic magnetic scattering (and all elastic scattering) and considering the experimental case of a non-Bravais lattice in a single domain ferromagnetic state with perfect polarization along $z$, this simplifies to:
\begin{eqnarray}
\left(\frac{d^2\sigma}{d\Omega dE'}\right)^{\pm \pm} \propto \int_{-\infty}^{+\infty}dt e^{-i\omega t}\times\sum_{j,j'}\left(\bar b_j\bar b_{j'}+\frac{(\gamma r_0)^2}{4}g_jg_{j'}F_j(\textbf{Q})F_{j'}(\textbf{Q}) \langle S^z_j\rangle \langle S^z_{j'}\rangle \pm \frac{\gamma r_0}{2}\bar b_{j}g_{j'}F_{j'}(\textbf{Q})\langle S^z_{j'}\rangle \right)\times\nonumber\\
\left(I_{j,j'}(\textbf{Q},t)-I_{j,j'}(\textbf{Q},\infty)\right)
\end{eqnarray}
with
\begin{equation}
I_{j,j'}(\textbf{Q},t)=\langle \mathrm{exp}(-i\textbf{Q}\cdot\textbf{R}_{j}(0))\mathrm{exp}(i\textbf{Q}\cdot\textbf{R}_{j'}(t))\rangle
\end{equation}
where $j$ labels the atoms at the position $\textbf{R}_j(t)$ with a coherent scattering length $\bar b_j$ and the subset of magnetic sites have a form factor $F_j(\textbf{Q})$, a Land\'e splitting factor $g_j$ and a mean value of the $z$ component of the spin $\langle S^z_j\rangle$, $\gamma$ is the neutron gyromagnetic ratio and $r_{0}$ is the classical radius of the electron.
One can recognize the structure of (1)-(2) in (3). The simplicity of the magnetovibrational NMI term, being linear in $N$ and $M_z$, is shared with the well-known elastic NMI.
It is not a general feature, the structure of $R_z$ can be much more complex especially for inelastic magnetic scattering.
Performing the canonical phonon expansion of Eq.(4) for a non-Bravais lattice \cite{Schober}, the polarized neutron one-phonon vibrational scattering double differential cross-section can then be written as :
\begin{equation}
\left(\frac{d^2\sigma}{d\Omega dE'}\right)^{\pm \pm} \propto \sum_{\textbf{G}} \sum_{s} \bigg|\sum_{d}\left(\bar b_d\pm\frac{\gamma r_{0}}{2}g_dF_d(\textbf{Q})\langle S^z_d\rangle\right)e^{-W_{d}(\textbf{Q})}e^{i\textbf{Q}\cdot{\textbf{d}}}(\textbf{Q}\cdot\textbf{e}_{ds})\frac{1}{\sqrt{m_{d}}}\bigg|^2\times F_s(\textbf{Q}, \omega, T)
\end{equation}
with the spectral weight function for a phonon creation process :
\begin{eqnarray}
F_s(\textbf{Q}, \omega, T)=\frac{1}{\omega_{s}}\langle n_s+1 \rangle\delta(\omega-\omega_{s}) \delta(\textbf{Q}-\textbf{q}-\textbf{G})
\end{eqnarray}
where $s$ labels the phonon mode, $d$ labels the atoms in the unit cell, their mass is $m_d$, $e^{-W_{d}(\textbf{Q})}$ is the Debye-Waller factor, $\mathbf{e}_{ds}$ is the mode eigenvector and $\omega_{s}$ its frequency and $\langle n_s+1 \rangle$ is the thermal population factor.
$\textbf{G}$ is a Brillouin zone center and the scattering vector is written $\textbf{Q}$=$\textbf{G}$+$\textbf{q}$.
Eq.(5) corresponds to the usual phonon scattering cross-section where the coherent scattering length is replaced by $\bar b_d\pm\frac{\gamma r_{0}}{2}g_dF_d(\textbf{Q})\langle S^z_d \rangle$.
For a phonon mode investigated in a restricted portion of the reciprocal space (typically half of a Brillouin zone length), one can make the approximation $F_d(\textbf{Q})$ $\approx$ $F_d(\textbf{G})$ and the scattering length is then $\textbf{q}$ independent in this range of wave-vectors.
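To illustrate how this interference can extinguish one channel, the following Python sketch (a schematic model with placeholder numbers, not a fit to our data) evaluates the one-phonon intensities $|b_{\mathrm{nuc}}\pm b_{\mathrm{mag}}|^2$, with $b_{\mathrm{mag}}=\frac{\gamma r_0}{2}gF(\textbf{Q})\langle S^z\rangle$, for a single effective site.
\begin{verbatim}
# Schematic illustration of Eq.(5): non-spin-flip phonon intensities
# scale as |b_nuc +/- b_mag|^2. All numbers are placeholders.
import numpy as np

b_nuc = 1.0                        # nuclear amplitude (arb. units)
b_mag = np.linspace(0.0, 1.2, 7)   # magnetovibrational amplitude

sigma_pp = (b_nuc + b_mag)**2      # sigma^{++}_z channel
sigma_mm = (b_nuc - b_mag)**2      # sigma^{--}_z channel
two_Rz = sigma_pp - sigma_mm       # interference term 2*R_z

for bm, sp, sm, r in zip(b_mag, sigma_pp, sigma_mm, two_Rz):
    print(f"b_mag={bm:4.2f}  s++={sp:5.2f}  s--={sm:5.2f}  2Rz={r:5.2f}")
# sigma^{--}_z vanishes when b_mag ~ b_nuc, as observed for the
# TA mode around G = (0, 0, 2) at low temperature.
\end{verbatim}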
To be precise, the occurrence of nonzero NMI is made possible, in the case described here, by the fact that the external magnetic field breaks the time reversal symmetry and creates a single domain ferromagnetic sample.
The exact same interference effect has been used in the past in order to measure the "dynamical magnetic form factor" at finite $\textbf{q}$ using phonon scattering \cite{Steinsvoll}.
In the present paper, we formulate this largely overlooked effect from the more modern viewpoint of NMI and extend the formulation to a non-Bravais lattice (see Eq.(1) of Ref.\cite{Steinsvoll} for a Bravais lattice).
In our experiment on MnFe$_{4}$Si$_{3}$, this effect is the best candidate to explain the cancellation of the phonon intensity at low temperatures.
\begin{figure}[h]
\centering
\vspace{-1cm}
\includegraphics[width=20cm]{Fig3.pdf}
\vspace{-9cm}
\caption{a) Temperature dependence of the interference term, 2$R_z$, measured on the phonon at $\bf{Q}$= (-0.15, 0, 2) at $E$ = 4 meV (full circles) and on the Bragg peak at $\bf{Q}$=(0, 0, 2) (open circles). b) Temperature dependence of the nuclear and magnetovibrational scattering amplitudes obtained from the phonon intensity at $\bf{Q}$= (-0.15, 0, 2) at $E$ = 4 meV. Lines are guides for the eyes.}
\end{figure}
In order to confirm this hypothesis and to describe the temperature dependence of the interference, Fig.3 shows $2R_z$ obtained by subtracting (1)-(2) using the data measured at $\textbf{Q}$=(-0.15, 0, 2) and $E$ = 4 meV and shown in Fig2.b.
In such a difference, the background, being equal in the $\sigma^{++}_z$ and $\sigma^{--}_z$ channels, cancels. The difference shown in Fig.3 is also corrected by the temperature population factor.
$2R_z$ follows an order-parameter-like curve.
Similarly, one can also obtain the elastic NMI term for the Bragg peak located at $\textbf{Q}$=(0, 0, 2) by subtracting the intensity measured in the $\sigma^{--}_z$ from the one measured in the $\sigma^{++}_z$ channel at this position for $E$=0 meV.
The corresponding data are also shown in the same figure. The two temperature dependences overlay within a single scaling factor, which confirms the static nature of the magnetic part of the interference term in the phonon scattering.
Indeed in the acoustic approximation, the dynamical structure factor is proportional to the static one \cite{Xu}. Therefore their temperature dependence also matches when the phonon energy does not vary with temperature and when the thermal population factor is taken into account.
In general, the overlay shown in Fig.3a should not be completely quantitative since the Bragg peak intensities may be affected by different extinction corrections in the different polarization channels while the phonon is not.
It is also worthwhile to note that the temperature dependence of $2R_z$ is not to be taken as a quantity proportional to the magnetization since we are dealing with a non-Bravais lattice and the individual $\langle S^z_d \rangle$ are weighted by $e^{i\mathbf{Q}\cdot\mathbf{d}}$ and the corresponding sum is dependent on $\textbf{Q}$.
Above $T_{Curie}$, $2R_z$ is not zero due to the fact that the magnetic field is applied in the paramagnetic state and therefore, the values of $\langle S^z_d \rangle$ are finite.
Above 260 K, the background was measured for each temperature at $\bf{Q}$=(-0.3, 0, 2) in both $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ channels, where they are found to be equal as expected.
This allows one to estimate the nuclear, $N$, and magnetic, $M_z$, (here magnetovibrational) scattering amplitudes ; the temperature population factor is also taken into account as a correction.
The obtained data are shown in Fig.3b.
The nuclear part is almost constant, consistent with the weak variation of the Debye-Waller factor over this small temperature range, given our limited statistics.
The magnetic part rises as ferromagnetic ordering is increasing.
From this plot the origin of the phonon intensity cancellation at low temperature is clear: the magnetic amplitude increases when temperature decreases and accidentally reaches a value very close to the nuclear scattering amplitude.
This causes the cancellation of the cross section $\sigma^{- -}_z$.
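A minimal sketch of the amplitude extraction behind Fig.3b is given below; the function name, the background treatment and the example numbers are assumptions of the sketch and do not reproduce the actual data reduction chain.
\begin{verbatim}
# From background-subtracted, Bose-corrected peak intensities I_pp and
# I_mm, the nuclear and magnetovibrational amplitudes follow (up to a
# common scale and a relative sign) as
#   N  = (sqrt(I_pp) + sqrt(I_mm))/2,  Mz = (sqrt(I_pp) - sqrt(I_mm))/2.
import numpy as np

def amplitudes(I_pp, I_mm, bg, E_meV, T_K):
    kB = 0.08617                                      # meV/K
    bose = 1.0 / (1.0 - np.exp(-E_meV / (kB * T_K)))  # <n_s> + 1
    a_pp = np.sqrt(max(I_pp - bg, 0.0) / bose)
    a_mm = np.sqrt(max(I_mm - bg, 0.0) / bose)
    return 0.5*(a_pp + a_mm), 0.5*(a_pp - a_mm)       # (N, Mz)

# Example with made-up counts at E = 4 meV, T = 290 K:
# amplitudes(120.0, 4.0, 3.0, 4.0, 290.0)
\end{verbatim}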
\begin{figure}
\centering
\vspace{-1cm}
\includegraphics[width=20cm]{Fig4.pdf}
\vspace{-8.5cm}
\caption{Inelastic neutron scattering spectra obtained in the $\sigma^{++}_z$ and $\sigma^{--}_z$ polarization channels at 1.5 K for a) $\textbf{Q}$=($Q_h$, 0, 2) for an energy transfer of 6 meV and b) $\textbf{Q}$=($Q_h$, 0, 2) for an energy transfer of 4 meV with $\textbf{G}$=(2, 0, 2). Solid lines are Gaussian fits.}
\end{figure}
At low temperatures, it is found that the full TA branch shows such a behavior (see Fig.4a for data at $E$= 6 meV), except in the vicinity of the zone boundary.
This can be understood in the range of $\textbf{q}$ (i) where the acoustic approximation is valid and (ii) where the wave-vector dependence of the form factor can be neglected (see above).
In order to illustrate the fact that the phonon intensity cancellation is fortuitous due to an accidental coincidence of the nuclear and magnetic amplitudes, similar data were taken in another Brillouin zone for $\textbf{G}$=(2, 0, 2).
Figure 4b shows the corresponding INS spectrum along $\bf{Q}$= ($Q_h$, 0, 2) at 4 meV and 1.5 K. The very limited statistics of the data required fixing the peak position according to the one obtained in Fig.1. At this position, the phonon intensity is similar in the $\sigma_z^{++}$ and $\sigma_z^{--}$ channels.
Finally, different branches were also investigated: for the LA[100] phonon and the TA[001] phonon polarized along $a^{*}$ around $\textbf{G}$=(3, 0, 0), interference effects are observed without total cancellation of intensity.
Figure 5a shows INS spectra measured at $\bf{Q}$=($Q_h$, 0, 0) for $E$= 4 meV and $T$=1.5 K and illustrates the imbalance of intensities between the two polarization channels $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ for the corresponding LA[100] mode measured near $\textbf{G}$=(3, 0, 0). The panel b) shows the temperature dependence of the scattering amplitude for the LA[100] mode measured near $\textbf{G}$=(3, 0, 0). In contrast to what is shown in Fig.3b, the nuclear amplitude is much stronger than the magnetic one. Therefore, the interference effect is less pronounced and there is no total cancellation of intensity in $\sigma^{--}_{z}$ (Fig.5a).
The overall different behaviors of the different measured modes can be semi-quantitatively rationalized by comparing nuclear and magnetic structure factors for each Brillouin zone center $\textbf{G}$ (established for $H$=0 T \cite{Hering}), their ratio being of about 28 for $\textbf{G}$=(3, 0, 0), 6 for $\textbf{G}$=(2, 0, 2), 4 for $\textbf{G}$=(1, 0, 2) and 1 for $\textbf{G}$=(0, 0, 2).
\begin{figure}[h]
\centering
\vspace{-1cm}
\includegraphics[width=20cm]{Fig5.pdf}
\vspace{-8.5cm}
\caption{a) Inelastic neutron scattering spectra obtained in the $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ polarization channels at 1.5 K for $\textbf{Q}$=($Q_h$, 0, 0) for an energy transfer of 4 meV. Solid lines are Gaussian fits. b) Temperature dependence of the nuclear and magnetovibrational scattering amplitudes obtained from the phonon intensity at $\bf{Q}$= (2.925, 0, 0) at $E$ = 4 meV. Lines are guides for the eyes.}
\end{figure}
\section{Conclusions}
Magnetovibrational scattering is often regarded as a source of spurious scattering \cite{Fernandez}. In very rare cases, it has been used as a probe of magnetism or as a probe of spin-lattice coupling.
As already mentioned, magnetic form factors as a function of $\bf{q}$ were obtained from polarized INS phonon measurements in Fe and Ni \cite{Steinsvoll} and in the Fe$_{65}$Ni$_{35}$ invar alloy \cite{Brown1}.
Another interesting case also concerning Fe$_{65}$Ni$_{35}$ illustrates how new features occur when the approximation leading to the magnetovibrational scattering breaks down. Namely a forbidden phonon mode was explained by the modulation of the magnetic moment associated with compressive strain \cite{Brown2}. In the present paper, we report a total cancellation of phonon intensity for a given mode in a given Brillouin zone in longitudinal polarized INS measurements. While surprising at first glance, the effect arises purely from interferences between neutron scattering amplitudes and does not involve any specific physics.
Put in a broad context, this large difference of cross-sections between $\sigma^{++}_{z}$ and $\sigma^{--}_{z}$ could be quite misleading especially for the INS experiments carried out with only one spin flipper where only either $\sigma^{++}_{z}$ or $\sigma^{--}_{z}$ is measured (together with a corresponding spin-flip cross-section).
Ignorance of such an effect could lead to erroneous conclusions.
If one has in mind the archetypal cancellation of the static nuclear and magnetic structure factors that makes a Heusler alloy a good neutron polarizer, crudely speaking, the same effect is realized here for a phonon, except that the magnetic scattering is replaced by the magnetovibrational scattering.
Conversely, the next step would be to use the sensitivity of the NMI evidenced here to detect new physics associated with spin-lattice coupling beyond the magnetovibrational approximation as highlighted for Fe$_{65}$Ni$_{35}$ \cite{Brown2}.
To this respect magnetocaloric compounds could constitute an interesting playground for this search.
Last but not least, it is to be mentioned that NMI terms that are inelastic in the magnetic system are still to be discovered and several theoretical suggestions were made for the occurrence of such terms in complex magnetic systems \cite{Maleyev}.
\section{Acknowledgments}
We thank L.-P. Regnault for enlightening comments.
\section{Data availability}
All relevant data are available from the corresponding authors.\\
INS data collected at the ILL are available at https://doi.ill.fr/10.5291/ILL-DATA.CRG-2444.
\section{Introduction}\label{intro}
In \cite[p.~354]{ramanujanoriginalnotebook2}, \cite[p.~263, Entry 3]{bcbramforthnote}, Ramanujan gave the following beautiful identity for $a\neq 0, |a|<1,$ and $|b|<1$, namely,
\begin{align}\label{neglected}
\sum_{n=1}^{\infty} \frac{ (b/a)_n a^n }{ (1- q^n) (b)_n } = \sum_{n=1}^{\infty} \frac{a^n - b^n }{ 1- q^n },
\end{align}
where, here and throughout the paper, we have used the standard $q$-series notation:
\begin{align*}
(a)_0 &:=(a;q)_0 =1, \qquad \\
(a)_n &:=(a;q)_n = (1-a)(1-aq)\cdots(1-aq^{n-1}),
\qquad n \geq 1, \\
(a)_{\infty} &:=(a;q)_{\infty} = \lim_{n\to\infty}(a;q)_n, \qquad |q|<1.
\end{align*}
Also, we will always consider $q\in\mathbb{C}$ such that $|q|<1$.
Maji and the first author \cite{dixitmaji18} obtained the following generalization of \eqref{neglected}:
\begin{theorem}\label{gen of Ramanujan's identity}
Let $a, b, c$ be three complex numbers such that $|a|<1$ and $|cq|< 1$. Then
\begin{align}\label{entry3gen}
\sum_{n =1}^{\infty} \frac{ (b/a)_n a^n }{ (1- c q^n) (b)_n } = \sum_{m=0}^{\infty}\frac{(b/c)_mc^m}{(b)_m}\left(\frac{aq^m}{1-aq^m}-\frac{bq^m}{1-bq^m}\right).
\end{align}
Moreover, for $|a|<1$ and $|b|<\min(|c|,1)$,
\begin{align*}
\sum_{n =1}^{\infty} \frac{ (b/a)_n a^n }{ (1- c q^n) (b)_n }=\frac{ ( b/c )_{\infty }}{(b)_{\infty} } \sum_{n=0}^{\infty} \frac{(c)_n ( b/c)^n }{(q)_n} \sum_{m=1}^{\infty} \frac{a^m - b^m }{1- c q^{m+n} }.
\end{align*}
\end{theorem}
Theorem \ref{gen of Ramanujan's identity} has many nice implications in partition theory which are discussed in \cite{dixitmaji18}. These include, in particular, a new proof of the generating function version of Andrews' famous identity
\begin{equation}\label{idspt}
\textup{spt}(n)=np(n)-\frac{1}{2}N_2(n).
\end{equation}
Clearly, Ramanujan's identity \eqref{neglected} is the special case $c=1$ of \eqref{entry3gen}.
In a recent work \cite{dems}, Eyyunni, Maji, Sood and the first author obtained a finite analogue of Theorem \ref{gen of Ramanujan's identity} given below.
\begin{theorem}\label{finmainabc}
Let $N\in\mathbb{N}$. For $a, b, c\neq q^{-n}, 1\leq n\leq N-1$, $c\neq q^{-N}$, and $a, b\neq 1$,
\begin{align*}
\sum_{n=1}^{N}\left[\begin{matrix} N\\n\end{matrix}\right]\frac{(\frac{b}{a})_{n}(q)_{n}(a)_{N-n}a^{n}}{(1-cq^{n})(b)_n(a)_N}=
\sum_{n=1}^{N}\left[\begin{matrix} N\\n\end{matrix}\right]\frac{(\frac{b}{c})_{n-1}(q)_n (cq)_{N-n}c^{n-1}}{(b)_{n-1}(cq)_N}\left(\frac{aq^{n-1}}{1-aq^{n-1}}-\frac{bq^{n-1}}{1-bq^{n-1}}\right),
\end{align*}
where
\begin{align*}
\left[\begin{matrix} N\\n\end{matrix}\right]=\left[\begin{matrix} N\\n\end{matrix}\right]_q :=\begin{cases}
\frac{(q;q)_N}{(q;q)_n (q;q)_{N-n}},\hspace{2mm}\text{if}\hspace{1mm}0\leq n\leq N,\\
0,\hspace{2mm}\text{otherwise},
\end{cases}
\end{align*}
is the $q$-binomial coefficient.
\end{theorem}
Letting $N\to\infty$ in the above theorem leads to \eqref{entry3gen}. The finite analogue is a generalization of \eqref{entry3gen}, for it is valid for \emph{any} natural number $N$ apart from being valid in the limiting case $N\to\infty$. A finite analogue is desirable whenever possible since it can be extended to the level of elliptic hypergeometric series. Elliptic extensions are essentially always finite, balanced and well-poised. Only finite analogues have a chance of being extended to the elliptic setting.
It was shown in \cite{dems} that Theorem \ref{finmainabc} has many fruitful consequences in the theory of partitions including a representation for $\textup{spt}(n, N)$, the number of smallest parts in all partitions of $n$ whose corresponding largest parts are less than or equal to $N$, in terms of the restricted partition function $p(n, N)$, which counts the number of partitions of $n$ whose largest parts do not exceed $N$, and a finite analogue of the second Atkin-Garvan rank moment. See Theorem 2.4 of \cite{dems}. The work in \cite{dems} further extended the sparsely developed theory of $p(n, N)$. Note that $\lim_{N\to\infty}p(n, N)=p(n)$, the number of unrestricted partitions of a positive integer $n$.
Very recently, Bhoria, Eyyunni and Maji \cite[Theorem 2.1]{bem} further generalized Theorem \ref{gen of Ramanujan's identity} by obtaining the following result.
\begin{theorem}\label{Generalization of Dixit-Maji}
Let $a, b, c, d$ be four complex numbers such that $|ad|<1$ and $|cq|<1$. Then
\begin{equation*}
\sum_{n = 1}^{\infty} \frac{ (b/a)_n (c/d)_n (ad)^n }{ (b)_n (cq)_n } =
\frac{(a-b)(d-c)}{(ad-b)}\sum_{m=0}^{\infty}\frac{(a)_m (bd/c)_m c^m}{(b)_m (ad)_m}
\left(\frac{adq^m}{1-adq^m}-\frac{bq^m}{1-bq^m}\right).
\end{equation*}
\end{theorem}
This theorem also has many beautiful consequences in $q$-series and partition theory. First of all, from this single identity, the authors of \cite{bem} could derive all five entries in the unorganized portion of Ramanujan's second and third notebooks \cite[pp.~354--355]{ramanujanoriginalnotebook2} (see also \cite[pp.~302--303]{ramanujanoriginalnotebook2tifr}). These entries can be found in Section \ref{fafive} of our paper. Moreover, using a differential operator to act on the partition-theoretic interpretation of the fourth of these entries, they were able to derive an important result of Bressoud and Subbarao \cite{bresub} which had defied a proof using analytical techniques for about 38 years. (Bressoud and Subbarao's proof was combinatorial). Moreover, the authors of \cite{bem} even generalized the result of Bressoud and Subbarao.
In this paper, we obtain a finite analogue of Theorem \ref{Generalization of Dixit-Maji}, that is, the identity of Bhoria, Eyyunni and Maji. This allows us to obtain finite analogues of all five entries of Ramanujan. We also derive a result involving a finite sum of a ${}_2\phi_1$ which has several important corollaries, for example, a generalization of the generating function version of Andrews' identity \eqref{idspt} as well as of an identity involving the function $N_{\textup{SC}}(n)$ (see Section 5.2 for a definition). We also generalize an identity of Andrews, Chan and Kim from \cite{agl13} by considering finite analogues of the first odd moments of rank and crank.
\section{A finite analogue of a four parameter $q$-series identity}
\begin{theorem}\label{fafour}
We have
\begin{align}\label{eq:9}
&\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n\left(\frac{b}{a}\right)_n\left(\frac{c}{d}\right)_n(ad)_{N-n}(ad)^n}{(b)_n(cq)_n(ad)_{N}}\nonumber\\
&=\frac{(a-b)(d-c)}{(ad-b)}\displaystyle\sum_{n=1}^{N} \left[\begin{array}{c}N\\n\end{array}\right]\frac{(a)_{n-1}\left(\frac{bd}{c}\right)_{n-1}(q)_n\left(cq\right)_{N-n} c^{n-1}}{(b)_{n-1}(cq)_N(ad)_{n-1}}
\left(\frac{adq^{n-1}}{1-adq^{n-1}} - \frac{bq^{n-1}}{1-bq^{n-1}}\right).
\end{align}
\end{theorem}
\begin{proof}
From \cite[p. 70, Equation (3.2.1)]{gasperrahman},
\begin{align*}
{}_4\phi_{3}\left[\begin{array}{cccccc} q^{-N},&A,&B,&C\;\;\; ;&q\;,&q\\D,&E,&\frac{ABCq^{1-N}}{DE}& & & \end{array}\right]
=\frac{\left(\frac{E}{A}\right)_{N}\left(\frac{DE}{BC}\right)_{N}}{\left(E\right)_{N}\left(\frac{DE}{ABC}\right)_{N}}{}_4\phi_{3}\left[\begin{array}{cccccc} q^{-N},&A,&\frac{D}{B},&\frac{D}{C}\;\;\; ;&q\;,&q\\D,&\frac{DE}{BC},&\frac{Aq^{1-N}}{E}& & & \end{array}\right].
\end{align*}
Let $A=q,\;B=\displaystyle\frac{bq}{a},\;C=\frac{cq}{d},\;D=bq$ and $E=cq^2$ so as to have
\begin{equation}\label{eq:10}
\sum_{n=0}^{N}\frac{\left(q^{-N}\right)_{n}\left(\frac{bq}{a}\right)_{n}\left(\frac{cq}{d}\right)_{n}}{\left(bq\right)_{n}\left(\frac{q^{1-N}}{ad}\right)_{n}\left(cq^2\right)_{n}}q^{n}
=\displaystyle \frac{(1-cq)(1-adq^N)}{(1-cq^{N+1})(1-ad)} \displaystyle\sum_{n=0}^{N} \frac{\left(q^{-N}\right)_n (a)_n \left(\frac{bd}{c}\right)_n}{(bq)_n (adq)_n \left(\frac{q^{-N}}{c}\right)_{n}}q^n.
\end{equation}
Using the elementary formula \cite[p. 15, Eq (4.1)]{dems}
\begin{align}\label{ef}
\left(\frac{q^{-N}}{x}\right)_{n} = \frac{(-1)^n (xq^{N-n+1})_n}{x^n q^{Nn}}q^{\frac{n(n-1)}{2}}
\end{align}
to derive
\begin{align}
\frac{\left(q^{-N}\right)_{n}}{\left(\frac{q^{-N}}{x}\right)_{n}}=\frac{\left(q^{N-n+1}\right)_{n}}{\left(xq^{N-n+1}\right)_{n}} x^{n}=\frac{\left(q\right)_{N}\left(xq\right)_{N-n}}{\left(q\right)_{N-n}\left(xq\right)_{N}}x^{n},\label{eq:11}
\end{align}
and then using the latter, with $x=ad$ for the left-hand side of \eqref{eq:10}, and then again with $x=c$ for the right-hand side, we see that
\begin{equation*}\sum_{n=0}^{N}\frac{\left(\frac{bq}{a}\right)_{n}\left(\frac{cq}{d}\right)_{n}\left(ad\right)_{N-n}\left(q\right)_{N}}{\left(bq\right)_{n}\left(cq\right)_{n+1}\left(ad\right)_{N+1}\left(q\right)_{N-n}}\left(ad\right)^{n}=\sum_{n=0}^{N}\frac{\left(a\right)_{n}\left(\frac{bd}{c}\right)_{n}\left(q\right)_{N}\left(cq\right)_{N-n}(cq)^{n}}{\left(bq\right)_{n}\left(ad\right)_{n+1}\left(q\right)_{N-n}\left(cq\right)_{N+1}}.\end{equation*}
Replace $n$ by $n-1$ on both sides and multiply both sides of the resulting equation by $ \frac{ad(1-\frac{b}{a})(1-\frac{c}{d})(1-q^{N+1})}{(1-b)}$ to get
\begin{flushleft}
$\displaystyle\sum_{n=1}^{N+1} \frac{\left(\frac{b}{a}\right)_n \left(\frac{c}{d}\right)_n (ad)_{N+1-n}(q)_{N+1}}{(b)_n (cq)_n (ad)_{N+1}(q)_{N+1-n}}(ad)^n \displaystyle ={(a-b)(d-c)}\displaystyle\sum_{n=1}^{N+1} \frac{(a)_{n-1} \left(\frac{bd}{c}\right)_{n-1}(q)_{N+1}(cq)_{N+1-n}}{{(b)_n(ad)_n (q)_{N+1-n} (cq)_{N+1}}}(cq)^{n-1}.$
\end{flushleft}
Replace $N$ by $N-1$ so that
\begin{align*}
&\sum_{n=1}^{N} \frac{\left(\frac{b}{a}\right)_n \left(\frac{c}{d}\right)_n (ad)_{N-n} (q)_N}{(b)_n (cq)_n (ad)_N (q)_{N-n}}(ad)^n \nonumber\\
& =\displaystyle\frac{(a-b)(d-c)}{(ad-b)} \displaystyle\sum_{n=1}^{N} \left(\frac{(a)_{n-1} \left(\frac{bd}{c}\right)_{n-1} (q)_N (cq)_{N-n} c^{n-1}}{(b)_{n-1} (ad)_{n-1} (q)_{N-n} (cq)_N}\frac{(ad-b) q^{n-1} }{(1-bq^{n-1}) (1-adq^{n-1})}\right).
\end{align*}
This results in \eqref{eq:9} upon writing
\begin{equation*}
\frac{(ad-b) q^{n-1} }{(1-bq^{n-1}) (1-adq^{n-1})}= \frac{adq^{n-1}}{1-adq^{n-1}}- \frac{bq^{n-1}}{1-bq^{n-1}}.
\end{equation*}
\end{proof}
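Identities of this type are easy to confirm symbolically for small $N$. The following Python/SymPy sketch, written in our own notation and not part of the derivation, verifies \eqref{eq:9} for $N=1,2$ with all of $a,b,c,d,q$ kept symbolic ($N=3$ also passes, but slowly).
\begin{verbatim}
# Symbolic check of the identity above; qp(x, n) = (x; q)_n and
# qbin(N, n) is the q-binomial coefficient.
from sympy import symbols, cancel

a, b, c, d, q = symbols('a b c d q')

def qp(x, n):
    r = 1
    for k in range(n):
        r *= 1 - x*q**k
    return r

def qbin(N, n):
    return qp(q, N) / (qp(q, n) * qp(q, N - n))

def lhs(N):
    return sum(qbin(N, n)*qp(q, n)*qp(b/a, n)*qp(c/d, n)
               * qp(a*d, N - n)*(a*d)**n
               / (qp(b, n)*qp(c*q, n)*qp(a*d, N))
               for n in range(1, N + 1))

def rhs(N):
    s = sum(qbin(N, n)*qp(a, n - 1)*qp(b*d/c, n - 1)*qp(q, n)
            * qp(c*q, N - n)*c**(n - 1)
            / (qp(b, n - 1)*qp(c*q, N)*qp(a*d, n - 1))
            * (a*d*q**(n - 1)/(1 - a*d*q**(n - 1))
               - b*q**(n - 1)/(1 - b*q**(n - 1)))
            for n in range(1, N + 1))
    return (a - b)*(d - c)/(a*d - b) * s

for N in (1, 2):
    assert cancel(lhs(N) - rhs(N)) == 0
\end{verbatim}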
\begin{corollary}
\begin{equation}\label{eq:13}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n \left(\frac{c}{d}\right)_n (-zd)^n q^\frac{n(n+1)}{2}}{(zq)_n (cq)_n}
= \frac{z}{c}(c-d)\displaystyle\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n \left(\frac{zdq}{c}\right)_{n-1} (cq)_{N-n} (cq)^n}{(zq)_n (cq)_N}.\end{equation}
\end{corollary}
\begin{proof}
Let $a \longrightarrow 0$, $b \longrightarrow zq$ in \eqref{eq:9} and simplify using the fact that
\begin{align*}
\lim\limits_{a\to0}\left(\frac{zq}{a}\right)_n a^n=\lim\limits_{a\to0}\left(1-\frac{zq}{a}\right)\cdots \left(1-\frac{zq^{n}}{a}\right) a^n
=(-1)^n z^n q^\frac{n(n+1)}{2}.
\end{align*}
\end{proof}
\begin{corollary}
\begin{equation}\label{eq:14}
\displaystyle\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n (zc)^n q^{n^2}}{(zq)_n (cq)_n} = z \displaystyle\sum_{n=1}^{N} \left[\begin{array}{c}N\\n\end{array}\right]\frac{(q)_n (cq)_{N-n} (cq)^n}{(zq)_n (cq)_N}.
\end{equation}
\end{corollary}
\begin{proof}
Let $d\longrightarrow 0$ in \eqref{eq:13}.
\end{proof}
\begin{remark}
Letting $c=1/z$ in the above identity, we get
\begin{equation*}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n q^{n^2}}{(zq)_n(z^{-1}q)_n} = z\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n(z^{-1}q)_{N-n}(z^{-1}q)^n}{(zq)_n(z^{-1}q)_N} ,
\end{equation*}
where $\displaystyle\sum_{n=1}^{N} \left[\begin{array}{c}N\\n\end{array}\right]\frac{(q)_n q^{n^2}}{(zq)_n(z^{-1}q)_n}$
is a finite analogue of the rank generating function \cite[Theorem 2.2]{dems}. If we let $N\to\infty$ in \eqref{eq:14}, we recover an identity of Andrews \cite[p.~24, Corollary 2.2]{yesto}:
\begin{equation}\label{d=0analogue_THM2.2_DM}
\sum_{n=1}^{\infty} \frac{ z^n c^n q^{n^2}}{ (z q)_n (c q)_n } = z \sum_{n=1}^{\infty} \frac{(c q)^n }{(z q)_n}.
\end{equation}
\end{remark}
\section{Finite analogues of five entries of Ramanujan}\label{fafive}
Our aim in this section is to derive the finite analogues of the five entries of Ramanujan mentioned in the introduction. The first of the five is
\begin{equation}\label{entry1}
\frac{(-aq)_{\infty}}{(bq)_{\infty}}=\sum_{n=0}^{\infty}\frac{(-b/a)_na^nq^{n(n+1)/2}}{(q)_n(bq)_n}.
\end{equation}
Ramanujan also recorded this identity in his Lost Notebook \cite[p.~370]{lnb}. Its finite analogue is the following.
\begin{theorem}
(Finite analogue of Entry 1)\\
The following identity holds:
\begin{equation*}
\sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{\left(\frac{-b}{a}\right)_na^n q^\frac{n(n+1)}{2}}{(bq)_n}=\displaystyle\sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{\left(\frac{-a}{b}\right)_n(bq)_{N-n} (bq)^n}{(bq)_N}.
\end{equation*}
\end{theorem}
\begin{proof}
Let $z=1$, $d=-a$ and $c=b$ in \eqref{eq:13}.
\end{proof}
Note that if we let $N\to\infty$ in the above theorem and use the $q$-binomial theorem on the resulting right-hand side, we obtain \eqref{entry1}.
Ramanujan's Entry 2 is
\begin{equation*}
(aq)_{\infty}\sum_{n=1}^{\infty}\frac{na^nq^{n^2}}{(q)_n(aq)_n}=\displaystyle\sum_{n=1}^{\infty}\frac{(-1)^{n-1}a^nq^\frac{n(n+1)}{2}}{1-q^n}.
\end{equation*}
The proof of this identity given in \cite{bem} begins with \eqref{d=0analogue_THM2.2_DM} and is quite long (see pp.~446--449 of \cite{bem}). In what follows, we give a short proof of a finite analogue of this entry. Then letting $N\to\infty$ in this finite analogue immediately gives the entry itself.
\begin{theorem}
(Finite analogue of Entry 2)\\
We have
\begin{equation}\label{correct}
(aq)_N\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{na^nq^{n^2}}{(aq)_n}=\displaystyle\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{(q)_n(-1)^{n-1}a^nq^\frac{n(n+1)}{2}}{1-q^n}.
\end{equation}
\end{theorem}
\begin{proof}
Andrews' finite Heine transformation \cite[Cor. 3]{andfinheine} is
\begin{equation*}
{}_{3}\phi_{2}\left[\begin{array}{ccccc} q^{-N},&a,&b\;\;\; ;&q\;,&q\\c,&\frac{q^{1-N}}{t}& & & \end{array}\right]=\frac{\left(\frac{c}{b}\right)_{N}\left(bt\right)_{N}}{\left(c\right)_{N}\left(t\right)_{N}}\;{}_{3}\phi_{2}\left[\begin{array}{ccccc} q^{-N},&\frac{abt}{c},b,\;\;\; ;&q\;,&q\\bt,&\frac{bq^{1-N}}{c} & & & \end{array}\right],
\end{equation*}
that is,
\begin{equation*}
\sum_{n=0}^{N}\frac{(q^{-N})_n(a)_n(b)_n}{(c)_n\left(\frac{q^{1-N}}{t}\right)_n(q)_n}q^n =\frac{\left(\frac{c}{b}\right)_N(bt)_N}{(c)_N(t)_N} \displaystyle\sum_{n=0}^{N}\frac{(q^{-N})_n\left(\frac{abt}{c}\right)_n(b)_n}{\left(\frac{bq^{1-N}}{c}\right)_n(bt)_n(q)_n}q^n.
\end{equation*}
Employing \eqref{eq:11}, and then replacing $t$ by $\frac{t}{ab}$, we get
\begin{equation*}
\sum_{n=0}^{N}\frac{(a)_n(b)_n(q)_N\left(\frac{t}{ab}\right)_{N-n}}{(c)_n(q)_n(q)_{N-n}\left(\frac{t}{ab}\right)_{N}}\left(\frac{t}{ab}\right)^n=\frac{\left(\frac{c}{b}\right)_N\left(\frac{t}{a}\right)_N}{(c)_N\left(\frac{t}{ab}\right)_N}\displaystyle\sum_{n=0}^{N}\frac{\left(\frac{t}{c}\right)_n(b)_n(q)_N\left(\frac{c}{b}\right)_{N-n}}{\left(\frac{t}{a}\right)_{n}(q)_n(q)_{N-n}\left(\frac{c}{b}\right)_N}\left(\frac{c}{b}\right)^n.
\end{equation*}
Now let $a,b \to \infty$ and use the fact that $\lim_{x\to \infty}(x)_{n}/x^n=(-1)^nq^\frac{n(n-1)}{2}$ twice to derive
\begin{equation*}
\sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{t^nq^{n(n-1)}}{(c)_n}=\frac{1}{(c)_N} \sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\left(\frac{t}{c}\right)_n(-1)^nc^nq^\frac{n(n-1)}{2}.
\end{equation*}
Next, replace $t$ by $bq$ and $c$ by $aq$ so that
\begin{equation*}
\sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{b^nq^{n^2}}{(aq)_n}=\frac{1}{(aq)_N} \sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\left(\frac{b}{a}\right)_n(-1)^na^nq^\frac{n(n+1)}{2}.
\end{equation*}
Now differentiate this identity with respect to $b$ and multiply both sides by $a(aq)_N$ to get
\begin{equation*}
(aq)_N\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{nab^{n-1}q^{n^2}}{(aq)_n}=\displaystyle \sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\left(\frac{b}{a}\right)_n(-1)^{n-1}a^nq^\frac{n(n+1)}{2}\sum_{k=0}^{n-1}\left(\frac{q^k}{1-bq^k/a}\right).
\end{equation*}
Now let $ b \to a $ and note that
\begin{align*}
\lim\limits_{b\to a}\left(\frac{b}{a}\right)_{n}\sum_{k=0}^{n-1}\left(\frac{q^k}{1-\frac{bq^k}{a}}\right)
&=\lim\limits_{b\to a}\left[\left(\frac{b}{a}\right)_n\frac{1}{1-\frac{b}{a}} + \left(\frac{b}{a}\right)_n\left(\frac{q}{1-\frac{bq}{a}}+\cdots+\frac{q^{n-1}}{1-\frac{bq^{n-1}}{a}}\right)\right]\\
&=(q)_{n-1}.
\end{align*}
This results in \eqref{correct}.
\end{proof}
The third entry of Ramanujan is
\begin{equation}
\label{entry3}
\sum_{n=1}^{\infty}\frac{(\frac{b}{a})_na^n}{(1-q^n)(b)_n}=\sum_{n=1}^{\infty}\frac{a^n-b^n}{1-q^n}.
\end{equation}
We now give its finite analogue.
\begin{theorem}
(Finite analogue of Entry 3)\\
We have
\begin{equation}\label{entry3fin}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n\left(\frac{b}{a}\right)_n(a)_{N-n}a^n}{(b)_n(1-q^n)(a)_{N}}=\displaystyle\sum_{m=1}^{\infty}\frac{(a^m-b^m)(1-q^{mN})}{1-q^m}.
\end{equation}
\end{theorem}
\begin{proof}
Divide both sides of \eqref{eq:9} by $1-c/d$ and then let $c=d=1$ thereby obtaining
\begin{align}\label{entry3int}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n\left(\frac{b}{a}\right)_n(a)_{N-n}a^n}{(b)_n(1-q^n)(a)_{N}}
&=\sum_{n=1}^{N}\left(\frac{aq^{n-1}}{1-aq^{n-1}}-\frac{bq^{n-1}}{1-bq^{n-1}}\right)\nonumber\\
&=\sum_{m=1}^{\infty}(a^m-b^m)\sum_{n=0}^{N-1}(q^n)^m\nonumber\\
&=\sum_{m=1}^{\infty}\frac{(a^m-b^m)(1-q^{mN})}{1-q^m}.
\end{align}
\end{proof}
The limiting case $N \longrightarrow \infty $ of \eqref{entry3fin} gives \eqref{entry3}.
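For instance, when $N=1$, both sides of \eqref{entry3fin} reduce, independently of $q$, to
\begin{equation*}
\frac{a-b}{(1-a)(1-b)}=\sum_{m=1}^{\infty}(a^m-b^m),
\end{equation*}
as is easily verified for $|a|, |b|<1$.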
\begin{theorem}
(Finite analogue of Entry 4)\\
The following identity is valid:
\begin{equation*}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{(-1)^{n-1}a^nq^\frac{n(n+1)}{2}(q)_n}{(1-q^n)(aq)_n}=\sum_{n=1}^{N}\frac{aq^n}{1-aq^n}.
\end{equation*}
\end{theorem}
\begin{proof}
Let $a \longrightarrow 0$ in the first equality of \eqref{entry3int} and then replace $b$ by $aq$.
\end{proof}
The limiting case $N \longrightarrow \infty $, in particular, gives Entry 4:
\begin{equation}\label{entry4}
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}a^nq^\frac{n(n+1)}{2}}{(1-q^n)(aq)_n}=\sum_{n=1}^{\infty}\frac{a^nq^n}{1-q^n}.
\end{equation}
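Indeed, as $N\to\infty$, the right-hand side of the above theorem transforms into the right-hand side of \eqref{entry4} upon expanding each summand as a geometric series and interchanging the order of summation:
\begin{equation*}
\sum_{n=1}^{\infty}\frac{aq^n}{1-aq^n}=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}a^mq^{mn}=\sum_{m=1}^{\infty}\frac{a^mq^m}{1-q^m}.
\end{equation*}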
\begin{theorem}
(Finite analogue of Entry 5)\\
We have
\begin{equation}\label{entry5fin}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n(q)_{n-1}(a)_{N-n}a^n}{(a)_n(1-q^n)(a)_{N}}=\displaystyle\sum_{n=1}^{N}\frac{aq^{n-1}}{(1-aq^{n-1})^2}.
\end{equation}
\end{theorem}
\begin{proof}
Let $d=1$ in \ref{eq:9} and then divide both sides by $1-\frac{b}{a}$ so that
\begin{equation*}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{(q)_n\left(\frac{bq}{a}\right)_{n-1}\left(cq\right)_{n-1}(a)_{N-n}a^n}{(b)_n(cq)_n(a)_{N}}=\displaystyle\sum_{n=1}^{N} \left[\begin{array}{c}N\\n\end{array}\right]\frac{\left(\frac{b}{c}\right)_{n-1}(q)_n(cq)_{N-n} c^{n-1}}{(b)_{n-1}(cq)_N}\frac{aq^{n-1}}{(1-aq^{n-1})(1-bq^{n-1})}.
\end{equation*}
Now let $b=a$ and $c=1$ to arrive at \eqref{entry5fin}.
\end{proof}
Let $N \longrightarrow \infty $ in \eqref{entry5fin} to have
\begin{align*}
\sum_{n=1}^{\infty}\frac{(q)_{n-1}a^n}{(1-q^n)(a)_n}&=\sum_{n=1}^{\infty}\frac{aq^{n-1}}{(1-aq^{n-1})^2}\nonumber\\
&=\sum_{m=1}^{\infty}m\left(\frac{a}{q}\right)^m\sum_{n=1}^{\infty}q^{mn}\nonumber\\
&=\sum_{m=1}^{\infty}\frac{ma^m}{1-q^m}.
\end{align*}
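In particular, since $(q)_{n-1}/(q)_n=1/(1-q^n)$, the choice $a=q$ recovers the classical Lambert series evaluation
\begin{equation*}
\sum_{n=1}^{\infty}\frac{q^n}{(1-q^n)^2}=\sum_{m=1}^{\infty}\frac{mq^m}{1-q^m}=\sum_{n=1}^{\infty}\sigma(n)q^n,
\end{equation*}
where $\sigma(n)$ denotes the sum of the positive divisors of $n$.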
\section{An identity involving a finite sum of a ${}_2\phi_{1}$}\label{fa2p1}
In this section, we derive an identity for a finite sum of a ${}_2\phi_{1}$ that is instrumental in proving all of the results in Section \ref{app}. These include, among other things, a new generalization of the generating function version of Andrews' identity for $\textup{spt}(n)$, that is, \eqref{idspt}.
We begin with a lemma.
\begin{lemma}\label{supheinelemma}
We have
\begin{align}\label{supheine}
&\sum_{n=1}^{N}\frac{(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n (q)_{N-n} (cq)_{n}} \sum_{k=1}^n \frac{q^k}{1- q^k} \nonumber\\
&=\frac{(\frac{c}{d})_{\infty}(dq)_{\infty}}{(q)_{N}(cq)_{\infty}(dq^{N+1})_{\infty}} \sum_{k=1}^{N}\left[\begin{array}{c}N\\k\end{array}\right]\frac{d^kq^{k(k+1)}}{(dq)_k(1-q^k)}{_2}\phi_{1}\left[\begin{array}{ccc} dq,&dq^{N+1};\; &\frac{cq^k}{d} \\dq^{k+1}& \end{array}\right] .
\end{align}
\end{lemma}
\begin{proof}
Using van Hamme's identity
\begin{align*}
\sum_{k=1}^{n} \frac{q^k}{1 - q^k} = \sum_{k=1}^{n}\left[\begin{matrix} n\\k\end{matrix}\right]\frac{(-1)^{k-1} q^{k(k+1)/2}}{(1-q^k)},
\end{align*}
in the first step below, we have
\begin{align}\label{befheine}
&\sum_{n=1}^{N}\frac{(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n (q)_{N-n} (cq)_{n}} \sum_{k=1}^n \frac{q^k}{1- q^k}\nonumber\\
& = \displaystyle\sum_{n=1}^{N}\frac{(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n (q)_{N-n}(cq)_{n}} \sum_{k=1}^{n}\left[\begin{matrix} n\\k\end{matrix}\right]\frac{(-1)^{k-1} q^{k(k+1)/2}}{(1-q^k)} \nonumber\\
& =\sum_{k=1}^{N}\frac{(-1)^{k-1} q^{k(k+1)/2}}{(q)_k (1-q^k)}\sum_{n=k}^{N}\frac{(-1)^{n-1}d^{n}\left(\frac{c}{d} \right)_{n} q^{\frac{n(n+1)}{2}}}{(cq)_{n} (q)_{N-n}(q)_{n-k}}\nonumber\\
&=\sum_{k=1}^{N}\frac{ q^{k(k+1)}}{(q)_k (1-q^k)} \sum_{m=0}^{N-k} \frac{(-1)^m d^{k+m}\left(\frac{c}{d} \right)_{k+m} q^{\frac{m(m+1)}{2}+mk}}{(cq)_{k+m}(q)_{N-m-k}(q)_m }\nonumber\\
&=\sum_{k=1}^{N}\frac{ q^{k(k+1)}}{(q)_k (q)_{N-k} (1-q^k)}\sum_{m=0}^{N-k} \frac{\left(\frac{c}{d} \right)_{k+m}d^{k+m}\left( q^{-(N-k)} \right)_mq^{(N+1)m} }{(q)_m(cq)_{m+k} }\nonumber\\
&=\sum_{k=1}^{N}\frac{\left(\frac{c}{d}\right)_k d^kq^{k(k+1)}}{\left(cq\right)_k(q)_k (q)_{N-k} (1-q^k)}\sum_{m=0}^{N-k} \frac{\left(\frac{cq^{k}}{d} \right)_{m}\left( q^{-(N-k)} \right)_m(dq^{N+1})^m }{(q)_m\left(cq^{k+1}\right)_{m} },
\end{align}
where in the second-last step, we invoked \eqref{ef} with $x=1$ and $N$ replaced by $N-k$.
The inner sum is now handled using Heine's transformation \cite[p.~359, Equations (III.1), (III.2)]{gasperrahman}, namely, for $|z|<1$ and $|\gamma|<|\beta|<1$,
\begin{equation}\label{heine}
_{2}\phi_{1}\left[\begin{array}{ccc} \alpha,&\beta\; ;&z\\\gamma& \end{array}\right]
=\frac{(\beta)_{\infty}(\alpha z)_{\infty}}{(\gamma)_{\infty}(z)_{\infty}}\;_{2}\phi_{1}\left[\begin{array}{ccc} \frac{\gamma}{\beta},&z\; ;&\beta\\\alpha z& \end{array}\right].
\end{equation}
Let $\alpha=q^{-(N-k)} $, $\beta=\frac{cq^{k}}{d} $, $\gamma=cq^{k+1}$ and $z=dq^{N+1} $ in \eqref{heine} so that
\begin{align}\label{aftheine}
&\sum_{m=0}^{N-k} \frac{\left(\frac{cq^{k}}{d} \right)_{m}\left( q^{-(N-k)} \right)_m(dq^{N+1})^m }{(q)_m\left(cq^{k+1}\right)_{m} }\nonumber\\
&=\frac{(dq^{k+1})_{\infty}(\frac{cq^{k}}{d})_{\infty}}{(cq^{k+1})_{\infty}(dq^{N+1})_{\infty}}\;_{2}\phi_{1}\left[\begin{array}{ccc} dq,&dq^{N+1};\; &\frac{cq^k}{d} \\dq^{k+1}& \end{array}\right].
\end{align}
Substituting \eqref{aftheine} in \eqref{befheine} results in \eqref{supheine} upon simplification.
\end{proof}
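We note in passing that van Hamme's identity used above is easily checked for small $n$; for instance, when $n=2$, both of its sides equal
\begin{equation*}
\frac{q}{1-q}+\frac{q^2}{1-q^2}=(1+q)\frac{q}{1-q}-\frac{q^3}{1-q^2}=\frac{q+2q^2}{1-q^2}.
\end{equation*}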
We also need the following lemma whose proof is similar to that of Corollary 4.1 of \cite{dems}. Hence we give only the outline of the proof.
\begin{lemma}\label{corlemma}
We have
\begin{equation*}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{\left(\frac{c}{d}\right)_n(d)^n(-1)^{n-1} q^\frac{n(n+1)}{2}}{(cq)_n} =\left(1- \frac{(dq)_N}{(c q)_N} \right).
\end{equation*}
\end{lemma}
\begin{proof}
Let $z=1$ in \eqref{eq:13}. Then use the fact $(c q)_{N-n}/(cq)_N = 1/(cq^{N+1-n})_n$ so that
\begin{align*}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right] \frac{\left(\frac{c}{d}\right)_n(d)^n(-1)^{n-1} q^\frac{n(n+1)}{2}}{(cq)_n}
=-\sum_{n=1}^{N} \frac{\left(\frac{d}{c}\right)_{n}(q^{-N})_nq^n} {(q)_n\left(\frac{q^{-N}}{c}\right)_n}.
\end{align*}
Now use the $q$-Chu-Vandermonde identity \cite[p.~354, II(6)]{gasperrahman}
\begin{equation}\label{q-Chu-Vandermonde}
{}_{2}\phi_{1}\left[ \begin{matrix} a, q^{-N} \\
x \end{matrix} \, ; q, q \right]
= \frac{\left(\frac{x}{a}\right)_N a^N}{(x)_N}
\end{equation}
with $x= q^{-N}/c$ and $a = d/c$ to write the right-hand side as a $q$-product which completes the proof.
\end{proof}
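We also record a consequence of Lemma \ref{corlemma} that serves as a useful sanity check: letting $d\to0$ in the lemma, with the help of $\lim_{d\to0}\left(\frac{c}{d}\right)_nd^n=(-c)^nq^{\frac{n(n-1)}{2}}$, gives
\begin{equation*}
\sum_{n=0}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{c^nq^{n^2}}{(cq)_n}=\frac{1}{(cq)_N},
\end{equation*}
a finite analogue of the well-known identity $\sum_{n=0}^{\infty}\frac{c^nq^{n^2}}{(q)_n(cq)_n}=\frac{1}{(cq)_{\infty}}$.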
\begin{theorem}\label{2phi1id}
The following identity holds:
\begin{align}\label{2phi1ideqn}
&\sum_{n=1}^{N}\frac{n(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n (q)_{N-n} (cq)_{n}} + \frac{(\frac{c}{d})_{\infty}(dq)_{\infty}}{(q)_{N}(cq)_{\infty}(dq^{N+1})_{\infty}} \sum_{k=1}^{N}\bigg[\begin{array}{c}N\\k\end{array}\bigg]\frac{d^kq^{k(k+1)} }{(dq)_k(1-q^k)}{_2}\phi_{1}\bigg(\begin{array}{ccc} dq,&dq^{N+1};\; &\frac{cq^k}{d} \\dq^{k+1}& \end{array}\bigg)\nonumber\\
&= \frac{c}{(c-d)(q)_N}\left(1-\frac{(dq)_N}{(cq)_N}\right) +\frac{1}{(cq)_N} \sum_{k=1}^N \frac{\left(\frac{cq}{d}\right)_k (dq)_{N-k}(dq)^k}{(q)_k (q)_{N-k}(1-q^k) }.
\end{align}
\end{theorem}
\begin{proof}
Differentiate both sides of \eqref{eq:13} w.r.t $z$ and then let $z=1$ to obtain
\begin{align}\label{Putting_z=1}
\sum_{n=1}^{N}\frac{(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n (q)_{N-n} (cq)_{n}} \left(n + \sum_{k=1}^n \frac{q^k}{1- q^k} \right) =:T_1+T_2,
\end{align}
where
\begin{align*}
T_1& =\frac{d-c}{c}\sum_{n=1}^N \frac{\left(\frac{dq}{c}\right)_{n-1} (c q)_{N-n} (cq)^n }{(q)_n (c q)_N (q)_{N-n} } \nonumber\\
T_2&= \frac{d-c}{c} \sum_{n=1}^N \frac{\left(\frac{dq}{c}\right)_{n-1} (c q)_{N-n} (cq)^n }{(q)_n (c q)_N (q)_{N-n} } \left(-\sum_{k=1}^{n-1} \frac{q^kd/c}{1- q^kd/c} + \sum_{k=1}^n \frac{q^k}{1-q^k} \right).
\end{align*}
Using \eqref{eq:13} with $z=1$ and then invoking Lemma \ref{corlemma}, we see that
\begin{equation}\label{s1}
T_1=\frac{1}{(q)_N}\left(1-\frac{(dq)_N}{(cq)_N}\right).
\end{equation}
In \cite[Corollary 3.1]{guozhang}, Guo and Zhang have shown that if $n\geq 0$ and $0\leq m\leq n$, then
\begin{align*}
&\sum_{k=0\atop k\neq m}^{n}\left[\begin{matrix} n\\k\end{matrix}\right]\frac{\left(\frac{q}{x}\right)_k(x)_{n-k}}{1-q^{k-m}}x^k=(-1)^mq^{\frac{m(m+1)}{2}}\left[\begin{matrix} n\\m\end{matrix}\right](xq^{-m})_n\left(\sum_{k=0}^{n-1}\frac{xq^{k-m}}{1-xq^{k-m}}-\sum_{k=0\atop k\neq m}^{n}\frac{q^{k-m}}{1-q^{k-m}}\right).
\end{align*}
The $m=0$ case of the above identity gives
\begin{align}\label{m0}
\sum_{k=1}^{n}\frac{q^k}{1-q^k}-\sum_{k=1}^{n-1}\frac{xq^k}{1-xq^k}=\frac{x}{1-x}-\frac{1}{(x)_n}\sum_{k=1}^{n}\left[\begin{matrix} n\\k\end{matrix}\right]\frac{\left(\frac{q}{x}\right)_k(x)_{n-k}x^k}{1-q^k}.
\end{align}
Now invoke \eqref{m0} with $x=d/c$ to simplify the expression within the parentheses in $T_2$ to see that
\begin{align*}
T_2&=\frac{d-c}{c} \sum_{n=1}^N \frac{\left(\frac{dq}{c}\right)_{n-1} (c q)_{N-n} (cq)^n }{(q)_n (c q)_N (q)_{N-n} }\left( \frac{d}{c-d} - \frac{1}{\left(\frac{d}{c}\right)_n} \sum_{k=1}^n \left[\begin{matrix} n\\k\end{matrix}\right] \frac{\left(\frac{qc}{d}\right)_k \left(\frac{d}{c}\right)_{n-k} \left(\frac{d}{c}\right)^{k}}{1- q^k}\right)\nonumber\\
&=\frac{d}{(c-d)(q)_N}\left(1-\frac{(dq)_N}{(cq)_N}\right) +T_{2}^{*},
\end{align*}
where
\begin{equation}\label{s2s}
T_2^{*}:=\frac{1}{(cq)_N} \sum_{n=1}^N \frac{ \left(cq\right)_{N-n} (cq)^n }{(q)_n (q)_{N-n} } \sum_{k=1}^n \left[\begin{matrix} n\\k\end{matrix}\right] \frac{\left(\frac{qc}{d}\right)_k \left(\frac{d}{c}\right)_{n-k} \left(\frac{d}{c}\right)^{k}}{1- q^k},
\end{equation}
and in the last step, we used \eqref{s1}. Now
\begin{align*}
T_2^{*}& =\frac{1}{(cq)_N} \sum_{k=1}^N \frac{\left(\frac{cq}{d}\right)_k (dq)^k}{(q)_k (1-q^k) } \sum_{j=0}^{N-k} \frac{\left(\frac{d}{c}\right)_j (cq)^j (cq)_{N-j-k}}{(q)_j(q)_{N-j-k}} \nonumber \\
& = \frac{1}{(cq)_N} \sum_{k=1}^N \frac{\left(\frac{cq}{d}\right)_k (cq)_{N-k}(dq)^k}{(q)_k (1-q^k)(q)_{N-k} } \sum_{j=0}^{N-k} \frac{\left(\frac{d}{c}\right)_j q^j \left(q^{-(N-k)}\right)_j}{(q)_j \left(q^{-(N-k)}/c\right)_j},
\end{align*}
where in the last step, we used \eqref{eq:11} with $N$ replaced by $N-k$ and $n$ replaced by $j$. Now apply \eqref{q-Chu-Vandermonde} with $a = d/c$, $x= q^{-(N-k)}/c$ and $N$ replaced by $N-k$ to see that
\begin{equation}\label{appl_Chu}
\sum_{j=0}^{N-k} \frac{\left(\frac{d}{c}\right)_j q^j \left(q^{-(N-k)}\right)_j}{(q)_j \left(q^{-(N-k)}/c\right)_j}=\frac{(dq)_{N-k}}{(cq)_{N-k}}.
\end{equation}
From \eqref{appl_Chu} and \eqref{s2s},
\begin{align*}
T_2^{*}=\frac{1}{(cq)_N} \sum_{k=1}^N \frac{\left(\frac{cq}{d}\right)_k (dq)^k(dq)_{N-k}}{(q)_k (1-q^k)(q)_{N-k} },
\end{align*}
which, along with \eqref{s1}, implies
\begin{align}\label{final_second term}
T_2= \frac{d}{(c-d)(q)_N}\left(1-\frac{(dq)_N}{(cq)_N}\right) +\frac{1}{(cq)_N} \sum_{k=1}^N \frac{\left(\frac{cq}{d}\right)_k (dq)^k(dq)_{N-k}}{(q)_k (1-q^k)(q)_{N-k} }.
\end{align}
Hence from \eqref{Putting_z=1}, \eqref{s1} and \eqref{final_second term},
\begin{align*}
& \sum_{n=1}^{N}\frac{(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n (q)_{N-n} (cq)_{n}} \left(n + \sum_{k=1}^n \frac{q^k}{1- q^k} \right) \\
&=\frac{c}{(c-d)(q)_N}\left(1-\frac{(dq)_N}{(cq)_N}\right) +\frac{1}{(cq)_N} \sum_{k=1}^N \frac{\left(\frac{cq}{d}\right)_k (dq)^k(dq)_{N-k}}{(q)_k (1-q^k)(q)_{N-k} }.
\end{align*}
Finally from the above equation and Lemma \ref{supheinelemma}, we arrive at \eqref{2phi1ideqn}.
\end{proof}
\section{Applications of Theorem \ref{2phi1id}}\label{app}
There are several applications of Theorem \ref{2phi1id}. We begin with the most appealing one generalizing the generating function version of Andrews' famous identity
\begin{equation} \label{idspt}
\textup{spt}(n)=np(n)-\frac{1}{2}N_2(n).
\end{equation}
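Recall that $p(n)$ denotes the number of partitions of $n$, $\textup{spt}(n)$ counts the total number of appearances of the smallest parts in all partitions of $n$, and $N_2(n)=\sum_{m=-\infty}^{\infty}m^2N(m,n)$ is the second rank moment. As an illustration of \eqref{idspt}, the partitions of $4$ are $4$, $3+1$, $2+2$, $2+1+1$ and $1+1+1+1$, so that $\textup{spt}(4)=1+1+2+2+4=10$; their ranks are $3, 1, 0, -1, -3$, whence $N_2(4)=20$, and indeed $4p(4)-\frac{1}{2}N_2(4)=4\cdot5-10=10=\textup{spt}(4)$.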
\subsection{A generalization of Andrews' identity}
\begin{theorem}\label{sptgenthm}
We have
\begin{align}\label{sptgen}
\frac{1 }{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-d)^{n-1}(q/d)_{n-1}q^{n(n+1)/2}}{(q)_{n}^2}
&=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{nq^n(dq)_{n-1}}{(q)_n}\nonumber\\
&\quad-\frac{(dq)_{\infty}}{(q)_{\infty}}\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_{j}^{2}}\sum_{n=1}^{j}\frac{q^n}{(1-dq^n)(1-q^n)}.
\end{align}
\end{theorem}
\begin{proof}
Let $c=1$ in Theorem \ref{2phi1id} and then divide both sides by $d-1$ to obtain after simplification
\begin{align}\label{befmmain}
&\sum_{n=1}^{N}\frac{n(-d)^{n-1}\left(\frac{q}{d} \right)_{n-1}q^{\frac{n(n+1)}{2}}}{(q)_n^2 (q)_{N-n}} + \frac{(\frac{q}{d})_{\infty}(dq)_{\infty}}{d(q)_{N}(dq^{N+1})_{\infty}(q)_{\infty}} \sum_{k=1}^{N}\bigg[\begin{array}{c}N\\k\end{array}\bigg]\frac{d^kq^{k(k+1)} }{(dq)_k(1-q^k)}{_2}\phi_{1}\bigg(\begin{array}{ccc} dq,&dq^{N+1};\; &\frac{q^k}{d} \\dq^{k+1}& \end{array}\bigg)\nonumber\\
&= \frac{1}{(1-d)^2(q)_N}\left(\frac{(dq)_N}{(q)_N}-1\right) +\frac{1}{(d-1)(q)_N} \sum_{k=1}^N \frac{\left(\frac{q}{d}\right)_k (dq)_{N-k}(dq)^k}{(q)_k (q)_{N-k}(1-q^k) }.
\end{align}
Letting $N\to\infty$ gives
\begin{align}\label{mmain}
&\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-d)^{n-1}\left(\frac{q}{d} \right)_{n-1}q^{\frac{n(n+1)}{2}}}{(q)_n^2} + \frac{(\frac{q}{d})_{\infty}(dq)_{\infty}}{d(q)_{\infty}^{2}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)} }{(q)_{k}(dq)_k(1-q^k)}\sum_{j=1}^{\infty}\frac{(dq)_{j}(q^{k}/d)^{j}}{(dq^{k+1})_j(q)_j}\nonumber\\
&= \frac{1}{(1-d)^2(q)_{\infty}}\left(\frac{(dq)_{\infty}}{(q)_{\infty}}-1\right) +\frac{(dq)_{\infty}}{(d-1)(q)_{\infty}^2} \sum_{k=1}^{\infty} \frac{\left(\frac{q}{d}\right)_k (dq)^k}{(q)_k (1-q^k) }.
\end{align}
Now using Jackson's transformation \cite[p.~526]{aar}
\begin{equation}\label{jackson}
\sum_{n=0}^{\infty}\frac{(\alpha)_n(\beta)_n}{(\gamma)_n(q)_n}z^n=\frac{(\alpha z)_{\infty}}{(z)_{\infty}}\sum_{n=0}^{\infty}\frac{(\alpha)_n(\gamma/\beta)_n(-\beta z)^nq^{n(n-1)/2}}{(\gamma)_n(\alpha z)_n(q)_n}
\end{equation}
with $\alpha=dq, \gamma=dq^{k+1}, \beta=0$ and $z=q^k/d$, we see that the inner sum in \eqref{mmain} transforms as
\begin{align*}
\sum_{j=0}^{\infty}\frac{(dq)_{j}(q^{k}/d)^{j}}{(dq^{k+1})_j(q)_j}=\frac{(q^{k+1})_{\infty}}{(q^{k}/d)_{\infty}}\sum_{j=0}^{\infty}\frac{(dq)_{j}q^{j^2+2jk}}{(dq^{k+1})_j(q^{k+1})_j(q)_j}.
\end{align*}
Thus, the second expression on the left-hand side of \eqref{mmain} can be written as
\begin{align*}
&\frac{(\frac{q}{d})_{\infty}(dq)_{\infty}}{d(q)_{\infty}^{2}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)} }{(q)_{k}(dq)_k(1-q^k)}\sum_{j=0}^{\infty}\frac{(dq)_{j}(q^{k}/d)^{j}}{(dq^{k+1})_j(q)_j}\nonumber\\
&= \frac{(\frac{q}{d})_{\infty}(dq)_{\infty}}{d(q)_{\infty}^{2}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)} }{(q)_{k}(dq)_k(1-q^k)}\frac{(q^{k+1})_{\infty}}{(q^{k}/d)_{\infty}}\sum_{j=0}^{\infty}\frac{(dq)_{j}q^{j^2+2jk}}{(dq^{k+1})_j(q^{k+1})_j(q)_j}\nonumber\\
&=\frac{(dq)_{\infty}}{d(q)_{\infty}}\sum_{k=1}^{\infty}\frac{(dq)^k(q/d)_{k-1}}{(q)_k(1-q^k)}\sum_{j=0}^{\infty}\frac{(dq)_{j}q^{(j+k)^2}}{(dq)_{j+k}(q)_{j+k}(q)_j}\nonumber\\
&=\frac{(dq)_{\infty}}{d(q)_{\infty}}\sum_{k=1}^{\infty}\frac{(dq)^k(q/d)_{k-1}}{(q)_k(1-q^k)}\sum_{j=k}^{\infty}\frac{(dq)_{j-k}q^{j^2}}{(dq)_{j}(q)_{j}(q)_{j-k}}\nonumber\\
&=\frac{(dq)_{\infty}}{d(q)_{\infty}}\sum_{j=1}^{\infty}\frac{q^{j^2}}{(dq)_j(q)_j}\sum_{k=1}^{j}\frac{(dq)^k(q/d)_{k-1}(dq)_{j-k}}{(q)_k(q)_{j-k}(1-q^k)}.
\end{align*}
Next, using \eqref{m0} with $x=dq$ and simplifying, we find that
\begin{align}\label{m0app}
\sum_{k=1}^{j}\frac{(dq)^k(q/d)_{k-1}(dq)_{j-k}}{(q)_k(q)_{j-k}(1-q^k)}=\frac{(dq)_j}{(1-1/d)(q)_j}\left(\sum_{k=1}^{j}\frac{dq^k}{1-dq^k}-\sum_{k=1}^{j}\frac{q^k}{1-q^k}\right).
\end{align}
Substituting \eqref{m0app} in the last expression above, the second expression on the left-hand side of \eqref{mmain} finally simplifies to
\begin{align}\label{m0bef}
\frac{(\frac{q}{d})_{\infty}(dq)_{\infty}}{d(q)_{\infty}^{2}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)} }{(q)_{k}(dq)_k(1-q^k)}\sum_{j=1}^{\infty}\frac{(dq)_{j}(q^{k}/d)^{j}}{(dq^{k+1})_j(q)_j}&=\frac{(dq)_{\infty}}{(d-1)(q)_{\infty}}\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_j^{2}}\sum_{k=1}^{j}\left(\frac{dq^k}{1-dq^k}-\frac{q^k}{1-q^k}\right)\nonumber\\
&=\frac{(dq)_{\infty}}{(q)_{\infty}}\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_j^{2}}\sum_{k=1}^{j}\frac{q^k}{(1-dq^k)(1-q^k)}.
\end{align}
Next, let $F_d(q)$ denote the expression on the right-hand side of \eqref{mmain}. Using the identity \cite[Corollary 2.4]{dixitmaji18}, namely
\begin{align*}
\sum_{k=1}^{\infty}\frac{d^{k-1}(q/d)_{k-1} q^k}{(q)_k}=\frac{1}{1-d} \left(1- \frac{(q)_{\infty} }{(d q)_\infty } \right),
\end{align*}
in the second step below, we see that
\begin{align}\label{m0last0}
F_d(q)&=\frac{(dq)_{\infty}}{(d-1)(q)_{\infty}^{2}}\left\{\frac{1}{(1-d)}\left(\frac{(q)_{\infty}}{(dq)_{\infty}}-1\right)+\sum_{k=1}^{\infty} \frac{\left(\frac{q}{d}\right)_k (dq)^k}{(q)_k (1-q^k) }\right\}\nonumber\\
&=\frac{(dq)_{\infty}}{(d-1)(q)_{\infty}^{2}}\left\{-\frac{1}{d}\sum_{k=1}^{\infty}\frac{(q/d)_{k-1} (dq)^k}{(q)_k}+\sum_{k=1}^{\infty} \frac{\left(\frac{q}{d}\right)_k (dq)^k}{(q)_k (1-q^k) }\right\}\nonumber\\
&=\frac{(dq)_{\infty}}{(d-1)(q)_{\infty}^{2}}\sum_{k=1}^{\infty}\frac{(q/d)_{k-1} (dq)^k}{(q)_k}\left(-\frac{1}{d}+\frac{1-q^k/d}{1-q^k}\right)\nonumber\\
&=\frac{(dq)_{\infty}}{d(q)_{\infty}^{2}}\sum_{k=1}^{\infty}\frac{(q/d)_{k-1} (dq)^k}{(q)_k(1-q^k)}\nonumber\\
&=\frac{(dq)_{\infty}}{(q)_{\infty}^2}\sum_{n=1}^{\infty}\frac{q^n}{(1-dq^n)(1-q^n)},
\end{align}
where in the last step we used the special case $z=1$ of the identity
\begin{align*}
\sum_{n=1}^{\infty} \frac{(q)_{n-1} z^nq^n }{ (1-d q^n) (zq)_n } &=z\sum_{n=1}^{\infty}\frac{(zq/d)_{n-1}}{(zq)_n}\frac{d^{n-1}q^n}{1-zq^{n}}
\end{align*}
which is valid for $|zq|<1$ and $|dq|<1$.
For $|q|<1$ and $d\neq q^{-m}, m\in\mathbb{N}$, Andrews \cite[p.~159]{ms_problem} has generalized Uchimura's identity \cite[Theorem 2]{uchimura} as follows:
\begin{align}\label{andrewsgen}
\sum_{n=1}^{\infty}\frac{q^n}{(1-dq^n)(1-q^n)}=\sum_{n=1}^{\infty}\frac{nq^n(q^{n+1})_{\infty}}{(dq^n)_{\infty}}.
\end{align}
Hence substituting \eqref{andrewsgen} in \eqref{m0last0} leads to
\begin{align}\label{m0last}
F_d(q)=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{nq^n(dq)_{n-1}}{(q)_n}.
\end{align}
Substituting \eqref{m0bef} and \eqref{m0last} in \eqref{mmain} and rearranging, we arrive at \eqref{sptgen}.
\end{proof}
\begin{corollary}
The identity in \eqref{idspt} holds.
\end{corollary}
\begin{proof}
Let $d=1$ in \eqref{sptgen}. This gives
\begin{align}\label{mmaind1}
\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}q^{\frac{n(n+1)}{2}}}{(q)_n(1-q^n)} &=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{q^n}{(1-q^n)^2}-\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_{j}^{2}}\sum_{n=1}^{j}\frac{q^n}{(1-q^n)^2}.
\end{align}
From \cite[Theorem 3.8]{agl13},
\begin{equation}\label{1}
\sum_{n=1}^{\infty}\textup{spt}(n)q^n= \frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}q^{\frac{n(n+1)}{2}}}{(q)_n(1-q^n)} ,
\end{equation}
whereas, from \cite[Equation (3.3)]{andrews08},
\begin{align}\label{2}
\sum_{n=1}^{\infty}np(n)q^n&=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{nq^n}{1-q^n}.
\end{align}
Moreover, from \cite[Equation (7.14)]{dixitmaji18},
\begin{align}\label{3}
\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_{j}^{2}}\sum_{n=1}^{j}\frac{q^n}{(1-q^n)^2}=\frac{1}{2}\left.\frac{d^2}{dz^2}\sum_{j=0}^{\infty}\frac{q^{j^2}}{(zq)_j(z^{-1}q)_j}\right|_{z=1}.
\end{align}
From \eqref{mmaind1}, \eqref{1}, \eqref{2}, \eqref{3}, and the elementary fact that $\sum_{n=1}^{\infty}\frac{q^n}{(1-q^n)^2}=\sum_{n=1}^{\infty}\frac{nq^n}{1-q^n}$, we arrive at
\begin{align*}
\sum_{n=1}^{\infty}\textup{spt}(n)q^n=\sum_{n=1}^{\infty}np(n)q^n-\frac{1}{2}\sum_{n=1}^{\infty}N_2(n)q^n,
\end{align*}
which establishes \eqref{idspt} upon comparing the coefficients of $q^n$ on both sides.
\end{proof}
\begin{corollary}
We have
\begin{align*}
\sum_{n=1}^{\infty}\frac{n(-q)_{n-1}q^{n(n+1)/2}}{(q)_n^{2}}=\sum_{n=1}^{\infty}\frac{nq^n(-q)_{n-1}}{(q)_n}-(-q)_{\infty}\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_{j}^{2}}\sum_{n=1}^{j}\frac{q^n}{(1-q^{2n})}.
\end{align*}
\end{corollary}
\begin{proof}
Let $d=-1$ in \eqref{sptgen} and multiply the resulting identity by $(q)_{\infty}$.
\end{proof}
\begin{remark}
We observe that $\displaystyle\sum_{n=1}^{\infty}\frac{nq^n(-q)_{n-1}}{(q)_n}$ is the generating function of the sum of largest parts (counted with multiplicity $1$) in those overpartitions of a positive integer whose largest part is always overlined. For example, there are seven overpartitions of $4$ whose largest part is always overlined, namely, $\overline{4}, \overline{3}+1, \overline{3}+\overline{1}, \overline{2}+\overline{2}, \overline{2}+\overline{1}+1, \overline{2}+1+1$ and $\overline{1}+\overline{1}+\overline{1}+\overline{1}$. Then the coefficient of $q^4$ in the series expansion of $\displaystyle\sum_{n=1}^{\infty}\frac{nq^n(-q)_{n-1}}{(q)_n}$ is $4+3+3+2+2+2+1=17$.
\end{remark}
\subsection{A generalization of an identity for the generating function of $N_{\textup{SC}}(n)$ }
Let $V$ be the set of vector partitions, that is, $V=\mathcal{D}\times\mathcal{P}\times\mathcal{P}$, where $\mathcal{P}$ denotes the set of unrestricted partitions and $\mathcal{D}$ denotes the set of partitions into distinct parts. With $s(\pi)$ and $\#(\pi)$ denoting respectively the smallest part and the number of parts of a partition $\pi$, define $S$ to be the set of vector partitions given below:
\begin{equation*}
S:=\{\vec{\pi}=(\pi_1, \pi_2, \pi_3)\in V: 1\leq s(\pi_1)<\infty\hspace{1mm}\text{and}\hspace{1mm}s(\pi_1)\leq\min(s(\pi_2), s(\pi_3))\}.
\end{equation*}
Let $\omega_1(\vec{\pi})=(-1)^{\#(\pi_1)-1}$. Let $\imath: S\to S$ be the involution map defined by
\begin{equation*}
\imath(\vec{\pi})=\imath(\pi_1, \pi_2, \pi_3)=(\pi_1, \pi_3, \pi_2).
\end{equation*}
The partitions $\vec{\pi}=(\pi_1, \pi_2, \pi_3)$ from the set $S$ are simply called $S$-partitions. Define an $S$-partition $\vec{\pi}=(\pi_1, \pi_2, \pi_3)$ to be a self-conjugate $S$-partition if it is a fixed point of $\imath$, that is, if and only if $\pi_2=\pi_3$. Moreover, let $N_{\textup{SC}}(n)$ denote the number of self-conjugate $S$-partitions counted according to the weight $\omega_1$, that is,
\begin{equation*}
N_{\textup{SC}}(n)=\sum_{\vec{\pi}\in S, |\vec{\pi}|=n \atop \imath(\vec{\pi})=\vec{\pi}}\omega_1(\vec{\pi}).
\end{equation*}
Andrews, Garvan and Liang \cite[Theorem 3.8, Equation (3.25)]{agl13} showed that
\begin{align*}
\sum_{n=1}^{\infty}N_{\textup{SC}}(n)q^n=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}q^{n(n+1)/2}}{(q)_n(1+q^n)}.
\end{align*}
In \cite[Corollary 2.12]{dixitmaji18}, the following result for the generating function of $N_{\textup{SC}}(n)$ was proved:
\begin{align}\label{nsceqn}
&(q)_{\infty}\sum_{n=1}^{\infty}N_{\textup{SC}}(n)q^n+\frac{1}{2}\frac{(q)_{\infty}}{(-q)_{\infty}}\sum_{n=1}^{\infty}\frac{q^{\frac{n(n+1)}{2}}}{(1-q^n)(q)_n}\left(\frac{(-q)_n}{(q)_n}-1\right)\nonumber\\
&=\frac{1}{4}-\frac{1}{4}\frac{(q)_{\infty}}{(-q)_{\infty}}+\frac{1}{2}\frac{(q)_{\infty}}{(-q)_{\infty}}\sum_{n=1}^{\infty}\frac{(-q)_n}{(q)_n}\frac{q^n}{1-q^n}.
\end{align}
In what follows, we generalize the above identity by means of an extra parameter $d$.
\begin{theorem}
We have
\begin{align}\label{nscgen}
&\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}\left(\frac{-1}{d}\right)_{n}d^nq^{n(n+1)/2}}{(q^2;q^2)_n}+\frac{(\frac{-1}{d})_{\infty}(dq)_{\infty}}{(-q)_{\infty}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)}}{(q)_k(dq)_k(1-q^k)}\sum_{n=0}^{\infty} \frac{(dq)_{n}}{(dq^{k+1} )_{n}(q)_{n}}\left(\frac{-q^{k}}{d} \right)^{n}\nonumber\\
&=\frac{1}{(1+d)}\left(1-\frac{(dq)_{\infty}}{(-q)_{\infty}}\right)+\frac{(dq)_{\infty}}{(-q)_{\infty}}\sum_{n=1}^{\infty}\frac{(-q/d)_n(dq)^n}{(q)_n(1-q^n)}.
\end{align}
\end{theorem}
\begin{proof}
Let $c=-1$ in Theorem \ref{2phi1id} and then let $N\to\infty$.
\end{proof}
\begin{corollary}
Identity \eqref{nsceqn} holds.
\end{corollary}
\begin{proof}
Let $d=1$ in \eqref{nscgen} and divide both sides by $2$, noting that $(-1)_n=2(-q)_{n-1}$ for $n\geq1$. This results in
\begin{align}\label{aux0}
&\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}q^{n(n+1)/2}}{(q)_n(1+q^n)}+(q)_{\infty}\sum_{k=1}^{\infty}\frac{q^{k(k+1)}}{(q)_k^{2}(1-q^k)}F(0;q^k;-q^k)\nonumber\\
&=\frac{1}{4}\left(1-\frac{(q)_{\infty}}{(-q)_{\infty}}\right)+\frac{1}{2}\frac{(q)_{\infty}}{(-q)_{\infty}}\sum_{n=1}^{\infty}\frac{(-q)_n}{(q)_n}\frac{q^n}{1-q^n}.
\end{align}
But from \cite[Equation (7.26)]{dixitmaji18}, we have
\begin{align}\label{aux}
(q)_{\infty}\sum_{k=1}^{\infty}\frac{q^{k(k+1)}}{(q)_k^{2}(1-q^k)}F(0;q^k;-q^k)=\frac{1}{2}\frac{(q)_{\infty}}{(-q)_{\infty}}\sum_{k=1}^{\infty}\frac{q^{k(k+1)/2}}{(q)_k(1-q^k)}\left(\frac{(-q)_k}{(q)_k}-1\right).
\end{align}
Substituting \eqref{aux} in \eqref{aux0}, we arrive at \eqref{nsceqn}.
\end{proof}
\begin{remark}
If we let $d=-1$ in \eqref{nscgen}, we simply obtain \eqref{entry4} with $a=-1$.
\end{remark}
\subsection{Other special cases}
\begin{theorem}
We have
\begin{align}\label{c=0}
\frac{1}{(dq)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}d^nq^{n(n+1)/2}}{(q)_n}+\sum_{n=1}^{\infty}\frac{d^nq^{n(n+1)}}{(q)_n(dq)_n(1-q^n)}=\sum_{n=1}^{\infty}\frac{(dq)^n}{(q)_n(1-q^n)}.
\end{align}
\end{theorem}
\begin{proof}
Let $c=0$ in Theorem \ref{2phi1id} to obtain
\begin{align*}
\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]n(-1)^{n-1}d^nq^{n(n+1)/2}+(dq)_N\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{d^nq^{n(n+1)}}{(dq)_n(1-q^n)}=\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{(dq)^n(dq)_{N-n}}{1-q^n}.
\end{align*}
Now let $N\to\infty$ in the above identity to arrive at \eqref{c=0}.
\end{proof}
\begin{corollary}
We have
\begin{align*}
\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}q^{n(n+1)/2}}{(q)_n}+\sum_{n=1}^{\infty}\frac{q^{n(n+1)}}{(q)_n^{2}(1-q^n)}=\sum_{n=1}^{\infty}\frac{q^n}{(q)_n(1-q^n)}.
\end{align*}
\end{corollary}
\begin{proof}
Let $d=1$ in \eqref{c=0}.
\end{proof}
\begin{remark}
Comparing with \cite[Corollary 2.10]{dixitmaji18}, we readily see that
\begin{equation*}
\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}q^{n(n+1)/2}}{(q)_n}=\sum_{n=1}^{\infty}\frac{q^n}{1-q^n}.
\end{equation*}
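In other words, the right-hand side above equals $\sum_{n=1}^{\infty}d(n)q^n$, where $d(n)$ denotes the number of divisors of $n$.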
\end{remark}
\begin{corollary}
We have
\begin{align*}
\frac{-1}{(-q)_{\infty}}\sum_{n=1}^{\infty}\frac{nq^{n(n+1)/2}}{(q)_n}+\sum_{n=1}^{\infty}\frac{(-1)^nq^{n(n+1)}}{(q^2;q^2)_n(1-q^n)}=\sum_{n=1}^{\infty}\frac{(-q)^n}{(q)_n(1-q^n)}.
\end{align*}
\end{corollary}
\begin{proof}
Let $d=-1$ in \eqref{c=0}.
\end{proof}
\begin{remark}
Note that $\displaystyle\sum_{n=1}^{\infty}\frac{nq^{n(n+1)/2}}{(q)_n}$ is the generating function of the number of parts in all partitions of a positive integer into distinct parts.
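For instance, the coefficient of $q^4$ in this series is $1+2=3$, corresponding to the partitions $4$ and $3+1$ of $4$ into distinct parts.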
\end{remark}
\begin{theorem}
The following identity is valid:
\begin{align}\label{d tends to zero}
\sum_{n=0}^{\infty}\frac{nc^nq^{n^2}}{(q)_n(cq)_n}-\sum_{k=1}^{\infty}\frac{(-c)^kq^{\frac{k(k+1)}{2}}}{(q)_k(1-q^k)}\sum_{j=0}^{\infty}\frac{c^jq^{(j+k)^2}}{(cq)_{j+k}(q)_j}=\frac{1}{(cq)_{\infty}}-1-\frac{1}{(cq)_{\infty}}\sum_{k=1}^{\infty}\frac{(-c)^kq^{\frac{k(k+3)}{2}}}{(q)_k(1-q^k)}.
\end{align}
\end{theorem}
\begin{proof}
Let $N\to\infty$ in Theorem \ref{2phi1id} thereby obtaining
\begin{align}\label{interm0}
&\frac{1}{(q)_\infty}\sum_{n=1}^{\infty}\frac{n(-1)^{n-1}\left(\frac{c}{d} \right)_{n}d^n q^{\frac{n(n+1)}{2}}}{(q)_n(cq)_{n}} + \frac{(\frac{c}{d})_{\infty}(dq)_{\infty}}{(q)_{\infty}(cq)_{\infty}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)} }{(dq)_k(q)_k(1-q^k)}\sum_{j=0}^{\infty}\frac{(dq)_j(cq^k/d)^j}{(dq^{k+1})_j(q)_j}\nonumber\\
&= \frac{c}{(c-d)(q)_\infty}\left(1-\frac{(dq)_\infty}{(cq)_\infty}\right) +\frac{(dq)_\infty}{(cq)_\infty(q)_\infty} \sum_{k=1}^\infty \frac{\left(\frac{cq}{d}\right)_k (dq)^k}{(q)_k (1-q^k) }.
\end{align}
We now want to let $d\to0$. However, before we do that, we need to transform the double sum on the left-hand side into a suitable form. To that end, we employ Jackson's transformation \eqref{jackson} with $\alpha=dq, \gamma=dq^{k+1}, \beta=0$ and $z=cq^k/d$ so that
\begin{align}\label{interm}
\sum_{j=0}^{\infty}\frac{(dq)_{j}(cq^{k}/d)^{j}}{(dq^{k+1})_j(q)_j}=\frac{(cq^{k+1})_{\infty}}{(cq^{k}/d)_{\infty}}\sum_{j=0}^{\infty}\frac{(dq)_{j}q^{j^2+2jk}c^j}{(dq^{k+1})_j(cq^{k+1})_j(q)_j}.
\end{align}
Substituting \eqref{interm} in the double sum on the left-hand side of \eqref{interm0} and simplifying as in \eqref{m0bef}, we see that
\begin{align}\label{interm1}
\frac{(\frac{c}{d})_{\infty}(dq)_{\infty}}{(q)_{\infty}(cq)_{\infty}} \sum_{k=1}^{\infty}\frac{d^kq^{k(k+1)} }{(dq)_k(q)_k(1-q^k)}\sum_{j=0}^{\infty}\frac{(dq)_j(cq^k/d)^j}{(dq^{k+1})_j(q)_j}=\frac{(dq)_{\infty}}{(q)_{\infty}}\sum_{k=1}^{\infty}\frac{(c/d)_k(dq)^k}{(q)_k(1-q^k)}\sum_{j=0}^{\infty}\frac{(dq)_jc^jq^{(j+k)^2}}{(dq)_{j+k}(cq)_{j+k}(q)_j}.
\end{align}
Substitute \eqref{interm1} in \eqref{interm0} and then let $d\to0$ to finally deduce \eqref{d tends to zero}.
\end{proof}
\begin{remark}
The identity in \eqref{interm0} is equivalent to \cite[Equation (4.5)]{bem}.
\end{remark}
\section{A finite analogue of the $\textup{ospt}$-function of Andrews, Chan and Kim}
In \cite[p. 78]{andrewschankim}, Andrews, Chan and Kim considered the odd moments of rank and crank, namely,
\begin{equation*}
\overline{N}_j(n):=\sum_{k=1}^{\infty}k^jN(k, n)\hspace{3mm}\text{and}\hspace{3mm} \overline{M}_j(n):=\sum_{k=1}^{\infty}k^jM(k, n),
\end{equation*}
where $N(k, n)$ (resp. $M(k, n)$) denote the number of partitions of $n$ with rank (resp. crank) $k$.
They further defined
\begin{align*}
C_1(q)&:=\sum_{n=1}^{\infty}\overline{M}_1(n)q^n,\\
R_1(q)&:=\sum_{n=1}^{\infty}\overline{N}_1(n)q^n,
\end{align*}
and showed that \cite[Theorems 1, 2]{andrewschankim}
\begin{align}
C_1(q)&=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}q^{n(n+1)/2}}{1-q^n}=\sum_{k=0}^{\infty}\frac{kq^{k^2}}{(q)_k^{2}},\label{c1q}\\
R_1(q)&=\frac{1}{(q)_{\infty}}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}q^{n(3n+1)/2}}{1-q^n}.\label{r1q}
\end{align}
Their work culminated in an important inequality, namely, $\overline{M}_1(n)>\overline{N}_1(n)$ for all positive integers $n$, which, in fact, led them to consider the odd spt-function $\textup{ospt}(n)$ defined by
\begin{equation*}
\textup{ospt}(n)=\overline{M}_1(n)-\overline{N}_1(n).
\end{equation*}
They then interpreted it combinatorially in terms of even and odd strings in the partitions of $n$. See \cite[p.~80]{andrewschankim} for the definitions. Moreover, they extended all these results for higher values of $k>1$ as well.
In this section, we are concerned with the finite analogues of the first odd moments of rank and crank.
We first note that the finite analogues of ranks and cranks, and of $N(k, n), M(k, n)$ as well as of the rank and crank moments were considered in \cite[pp.~8--10]{dems}. The finite analogues of $N(k, n)$ and $M(k, n)$, respectively, $N_{S_{1}}(k, n)$ and $M_{S_{1}}(k, n)$, satisfy
\begin{equation*}
N_{S_{1}}(-k, n)=N_{S_{1}}(k, n)\hspace{1mm}\text{and}\hspace{1mm} M_{S_{1}}(-k, n)=M_{S_{1}}(k, n),
\end{equation*}
because of which one needs to consider the following modified finite rank and crank moments in order to obtain nontrivial odd moments:
\begin{equation*}
\overline{N}_j(n, N):=\sum_{k=1}^{\infty}k^jN_{S_{1}}(k, n)\hspace{3mm}\text{and}\hspace{3mm} \overline{M}_j(n, N):=\sum_{k=1}^{\infty}k^jM_{S_{1}}(k, n).
\end{equation*}
Their generating functions can then be defined to be
\begin{equation*}
R_{j}(q, N):=\sum_{n=1}^{\infty}\overline{N}_j(n, N)q^n\hspace{3mm}\text{and}\hspace{3mm}C_{j}(q, N):=\sum_{n=1}^{\infty}\overline{M}_j(n, N)q^n.
\end{equation*}
Here we are concerned with $R_{1}(q, N)$ and $C_{1}(q, N)$. Note that from \cite[p.~252, Theorem 4.1]{andpar},
\begin{align*}
\frac{(q)_{N}}{(zq)_{N}(z^{-1}q)_{N}}=\frac{1}{(q)_N}+(1-z)\sum_{n=1}^{N}\left[\begin{matrix} N \\ n \end{matrix}\right]\frac{(-1)^n(q)_nq^{n(n+1)/2}}{(q)_{n+N}}\left(\frac{1}{1-zq^n}-\frac{1}{z-q^n}\right),
\end{align*}
where the left-hand side is the finite analogue of the crank generating function.
Applying the differential operator $z\frac{\partial}{\partial z}$ on both sides, we get
\begin{align*}
z\frac{\partial}{\partial z} \frac{(q)_{N}}{(zq)_{N}(z^{-1}q)_{N}}=z\sum_{n=1}^{N}\left[\begin{matrix} N \\ n \end{matrix}\right]\frac{(-1)^{n+1}(q)_nq^{n(n+1)/2}}{(q)_{n+N}}\left(\frac{1-q^n}{(1-zq^n)^2}-\frac{1-q^n}{(z-q^n)^2}\right).
\end{align*}
Now since only the first sum on the right-hand side contributes to positive powers of $z$ when expanding the right-hand side as a Laurent series in $z$, we deduce that
\begin{align}\label{c1qnexpr}
C_1(q, N)&=\lim_{z\to1}z\sum_{n=1}^{N}\left[\begin{matrix} N \\ n \end{matrix}\right]\frac{(-1)^{n+1}(q)_nq^{n(n+1)/2}}{(q)_{n+N}}\frac{1-q^n}{(1-zq^n)^2}\nonumber\\
&=\sum_{n=1}^{N}\left[\begin{matrix} N \\ n \end{matrix}\right]\frac{(-1)^{n+1}(q)_nq^{n(n+1)/2}}{(q)_{n+N}(1-q^n)}.
\end{align}
It is easy to see that letting $N\to\infty$ in the above identity leads to the first equality in \eqref{c1q}.
We now proceed towards obtaining a finite analogue of the first odd rank moment. From \cite[p.~252, Theorem 2.1]{andpar}, we have
\begin{align*}
\sum_{n=0}^{N}\left[\begin{matrix} N \\ n \end{matrix}\right]\frac{(q)_nq^{n^2}}{(zq)_n(z^{-1}q)_n}=\frac{1}{(q)_N}+(1-z)\sum_{n=1}^{N}\left[\begin{matrix} N \\ n \end{matrix}\right]\frac{(-1)^n(q)_nq^{n(3n+1)/2}}{(q)_{n+N}}\left(\frac{1}{1-zq^n}-\frac{1}{z-q^n}\right),
\end{align*}
where the left-hand side is a finite analogue of the rank generating function as given in \cite[Theorem 2.2]{dems}. Hence proceeding similarly as in the derivation of \eqref{c1qnexpr}, we arrive at
\begin{align}\label{r1qnexpr}
R_1(q, N)
=\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{(-1)^{n+1}(q)_nq^{n(3n+1)/2}}{(q)_{n+N}(1-q^n)},
\end{align}
which indeed gives us \eqref{r1q} upon letting $N\to\infty$.
Next, from \eqref{c1qnexpr} and \eqref{r1qnexpr},
\begin{align}\label{difference}
C_1(q, N)-R_1(q, N)=\sum_{n=1}^{N}\left[\begin{array}{c}N\\n\end{array}\right]\frac{(-1)^{n+1}(q)_nq^{n(n+1)/2}(1-q^{n^2})}{(q)_{n+N}(1-q^n)}.
\end{align}
The coefficients in the series expansion of the right-hand side appear to always be positive. It would be interesting to try to prove this.
\section{Future directions}\label{cr}
Theorem \ref{sptgenthm} is a generalization of the generating function version of Andrews' famous identity for $\text{spt}(n)$ in that it involves an extra parameter $d$. It would be interesting to interpret the coefficients of $d^mq^n$ in the power-series expansions of each of the expressions in \eqref{sptgen}. We have not been able to do it as of now.
Note that one of the first steps in the proof of \eqref{sptgen} was to let $N\to\infty$ in \eqref{befmmain} to obtain \eqref{mmain}. So if one succeeds in extracting the arithmetic and combinatorial information embedded in \eqref{sptgen}, a similar thing could possibly be done beginning with the more general \eqref{befmmain}.
There are many further identities that could be derived from Theorem \ref{2phi1id} or its special cases considered in Section \ref{app}. For example, letting $d=q$ in Theorem \ref{sptgenthm} gives
\begin{align}
\sum_{j=1}^{\infty}\frac{q^{j^2}}{(q)_{j}^{2}}\sum_{n=1}^{j}\frac{q^n}{(1-q^{n+1})(1-q^n)}=\frac{q^2}{(1-q)^2(q)_{\infty}}.
\end{align}
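As a quick check, the $q$-expansions of both sides of the above identity begin with $q^2+3q^3+7q^4+\cdots$.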
Here, we have concentrated only on special cases of Theorem \ref{2phi1id} for $c$ or $d$ equal to $0,\pm1$, and that too only when $N\to\infty$!
Lastly, determining if all of the coefficients in the power series expansion of the right-hand side of \eqref{difference} are positive seems worthwhile to study in view of the fruitful consequences that ensue if it is true, for example, the inequality between $\overline{M}_1(n, N)$ and $\overline{N}_1(n, N)$ for all $n\in\mathbb{N}$ and hence the existence of the finite analogue of the $\textup{ospt}(n)$ function.
Due to major technological upheavals, the complexity of many dynamical systems has dramatically increased in recent years, thus making their control more and more challenging. The academic community has coined this paradigm shift under the name of the \emph{Cyber-Physical revolution} (see \cite{s150304837, inproceedings, Kim2012CyberPhysicalSA, bookref, doi}). In particular, \emph{Hybrid systems}, which often appear in Cyber-Physical applications, are dynamical systems whose dynamics are characterized by continuous and discrete behaviours.
In many practical applications, the engineer cannot rely on having a model, but rather has to analyse the underlying system in a \emph{data-driven} fashion. Most classical data-driven methods (see e.g. \cite{Karimi_2017, 710876, CAMPI200366}) are limited to linear systems and rely on classical identification and frequency-domain approaches. These methods may not be well suited for Cyber-Physical systems because of the inherent complexity of these systems. Novel data-driven stability analysis methods have been recently developed based on \emph{scenario optimization} (see \cite{ken, berger, RUBBENS202167}). In this paper we seek to take one more step towards handling this complexity.
We consider data-driven stability analysis of discrete-time \emph{switching linear systems}. The dynamics of a switching linear system defined by a set of matrices $\mathbf{\Sigma} = \{A_i\}_{i \in \{1, \dots, m\}}$ are given by the following equation:
\begin{equation}
x_{t+1} = A_{\sigma(t)} x_t
\end{equation}
for any $t \in \mathbb{N}$, where $x_t \in \mathbb{R}^n$ and $\sigma(t) \in \{1, \dots, m\}$ are respectively the \emph{state} and the \emph{mode} at time $t$. The sequence $(\sigma(0), \sigma(1), \dots) \in \{1, \dots, m\}^{\mathbb{N}}$ is the \emph{switching sequence}.
Switching linear systems are an important family of hybrid systems which often arise in Cyber-Physical systems (see \cite{tabuada}). Stability analysis of switching linear systems is challenging due to the hybrid behaviour caused by the switches. In recent years, many model-based stability analysis techniques have been proposed (see \cite{linhai, jungers_2009_the} and references therein).
In particular, we are interested in the stability of \emph{constrained switching linear systems} (\emph{CSLS} for short). A CSLS is a switching linear system with logical rules on its switching sequence. We represent these rules by an \emph{automaton} (see Definition~\ref{autodef}). White-box stability analysis of CSLS has also been studied extensively (see e.g. \cite{DAI20121099, PHILIPPE2016242, xu2020approximation}). More precisely, we are interested in asymptotic stability of CSLS, whose definition is given as follows.
Given an automaton $\mathbf{G}$ and a set of matrices $\mathbf{\Sigma}$, the system $S(\mathbf{G}, \mathbf{\Sigma})$ is said to be \emph{asymptotically stable} (or \emph{stable}, for short) if, for all $x \in \mathbb{R}^n$ and for all infinite words $(\sigma(0), \sigma(1), \dots)$ accepted by $\mathbf{G}$,
\begin{equation}
\lim_{t \to \infty} A_{\sigma(t-1)} \dots A_{\sigma(0)}x = 0.
\end{equation}
In this work we extend the approaches in \cite{ken, berger, RUBBENS202167} by considering a larger state space. For a CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$, we consider that one can observe points in $\mathbb{R}^n \times V$ i.e., couples of state and node. This allows us to find probabilistic guarantees for the asymptotic stability of CSLS whose dynamics are unknown.
\textbf{Outline.} The rest of this paper is organized as follows. We introduce the problem that we tackle in Section~\ref{setting}, as well as all concepts needed to this end. We present our results in Section~\ref{main}. We first propose a formulation allowing us to do this in a data-driven fashion. We then propose a deterministic method to find sufficient condition for instability of black-box CSLS. Finally we find probabilistic guarantees on the stability of a CSLS whose dynamics are unknown. Results are illustrated on a numerical example in Section~\ref{numerical}.
\section{\textsc{Problem setting}}
\label{setting}
In this section, we introduce the notions necessary to formally write the problem tackled in this paper.
\subsection{Constrained joint spectral radius}
We first define an \emph{automaton} (see e.g. \cite{lind_marcus_1995}):
\begin{defn}
\label{autodef}
An automaton is a strongly connected\footnote{A strongly connected graph is a graph that has a path from each vertex to every other vertex. See \cite[Definition~2.2.13]{lind_marcus_1995} for a formal definition.}, directed and labelled graph $\mathbf{G}(V, E)$, where $V$ is the set of nodes and $E$ the set of edges. Note that we omit $V$ and $E$ when they are clear from the context. The edge $(u, v, \sigma) \in E$ between two nodes $u, v \in V$ carries the label $\sigma \in \{1, \dots, m\}$, where $m \in \mathbb{N}$ is the number of labels.
\end{defn}
In the context of CSLS, $\sigma$ maps to a mode of the system. A sequence of labels $(\sigma(0), \sigma(1), \dots)$ is a \emph{word} in the language \emph{accepted} by the automaton $\mathbf{G}$ if there is a path in $\mathbf{G}$ carrying the sequence as the succession of the labels on its edges. A CSLS defined on the set of matrices $\mathbf{\Sigma}$ and constrained by the automaton $\mathbf{G}$ is denoted by $S(\mathbf{G}, \mathbf{\Sigma})$.
Let us present an example of CSLS, inspired from \cite[Section~4]{PHILIPPE2016242}, in order to illustrate the notions defined above.
\begin{exmp}
\label{example}
Consider a plant that may experience control failures. Its dynamics are given by $x_{t+1} = A_{\sigma(t)} x_t$ where $A_{\sigma(t)} = A + BK_{\sigma(t)}$ with
\begin{equation}
A = \begin{pmatrix} 0.47 & 0.28 \\ 0.07 & 0.23 \end{pmatrix} \textrm{ and } B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\end{equation}
$K_{\sigma(t)}$ is described as follows. $K_1 = \begin{pmatrix} k_1 & k_2 \end{pmatrix}$, with $k_1 = -0.245$ and $k_2 = 0.135$, corresponds to the mode where the controller works as expected. $K_2 = \begin{pmatrix} 0 & k_2 \end{pmatrix}$ and $K_3 = \begin{pmatrix} k_1 & 0 \end{pmatrix}$ respectively correspond to the modes where the first and the second part of the controller fails. Finally, $K_4 = \begin{pmatrix} 0 & 0 \end{pmatrix}$ corresponds to the mode where both parts fail. We consider as a constraint that the same part of the controller never fails twice in a row. This is modelled by the automaton $\mathbf{G}$, depicted in Figure~\ref{automaton}.
\begin{figure}[h]
\centering
\label{automaton}
\begin{tikzpicture}[shorten >=1pt,node distance=1.8cm,on grid,auto]
\node[state] (1) {$i$};
\node[state] (2) [above left = of 1] {$j$} ;
\node[state] (3) [below left = of 1] {$k$};
\node[state] (4) [below right=of 1] {$l$};
\path[->]
(2) edge [right] node {3} (3)
(3) edge [bend left] node {2} (2)
(2) edge [bend left] node {1} (1)
(1) edge [left] node {2} (2)
(3) edge [left] node {1} (1)
(1) edge [bend left] node {3} (3)
(1) edge [bend left] node {4} (4)
(4) edge [bend left] node {1} (1)
(1) edge [out=-330,in=-300,looseness=8] node [above right] {1} (1);
\end{tikzpicture}
\caption{Automaton $\mathbf{G}$. No mode can fail twice in a row.}
\end{figure}
In this example, the considered CSLS is thus $S(\mathbf{G}, \mathbf{\Sigma})$ with $\mathbf{\Sigma} = \{A_1, A_2, A_3, A_4\}$.
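Explicitly, the matrices of this CSLS are
\begin{equation*}
A_1=\begin{pmatrix} 0.47 & 0.28 \\ -0.175 & 0.365 \end{pmatrix}, \quad A_2=\begin{pmatrix} 0.47 & 0.28 \\ 0.07 & 0.365 \end{pmatrix}, \quad A_3=\begin{pmatrix} 0.47 & 0.28 \\ -0.175 & 0.23 \end{pmatrix}, \quad A_4=A.
\end{equation*}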
\end{exmp}
The \emph{constrained joint spectral radius}, introduced in \cite{DAI20121099}, is defined as follows:
\begin{defn}[{\cite[Definition~1.2]{DAI20121099}}]
Given a set of matrices $\mathbf{\Sigma} = \{A_1, \dots, A_m\}$ and an automaton $\mathbf{G}$ whose labels $\sigma \in \{1, \dots, m\}$, the \emph{constrained joint spectral radius} (\emph{CJSR} for short) of the CSLS $S(\mathbf{G}, \mathbf{\Sigma})$ is defined as
\begin{equation}
\begin{aligned}
&\rho(\mathbf{G}, \mathbf{\Sigma}) = \lim_{t \to \infty} \max \{ \|A_{\sigma(t-1)}\dots A_{\sigma(0)} \|^{1/t} : \\
&\quad \quad \quad \quad \quad (\sigma(0), \dots, \sigma(t-1)) \textrm{ is a word of } \mathbf{G} \}.
\end{aligned}
\end{equation}
\end{defn}
As the following proposition shows, the CJSR characterizes the stability of a CSLS:
\begin{prop}[{\cite[Corollary~2.8]{DAI20121099}}]
\label{stabcertif}
Given a set of matrices $\mathbf{\Sigma}$ and an automaton $\mathbf{G}$, the CSLS $S(\mathbf{G}, \mathbf{\Sigma})$ is asymptotically stable if and only if $\rho(\mathbf{G}, \mathbf{\Sigma}) < 1$.
\end{prop}
\subsection{Multiple Quadratic Lyapunov Functions}
We present a classical result from model-based analysis of CSLS. The following proposition gives a quadratic framework for approximating the CJSR of a given CSLS:
\begin{prop}[{\cite[Proposition~2.20]{MPthesis}}]
\label{mqlf}
Consider a CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$ and a constant $\gamma > 0$. If there exists a set of quadratic forms $\{P_i,\, i \in V\}$ satisfying the set of \emph{Linear Matrix Inequalities} (\emph{LMI}s)
\begin{equation}
\forall (u, v, \sigma) \in E: A_\sigma^TP_vA_\sigma \preceq \gamma^2P_u,
\end{equation}
then $n^{-1/2}\gamma \leq \rho(\mathbf{G}, \mathbf{\Sigma}) \leq \gamma$.
\end{prop}
If $\gamma < 1$, the set of norms $\{ \| \cdot \|_{P_u}, \, u \in V\}$ is called a set of \emph{Multiple Quadratic Lyapunov Functions} (\emph{MQLF}). Proposition~\ref{mqlf} thus gives a sufficient condition for the stability of a given CSLS using MQLF.
Consider a given CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$. Let $\Delta = \mathbb{S} \times E$, with $\mathbb{S} \subset \mathbb{R}^n$ the unit sphere. As a preparation to develop our data-driven approach, we reformulate the stability condition in Proposition \ref{mqlf} into a robust optimization problem\footnote{Note that we can restrict $x$ to the unit sphere $\mathbb{S}$ in constraint \eqref{lmis}. We can do this thanks to the \emph{homogeneity} of the CSLS: for any $x \in \mathbb{R}^n$, $\mu > 0$, and $A \in \mathbf{\Sigma}$, it holds that $A(\mu x) = \mu Ax$.}:
\begin{subequations}
\label{PDelta}
\begin{align}
\mathcal{P}(\Delta):& \min_{\substack{\{P_u, \, u \in V\} \\\gamma \geq 0}} \gamma \\
\textrm{s.t. } &\forall (x, (u, v, \sigma)) \in \Delta:
(A_\sigma x)^TP_v(A_\sigma x) \leq \gamma^2 x^TP_ux \label{lmis} \\
&\forall u \in V: P_u \in \{ P : P \succ 0 \}.
\end{align}
\end{subequations}
We denote by $\gamma^*(\Delta)$ and $\{ P^*_u(\Delta),\, u \in V \}$ the solution of $\mathcal{P}(\Delta)$. Following Proposition~\ref{mqlf}, if $\gamma^*(\Delta) < 1$, the set $\{ P^*_u(\Delta),\, u \in V \}$ is a set of MQLF.
The notation $\mathcal{P}(\Delta)$ emphasizes that the whole set of constraints is known in this white-box formulation, as opposed to the black-box problem $\mathcal{P}(\omega_N)$ defined in~\eqref{PomegaN}.
\section{\textsc{Main results}}
\label{main}
\subsection{Data-driven formulation}
\label{formulation}
In this paper, we analyze the problem of approximating the CJSR in a data-driven fashion: we assume that the system is not known, hence problem $\mathcal{P}(\Delta)$ defined in Equation~\eqref{PDelta} cannot be solved. We only sample a finite number $N$ of observations of a given CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$. One observation consists of an ordered pair of points in the state space defined above, i.e., $\mathbb{R}^n \times V$. The $i$-th observation is a couple of initial and final states and nodes. It is denoted by $((x_i, u_i), (y_i, v_i)) \in (\mathbb{R}^n \times V)^2$ where $(u_i, v_i, \sigma_i) \in E$ for some label $\sigma_i \in \{ 1, \dots, m \}$, and $y_i = A_{\sigma_i}x_i$. For any $i = 1, \dots, N$, $x_i$ and $(u_i, v_i, \sigma_i)$ are drawn randomly, uniformly and independently from respectively $\mathbb{S}$ and $E$. We emphasize that the sampled mode itself is not observed.
We define the sample set $\omega_N$ as
\begin{equation}
\omega_N = \{ (x_i, (u_i, v_i, \sigma_i)), \, i = 1, \dots, N\},
\end{equation}
where $x_i, u_i, v_i$ and $\sigma_i$ are as described above. Note that $\omega_N$ is a subset of $N$ elements of $\Delta$.
Now, for a given set $\omega_N$, let us define the \emph{sampled optimization problem} $\mathcal{P}(\omega_N)$:
\begin{subequations}
\label{PomegaN}
\begin{align}
\mathcal{P}(\omega_N):& \min_{\substack{\{P_u, \, u \in V\} \\\gamma \geq 0}} \gamma \\
\textrm{s.t. } &\forall (x, (u, v, \sigma)) \in \omega_N:
(A_\sigma x)^TP_v(A_\sigma x) \leq \gamma^2 x^TP_ux \label{lmisSampled}\\
&\forall u \in V: P_u \in \{ P : I \preceq P \preceq CI \}, \label{compacitySampled}
\end{align}
\end{subequations}
for a large $C \in \mathbb{R}_{\geq 0}$. We denote by $\gamma^*(\omega_N)$ and $\{ P^*_u(\omega_N),\, u \in V \}$ the solution of $\mathcal{P}(\omega_N)$. The problem that we tackle in this paper is the inference, with a user-defined confidence level, of $\gamma^*(\Delta)$, the solution of $\mathcal{P}(\Delta)$ defined in Equation~\eqref{PDelta}, from the solution of $\mathcal{P}(\omega_N)$ defined in Equation~\eqref{PomegaN}, i.e., the value $\gamma^*(\omega_N)$ and the set $\{P_u^*(\omega_N), \, u \in V\}$.
Problem $\mathcal{P}(\omega_N)$ defined in Equation~\eqref{PomegaN} differs from $\mathcal{P}(\Delta)$ defined in Equation~\eqref{PDelta} in two ways: the LMIs expressed in constraint \eqref{lmisSampled} are restricted to $\omega_N$, and compactness of the domain of the matrices $\{P_u, u \in V\}$ is imposed in constraint \eqref{compacitySampled}. We will need the latter to prove Proposition~\ref{cardinality}.
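Note that, for a fixed value of $\gamma$, the constraints of $\mathcal{P}(\omega_N)$ are LMIs in the variables $\{P_u, \, u \in V\}$. In practice, $\gamma^*(\omega_N)$ can therefore be computed up to any desired accuracy by bisection on $\gamma$, solving one semidefinite feasibility program per iteration.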
\subsection{Deterministic lower bound on the CJSR}
In the same fashion as in \cite{ken}, we derive a deterministic lower bound on the CJSR:
\begin{prop}
\label{lowerbound}
Let $\omega_N$ be a set of $N$ observations from $\Delta$ as explained above. Consider the program $\mathcal{P}(\omega_N)$ defined in \eqref{PomegaN} for the CSLS $S(\mathbf{G}, \mathbf{\Sigma})$ with optimal cost $\gamma^*(\omega_N)$. Then the following holds:
\begin{equation}
n^{-1/2} \gamma^*(\omega_N) \leq \rho(\mathbf{G}, \mathbf{\Sigma}).
\end{equation}
\end{prop}
\begin{proof}
Notice that $\mathcal{P}(\omega_N)$ defined in \eqref{PomegaN} is a relaxation of $\mathcal{P}(\Delta)$ defined in \eqref{PDelta}. As a consequence, we have $\gamma^*(\Delta) \geq \gamma^*(\omega_N)$. Following Proposition~\ref{mqlf},
\begin{equation}
\rho(\mathbf{G}, \mathbf{\Sigma}) \geq n^{-1/2} \gamma^*(\Delta) \geq n^{-1/2} \gamma^*(\omega_N),
\end{equation}
which is the desired result.
\end{proof}
\begin{rem}
One can show that the lower bound of Proposition~\ref{lowerbound} can be improved thanks to \emph{Sums-of-Squares approximation methods}, introduced in \cite{Parrilo2008ApproximationOT} for the approximation of the \emph{joint spectral radius} and generalized in \cite{PHILIPPE2016242} for the CJSR.
\end{rem}
\subsection{Probabilistic upper bound on the CJSR}
\label{probupper}
\begin{prop}
\label{cardinality}
Consider the program $\mathcal{P}(\Delta)$ for the CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$ with optimal cost $\gamma^*(\Delta)$. There exists a set $\omega \subset \Delta$ with $|\omega| = |V| n(n+1)/2$ such that $\gamma^*(\omega) = \gamma^*(\Delta)$, where $\gamma^*(\omega)$ is the optimal cost of the program $\mathcal{P}(\omega)$.
\end{prop}
A proof of Proposition~\ref{cardinality} is provided in Appendix~\ref{appProof}.
\begin{rem}
\label{better}
There are two main differences between Proposition~\ref{cardinality} and \cite[Lemma~1]{RUBBENS202167}: the proposition is derived for CSLS instead of arbitrary switching linear systems, and the cardinality of the set is the number of variables of the program minus 1, while it is the number of variables of the program in \cite{RUBBENS202167}.
\end{rem}
Now, let us define the notion of \emph{spherical cap}:
\begin{defn}[\cite{li}]
The \emph{spherical cap} on $\mathbb{S}$, the unit sphere, of direction $c$ and measure $\varepsilon$ is defined as $\mathcal{C}(c, \varepsilon) := \left\{ x \in \mathbb{S} : c^Tx > \| c \| \delta(\varepsilon) \right\}$, where $\delta(\varepsilon)$ is defined as\footnote{
In Equation~\eqref{cap}, $I^{-1}(y; a, b)$ is the \emph{inverse regularized incomplete beta function} (see \cite{Majumder1973InverseOT}). Its output is $x > 0$ such that $I(x; a, b) = y$, where $I$ is defined as
\begin{equation}
I(\cdot; a, b) : \mathbb{R}_{>0} \to \mathbb{R}_{>0} : x \mapsto I(x; a, b) = \frac{\int_{0}^x t^{a-1} (1-t)^{b-1} \textrm{d}t}{\int_{0}^1 t^{a-1} (1-t)^{b-1} \textrm{d}t}
\end{equation}
}
\begin{equation}
\delta(\varepsilon) = \sqrt{1 - I^{-1}\left( 2\varepsilon ; (n-1)/2, 1/2 \right)}.
\label{cap}
\end{equation}
\end{defn}
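For example, when $n=2$, one has $I(x; 1/2, 1/2)=\frac{2}{\pi}\arcsin\sqrt{x}$, so that $\delta(\varepsilon)=\cos(\pi\varepsilon)$ for $\varepsilon\leq1/2$, and $\mathcal{C}(c, \varepsilon)$ is simply the arc of half-angle $\pi\varepsilon$ centred at $c/\|c\|$, which indeed has normalized measure $\varepsilon$ on the unit circle.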
The following proposition provides a bound on the conservatism of the sampled problem $\mathcal{P}(\omega_N)$ defined in \eqref{PomegaN}, with respect to the white-box problem $\mathcal{P}(\Delta)$ defined in \eqref{PDelta} as a function of $N$, the number of points sampled:
\begin{prop}
\label{dist}
Consider the program $\mathcal{P}(\Delta)$ for the CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$ with optimal cost $\gamma^*(\Delta)$. Let $\omega_N = \{ (x_i, (u_i, v_i, \sigma_i)),i = 1, \dots, N \}$ be a set of $N$ samples from $\Delta$ as explained above. Suppose $N \geq |V|n(n+1)/2$. Then, for all $\varepsilon \in (0, 1]$, with probability at least
\begin{equation}
\beta(\varepsilon, m, N) = 1 - |V|\frac{n(n+1)}{2}\left(1 - \frac{\varepsilon}{m |V|} \right)^N,
\end{equation}
there exists a set $\omega'_N = \{ (x'_i, (u_i, v_i, \sigma_i)), \, i = 1, \dots, N \} \subset \Delta$ such that $\gamma^*(\omega'_N) = \gamma^*(\Delta)$, with $\| x_i - x'_i \| \leq \sqrt{2 - 2\delta(\varepsilon)}$ for every $i$.
\end{prop}
The proof of Proposition~\ref{dist} follows the same lines as the one of \cite[Proposition~2]{RUBBENS202167} except for three points. First, the number of variables of the problem is not the same. Second, given that the edges are sampled uniformly (cf.\ Section~\ref{setting}), the probability of sampling a certain label $\sigma$ is at least $1/(m|V|)$, while it is $1/m$ in the unconstrained case. Third, Proposition~\ref{cardinality} allows to improve the probability $\beta$ according to Remark~\ref{better}.
We now apply a sensitivity analysis approach in order to obtain from Proposition~\ref{dist} a probabilistic upper bound on $\gamma^*(\Delta)$, the optimal cost of $\mathcal{P}(\Delta)$ (defined in Equation~\eqref{PDelta}), from the sampled optimal variables $\gamma^*(\omega_N)$ and $\{ P^*_u(\omega_N), u \in V \}$ of $\mathcal{P}(\omega_N)$ (defined in Equation~\eqref{PomegaN}).
\begin{thm}
\label{1sttheorem}
Consider the program $\mathcal{P}(\Delta)$ defined in \eqref{PDelta} for the CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$ with optimal cost $\gamma^*(\Delta)$. Let $\omega_N$ be a set of $N$ samples from $\Delta$ as explained in Section~\ref{formulation}, with $N \geq |V|n(n+1)/2$. Consider the sampled program $\mathcal{P}(\omega_N)$ defined in \eqref{PomegaN} with solution $\gamma^*(\omega_N)$ and $\{ P_u^*(\omega_N), u \in V \}$. For any $\beta \in [0, 1)$, let
\begin{equation}
\label{varepsilon}
\varepsilon = m |V| \left( 1 - \sqrt[N]{\frac{2(1-\beta)}{|V|n(n+1)}} \right).
\end{equation}
Then, with probability at least $\beta$,
\begin{equation}
\label{bound1sttheorem}
\begin{aligned}
&\gamma^*(\Delta) \leq \gamma^*(\omega_N) \, + \\
& \max_{(x, (u, v, \sigma)) \in \omega_N} \left\{ \sqrt{\frac{\lambda_{\max}^u}{\lambda_{\min}^u}} \gamma^*(\omega_N) + \sqrt{\frac{\lambda_{\max}^v}{\lambda_{\min}^u}} \mathcal{A}(\mathbf{\Sigma}) \right\} d(\varepsilon),
\end{aligned}
\end{equation}
with $d(\varepsilon) = \sqrt{2 - 2\delta(\varepsilon)}$, $\lambda_{\min}^u$ and $\lambda_{\max}^u$ respectively the minimal and maximal eigenvalue of $P^*_u(\omega_N)$, and
\begin{equation}
\label{maxnorm}
\mathcal{A}(\mathbf{\Sigma}) = \max_{A \in \mathbf{\Sigma}} \| A \|.
\end{equation}
\end{thm}
\begin{proof}
For the sake of readability, let $\gamma = \gamma^*(\omega_N)$ and $P_u = P^*_u(\omega_N)$ for any $u \in V$. By definition, for any $(x, (u, v, \sigma)) \in \omega_N$,
\begin{equation}
\|A_\sigma x \|_{P_v} \leq \gamma \| x \|_{P_u}.
\end{equation}
Consider now, for any $P \in \mathcal{S}^n$, its \emph{Cholesky decomposition} $P = L^TL$, where $\mathcal{S}^n$ denotes the set of \emph{symmetric} \emph{positive semi-definite} matrices. Then the following holds:
\begin{equation}
\|x\|_P = \| Lx \| \leq \| L \| \| x \| \leq \sqrt{\lambda_{\max}(P)}\| x \|,
\end{equation}
where $\lambda_{\max}(P)$ is the maximal eigenvalue of $P$.
Let us now consider an arbitrary constraint $(y, (u, v, \sigma)) \in \Delta$, and define $y = x + \Delta x$ with $(x, (u, v, \sigma)) \in \omega_N$. Then, for any $(x, (u, v, \sigma)) \in \omega_N$, it holds that
\begin{equation}
\begin{aligned}
\label{ineq}
&\| A_\sigma (x + \Delta x) \|_{P_v} \leq \|A_\sigma x \|_{P_v} + \| A_\sigma \Delta x \|_{P_v} \\
&\quad \leq \gamma \|x\|_{P_u} + \|A_\sigma \Delta x \|_{P_v} \\
&\quad = \gamma \| (x + \Delta x) - \Delta x \|_{P_u} + \|A_\sigma \Delta x \|_{P_v} \\
&\quad \leq \gamma \| x + \Delta x \|_{P_u} + \gamma \| \Delta x \|_{P_u} + \|A_\sigma \Delta x \|_{P_v} \\
&\quad \leq \gamma \| x + \Delta x \|_{P_u} + \gamma \| \Delta x \| \sqrt{\lambda_{\max}^u} \\
&\hspace{2.5cm} + \| A_\sigma \| \| \Delta x \| \sqrt{\lambda_{\max}^v} \\
&\quad \leq \gamma \| x + \Delta x \|_{P_u} + \gamma \| \Delta x \| \sqrt{\lambda_{\max}^u} \frac{\|x + \Delta x\|_{P_u}}{\sqrt{\lambda_{\min}^u}} \\
&\hspace{2.5cm} + \| A_\sigma \| \| \Delta x \| \sqrt{\lambda_{\max}^v} \frac{\|x + \Delta x\|_{P_u}}{\sqrt{\lambda_{\min}^u}} \\
&\quad = \left[ \gamma + \left( \sqrt{\frac{\lambda_{\max}^u}{\lambda_{\min}^u}} \gamma + \sqrt{\frac{\lambda_{\max}^v}{\lambda_{\min}^u}} \|A_\sigma \| \right) \|\Delta x\| \right] \\
& \hspace{5cm} \| x + \Delta x \|_{P_u}.
\end{aligned}
\end{equation}
For any $\beta \in [0, 1)$, let $\varepsilon$ be defined as in Equation~\eqref{varepsilon}. Then, since $N \geq |V|n(n+1)/2$, Proposition~\ref{dist} guarantees, with probability at least $\beta$, the existence of a set $\omega'_N$ of $N$ points such that $\gamma^*(\omega'_N) = \gamma^*(\Delta)$, and such that for any $(x, (u, v, \sigma)) \in \omega_N$ there exists $\Delta x$ with $(x + \Delta x, (u, v, \sigma)) \in \omega'_N$ and $\| \Delta x \| \leq d(\varepsilon)$. Hence, by definition and following Equation~\eqref{ineq},
\begin{equation}
\label{boundwithA}
\begin{aligned}
\gamma^*(\Delta) &= \gamma^*(\omega'_N) \\
&\leq \gamma \, + \\
& \max_{(x, (u, v, \sigma)) \in \omega_N} \left\{ \sqrt{\frac{\lambda_{\max}^u}{\lambda_{\min}^u}} \gamma + \sqrt{\frac{\lambda_{\max}^v}{\lambda_{\min}^u}} \mathcal{A}(\mathbf{\Sigma}) \right\} d(\varepsilon),
\end{aligned}
\end{equation}
with probability at least $\beta$.
\end{proof}
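To make the bound of Theorem~\ref{1sttheorem} concrete, the following sketch evaluates the right-hand side of Equation~\eqref{bound1sttheorem} from the solution of the sampled program. It assumes that the sampled program has already been solved, so that \texttt{gamma} ($= \gamma^*(\omega_N)$) and the matrices \texttt{P[u]} ($= P^*_u(\omega_N)$) are available, and that an upper bound \texttt{A\_norm} on $\mathcal{A}(\mathbf{\Sigma})$ is given; all names are ours, and \texttt{delta} is the helper defined earlier.
\begin{verbatim}
import numpy as np

# Evaluate the right-hand side of Equation (bound1sttheorem).
def theorem1_upper_bound(gamma, P, omega_N, A_norm,
                         beta, m, n, N):
    V = len(P)  # number of nodes |V|
    eps = m * V * (1.0 - (2.0 * (1.0 - beta)
                   / (V * n * (n + 1))) ** (1.0 / N))
    d = np.sqrt(2.0 - 2.0 * delta(eps, n))
    worst = 0.0
    for _, (u, v, _) in omega_N:
        ev_u = np.linalg.eigvalsh(P[u])  # ascending order
        lmin_u, lmax_u = ev_u[0], ev_u[-1]
        lmax_v = np.linalg.eigvalsh(P[v])[-1]
        worst = max(worst,
                    np.sqrt(lmax_u / lmin_u) * gamma
                    + np.sqrt(lmax_v / lmin_u) * A_norm)
    return gamma + worst * d
\end{verbatim}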
\subsection{Estimation of the maximal norm}
In order to get a data-driven probabilistic bound as expressed in Equation~\eqref{boundwithA}, it remains to approximate $\mathcal{A}(\mathbf{\Sigma})$ as defined in Equation~\eqref{maxnorm}. First, note that the following holds \cite[Proposition~2.7]{jungers_2009_the}:
\begin{equation}
\label{muDelta}
\begin{aligned}
\mathcal{A}(\mathbf{\Sigma}) &= \eta^*(\Delta) \\&= \min_{\eta \geq 0} \eta \textrm{ s.t. } \forall (x, (u, v, \sigma)) \in \Delta: \|A_{\sigma} x \| \leq \eta.
\end{aligned}
\end{equation}
As $\mathbf{\Sigma}$ is assumed unknown, in this subsection we seek a probabilistic upper bound on the value of $\mathcal{A}(\mathbf{\Sigma})$ from the given set of observations $\omega_N$. In the same spirit as in Section~\ref{probupper}, let us infer the value of $\eta^*(\Delta) = \mathcal{A}(\mathbf{\Sigma})$ from the solution of its sampled problem
\begin{equation}
\label{muomegaN}
\eta^*(\omega_N) = \min_{\eta \geq 0} \eta \textrm{ s.t. } \forall (x, (u, v, \sigma)) \in \omega_N: \|A_{\sigma} x \| \leq \eta,
\end{equation}
with a user-defined confidence level.
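Note that $\eta^*(\omega_N)$ admits a closed form: it is simply the largest norm $\|A_\sigma x\|$ observed among the samples. Assuming, as in the data-driven setting, that the image $y = A_\sigma x$ of each sampled state is observed, this reads (a sketch under our own naming conventions):
\begin{verbatim}
# eta^*(omega_N): largest observed image norm; observations
# is assumed to be a list of (x, y, (u, v, sigma)) tuples
# with y = A_sigma x and x on the unit sphere.
import numpy as np

def eta_sampled(observations):
    return max(np.linalg.norm(y) for _, y, _ in observations)
\end{verbatim}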
The general \emph{chance-constrained} theorem \cite[Theorem~6]{berger} requires a technical assumption \cite[Assumption~8]{berger} that can be violated in our case. We give a direct proof of Theorem~\ref{2ndtheorem} that allows us to dispense with this assumption.
\begin{thm}
\label{2ndtheorem}
Let $\omega_N$ be a set of $N$ samples from $\Delta$ as explained in Section~\ref{formulation}. Consider the solutions $\eta^*(\Delta)$ and $\eta^*(\omega_N)$ defined in equations \eqref{muDelta} and \eqref{muomegaN} respectively. For any $\beta' \in [0, 1)$, let
\begin{equation}
\label{vareps}
\varepsilon' = 1 - \sqrt[N]{1-\beta'}.
\end{equation}
Then, with probability at least $\beta'$,
\begin{equation}
\label{bound2ndtheorem}
\eta^*(\Delta) \leq \frac{\eta^*(\omega_N)}{\delta(\varepsilon'm|V|/2)}.
\end{equation}
\end{thm}
\begin{proof}
Define the \emph{violating set} $V(\eta) := \{ (x, (u, v, \sigma)) \in \Delta : \|A_\sigma x\| > \eta \}$, and let $f: \mathbb{R} \to [0, 1]: \eta \mapsto f(\eta) = \mathbb{P}[V(\eta)]$ be its measure. Note that $f$ is decreasing. For any $\varepsilon' \in [0, 1]$, we start by showing the following equality:
\begin{equation}
\label{omegaN}
\mathbb{P}^N[\omega_N \subset \Delta: f(\eta^*(\omega_N)) \leq \varepsilon'] = 1 - (1-\varepsilon')^N.
\end{equation}
Consider one sampled constraint $d \in \Delta$, and let $\eta_{\varepsilon'} \in \mathbb{R}$ be such that $f(\eta_{\varepsilon'}) = \varepsilon'$. Since $f$ is decreasing, $f(\eta^*(\{d\})) > \varepsilon' = f(\eta_{\varepsilon'})$ holds exactly when $\eta^*(\{d\}) < \eta_{\varepsilon'}$, i.e. when $d \notin V(\eta_{\varepsilon'})$, an event of probability $1 - f(\eta_{\varepsilon'}) = 1 - \varepsilon'$. Hence the following holds:
\begin{equation}
\mathbb{P}[d \in \Delta: f(\eta^*(\{d\})) > \varepsilon'] = 1-\varepsilon'.
\end{equation}
Since samples in $\omega_N$ are i.i.d., the following holds:
\begin{equation}
\begin{aligned}
&\mathbb{P}^N[\omega_N \subset \Delta: f(\eta^*(\omega_N)) > \varepsilon'] \\
=\,& \left(\mathbb{P}[d \in \Delta: f(\eta^*(\{d\})) > \varepsilon']\right)^N \\
=\,& (1-\varepsilon')^N,
\end{aligned}
\end{equation}
which is equivalent to Equation~\eqref{omegaN}.
Now, define the projected violating set $\tilde{\mathbb{S}} \subseteq \mathbb{S}$ as follows:
\begin{equation}
\tilde{\mathbb{S}} = \{ x \in \mathbb{S} : \exists (u, v, \sigma) \in E, \|A_\sigma x\| > \eta^*(\omega_N)\}.
\end{equation}
For any $(u, v, \sigma) \in E$, we define:
\begin{align}
\tilde{\mathbb{S}}_{(u, v, \sigma)} = \{ x \in \mathbb{S} : \|A_\sigma x\| > \eta^*(\omega_N)\}.
\end{align}
Thus, $\tilde{\mathbb{S}} = \cup_{(u, v, \sigma) \in E} \tilde{\mathbb{S}}_{(u, v, \sigma)}$, so that $\mathbb{P}_x[\tilde{\mathbb{S}}] \leq \sum_{(u, v, \sigma) \in E} \mathbb{P}_x[\tilde{\mathbb{S}}_{(u, v, \sigma)}]$, with equality in the worst case where the sets $\{\tilde{\mathbb{S}}_{(u, v, \sigma)} \}$ are disjoint. Moreover,
\begin{equation}
\begin{aligned}
\mathbb{P}[V(\eta)] =& \sum_{(u, v, \sigma) \in E} \mathbb{P}_x[\tilde{\mathbb{S}}_{(u, v, \sigma)}] \mathbb{P}_E [\{(u, v, \sigma)\}]\\
\ge & \frac{1}{m|V|}\sum_{(u, v, \sigma) \in E} \mathbb{P}_x[\tilde{\mathbb{S}}_{(u, v, \sigma)}] \ge \frac{\mathbb{P}_x[\tilde{\mathbb{S}}]}{m|V|},
\end{aligned}
\end{equation}
where $\mathbb{P}_x$ and $\mathbb{P}_E$ denote the uniform (probability) measure on $\mathbb{S}$ and $E$ respectively. This means that $\mathbb{P}[V(\eta)] \le \varepsilon'$ implies $\mathbb{P}_x[\tilde{\mathbb{S}}] \le \varepsilon' m |V| $.
The rest of the proof follows the same lines as the proof of \cite[Theorem~15]{ken}.
\end{proof}
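In practice, the bound of Theorem~\ref{2ndtheorem} can be evaluated directly from $\eta^*(\omega_N)$; the sketch below reuses the \texttt{delta} and \texttt{eta\_sampled} helpers defined above (all names are ours):
\begin{verbatim}
# Probabilistic upper bound on A(Sigma) from Theorem 2:
# eta^*(Delta) <= eta_N / delta(eps' * m * V / 2) with
# confidence beta_p, where eta_N = eta^*(omega_N).
def theorem2_upper_bound(eta_N, beta_p, m, V, n, N):
    eps_p = 1.0 - (1.0 - beta_p) ** (1.0 / N)
    return eta_N / delta(eps_p * m * V / 2.0, n)
\end{verbatim}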
Theorem~\ref{2ndtheorem} allows us to directly derive the following corollary:
\begin{cor}
\label{cor}
Consider the program $\mathcal{P}(\Delta)$ defined in \eqref{PDelta} for the CSLS $S(\mathbf{G}(V, E), \mathbf{\Sigma})$ with optimal cost $\gamma^*(\Delta)$. Let $\omega_N$ be a set of $N$ samples from $\Delta$ as explained in Section~\ref{formulation}, with $N \geq |V|n(n+1)/2$. Consider the sampled program $\mathcal{P}(\omega_N)$ defined in \eqref{PomegaN} with solution $\gamma^*(\omega_N)$ and $\{ P_u^*(\omega_N), u \in V \}$. For any $\beta, \beta' \in [0, 1)$, let
\begin{equation}
\varepsilon = m |V| \left( 1 - \sqrt[N]{\frac{2(1-\beta)}{|V|n(n+1)}} \right),
\end{equation}
and
\begin{equation}
\varepsilon' = \frac{m|V|}{2} \left( 1 - \sqrt[N]{1-\beta'} \right).
\end{equation}
Then, with probability at least $\beta + \beta' - 1$,
\begin{equation}
\label{eqcoro}
\begin{aligned}
&\rho(\mathbf{G}, \mathbf{\Sigma}) \leq \gamma^*(\omega_N) \, + \\
& \max_{(x, (u, v, \sigma)) \in \omega_N} \left\{ \sqrt{\frac{\lambda_{\max}^u}{\lambda_{\min}^u}} \gamma^*(\omega_N) + \sqrt{\frac{\lambda_{\max}^v}{\lambda_{\min}^u}} \frac{\eta^*(\omega_N)}{\delta(\varepsilon')} \right\} d(\varepsilon),
\end{aligned}
\end{equation}
with $d(\varepsilon) = \sqrt{2 - 2\delta(\varepsilon)}$, and $\lambda_{\min}^u$ and $\lambda_{\max}^u$ respectively the minimal and maximal eigenvalue of $P^*_u(\omega_N)$.
\end{cor}
\begin{proof}
Following Proposition~\ref{mqlf}, Equation~\eqref{eqcoro} holds if Equations~\eqref{bound1sttheorem} and \eqref{bound2ndtheorem} both hold. Theorem~\ref{1sttheorem} states that Equation~\eqref{bound1sttheorem} holds with probability at least $\beta$, and Theorem~\ref{2ndtheorem} states that Equation~\eqref{bound2ndtheorem} holds with probability at least $\beta'$. Thus
\begin{equation}
\begin{aligned}
&\mathbb{P}^N [\omega_N \subset \Delta : \textrm{\eqref{bound1sttheorem} and \eqref{bound2ndtheorem} hold}] \\
= \, & 1 - \mathbb{P}^N [\omega_N \subset \Delta : \textrm{\eqref{bound1sttheorem} or \eqref{bound2ndtheorem} does not hold}] \\
\geq \, & 1 - \mathbb{P}^N [\omega_N \subset \Delta : \textrm{\eqref{bound1sttheorem} does not hold}] \\
&\quad \quad \quad - \mathbb{P}^N [\omega_N \subset \Delta : \textrm{\eqref{bound2ndtheorem} does not hold}] \\
\geq \, & 1 - (1 - \beta) - (1 - \beta') \\
= \, & \beta + \beta' - 1,
\end{aligned}
\end{equation}
which concludes the proof.
\end{proof}
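Putting the pieces together, the bound of Corollary~\ref{cor} is obtained by feeding the output of Theorem~\ref{2ndtheorem} into Theorem~\ref{1sttheorem}, yielding an upper bound valid with confidence $\beta + \beta' - 1$. With the illustrative sketches above (again, all names are ours), this gives:
\begin{verbatim}
# End-to-end sketch of the bound of Equation (eqcoro); the
# helper functions are the illustrative sketches above.
def cjsr_upper_bound(gamma, P, omega_N, observations,
                     beta, beta_p, m, n, N):
    V = len(P)
    eta_N = eta_sampled(observations)
    A_bound = theorem2_upper_bound(eta_N, beta_p, m, V, n, N)
    return theorem1_upper_bound(gamma, P, omega_N, A_bound,
                                beta, m, n, N)
\end{verbatim}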
\section{\textsc{Numerical experiments}}
\label{numerical}
Let us consider the CSLS $S(\mathbf{G}, \mathbf{\Sigma})$ introduced in Example~\ref{example}. Using the CJSR white-box approximation method introduced in \cite{PHILIPPE2016242}, we know that the true CJSR $\rho(\mathbf{G}, \mathbf{\Sigma}) \approx 0.48741$.
The simulations are as follows: for different values of $N$, we sample $N$ observations as explained in Section~\ref{formulation}. We then compute the optimal variables $\gamma^*(\omega_N)$ and $\{ P^*_u(\omega_N), u \in V \}$ of the problem $\mathcal{P}(\omega_N)$ defined in Equation~\eqref{PomegaN}; a possible way to do this is sketched below. From these variables, we compute the lower and upper bounds given in Proposition~\ref{lowerbound} and Corollary~\ref{cor}. We provide the results for the example described above in Figure~\ref{exp}, for an increasing number $N$ of sampled points, i.e. $N \in [1, 50\,000]$.
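A possible way to solve the sampled program $\mathcal{P}(\omega_N)$ is by bisection on $\gamma$: for fixed $\gamma$, the constraints are affine in the matrices $P_u$, so checking them amounts to a semidefinite feasibility problem. The sketch below is our own; it assumes CVXPY with an SDP solver, and samples of the form $(x, y, (u, v, \sigma))$ with observed image $y = A_\sigma x$.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Solve P(omega_N) by bisection on gamma; for fixed gamma the
# constraints ||y||_{P_v}^2 <= gamma^2 ||x||_{P_u}^2 are
# affine in the P_u's (x and y are data).
def solve_sampled_program(samples, num_nodes, n, tol=1e-4):
    def feasible(gamma):
        P = [cp.Variable((n, n), symmetric=True)
             for _ in range(num_nodes)]
        cons = [Pu >> np.eye(n) for Pu in P]  # normalization
        for x, y, (u, v, _) in samples:
            cons.append(y @ P[v] @ y
                        <= gamma**2 * (x @ P[u] @ x))
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve(solver=cp.SCS)
        ok = prob.status in (cp.OPTIMAL,
                             cp.OPTIMAL_INACCURATE)
        return ok, [Pu.value for Pu in P] if ok else None

    # gamma = 1 + max ||y|| is feasible with P_u = I.
    lo = 0.0
    hi = 1.0 + max(np.linalg.norm(y) for _, y, _ in samples)
    Ps = None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        ok, cand = feasible(mid)
        if ok:
            hi, Ps = mid, cand
        else:
            lo = mid
    return hi, Ps  # gamma^*(omega_N), {P_u^*(omega_N)}
\end{verbatim}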
\begin{figure}[h]
\centering
\includegraphics[width = \linewidth]{bounds.pdf}
\caption{Lower and upper bounds derived in Proposition~\ref{lowerbound} and Corollary~\ref{cor} for an increasing number of samples $N$, with confidence levels $\beta + \beta' - 1 \in \{0.95, 0.98, 0.99\}$.}
\label{exp}
\end{figure}
We observe that the lower bound quickly converges to a conservative value; we recall, though, that this lower bound is deterministic. Concerning the upper bounds, we notice that they become tighter as the number of samples $N$ grows. We also observe that, as expected, the cost of a tighter bound is a smaller confidence level: one can see in Figure~\ref{exp} that the bound is tighter for small values of $\beta + \beta' - 1$. Finally, fewer samples are needed to obtain a stability guarantee (according to Proposition~\ref{stabcertif}) for smaller confidence levels: one needs respectively 20\,000, 23\,000 and 26\,000 samples to guarantee stability of the considered CSLS with confidence levels of 95\%, 98\% and 99\%.
\section{\textsc{Conclusion}}
In this work, we leveraged approaches such as \emph{scenario optimization} and \emph{sensitivity analysis} to propose a method providing probabilistic guarantees on the stability of an unknown CSLS. We used the CJSR as a tool to assess the stability of a black-box CSLS. In particular, we provided a deterministic lower bound on the CJSR, as well as a probabilistic upper bound. We showed that the approximations of the CJSR become tighter as the number of samples grows, but also as the confidence level decreases. Finally, we illustrated the theory on an academic example.
Our work and our findings follow the previous work of \cite{ken, berger, RUBBENS202167}. Compared with this previous body of work, we believe that our contribution achieves an important step towards practical applications, in particular towards hybrid automata and cyber-physical systems. In the future, we plan to pursue this direction further, for instance by considering more involved models of hybrid systems and by refining our bounds.